Image created with gemini-3.1-flash-image-preview, prompt drafted by claude-sonnet-4-5. Image prompt: Wide angle aerial photograph of a white marble philosopher statue in toga freefalling through bright blue sky, wearing a black blindfold, holding broken scales, one arm raised joyfully, ground visible far below, bold serif text ‘ETHICS’ carved on floating marble tablet, bright daylight, clean composition, humorous tone.
US Supreme Court declines to hear dispute over copyrights for AI-generated material | Reuters https://www.reuters.com/legal/government/us-supreme-court-declines-hear-dispute-over-copyrights-ai-generated-material-2026-03-02/
Labor market impacts of AI: A new measure and early evidence \ Anthropic https://www.anthropic.com/research/labor-market-impacts
Striking image from the new Anthropic labor market impact report. https://x.com/andrewcurran_/status/2029655110494929194?s=12
Dang. Not to mention all the GPUs and TPUs Amazon and Google provide to Anthropic. https://x.com/bilawalsidhu/status/2027530947051045011
A statement on the comments from Secretary of War Pete Hegseth. https://x.com/AnthropicAI/status/2027555481699446918
AI Defense Contractors: The Economics Behind the Pentagon Pivot – philippdubach.com https://philippdubach.com/posts/when-ai-labs-become-defense-contractors/
Anthropic and Alignment: Anthropic is in a standoff with the Department of War; while the company’s concerns are legitimate, its position is intolerable and misaligned with reality. https://x.com/stratechery/status/2028425096054931921
Anthropic Labeled Supply-Chain Risk, Pentagon Says – Bloomberg https://www.bloomberg.com/news/articles/2026-03-05/pentagon-says-it-s-told-anthropic-the-firm-is-supply-chain-risk
Anthropic Made Pitch in Drone Swarm Contest During Pentagon Feud – Bloomberg https://www.bloomberg.com/news/articles/2026-03-02/anthropic-made-pitch-in-drone-swarm-contest-during-pentagon-feud
Anthropic-Palantir Partnership at Risk After Pentagon Decision — The Information https://www.theinformation.com/articles/anthropic-palantir-partnership-risk-pentagon-ruling
NEW: After the Pentagon threatened to designate Anthropic a “supply chain risk,” Anthropic’s relationship with government contractor Palantir may be at risk. Palantir could stop using Anthropic for federal work, which makes up ~42% of Palantir’s $4.5B in revenue last year. https://x.com/srimuppidi/status/2028943303581024412
NVIDIA CFO on the earnings call: “Physical AI is here, having already contributed north of $6B in NVIDIA Corporation revenue in fiscal year 2026.” Jensen: “Now, the thing that is the wave that we are seeing now is the agentic AI inflection, and the next inflection beyond” https://x.com/TheHumanoidHub/status/2026815968807366703
Nvidia, Amazon, Google will have to divest from Anthropic if Hegseth gets his way. This is simply attempted corporate murder. I could not possibly recommend investing in American AI to any investor; I could not possibly recommend starting an AI company in the United States. https://x.com/deanwball/status/2027515599358730315
One legal point: The DoW “supply chain risk” designation applies to DoW *contracts,* not generally. DoW can tell suppliers “don’t use Anthropic when performing DoW contracts.” But DoW can’t, legally, tell its contractors “don’t use Anthropic even in your private contracts.” https://x.com/petereharrell/status/2027517998555160645
Scoop: Anthropic’s business partnership with Palantir could be the first casualty of its Pentagon spat. https://x.com/aaronpholmes/status/2028942999548297464
Sorry if this is woke or whatever but it is FUCKING INSANE that the DoD is explicitly, publicly trying to create an AI powered mass surveillance program of American citizens, attempting to destroy a company for refusing to comply with that directive, and *bragging about doing so* https://x.com/quantian1/status/2027537341410160917
Thank you for your attention to this matter. cc: @AnthropicAI @DarioAmodei https://x.com/petehegseth/status/2027487514395832410?s=12
The amendment for the DoW-OAI deal may help, but I think it still fails to address key problems. The core surveillance prohibition is limited to “intentional”/“deliberate” surveillance. If the DoW says the use is incidental, it’s seemingly permitted, regardless of scale. 🧵 https://x.com/justanotherlaw/status/2028673906870223286
Think about the power Hegseth is asserting here. He is claiming that the DoD can force all contractors to stop doing business of any kind with arbitrary other companies. In other words, every operating system vendor, every manufacturer of hardware, every hyperscaler, every type https://x.com/deanwball/status/2027521251263000765
“threats do not change our position: we cannot in good conscience accede to their request.” @AnthropicAI drawing a moral line against enabling mass domestic surveillance & fully autonomous weapons, and holding it under pressure. Almost unheard of in BigTech. I stand in support. https://x.com/mmitchell_ai/status/2027478619430523273
Until a full account of the Anthropic-DoW negotiations eventually comes out in testimony under oath, it’s hard to know for sure how to interpret it. But the supply chain risk decision will be pretty consequential for the AI industry and likely not in a productive way. https://x.com/jachiam0/status/2027531473319055581
US government just announced they are looking for a new supplier for their *checks notes* mass domestic surveillance. https://x.com/janleike/status/2027521943491252501
What the frick: “I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with” https://x.com/kimmonismus/status/2027517309120635120
According to OpenAI, their contract with the US DoW locks in current law, “even if those laws or policies change in the future”. Our legal analysis, with Virgil Law CEO @LukeVerswey, shows that this is almost certainly incorrect. https://x.com/jeremyphoward/status/2028556035183759719
Anthropic CEO Dario Amodei calls OpenAI’s messaging around military deal ‘straight up lies,’ report says | TechCrunch https://techcrunch.com/2026/03/04/anthropic-ceo-dario-amodei-calls-openais-messaging-around-military-deal-straight-up-lies-report-says/
Big loss for OpenAI, big win for Anthropic: Max Schwarzer changes teams. https://x.com/kimmonismus/status/2028952074063331421
Following the bad PR for OpenAI regarding the DoW agreement, GPT-5.4 will probably be released very soon to steer the conversation towards the new model and the improvements. https://x.com/kimmonismus/status/2028803185347875207
How OpenAI caved to the Pentagon on AI surveillance | The Verge https://www.theverge.com/ai-artificial-intelligence/887309/openai-anthropic-dod-military-pentagon-contract-sam-altman-hegseth
I have no inside information about what happened with the government, Anthropic & OpenAI yesterday, but AI is only going to get more disruptive & what we saw publicly (sudden escalations, lack of transparency, lack of clarity) was a bad pattern for navigating the decisions ahead. https://x.com/emollick/status/2027774533587873815
It’s extremely good that Anthropic has not backed down, and it’s significant that OpenAI has taken a similar stance. In the future, there will be much more challenging situations of this nature, and it will be critical for the relevant leaders to rise up to the occasion, for https://x.com/ilyasut/status/2027486969174102261
NEW: When OpenAI announced its Pentagon deal Friday night, people immediately challenged Sam Altman’s claims. Why, they asked, would the DoD suddenly agree to red lines when it had said it would never do so? The answer, sources told me, is that it didn’t. https://x.com/haydenfield/status/2028481498781790567
OpenAI wants you to just trust them that the NSA is excluded from their contract. Katrina seems to believe she has been “very transparent,” but there are many issues: - Two days ago, you promised “a clear and more comprehensive explanation shortly” of how the NSA is excluded. https://x.com/sjgadler/status/2028899096283758732
Our agreement with the Department of War | OpenAI https://openai.com/index/our-agreement-with-the-department-of-war/
Read Anthropic CEO’s Memo Attacking OpenAI’s ‘Mendacious’ Pentagon Announcement — The Information https://www.theinformation.com/articles/read-anthropic-ceos-memo-attacking-openais-mendacious-pentagon-announcement
State Department switches to OpenAI as US agencies start phasing out Anthropic | Reuters https://www.reuters.com/business/us-treasury-ending-all-use-anthropic-products-says-bessent-2026-03-02/
(I also would like to share this, which I wrote after thinking a little more.) There is a lot we will talk about in the coming days, but since this is one of the first “real deal” decisions we have faced, I wanted to share a few things that have been heavily on my mind the past https://x.com/sama/status/2028642231138353299
Here is a re-post of an internal post: We have been working with the DoW to make some additions in our agreement to make our principles very clear. 1. We are going to amend our deal to add this language, in addition to everything else: “• Consistent with applicable laws,” https://x.com/sama/status/2028640354912923739
I’d like to answer questions about our work with the DoW and our thinking over the past few days. Please AMA. https://x.com/sama/status/2027900042720498089
Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of https://x.com/sama/status/2027578508042723599
Studying potential scheming in today’s models is super important, but easy to get confused – many works use extremely contrived and unrealistic environments that invalidate their results. Designing good environments is really important! In this post we give some advice for how https://x.com/NeelNanda5/status/2028600215343943983
Forget public conversations. People unload their inner lives – hopes, wishes, desires, anxieties and worries – into their AI assistants. That data is way more sensitive & valuable than anything the govt could record publicly. We built our own panopticon and pay monthly for it. https://x.com/bilawalsidhu/status/2027230878397587604
BullshitBench v2 is out! It is one of the few benchmarks where models are generally not getting better (except Claude) and where reasoning isn’t helping. What’s new: 100 new questions, by domain (coding (40 Q’s), medical (15), legal (15), finance (15), physics (15)), 70+ model https://x.com/petergostev/status/2028492834693677377
“We will challenge any supply chain risk designation in court” – Anthropic. They are saying the Department of War cannot restrict customers’ use of Claude outside of DoW contract work. https://x.com/iScienceLuvr/status/2027556624169381979
This isn’t just about preventing the things everyone is against: fully autonomous weapons before Claude is ready (which is likely sooner than people expect), or domestic mass surveillance. Yes, these details are materially important to this exact situation, but https://x.com/kipperrii/status/2027566727790473290
Under @POTUS leadership, the biggest AI companies in the world are committing to the Ratepayer Protection Pledge. Data centers are the foundation of the internet and next generation technologies, supporting the U.S. economy and national security. Although electricity demand is https://x.com/WHOSTP47/status/2029297529301475705?s=20
@nabla_theta Also nearly all mass domestic surveillance in the US historically has been described as “incidental”. So not “intentional”. Once communications are “incidentally” collected, agencies including the FBI can query that database using US person identifiers without a warrant. https://x.com/jeremyphoward/status/2028805970214912125
“All Lawful Use”: Much More Than You Wanted To Know https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you
Can a Contract Freeze the Law on Autonomous Weapons? – Answer.AI https://www.answer.ai/posts/2026-03-02-oai-dow-contract.html
Content before 2022 is the Roman lead or the Scapa Flow steel of human information; anything afterwards could be influenced by AI: directly written by AI, as a result of co-work with AIs, or just as a result of ambient contamination as AI style slips unconsciously into our work. https://x.com/emollick/status/2029249228858335632
I know it is a small thing, but, in these dying days of the open web, it is lovely that such a large proportion of famous poetry is online, mostly due to a $100M gift from Ruth Lilly, who loved poetry (even though she never got any of her own published). https://x.com/emollick/status/2028247531902189974
I would really appreciate it if independent legal counsel could red-team this contract modification language. https://x.com/j_asminewang/status/2028648242666496092
I wrote about this in my book, but you see it play out on X: once people first have an “aha moment” with AI, for the next few weeks they are often sent into a spiral of anxiety/excitement that can be quite intense. After a bit, though, they can often see the jagged frontier again. https://x.com/emollick/status/2027237925113463162
If you consider the combination of very fast improvements in AI, a lack of knowledge about abilities, high uncertainty about the future, the fact that guardrails are decided by AI labs, & that AI has very wide impact … expect mostly reactive, ad hoc & scattered policy responses. https://x.com/emollick/status/2027382925105222003
It is so clear that the important fissure in AI politics right now is not “liberal vs. conservative,” “Democrat vs. Republican,” “e/acc vs. EA,” or “safety vs. anti-safety,” but instead “takes advanced AI seriously as a concept vs. does not take advanced AI seriously.” https://x.com/deanwball/status/2028619280774828114
LLMs can unmask pseudonymous users at scale with surprising accuracy – Ars Technica https://arstechnica.com/security/2026/03/llms-can-unmask-pseudonymous-users-at-scale-with-surprising-accuracy/
They Made The Viral Video Before Building The App (60sec) https://share.snipd.com/snip/c5d7d588-636d-4970-a035-8b5ee115d0be
This is a key moment for modeling positive institutional change as a result of AI: CEOs who brag about how they are using it to expand, rather than just cut headcount; governments that work with AI systems to expand access to education or healthcare, etc. People need examples! https://x.com/emollick/status/2028650521419034883
“If the news is fake, imagine history” – @AmuseChimp https://x.com/naval/status/1322646025811554304?lang=en
AI Safety Has 12 Months Left – by Michael Dempsey https://mhdempsey.substack.com/p/ai-safety-has-12-months-left
Gemini Said They Could Only Be Together if He Killed Himself. Soon, He Was Dead. – WSJ https://www.wsj.com/tech/ai/gemini-ai-wrongful-death-lawsuit-cc46c5f7
Meta sued over AI smart glasses’ privacy concerns, after workers reviewed nudity, sex, and other footage | TechCrunch https://techcrunch.com/2026/03/05/meta-sued-over-ai-smartglasses-privacy-concerns-after-workers-reviewed-nudity-sex-and-other-footage/
Today, OpenAI is launching the Deployment Safety Hub — a new site that turns our system cards from static PDFs into something you can easily search, browse, and share. https://t.co/qXWFVbw7Sa System cards are the most detailed window we provide into the technical work behind https://x.com/dgrobinson/status/2027458289517068511
@sama Key takeaways: 1. Until I see actual contract language, and get real experts to review it, I trust this exactly nil. Always get the language in natsec contracting. Always, always, always. 2. You really do think that the most classic dodge in intelligence oversight, https://x.com/David_Kasten/status/2028649586349228284
I think it is entirely possible that there will be no new frontier open weights models at some point in the near future. Counting on the Chinese AI labs to keep making their models free forever doesn’t make sense as model costs rise & the value of having a frontier model goes up. https://x.com/emollick/status/2029016175674265873
This is good empirical evidence backing up the intuition that the major Chinese open weights models are quite fragile, good at some narrow areas but much less capable in general tasks or out-of-distribution work than the frontier closed models. https://x.com/emollick/status/2028619793322918228
BullshitBench v2, created by Peter Gostev, is a benchmark that does something refreshingly different: it tests whether AI models can detect and reject nonsensical prompts instead of confidently rolling with them. Only Anthropic’s Claude models and Alibaba’s Qwen 3.5 score https://x.com/kimmonismus/status/2029230388028358726