Image created with gemini-3.1-flash-image-preview; prompt written with claude-sonnet-4-5. Image prompt: 1980s Cold War bunker interior with large glowing wireframe network map display showing cascading red breach points spreading across blue grid nodes, dark silhouette of operator at CRT terminal in foreground, alarm lights casting red glow, massive bold red sans-serif text reading SECURITY across top of display, cinematic lighting, high contrast, retro vector graphics aesthetic
Anthropic brothers, as much as I love your models: you have distilled the whole internet, Wikipedia, and shit-tons of books. Distilling your models is only fair game… Are your scrapers not using residential proxies and respecting robots.txt, or are they “malicious”?
https://x.com/HKydlicek/status/2026006007990690098
Anthropic just caught DeepSeek, Moonshot, and MiniMax running 24,000 fake accounts to extract Claude’s capabilities for their own models. Over 16M (!) exchanges total. Anthropic: “rapid advances” from Chinese labs depend significantly on capabilities extracted from U.S. models
https://x.com/TheRundownAI/status/2026019722211279356
Anthropic just exposed the real vulnerability in AI: it’s not the models, it’s the training data pipeline. Three Chinese AI labs used 24,000 fake accounts to query Claude 16 million times, feeding the responses back into their own models. This technique, called distillation…
https://x.com/LiorOnAI/status/2026043272565772386
Detecting and preventing distillation attacks \ Anthropic https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks
Distillation can be legitimate: AI labs use it to create smaller, cheaper models for their customers. But foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems.
https://x.com/AnthropicAI/status/2025997929840857390
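For readers unfamiliar with the technique the posts above describe: API-based (“black-box”) distillation never touches the teacher model’s weights, only its input/output behavior. A toy Python sketch of the two steps, harvest then train (the teacher and student here are stand-ins, not real models, and “training” is reduced to memorization for illustration):

```python
# Toy illustration of black-box distillation: query a model behind an API,
# record the transcripts, then train a student on them.

def teacher(prompt: str) -> str:
    # Stand-in for a proprietary model served behind an API.
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return canned.get(prompt, "I don't know.")

def harvest(prompts):
    # Step 1: query the API at scale, recording (prompt, response) pairs.
    return [(p, teacher(p)) for p in prompts]

class Student:
    # Step 2: fit a student to the harvested pairs. Here that is plain
    # memorization; a real attack fine-tunes a neural network on the
    # transcripts (sequence-level knowledge distillation), so the student
    # generalizes beyond the exact prompts it saw.
    def __init__(self, dataset):
        self.table = dict(dataset)

    def answer(self, prompt: str) -> str:
        return self.table.get(prompt, "I don't know.")

dataset = harvest(["capital of France?", "2 + 2?"])
student = Student(dataset)
print(student.answer("capital of France?"))  # prints: Paris
```

The point of the sketch is that the attacker’s only requirement is API access at scale, which is why the reported countermeasure is account-level detection rather than anything in the model itself.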
Making frontier cybersecurity capabilities available to defenders \ Anthropic https://www.anthropic.com/news/claude-code-security
Ohhh nooo not my private IP how dare someone use that to train an AI model, only Anthropic has the right to use everyone else’s IP nooooo, this cannot stand!
https://x.com/Teknium/status/2026001761904021858
Seems fair tbh. Anthropic has done industrial scale scraping of everyone’s stuff 🤷🏾‍♂️
https://x.com/Suhail/status/2026009921255592294
These attacks are growing in intensity and sophistication. Addressing them will require rapid, coordinated action among industry players, policymakers, and the broader AI community. Read more:
https://x.com/AnthropicAI/status/2025997931589881921
We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
https://x.com/AnthropicAI/status/2025997928242811253
200+ Google and OpenAI staff have signed this petition to share Anthropic’s red lines for the Pentagon’s use of AI. Let’s find out if this is a race to the top or the bottom. https://x.com/jasminewsun/status/2027197574017602016
A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War.
https://x.com/AnthropicAI/status/2027150818575528261
Anthropic drops flagship safety pledge! Reality is now hitting Anthropic hard too. Anthropic has scrapped its 2023 pledge to halt AI training unless safety protections were guaranteed in advance, marking a major shift in its Responsible Scaling Policy. Executives say fierce…
https://x.com/kimmonismus/status/2026669811179335739
BREAKING: The US Pentagon has made a “final offer” to Anthropic seeking unrestricted military use of its AI capabilities ahead of a Friday deadline. Details include: 1. Pete Hegseth threatening to label Anthropic as a “supply chain risk” 2. Anthropic is resisting use of its AI…
https://x.com/KobeissiLetter/status/2027031529042411581
Dario Amodei just published one of the most significant statements in AI history — and is officially not backing down from The Pentagon. Anthropic won’t build tools for mass surveillance of U.S. citizens or autonomous weapons without human oversight. The Department of War…
https://x.com/TheRundownAI/status/2027164670130343978?s=20
if you’re at oai or goog, please sign to support anthropic’s stance against the DoW demands!
https://x.com/maxsloef/status/2027170763447710085
Scoop: Hegseth to meet Anthropic CEO as Pentagon threatens banishment https://www.axios.com/2026/02/23/hegseth-dario-pentagon-meeting-antrhopic-claude
Statement from Dario Amodei, partial quote: ‘Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
https://x.com/AndrewCurran_/status/2027153267285962991
Time and time again over my three-year tenure at Anthropic I’ve seen us stand by our values in ways that are often invisible from the outside. This is a clear instance where it is visible:
https://x.com/TrentonBricken/status/2027156295745479086
Musk’s xAI, Pentagon reach deal to use Grok in classified systems https://www.axios.com/2026/02/23/ai-defense-department-deal-musk-xai-grok
@tetsuoai Banger 🤣🤣 How dare they steal the stuff Anthropic stole from human coders??
https://x.com/elonmusk/status/2026012296607154494
A friend had Claude spend all night trying to hack into an e-ink display, and gave Claude camera access so it could verify whether an attempt worked. He told Claude to show him a message if it won. My friend woke up to this victory lap, which Claude didn’t realize was backwards
https://x.com/Scav/status/2021656781521670487
Announcing a new Claude Code feature: Remote Control. It’s rolling out now to Max users in research preview. Try it with /remote-control. Start local sessions from the terminal, then continue them from your phone. Take a walk, see the sun, walk your dog without losing your flow.
https://x.com/noahzweben/status/2026371260805271615
GPT-5.3-Codex + the Codex app is the best AI coding tool available right now. Slept on it for a bit. Likely going to move back to a ChatGPT Pro sub from Claude MAX because of how good it is. It’s so precise, accurate and excellent at following instructions. There are…
https://x.com/daniel_mac8/status/2025994068577112454
WarClaude daddy and Codex mommy
https://x.com/bilawalsidhu/status/2026784286968357129
Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards https://www.axios.com/2026/02/24/anthropic-pentagon-claude-hegseth-dario
I gained a lot of respect for Dario for being principled on the issues of mass surveillance and autonomous killbots. Principled leaders are rare these days
https://x.com/fchollet/status/2027195535594049641
Doordash, security software and SaaS getting smashed today for some reason. Weird.
https://x.com/firstadopter/status/2025944343702339902?s=46