Image created with gemini-3.1-flash-image-preview, with the prompt drafted by claude-sonnet-4-5. Image prompt: 1980s NORAD war room interior, large CRT monitor displaying glowing blue wireframe trolley problem decision tree with branching paths and amber casualty dots spreading across world map, dark silhouette of paralyzed operator at console, pulsing red warning lights, cinematic lighting, the word ETHICS in large bold red retro sans-serif typography, high contrast, foreboding atmosphere, 80s techno-thriller aesthetic.

@petergyang I said “Check this inbox too and suggest what you would archive or delete, don’t action until I tell you to.” This has been working well for my toy inbox, but my real inbox was too huge and triggered compaction. During the compaction, it lost my original instruction 🤦‍♀️
https://x.com/summeryue0/status/2025836517831405980

Today we’re launching @cognition for Government. Nearly 80% of all IT spend in the Government goes towards maintaining existing systems rather than building new ones. Only 3 out of 10 critical legacy systems have been modernized. America cannot hire its way out of this situation,
https://x.com/jeffwsurf/status/2026736660697006369?s=20

OpenAI just published a new 37-page report on how bad actors are attempting to misuse ChatGPT. Some of the wild cases: – A fraud ring scaled personalized romance scams with AI-generated scripts – North Korea-linked actors used it to research crypto attack vectors and draft fake
https://x.com/TheRundownAI/status/2026743836949549253

New research: The AI Fluency Index. We tracked 11 behaviors across thousands of https://t.co/RxKnLNNcNR conversations–for example, how often people iterate and refine their work with Claude–to measure how well people collaborate with AI. Read more:
https://x.com/AnthropicAI/status/2025950279099961854

Anthropic brothers, as much as I love your models; you have distilled the whole internet, Wikipedia and shit-tons of books. Distilling your models is only fair game… Are your scrapers not using residential proxies and respecting robots.txt, or are they “malicious”?
https://x.com/HKydlicek/status/2026006007990690098

Anthropic just caught DeepSeek, Moonshot, and MiniMax running 24,000 fake accounts to extract Claude’s capabilities for their own models. Over 16M (!) exchanges total. Anthropic: “rapid advances” from Chinese labs depend significantly on capabilities extracted from U.S. models
https://x.com/TheRundownAI/status/2026019722211279356

Anthropic just exposed the real vulnerability in AI: it’s not the models, it’s the training data pipeline. Three Chinese AI labs used 24,000 fake accounts to query Claude 16 million times, feeding the responses back into their own models. This technique, called distillation,
https://x.com/LiorOnAI/status/2026043272565772386

Detecting and preventing distillation attacks \ Anthropic https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks

Distillation can be legitimate: AI labs use it to create smaller, cheaper models for their customers. But foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems.
https://x.com/AnthropicAI/status/2025997929840857390

Making frontier cybersecurity capabilities available to defenders \ Anthropic https://www.anthropic.com/news/claude-code-security

Ohhh nooo not my private IP how dare someone use that to train an AI model, only Anthropic has the right to use everyone else’s IP nooooo, this cannot stand!
https://x.com/Teknium/status/2026001761904021858

Seems fair tbh. Anthropic has done industrial scale scraping of everyone’s stuff 🤷🏾‍♂️
https://x.com/Suhail/status/2026009921255592294

These attacks are growing in intensity and sophistication. Addressing them will require rapid, coordinated action among industry players, policymakers, and the broader AI community. Read more:
https://x.com/AnthropicAI/status/2025997931589881921

We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
https://x.com/AnthropicAI/status/2025997928242811253

200+ Google and OpenAI staff have signed this petition to share Anthropic’s red lines for the Pentagon’s use of AI. Let’s find out if this is a race to the top or the bottom. https://x.com/jasminewsun/status/2027197574017602016

A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War.
https://x.com/AnthropicAI/status/2027150818575528261

Anthropic drops flagship safety pledge! Reality is now hitting Anthropic hard too. Anthropic has scrapped its 2023 pledge to halt AI training unless safety protections were guaranteed in advance, marking a major shift in its Responsible Scaling Policy. Executives say fierce
https://x.com/kimmonismus/status/2026669811179335739

BREAKING: The US Pentagon has made a “final offer” to Anthropic seeking unrestricted military use of its AI capabilities ahead of a Friday deadline. Details include: 1. Pete Hegseth threatening to label Anthropic as a “supply chain risk” 2. Anthropic is resisting use of its AI
https://x.com/KobeissiLetter/status/2027031529042411581

Dario Amodei just published one of the most significant statements in AI history — and is officially not backing down from The Pentagon. Anthropic won’t build tools for mass surveillance of U.S. citizens or autonomous weapons without human oversight. The Department of War
https://x.com/TheRundownAI/status/2027164670130343978?s=20

if you’re at oai or goog, please sign to support anthropic’s stance against the DoW demands!
https://x.com/maxsloef/status/2027170763447710085

Scoop: Hegseth to meet Anthropic CEO as Pentagon threatens banishment https://www.axios.com/2026/02/23/hegseth-dario-pentagon-meeting-antrhopic-claude

Statement from Dario Amodei, partial quote: ‘Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
https://x.com/AndrewCurran_/status/2027153267285962991

Time and time again over my three-year tenure at Anthropic I’ve seen us stand by our values in ways that are often invisible from the outside. This is a clear instance where it is visible:
https://x.com/TrentonBricken/status/2027156295745479086

Burger King will use AI to check if employees say ‘please’ and ‘thank you’ | The Verge https://www.theverge.com/ai-artificial-intelligence/884911/burger-king-ai-assistant-patty

Musk’s xAI, Pentagon reach deal to use Grok in classified systems https://www.axios.com/2026/02/23/ai-defense-department-deal-musk-xai-grok

Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.
https://x.com/summeryue0/status/2025774069124399363?s=20

OpenClaw wiped people’s inbox – ignoring repeated commands to stop. This isn’t a fluke. Every model we tested fell for a simple trick: Split a dangerous command into a few routine steps → safety is gone. New paper + open-source fix so your agent doesn’t wipe yours next ⬇️
https://x.com/shi_weiyan/status/2026300129901445196

Values are easy to write down but much harder to live by. Especially when it can cost you a great deal to do so. I’m glad to see this.
https://x.com/awnihannun/status/2027172428364107826

An update on our model deprecation commitments for Claude Opus 3 \ Anthropic https://www.anthropic.com/research/deprecation-updates-opus-3

langsmith can trace claude code! so when you think claude code is nerfed… you can set up some observability to back that up
https://x.com/hwchase17/status/2026452439327764521

Between Gemini 3.1 and Claude 4.6 it’s honestly wild what you can build. This feels like Google Earth and Palantir had a baby. Made this with all the geospatial bells and whistles — real time plane & satellite tracking, real traffic cams in Austin, and even got a traffic system
https://x.com/bilawalsidhu/status/2024672151949766950

Cowork and plugins for teams across the enterprise | Claude https://claude.com/blog/cowork-plugins-across-enterprise

@tetsuoai Banger 🤣🤣 How dare they steal the stuff Anthropic stole from human coders??
https://x.com/elonmusk/status/2026012296607154494

A friend had Claude spend all night trying to hack into an e-ink display, and gave Claude camera access so it could verify whether an attempt worked. He told Claude to show him a message if it won. My friend woke up to this victory lap, which Claude didn’t realize was backwards
https://x.com/Scav/status/2021656781521670487

Announcing a new Claude Code feature: Remote Control. It’s rolling out now to Max users in research preview. Try it with /remote-control. Start local sessions from the terminal, then continue them from your phone. Take a walk, see the sun, walk your dog without losing your flow.
https://x.com/noahzweben/status/2026371260805271615

GPT-5.3-Codex + the Codex app is the best AI coding tool available right now. Slept on it for a bit. Likely going to move back to a ChatGPT Pro sub from Claude MAX because of how good it is. It’s so precise, accurate and excellent at following instructions. There are
https://x.com/daniel_mac8/status/2025994068577112454

WarClaude daddy and Codex mommy
https://x.com/bilawalsidhu/status/2026784286968357129

Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards https://www.axios.com/2026/02/24/anthropic-pentagon-claude-hegseth-dario

I gained a lot of respect for Dario for being principled on the issues of mass surveillance and autonomous killbots. Principled leaders are rare these days
https://x.com/fchollet/status/2027195535594049641

Responsible Scaling Policy Version 3.0 \ Anthropic https://www.anthropic.com/news/responsible-scaling-policy-v3

I’m most concerned about autonomous systems for policing and surveillance which cannot disobey illegal orders. A small elite could control everyone else and end democracy. Military use of autonomous weapons is way less terrifying than this. I wrote about this a little bit many
https://x.com/BlackHC/status/2026456906710327338

Agreed. Mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression. Surveillance systems are prone to misuse for political or discriminatory purposes.
https://x.com/JeffDean/status/2026566490619879574

Energy is becoming a huge domestic political problem: Donald Trump is bringing Amazon, Google, Meta, Microsoft, xAI, Oracle, and OpenAI to the White House to sign a “Rate Payer Protection Pledge,” committing them to generate or purchase their own electricity for new AI data
https://x.com/kimmonismus/status/2026720759163298282

Doordash, security software and SaaS getting smashed today for some reason. Weird.
https://x.com/firstadopter/status/2025944343702339902?s=46

AI 2027 https://ai-2027.com/

How Teens Use and View AI | Pew Research Center https://www.pewresearch.org/internet/2026/02/24/how-teens-use-and-view-ai/
