Image created with gemini-3.1-flash-image-preview, prompted via claude-sonnet-4-5. Image prompt: 1980s NORAD war room interior with large glowing CRT monitor wall displaying wireframe world map, bright amber search tendrils spreading aggressively across continents from origin point, dark silhouette of military operator in foreground, large bold red sans-serif text ‘DeepSeek’ prominently displayed across screens, deep black background, high contrast neon blue and amber wireframe graphics, cinematic lighting, foreboding techno-thriller atmosphere

Anthropic brothers, as much as I love your models: you have distilled the whole internet, Wikipedia, and shit-tons of books. Distilling your models is only fair game… Are your scrapers not using residential proxies and respecting robots.txt, or are they “malicious”?
https://x.com/HKydlicek/status/2026006007990690098

Anthropic just caught DeepSeek, Moonshot, and MiniMax running 24,000 fake accounts to extract Claude’s capabilities for their own models. Over 16M (!) exchanges total. Anthropic: “rapid advances” from Chinese labs depend significantly on capabilities extracted from U.S. models
https://x.com/TheRundownAI/status/2026019722211279356

Anthropic just exposed the real vulnerability in AI: it’s not the models, it’s the training data pipeline. Three Chinese AI labs used 24,000 fake accounts to query Claude 16 million times, feeding the responses back into their own models. This technique, called distillation, …
https://x.com/LiorOnAI/status/2026043272565772386

Detecting and preventing distillation attacks \ Anthropic https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks

Distillation can be legitimate: AI labs use it to create smaller, cheaper models for their customers. But foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems.
https://x.com/AnthropicAI/status/2025997929840857390
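For readers unfamiliar with the term the tweets keep using: distillation, in the training sense, means fitting a student model to a teacher model's output distribution rather than to hard labels. A minimal sketch of the classic soft-label objective, with temperature scaling (the logit values here are made up for illustration; real distillation operates on full model logits over a vocabulary):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T flattens the distribution,
    # exposing more of the teacher's "dark knowledge" about near-misses.
    exps = [math.exp(x / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions;
    # the T*T factor keeps gradient magnitudes comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * T * T

# A student that matches the teacher exactly incurs ~0 loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))
# A mismatched student gets a positive loss to minimize.
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]))
```

What Anthropic is alleging is this same mechanism run through a chat API: the "teacher logits" are replaced by sampled Claude responses, collected at scale through fraudulent accounts.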

Making frontier cybersecurity capabilities available to defenders \ Anthropic https://www.anthropic.com/news/claude-code-security

Ohhh nooo not my private IP how dare someone use that to train an AI model, only Anthropic has the right to use everyone else’s IP nooooo, this cannot stand!
https://x.com/Teknium/status/2026001761904021858

Seems fair tbh. Anthropic has done industrial scale scraping of everyone’s stuff 🤷🏾‍♂️
https://x.com/Suhail/status/2026009921255592294

These attacks are growing in intensity and sophistication. Addressing them will require rapid, coordinated action among industry players, policymakers, and the broader AI community. Read more:
https://x.com/AnthropicAI/status/2025997931589881921

We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
https://x.com/AnthropicAI/status/2025997928242811253

@tetsuoai Banger 🤣🤣 How dare they steal the stuff Anthropic stole from human coders??
https://x.com/elonmusk/status/2026012296607154494

A friend had Claude spend all night trying to hack into an e-ink display, and gave Claude camera access so it could verify whether an attempt worked. He told Claude to show him a message if it won. My friend woke up to this victory lap, which Claude didn’t realize was backwards
https://x.com/Scav/status/2021656781521670487

Announcing a new Claude Code feature: Remote Control. It’s rolling out now to Max users in research preview. Try it with /remote-control. Start local sessions from the terminal, then continue them from your phone. Take a walk, see the sun, walk your dog without losing your flow.
https://x.com/noahzweben/status/2026371260805271615

GPT-5.3-Codex + the Codex app is the best AI coding tool available right now. Slept on it for a bit. Likely going to move back to a ChatGPT Pro sub from Claude MAX because of how good it is. It’s so precise, accurate and excellent at following instructions. There are …
https://x.com/daniel_mac8/status/2025994068577112454

WarClaude daddy and Codex mommy
https://x.com/bilawalsidhu/status/2026784286968357129

DeepSeek is reportedly preparing to launch its new V4 AI model – release imminent, via CNBC. The market is pricing in potential crashes, and the NASDAQ is under pressure. Against this backdrop, Anthropic’s post could certainly be interpreted as accusing Chinese AI companies of …
https://x.com/kimmonismus/status/2026040919162822776

DeepSeek is serious about inference support on diverse hardware.
https://x.com/teortaxesTex/status/2026976510360322534

cool idea from DeepSeek in their DualPath paper! instead of loading all KVs directly onto GPUs from local NVMe (or DRAM) and bottlenecking on the local PCIe bus, they can stage the KVs in the DRAM on the decode GPU servers, and then transfer the KVs to the prefill GPUs via …
https://x.com/JordanNanos/status/2027126010576298469
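The win described in that tweet is simple bandwidth arithmetic: if the KV cache already sits in decode-server DRAM, shipping it over the cluster fabric can beat re-reading it from local NVMe. A back-of-envelope sketch, where the cache size and all bandwidth figures are hypothetical round numbers, not values from the DualPath paper:

```python
def transfer_seconds(num_bytes, gb_per_s):
    # Idealized transfer time: size divided by sustained bandwidth.
    return num_bytes / (gb_per_s * 1e9)

kv_bytes = 8 * 1e9   # hypothetical: 8 GB of KV cache for a long-context batch
nvme_gbps = 6        # hypothetical: sustained read from a local NVMe drive
fabric_gbps = 40     # hypothetical: inter-server RDMA/NVLink-class fabric

# Direct path: the NVMe read is the bottleneck before PCIe even matters.
direct = transfer_seconds(kv_bytes, nvme_gbps)
# Staged path: KVs already resident in decode-server DRAM, shipped over the fabric.
staged = transfer_seconds(kv_bytes, fabric_gbps)
print(f"direct ~{direct:.2f}s  staged ~{staged:.2f}s")
```

With these assumed numbers the staged path is several times faster; the real gap depends on the actual storage, PCIe generation, and fabric in the deployment.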
