Image created with gemini-3.1-flash-image-preview and claude-sonnet-4-5. Image prompt: Wide-angle observational composition of a worn concrete security checkpoint at an industrial zone edge, rusty metal gate half-open, faded Chinese warning signs, single uniformed guard on plastic stool smoking, a real fire horse standing calmly beside the barrier in middle distance, overcast flat daylight, desaturated palette of concrete gray and rust, documentary realism style, large white Chinese-style text overlay reading SECURITY, patient Jia Zhangke long-take aesthetic
Pentagon threatens to cut off Anthropic in AI safeguards dispute https://www.axios.com/2026/02/15/claude-pentagon-anthropic-contract-maduro
Anthropic is prepared to loosen its current terms of use, but wants to ensure its tools aren’t used to spy on Americans en masse, or to develop weapons that fire with no human involvement. The Pentagon has said that Anthropic will “pay a price” for that behavior. Within this… https://x.com/kimmonismus/status/2023419652378955809
Measuring AI agent autonomy in practice \ Anthropic https://www.anthropic.com/research/measuring-agent-autonomy
Most agent actions on our API are low risk. 73% of tool calls appear to have a human in the loop, and only 0.8% are irreversible. But at the frontier, we see agents acting on security systems, financial transactions, and production deployments (though some may be evals). https://x.com/AnthropicAI/status/2024210050718585017
New Anthropic research: Measuring AI agent autonomy in practice. We analyzed millions of interactions across Claude Code and our API to understand how much autonomy people grant to agents, where they’re deployed, and what risks they may pose. Read more: https://x.com/AnthropicAI/status/2024210035480678724
NEW: Pentagon is so furious with Anthropic for insisting on limiting use of AI for domestic surveillance + autonomous weapons they’re threatening to label the company a “supply chain risk,” forcing vendors to cut ties. With @m_ccuri and @mikeallen https://x.com/DavidLawler10/status/2023425130148626767
Software engineering makes up ~50% of agentic tool calls on our API, but we see emerging use in other industries. As the frontier of risk and autonomy expands, post-deployment monitoring becomes essential. We encourage other model developers to extend this research. https://x.com/AnthropicAI/status/2024210053369385192
Something strange is happening with AI agents that this new Anthropic research quietly surfaces. The agents are asking us for help more than we’re stepping in to correct *them*. Anthropic analyzed data from Claude Code and their public API to measure how autonomous AI agents… https://x.com/omarsar0/status/2024864635120451588
Opus 4.6 found 500+ vulnerabilities in open-source code, and we’ve begun reporting them and contributing patches. Quick excerpts from some of them 🧵 https://x.com/trq212/status/2024937919937741290
News Alert: Today, the #FBI arrested three Silicon Valley engineers who are facing charges of conspiring to commit trade secret theft from Google and other leading technology companies, theft and attempted theft of trade secrets, and obstruction of justice. Samaneh Ghandali, 41,… https://x.com/FBISanFrancisco/status/2024670479974363376
We’re committing $7.5M to @AISecurityInst’s Alignment Project to fund independent research on mitigations for safety and security risks from misaligned AI. https://x.com/OpenAINewsroom/status/2024546609485533442
Introducing Lockdown Mode and Elevated Risk labels in ChatGPT | OpenAI https://openai.com/index/introducing-lockdown-mode-and-elevated-risk-labels-in-chatgpt/
Introducing Lockdown Mode for ChatGPT. Lockdown Mode is an advanced, optional security setting for higher-risk users, businesses, and enterprises. Lockdown Mode disables certain tools and capabilities in ChatGPT that an adversary could attempt to exploit to exfiltrate sensitive… https://x.com/cryps1s/status/2023441322838028362
It’s extremely unreasonable to say a company is a “supply chain risk” because it wants terms that prevent using the AI for mass domestic surveillance and lethal autonomous weapons. (Insofar as this is the situation.) 1/ https://x.com/RyanPGreenblatt/status/2023524096592802207
Introducing Claude Code Security, now in limited research preview. It scans codebases for vulnerabilities and suggests targeted software patches for human review, allowing teams to find and fix issues that traditional tools often miss. Learn more: https://x.com/claudeai/status/2024907535145468326
This is false. If no sanctions were present, then the gap would either be super small or non-existent. DeepSeek bros (@zheanxu & @chenggang_zhao) would absolutely cook on Rubin & Blackwells. https://x.com/zephyr_z9/status/2024437158988353630
SpaceX to Compete in Pentagon Contest for Autonomous Drone Tech – Bloomberg https://www.bloomberg.com/news/articles/2026-02-16/spacex-to-compete-in-pentagon-contest-for-autonomous-drone-tech?srnd=phx-technology