Image created with gemini-2.5-flash-image and claude-sonnet-4-5. Image prompt: Create a 16:9 cinematic split-screen poster. LEFT SIDE (40% width): – A desk with a sturdy padlock and key resting next to printed network diagrams and simple risk checklists, representing cybersecurity in a calm, practical way. – The background is a turquoise / teal abstract field made of stylized blue rods or data fibers, indicating data protection. – Use neutral, soft lighting. Avoid red alarm tones, glowing warnings, and neon. RIGHT SIDE (60% width): – A green-toned abstract aerial forest canopy texture, symbolizing safety in a living environment. – Two clean rounded rectangles stacked vertically near the center-right. – The TOP rectangle contains the text: “Security”. – The BOTTOM rectangle contains the text: “2025/10/10”. – Clean sans-serif font, dark green or charcoal. OVERALL STYLE: – Vigilant yet reassuring. – No extra alert text. – Maintain the turquoise/forest split-screen.

CodeMender is a new AI agent from @GoogleDeepMind research that automates code security using Gemini Deep Think. > Upstreamed 72 security fixes to open source projects. > Patches codebases as large as 4.5 million lines. > Proactively rewrites code, eliminating entire classes of vulnerabilities. https://x.com/_philschmid/status/1975372666862510260

Excited to share early results about CodeMender, our new AI agent that automatically fixes critical software vulnerabilities. AI could be a huge boost for developer productivity and security. Amazing work from the team – congrats! https://x.com/demishassabis/status/1975551657514791272

I am proud to share the announcement about our CodeMender project at @GoogleDeepMind, an agent that can automatically fix a range of code security vulnerabilities. From only a modest-compute run, our agent submitted 72 high-quality fixes to vulnerable code in popular codebases… https://x.com/ralucaadapopa/status/1975242772467822738

Introducing CodeMender: an AI agent for code security – Google DeepMind https://deepmind.google/discover/blog/introducing-codemender-an-ai-agent-for-code-security/

Software vulnerabilities can be notoriously time-consuming for developers to find and fix. Today, we’re sharing details about CodeMender: our new AI agent that uses Gemini Deep Think to automatically patch critical software vulnerabilities. 🧵 https://x.com/GoogleDeepMind/status/1975185557593448704

New research with the UK @AISecurityInst and the @turinginst: We found that just a few malicious documents can produce vulnerabilities in an LLM—regardless of the size of the model or its training data. Data-poisoning attacks might be more practical than previously believed. https://x.com/AnthropicAI/status/1976323781938626905

A small number of samples can poison LLMs of any size – Anthropic https://www.anthropic.com/research/small-samples-poison

5 things: Nvidia’s Huang on the state of the AI race with China https://www.cnbc.com/2025/10/08/nvidia-huang-ai-race-china-us-trump.html

Building AI for cyber defenders – Anthropic https://www.anthropic.com/research/building-ai-cyber-defenders

Last week we released Claude Sonnet 4.5. As part of our alignment testing, we used a new tool to run automated audits for behaviors like sycophancy and deception. Now we’re open-sourcing the tool to run those audits. https://x.com/AnthropicAI/status/1975248654609875208

Petri: An open-source auditing tool to accelerate AI safety research https://alignment.anthropic.com/2025/petri/

AI-Generated Tests are Lying to You | David Adamo Jr. https://davidadamojr.com/ai-generated-tests-are-lying-to-you/

My infant year as an AI researcher — Moving from physics to AI https://alfredyao.github.io/posts/2025-10-06.html

One Tuesday night, as my wife and I sat down for dinner, a sheriff’s deputy knocked on the door to serve me a subpoena from OpenAI. I held back on talking about it because I didn’t want to distract from SB 53, but Newsom just signed the bill so… here’s what happened: 🧵 https://x.com/_NathanCalvin/status/1976649051396620514

OpenAI Guardrails Documentation https://guardrails.openai.com/docs/

There’s quite a lot more to the story than this. As everyone knows, we are actively defending against Elon in a lawsuit where he is trying to damage OpenAI for his own financial benefit. Encode, the organization for which @_NathanCalvin serves as the General Counsel, was one… https://x.com/jasonkwon/status/1976762546041634878

Strategic collaboration with Japan’s Digital Agency to bring OpenAI-powered tools to Japanese government employees: https://x.com/gdb/status/1973619271239700631

With the US falling behind on open source models, one startup has a bold idea for democratizing AI: let anyone run reinforcement learning. https://x.com/WIRED/status/1975993813995774448


