About This Week’s Covers

This week’s cover was inspired by one of the most meaningful weekends I’ve had in a very long time. My good friend Brett Hurt (friends for 20 years!) flew out to spend three days with me. We used the time to be together as friends, and our side mission was to talk rigorously about artificial intelligence. We planned to record the fourth episode of Brett’s podcast, Love Conquers Fear, where we would discuss whether AI might help humanity reach a place of abundance for everyone.

I wanted to find a grounding backdrop where we could have those heavy conversations, especially because these topics can easily feel cold or sterile. My friend Kathleen let us stay at her vacation house, Surfhorse Camp, on the water near Assateague, giving Brett and me a backdrop of nature while we talked about technology. Because the house could sleep several people, I invited my mom, my wife, and my daughter to stay with us, adding an important element of humanity, so we didn’t stray too far from reality.

I also invited my lifelong friend Kevin and his wife Heidi to visit us by boat. Kevin and Heidi baked scones from Kevin’s grandmother’s recipe (included at the bottom of this post!) and brought them fresh from the oven, wrapped up on the boat, with butter and jellies.

“Love doesn’t just sit there, like a stone, it has to be made, like bread; remade all the time, made new.” ― Ursula K. Le Guin, The Lathe of Heaven

We ate breakfast and recorded our podcast, which was incredibly special, but more importantly, we simply got to spend time together and be good friends.

Brett and I talked a lot about technology and the fate of humanity, but along the way we also went to the beach with my wife and daughter, made a bonfire, and had s’mores under the stars near the wild horses. We walked the boardwalk and rode the Haunted House.

Sunset on the beach with horses, a bonfire, and s’mores! Note that Brett is a huge fan of New Order and is wearing a super cool vintage Obvious Corp shirt with its Joy Division Unknown Pleasures-inspired logo.

I’m deeply thankful for my friendship with Brett over the years. He is a remarkable human being, full of integrity, character, heart, and courage. He has been an unconditional friend of mine and wind in my sails throughout my life. Some people just always see the best in you, and Brett is one of them, and vice versa.

Hanging with Brett in Austin, TX

Some people always see the best in you, and Brett is one of them.

For this week’s cover, I took Brett’s template that he uses for his podcast, Love Conquers Fear. I replaced the image of me from our episode with the new Figure 03 robot. Notably, I swapped out the Figure logo for a heart, representing the theme of love winning.

For the category covers, I took a list of my 53 categories and ran it through a Python script, which automates the image creation using Claude to write prompts and Gemini 2.5 to generate the images. The results came out OK…and given that I was recreating a very specific image using only a fairly vague rubric, it did a solid job. My favorite six are below:
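For flavor, here’s a minimal, runnable sketch of what that batch loop can look like. The rubric text, the helper names, and the two lambdas standing in for the Claude and Gemini API calls are all hypothetical illustrations, not my actual script:

```python
# Sketch of the batch cover-generation pipeline. The two callables
# (write_prompt, generate_image) are hypothetical stand-ins for whatever
# SDK wraps Claude and Gemini; build_prompt is pure and testable.

RUBRIC = (
    "Recreate the weekly cover style: a single bold subject on a clean "
    "background, with the category name as a short caption."
)

def build_prompt(category: str, rubric: str = RUBRIC) -> str:
    """Compose the text handed to the prompt-writing model."""
    return f"{rubric}\nCategory: {category.strip()}"

def generate_covers(categories, write_prompt, generate_image):
    """Run the pipeline once per category; the callables wrap the
    Claude (prompt-writing) and Gemini (image) API calls."""
    covers = {}
    for category in categories:
        image_prompt = write_prompt(build_prompt(category))
        covers[category] = generate_image(image_prompt)
    return covers

# Stubbed run (no network): swap the lambdas for real API wrappers.
demo = generate_covers(
    ["Robotics", "Open Source"],
    write_prompt=lambda p: f"PROMPT<{p}>",
    generate_image=lambda p: f"IMAGE<{p}>",
)
```

With 53 categories, the same loop just takes a longer list.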


For this week’s Humanities reading, I wanted to reference the reading materials that Brett and I talked about in our podcast episode. However, I couldn’t remember the names of all of the readings.

This was a great opportunity to showcase the power of Google Gemini’s multimodality. I gave Gemini a link to the YouTube episode and asked it to tell me all the books that were mentioned. My only prompt was literally “Can you tell me all the books mentioned in this episode?”

Here’s a link to the conversation, so you can see it in action, and here’s an excerpt of the results: https://gemini.google.com/share/8905971e9ec4

Based on the episode transcript, here are the books mentioned, along with their authors and the timestamps where they appear:

I’ve put the reading excerpts at the end of the executive summary, but I wanted to showcase Gemini’s cool YouTube abilities! Now to the news of the week…

This Week By The Numbers

Total Organized Headlines: 523

This Week’s Executive Summaries

This week has 523 headlines, and 90 of them are going to inform our executive summary. We’ll start with Business updates, followed by Agent news, Ethics and Alignment, Science, Video, Robotics, and wrap up with Open Source.

This Week’s Top Story

But first, this week has a major milestone that could mark the beginning of the end of the traditional internet as we know it, and the end of page views and browsers.

Years ago, when Amazon released the Echo, publishers were given the ability to create Skills. An example could be a subscription-based platform like Spotify: all you had to do was ask your Amazon Alexa to connect your Spotify account. You would authenticate between Amazon and Spotify, and from then on, you could simply ask Alexa to play music… and your entire Spotify library was integrated into the speaker.

This is happening with internet content now, as this week OpenAI released an app development kit for third parties to integrate services directly into ChatGPT.

This could be news from your favorite source or the ability to search for and buy products from a store. Whether it’s content or commerce, the more we can accomplish within the conversational chat, the less we need to browse the internet.

Amazon Echo has made it so I don’t have to pick up my phone to play music or hear the news. I can say into the room that I want to hear a song, and it happens. I can ask, “What’s the latest news from NPR?” and I hear it.

Now, for those times when we do have to sit down and type, we don’t need to navigate the web. We can just speak to ChatGPT and get real-time, integrated information.

I wrote about this in a November 2023 article for Delaware Lawyer. Incredibly, the title references The Adjacent Possible, a concept that Brett and I discussed in the podcast!

The chatbots are so good at banter that we mistake the banter for the data. But the banter is just the interface… to data, content, and soon to embodied robots.

An important point from my article in November 2023:

ChatGPT is so proficient at conversation that people mistake it for an expert, when in fact, it’s just a dialog tool. A language model could be compared to a charismatic dilettante at a cocktail party. This person comes across as a genius – a multilingual polymath – because they are adroit at making small talk.

However, the ChatGPT most people “met” in 2023 was winging it – improvising convincingly and often effectively. In this analogy, common critiques such as ‘it makes things up’ or ‘it struggles with math and word counting’ are understandable. The free version of ChatGPT is a basic example of a large language model, an intuitive name for a ‘smooth talker.’

The world’s best small talker becomes startlingly powerful when given resources, an internet connection, self-reflection, and time. Agency is when an AI can do things on our behalf — browsing the web, making a purchase, booking a flight, replying to an email, or making a restaurant reservation.

“The AI Future: Exploring the Adjacent Possible with Emerging AI Solutions”

Whether this is good or bad isn’t the question. The question is: if this becomes the new platform approach, what does that mean for the browser’s 30-year run?

There was a time before the internet browser. There was a time before page views and impressions. And I imagine there will be a time afterward. I think this week marks a very big shift toward that change.

Wearing my Netscape sweatshirt in 1995. I emailed them and they mailed it to me for free. #woodpaneling

Business

Google released staggering usage numbers for September.
Google Gemini had over 1 billion visits in September, marking their ninth consecutive month of growth. Google processed over 1.3 quadrillion tokens in September. That’s 500 million tokens per second.
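The math checks out. Assuming a 30-day month:

```python
# Sanity-checking Google's September figure: does 1.3 quadrillion tokens
# in a 30-day month really work out to roughly 500 million tokens/second?
tokens_per_month = 1.3e15
seconds_per_month = 30 * 24 * 3600           # 2,592,000 seconds
tokens_per_second = tokens_per_month / seconds_per_month
print(round(tokens_per_second / 1e6))        # → 502 (million tokens/second)
```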

OpenAI held their Dev Day and announced usage numbers too.
In 2023, OpenAI held their first Dev Day with 2 million developers and 100 million weekly ChatGPT users. Back then they were processing 300 million tokens per minute on their API. Now they have 4 million developers and over 800 million users, with 6 billion tokens processed per minute.

OpenAI acquired a company called Roi, an AI-powered finance app.
This was an acquihire, meaning they hired the people more than bought the product.

OpenAI partnered with AMD for a five-year GPU supply to deploy 6 gigawatts of GPUs.
TechCrunch reports that even after the Stargate, Oracle, NVIDIA, and AMD news, OpenAI has even more large deals on the way.

Elon Musk’s xAI has raised more than initially planned.
NVIDIA has added almost $2 billion to the equity portion, bringing the funding round to $20 billion.

Agents

OpenAI Apps
Returning to the top story, OpenAI’s Dev Day included a live demo of apps inside ChatGPT. At launch, the app partners include Booking.com, Canva, Coursera, Expedia, Figma, Spotify, and Zillow. I’m proud of myself for calling Spotify two years ago. They are clearly a predictable launch partner!
https://openai.com/index/introducing-apps-in-chatgpt/

This will push the dialogue about how publishers view getting paid for their content. Historically, publishers were upset that they were “training the chatbot,” whereas I’ve always felt that the chatbot is simply a language interface and not meant to be an answer-bot.

There’s a very big difference between asking a chatbot to do something…and getting lost in the sauce because LLMs are so good at general knowledge that we think they should riff facts as part of next token prediction. It’s insane that it works at all for answering questions… We called the predictable gap in chatter prediction… hallucinations.

In reality, there’s no need for hallucinations once we know what a chatbot is meant to do: understand our question… and then go get the answer with tool use and integrations. We’ve been dunking on six-year-old LeBron. Dunning-Kruger at its finest. Now LeBron’s getting competitive.

The more we think about chatbots as interfaces for agents… which then go out and do work, the better we’re all going to be. That shifts the conversation away from publishers arguing that they should be paid to train a chatbot to speak, and instead toward the idea that their content should be paywalled so that if you want real-time access within a chat, the chatbot has to verify that you’re paying for it… or at least subscribing in some way.

The more we think about chatbots as interfaces for agents… which then go out and do work, the better we’re all going to be.

The big conundrum now will be the combination of web browsing and agents that can find free content as an alternative to paid content. A lot of content is plentiful, not copyrighted, and easily discoverable, for example, sports scores, the weather, a recipe for chili… these may be hard to put behind a paywall.

However, for the things we rely on, where we can’t risk hallucinations and we require confidence, security, and structure, like commerce, finance, real estate, or legal interactions, the app model will work really well. The impact on page views and browsers will be a ring on the tree of history… and could mark their end over the next few years.
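As a toy illustration of that verified-access model, where the chatbot checks a subscription before serving paid content inside a chat (all names here are hypothetical, not any real publisher API):

```python
# Toy sketch of paywalled content inside a chat: the agent must verify a
# user's entitlement before returning a publisher's article. Entitlements
# and fetch_article are illustrative names, not a real API.

class Entitlements:
    """Tracks which publishers this user is subscribed to."""
    def __init__(self, subscriptions):
        self._subs = set(subscriptions)

    def allows(self, publisher: str) -> bool:
        return publisher in self._subs

def fetch_article(publisher: str, article_id: str, user: Entitlements) -> str:
    """Return full text only for verified subscribers; otherwise a notice."""
    if not user.allows(publisher):
        return f"[{publisher}] is paywalled: subscribe to read it in chat."
    return f"Full text of {publisher}/{article_id} for a verified subscriber."

user = Entitlements({"example-news"})
print(fetch_article("example-news", "a1", user))   # full text
print(fetch_article("other-paper", "b2", user))    # paywall notice
```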

OpenAI Agent Tools
As if connecting two ends of a tunnel, OpenAI also released their agent development toolkit… that allows developers and companies to build and optimize agents. These are very robust tools for building production-level, complex agents with pipelines, tuning, and front-end design. It might even be better to think of these as enterprise applications. Agent Builder includes a visual development canvas where you can drag and drop logic and connect tools. It’s very similar to n8n, if you’re familiar with that. In fact, it basically competes directly with a lot of these development tools. https://openai.com/index/introducing-agentkit/

So, as you can tell, we’ve got commerce and apps being integrated into chats, and then offline from the chats we’ve got agents that can use the engine to accomplish tasks. As they say, it’s difficult to fight a two-front war, and that’s what’s happening right now with the entire internet legacy infrastructure.

I haven’t had a chance to play with the app agent developer kit, but it seems really fun, especially for simple things. Even if you’re not a developer, there are things you can do to take routine tasks, automate them, and have some fun.

Quick example of Gemini’s ability to digest and display large amounts of information.
I pasted the previous two sections (verbatim) into Gemini’s image tool and prompted: “Please read this overview and then make an illustration or cartoon that shows the two sides of the equation getting chipped away, like the tunnel analogy.”


OpenAI Turning ChatGPT into An Operating System
If we consider ChatGPT as a platform supporting third-party apps, their ecosystem could become quite robust.

During chats, we can now tag apps and interact with them, and it starts to feel like a Discord conversation or Slack channel…except instead of talking to people, we’re interacting with apps, commerce, and news. It’s a very rich environment that’s fair to call an operating system.

TechCrunch interviewed Nick Turley, who is in charge of commercializing ChatGPT, and to put it bluntly, Turley said he’s taking his cues from web browsers. Nick points out that over the last decade, browsers and cloud-based software have effectively emerged as a new type of operating system and they’ve become the place people go to work.
https://techcrunch.com/2025/10/08/openais-nick-turley-on-transforming-chatgpt-into-an-operating-system/

OpenAI ChatBots for Third Parties
The last big piece of news this week is that OpenAI announced a developer kit to create chatbots that leverage ChatGPT on third-party sites. For example, a human resources chatbot, a customer service chatbot, or any kind of front-end interface that takes all of our data or content and makes it “chattable,” for lack of a better term. This is a pretty cool tool for those who want to go in that direction.

Of course, there are a ton of open-source options for “grounding local conversations.” Almost everything OpenAI has been releasing has an open-source counterpart. One way to think about it might be brand-name versus generic drugs… it really depends on your style, which brand you prefer, price, and flexibility.

If history is any guide, the next phase of all of this would be corporate alignments and partnerships. And even though that’s usually how things go, in this case it feels like everything is becoming ubiquitous rather than defined by business relationships, the way publishing houses or movie studios behaved in the past. Everyone seems to be operating without much loyalty.

Google Gemini Computer Use
The hits keep on coming, because Google announced Gemini 2.5 computer use. This allows Gemini to control a user interface through vision understanding. If you’ve been following along with my newsletter, you’ve heard of multimodal AI, where artificial intelligence can understand the complete context of an image…all of the objects and everything inside it. It turns out computer interfaces are remarkably more predictable than pictures of crowds or fishing boats, because they’ve been designed to be interfaces.

Google’s been getting ready for this for a long time… here’s a slide from my presentation “AI Trends and The Internet’s Future in 40 Minutes”.

This example is from April 2022! Google’s been getting ready for this for a long time.

That means interfaces can be learned fairly easily by AI and then navigated by looking at screenshots. The Gemini computer-use agent can look at screenshots and decide what it needs to do to get you to the goal. It integrates with your operating system to use the computer. After it takes an action, it captures another image to see how the screen changed. Through this loop of reasoning, action, images, and responses, it can navigate your computer.
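That loop is simple enough to sketch. Here the screenshot, the model call, and the OS action are all stubbed with toy functions so only the control flow is shown; nothing below is the real Gemini API:

```python
# Skeleton of the observe -> reason -> act loop. In the real system,
# "capture" returns pixels and "choose_action" is a model call; here both
# are toy stand-ins so the loop is runnable end to end.

def run_agent(goal, capture, choose_action, apply_action, max_steps=10):
    trace = []
    for _ in range(max_steps):
        screen = capture()                    # observe: grab a screenshot
        action = choose_action(goal, screen)  # reason: model picks next step
        trace.append(action)
        if action == "done":
            break
        apply_action(action)                  # act: click/type via the OS
    return trace

# Toy environment: a click counter stands in for the screen state.
state = {"clicks": 0}
trace = run_agent(
    goal="click the button three times",
    capture=lambda: state["clicks"],
    choose_action=lambda goal, screen: "done" if screen >= 3 else "click",
    apply_action=lambda a: state.update(clicks=state["clicks"] + 1),
)
print(trace)   # ['click', 'click', 'click', 'done']
```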

What’s interesting is that visual understanding isn’t really the hard part… it’s the agentic reasoning capability that has only recently come into play.
https://blog.google/technology/google-deepmind/gemini-computer-use-model/

Most of the functionality is through an API, which makes it hard for most people to see it in action. That said, there are plenty of videos, and there’s also a sandbox example where you can try it out in a web browser with an emulator. https://gemini.browserbase.com/

Google Launches Agent That Finds and Fixes Vulnerabilities In Code
From Google: “CodeMender helps solve this problem by taking a comprehensive approach to code security that’s both reactive, instantly patching new vulnerabilities, and proactive, rewriting and securing existing code and eliminating entire classes of vulnerabilities in the process. Over the past six months that we’ve been building CodeMender, we have already upstreamed 72 security fixes to open source projects, including some as large as 4.5 million lines of code.”
https://deepmind.google/blog/introducing-codemender-an-ai-agent-for-code-security/

BrowserBase Trains AI on Browser Use in Parallel – 54x Faster
Instead of running the agents on a single browser and waiting for a task to complete before moving on to the next, it’s possible to branch off and train and evaluate in parallel, all at the same time. “Thanks to Browserbase’s concurrent browser infrastructure, we were able to condense ~18 browser hours into 20 minutes of total runtime.”
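The 54x figure in the headline is straight arithmetic:

```python
# Checking the "54x" claim: ~18 browser-hours of work condensed into
# 20 minutes of wall-clock time.
browser_minutes = 18 * 60        # 1,080 minutes of browser work
wall_clock_minutes = 20
print(browser_minutes / wall_clock_minutes)   # → 54.0
```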

Amazon Post on Browser Use
Amazon published a report titled “What makes browser use hard for AI Agents?”
https://labs.amazon.science/blog/what-makes-browser-use-hard-for-ai-agents

Gemini Integrates Stripe and Third Parties in the CommandLine
Google introduced extensions for the command-line interface that let you connect to third-party libraries directly via the command line. As you write code, you can reference those third-party tools, and the command line can insert code that integrates with approved services, without hallucinations and with sanctioned tool usage.
https://blog.google/technology/developers/gemini-cli-extensions/

For example, with Stripe, Gemini can create payment links inside an application. It can build invoices, list customers and subscriptions, create refunds, and search Stripe knowledge documents. Google is using Anthropic’s MCP protocol to enable these extensions. https://blog.google/technology/developers/gemini-cli-extensions/ https://stripe.com/blog/introducing-our-agentic-commerce-solutions
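The underlying idea, a registry of sanctioned tools the model can dispatch to but cannot invent, can be sketched in a few lines. Everything here (the registry, the fake Stripe helper, the URL) is illustrative, not the real Gemini CLI or Stripe MCP surface:

```python
# Toy sketch of sanctioned tool use: the model may only invoke tools an
# extension has registered; anything else is rejected rather than
# hallucinated. The registry and stripe_payment_link are stand-ins.

TOOLS = {}

def register_tool(name):
    """Decorator that adds a function to the sanctioned-tool registry."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@register_tool("stripe.create_payment_link")
def stripe_payment_link(amount_cents: int, currency: str = "usd") -> str:
    # A real extension would call Stripe over MCP; we fake a link.
    return f"https://pay.example.com/link?amt={amount_cents}&cur={currency}"

def dispatch(tool_name, **kwargs):
    """Route a model-requested call, refusing unregistered tools."""
    if tool_name not in TOOLS:
        raise ValueError(f"unsanctioned tool: {tool_name}")
    return TOOLS[tool_name](**kwargs)

print(dispatch("stripe.create_payment_link", amount_cents=1999))
```

The point of the registry is that hallucinated tool names fail loudly instead of producing plausible-looking but fake integrations.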

Claude Code Plugins
Along the same lines of apps and agent integrations, but slightly less exciting, Anthropic announced Claude Code Plugins. These are essentially functions or sub-agents that you can call within your code. They’re mostly behind-the-scenes tools, but what’s neat is that they allow people to share tools that others can use. If you’re managing a team, you could develop plugins that you require your team to use for security purposes, and similar needs. It’s basically a standard structure that allows people to augment their code.
https://claude.com/blog/claude-code-plugins

Ethics, Security, and Alignment

Deloitte Makes Fools of Themselves
Deloitte is in an embarrassing position after the Australian government paid the firm $290,000 for a report examining automated penalties in Australia’s welfare system. The report included fabricated quotes attributed to judges, as well as non-existent reports attributed to law and software engineering experts. There was a non-existent book with a title so absurd that it caught one reviewer’s eye and tipped them off that the content was hallucinated.

This report was not supposed to be a showcase for AI use. It was simply a routine Deloitte assignment that the Australian government commissioned. But Deloitte clearly misused AI and got caught. That’s incredibly disturbing.

Are there any adults in the room anymore, anywhere? Good reminder to be the adults ourselves.
https://apnews.com/article/australia-ai-errors-deloitte-ab54858680ffc4ae6555b31c8fb987f3

AI Beats Most Humans At Forecasting – On Track To Beat All By 2026
Ethan Mollick posts: “From the team tracking human forecasting abilities for years: Current models already beat most humans and ‘A linear extrapolation of state-of-the-art LLM forecasting performance suggests LLMs will match superforecasters in November 2026.’”
https://x.com/emollick/status/1976026718801736012

“A year ago, when we first released ForecastBench, the median forecast from a group of members of the public sat at #2 in our leaderboard—trailing behind only superforecasters. Today, the median public forecast is beaten by multiple LLMs, putting it at #22 in our new Tournament leaderboard.”


The list of questions and methods is actually very interesting.
https://forecastingresearch.substack.com/p/ai-llm-forecasting-model-forecastbench-benchmark

OpenAI Launches Benchmarks re Political Bias
From Natalie Staud, “ChatGPT shouldn’t have political bias in any direction. Today, we’re sharing new research that defines what political bias means in LLMs, and we introduce a new evaluation framework to measure and reduce it. This has been the most meaningful work I’ve done at OpenAI, and I say that as someone who got to be part of the ChatGPT launch!!” https://x.com/nataliestaud/status/1976382637104300329 https://openai.com/index/defining-and-evaluating-political-bias-in-llms/

Anthropic: Tiny Amounts of Data Can Poison an LLM
“In a joint study with the UK AI Security Institute and the Alan Turing Institute, we found that as few as 250 malicious documents can produce a “backdoor” vulnerability in a large language model—regardless of model size or training data volume. Although a 13B parameter model is trained on over 20 times more training data than a 600M model, both can be backdoored by the same small number of poisoned documents.” https://www.anthropic.com/research/small-samples-poison
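To see why that’s alarming, note that the finding suggests the attack depends on an absolute document count, not a fraction of the corpus. With illustrative corpus sizes (only the 250-document figure and the 20x ratio come from the study):

```python
# The Anthropic finding: ~250 poisoned documents backdoor both a small
# and a large model, even though the larger corpus is 20x bigger. The
# corpus sizes below are hypothetical round numbers for illustration.

POISONED_DOCS = 250
small_corpus = 10_000_000            # hypothetical document count
large_corpus = small_corpus * 20     # "over 20 times more training data"

frac_small = POISONED_DOCS / small_corpus
frac_large = POISONED_DOCS / large_corpus
# The poisoned fraction shrinks 20x, yet the attack still works:
# what matters is the count, not the proportion.
print(f"{frac_small:.6%} vs {frac_large:.6%}")
```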

Science

Google Chief Scientist Wins Nobel Prize in Physics
“This morning, Michel Devoret, currently at Google as Chief Scientist of Quantum Hardware on the Quantum AI team, has been awarded the 2025 Nobel Prize in Physics. The prize is for his work on macroscopic quantum effects that laid the foundation for modern superconducting qubit-based quantum computing. Michel shares the honor with John Martinis, former hardware leader at Google Quantum AI, and John Clarke of the University of California, Berkeley.

Our Quantum AI team is incredibly proud to see Michel and John recognized for their pioneering work, and it’s another exciting moment at Google. They join a distinguished group of now 5 Nobel-winning Googlers and alumni, including 2024 winners Demis Hassabis, John Jumper and Geoffrey Hinton.” https://blog.google/inside-google/company-announcements/googler-michel-devoret-awarded-the-nobel-prize-in-physics/

AstraZeneca, Algen Biotechnologies pen $555M AI pact for immunology targets
“The pact will see Algen use its so-called ‘AlgenBrain’ platform to drive early-stage drug discovery for AstraZeneca. The hope is to find a series of next-generation immunology therapies ‘using advanced CRISPR gene modulation and AI-driven drug discovery,’ according to a joint statement. The deal allows AstraZeneca exclusive rights to develop and sell therapies ‘against a defined set of targets identified and selected through the partnership,’ the company said in a statement.” https://www.fiercebiotech.com/biotech/astrazeneca-algen-biotechnolgies-pen-555m-ai-pact-immunology-targets

Video

OpenAI Sora Buzz Continues
OpenAI’s Sora video model continues to generate reactions, and the numbers are starting to come in. The Sora app hit 1 million downloads in five days, which is even faster than ChatGPT did, despite being invite-only and limited to North America. The addition of the Cameo feature — where you can upload your face and allow others to make remixes — was rolled out in a viral move with Jake Paul, who racked up 1 billion views in six days.

The Sora app hit 1 million downloads in five days, which is even faster than ChatGPT did, despite being invite-only and limited to North America.

At the same time, watermark-removal software has been flooding the internet, allowing people to strip off the Sora watermark. https://www.404media.co/sora-2-watermark-removers-flood-the-web/

Ethan Mollick demonstrated two strengths of Sora. One is its ability to jam-pack a prompt into ten seconds. In one example, he asked it to create the most over-the-top Hallmark movie clip possible in ten seconds!

Ethan also tested how much the model could handle by giving it an extremely complicated prompt: “Ethan Mollick parachuting into a volcano, explains the three forms of legitimation from DiMaggio, Paul; Powell, Walter. (April 1983). ‘The iron cage revisited: institutional isomorphism and collective rationality in organizational fields.’”

Robots

Figure Debuts the Figure 03 Humanoid
Figure released the third generation of their humanoid robot, Figure 03, with four primary features/goals:
https://www.figure.ai/news/introducing-figure-03

First, it’s built to accommodate a locally hosted onboard vision + reasoning + action model called Helix, which allows vision, language, and action to work together inside the robot. Clearly, the brain is an important part of making a robot effective, and Figure 03 is an upgrade designed specifically to support the proprietary Figure brain, Helix. Over a year ago, Figure abandoned its partnership with OpenAI to co-develop the brain, choosing to go it alone.

Second, the robot is designed to handle tasks in a home environment, including folding laundry, working with food, and dealing with delicate materials like eggs, with the goal of integrating into everyday life.

Third, Figure 03 emphasizes manufacturability at scale, so they can build a supply chain and produce these robots in large numbers.

And lastly, the company is targeting commercial applications, where Figure could work in factories and other industrial settings.

Figure 03 was the inspiration for my cover image this week, but I swapped the Figure logo for a heart, in the spirit of “Love Conquers Fear”.

Figure also announced: “This week, Figure has passed 5 months running on the BMW X3 body shop production line. We have been running 10 hours per day, every single day of production! It is believed that Figure and BMW are the first in the world to do this with humanoid robots.” https://x.com/adcock_brett/status/1975197913178587172

Walmart Starts Selling A $21,600 Chinese Robot
“Walmart is now shipping the Unitree G1 humanoid robot directly within the US. Only the basic trim is available, priced at $21,600. Free shipping, and you can order a batch of six in one shot.” https://x.com/TheHumanoidHub/status/1975637835643535648

Open Source

Reflection Raises $2 Billion for Open Source LLM Training
“Today we’re sharing the next phase of Reflection. We’re building frontier open intelligence accessible to all. We’ve assembled an extraordinary AI team, built a frontier LLM training stack, and raised $2 billion… We are thankful for the support of our investors including B Capital, Citi, CRV, Disruptive, DST, Eric Schmidt, Eric Yuan, Lightspeed, NVIDIA, Sequoia, 1789 and others.” https://reflection.ai/blog/frontier-open-intelligence

This Week’s Humanities Reading

For this week’s Humanities reading, I wanted to reference materials that I talked about with Brett in our interview. I gave Gemini a link to the YouTube episode and simply asked it to tell me all the books that were mentioned. https://gemini.google.com/share/8905971e9ec4 https://www.youtube.com/watch?v=HS34S7iihII

I decided to go with a quote from Steven Johnson about the adjacent possible, along with a few quotes from Ursula K. Le Guin, two from The Left Hand of Darkness.

The first quote from Ursula K. Le Guin is a nod to Kevin and Heidi’s scones. But first, I have obtained the scanned original, yellowed scone recipe from the 1970s!

The original copy of Grandma Gilmore’s (aka World’s Best) Scone Recipe – Use unbleached flour

“Love doesn’t just sit there, like a stone, it has to be made, like bread; remade all the time, made new.” ― Ursula K. Le Guin, The Lathe of Heaven

I gave Gemini the recipe and asked it to clean it up and add Kev and Heidi on their boat.

“To learn which questions are unanswerable, and not to answer them: this skill is most needful in times of stress and darkness.” ― Ursula K. Le Guin, The Left Hand of Darkness

“The only thing that makes life possible is permanent, intolerable uncertainty: not knowing what comes next.” ― Ursula K. Le Guin, The Left Hand of Darkness

“The adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself. Yet is it not an infinite space, or a totally open playing field. The number of potential first-order reactions is vast, but it is a finite number, and it excludes most of the forms that now populate the biosphere. What the adjacent possible tells us is that at any moment the world is capable of extraordinary change, but only certain changes can happen.” ― Steven Johnson, Where Good Ideas Come From

Also relevant is the humanities reading from August 15, 2025, John Milton’s Paradise Lost:

“The mind is its own place, and in itself can make a heaven of hell or a hell of heaven.”

Full Executive Summaries with Links, Generated by Claude Sonnet 4

Google Gemini surpasses one billion monthly visits for first time
Gemini’s milestone reflects AI chatbots entering mainstream adoption, with 285% year-over-year growth marking the fastest expansion among major AI platforms. Separately, Google processed 1.3 quadrillion tokens monthly, demonstrating the massive computational scale now required to serve growing user demand.

Google Gemini in September 2025: → 1.057B visits. → First time surpassing 1 billion visits. → 9th consecutive month of growth. → +285.07% YoY. → +46.24% MoM. https://x.com/Similarweb/status/1976206499191062758

We processed over 1.3 Quadrillion tokens last month – that’s 1,300,000,000,000,000 tokens! or to put it another way that’s 500M tokens a second or 1.8 Trillion tokens an hour… 🤯 https://x.com/demishassabis/status/1976712484657475691

OpenAI reports 4 million developers and 800 million users, up from 2 million and 100 million in 2023
The company’s API now processes 6 billion tokens per minute—a 20x increase from 300 million last year—indicating massive enterprise adoption beyond consumer ChatGPT usage. This growth suggests AI tools are moving from experimental to production-scale business applications across industries.

I still remember our first Dev Day in 2023. Back then, we had 2 million developers and 100 million weekly ChatGPT users, and we were processing about 300 million tokens per minute on our API. Today, 4 million developers have built with OpenAI, more than 800 million people use…” https://x.com/nickaturley/status/1975262699270656224

These numbers are 🤯: 4M devs and 6B tokens processed per minute. https://x.com/fidjissimo/status/1975256964528808284

OpenAI acquires personal finance app Roi in talent-focused deal
OpenAI bought AI finance startup Roi for its CEO’s personalization expertise, shutting down the app while continuing a pattern of acqui-hires targeting consumer AI talent. This signals OpenAI’s push beyond enterprise APIs into personalized consumer applications, as the company seeks new revenue streams while burning billions on infrastructure costs.

With its latest acqui-hire, OpenAI is doubling down on personalized consumer AI  | OpenAI has acquired Roi, an AI-powered personal finance app. In keeping with a recent trend in the AI industry, only the CEO is making the jump. TechCrunch https://techcrunch.com/2025/10/03/with-its-latest-acqui-hire-openai-is-doubling-down-on-personalized-consumer-ai/

OpenAI strikes unusual deal paying for AMD chips with AMD stock
OpenAI will receive up to 160 million AMD shares worth potentially $100 billion to purchase 6 gigawatts of AMD’s AI chips over multiple years, with the final stock tranche requiring AMD’s market cap to hit $1 trillion. This reverse-financing arrangement lets AMD fund OpenAI’s purchases while gaining validation that its chips can compete with Nvidia’s dominance in AI computing. The deal represents a new model where chip companies essentially underwrite their customers’ massive infrastructure needs through equity stakes.

🚨Breaking: OpenAI has partnered with AMD in a massive five-year GPU supply agreement to deploy 6 Gigawatts of AMD GPUs over multiple years. Deal Highlights: – The deal gives AMD a seat at the table as one of OpenAI’s core compute suppliers – AMD issued OpenAI a warrant for up https://x.com/TheRundownAI/status/1975205711715181022

AMD and OpenAI announce strategic partnership to deploy 6 gigawatts of AMD GPUs | OpenAI https://openai.com/index/openai-amd-strategic-partnership/

Exciting day today! Thrilled to partner with @OpenAI to deploy 6GWs of AMD Instinct GPUs. The world needs more AI compute. Together, we’re bringing the best of both companies to accelerate the global AI infrastructure buildout. Thanks @sama @gdb for the trust and partnership. https://x.com/LisaSu/status/1975210493796385233

Wall Street analysts explain how AMD’s own stock will pay for OpenAI’s billions in chip purchases  | TechCrunch https://techcrunch.com/2025/10/07/wall-street-analysts-explain-how-amds-own-stock-will-pay-for-openais-billions-in-chip-purchases/

Even after Stargate, Oracle, Nvidia, and AMD, OpenAI has more big deals coming soon, Sam Altman says | TechCrunch https://techcrunch.com/2025/10/08/even-after-stargate-oracle-nvidia-and-amd-openai-has-more-big-deals-coming-soon-sam-altman-says/

Musk’s xAI raises $20 billion with Nvidia as direct investor
The $20 billion raise makes Nvidia a direct equity investor in xAI rather than just its chip supplier, with $2 billion going toward xAI’s massive data center expansion. The deal structures $12.5 billion in debt specifically tied to purchasing Nvidia processors, creating a financing model that directly links AI funding to chip procurement.

Musk’s xAI nears $20 billion capital raise tied to Nvidia chips, Bloomberg News reports https://finance.yahoo.com/news/musks-xai-nears-20-billion-232913241.html

OpenAI launches Apps SDK letting users interact with web apps inside ChatGPT
OpenAI released an Apps SDK that embeds interactive web applications like Spotify, Figma, and Canva directly within ChatGPT conversations. This transforms ChatGPT from a text interface into a conversational gateway for controlling third-party services, potentially making it a central hub for internet interactions. The SDK enters preview today, marking OpenAI’s boldest move yet to position ChatGPT as the primary interface between users and digital tools.

🤯 OpenAI just announced the Apps SDK. It allows you to use web apps directly in ChatGPT. ChatGPT will become the front door to the internet. Incredibly bold ambition. https://x.com/daniel_mac8/status/1975250574519271470

Apps coming to ChatGPT is a big deal. There’s something really magical about having a conversation with apps to guide the actions you want to take. We can’t even imagine what devs will create with a capability like this. It will be fun to see. https://x.com/fidjissimo/status/1975259473678901368

devday live demo of interactive apps in chatgpt. apps sdk in preview starting today. https://x.com/gdb/status/1975251201425084609

Introducing apps in ChatGPT and the new Apps SDK | OpenAI https://openai.com/index/introducing-apps-in-chatgpt/

OpenAI launches apps inside ChatGPT: You can now interact with your favorite apps directly in ChatGPT. Starting Monday, users can access interactive apps like Spotify, Figma, Coursera, and Canva. Exciting stuff—can’t wait to try these built-in apps! #OpenAI https://x.com/techspace96/status/1975289126191825396

You can now chat with apps in ChatGPT. https://x.com/OpenAI/status/1975261587280961675

OpenAI launches AgentKit to simplify AI agent development for businesses
OpenAI released AgentKit, a comprehensive toolkit featuring visual workflow builders, embeddable chat interfaces, and safety guardrails to help companies deploy AI agents without extensive technical expertise. The platform addresses widespread corporate demand for agent deployment while reducing the complexity barrier that has prevented many organizations from adopting agentic AI systems. Early demonstrations show functional agents can be built in under 10 minutes using the visual interface.

And we launched the Agentic Commerce Protocol and Shared Payment Tokens, new building blocks for agentic commerce: https://x.com/emilygsands/status/1975951440586899573

BREAKING 🚨: OpenAI is planning to announce Agent Builder on DevDay. Agent builder will let users build their agentic workflows, connect MCPs, ChatKit widgets and other tools. This is one of the smoothest Agent builder canvases I’ve used so far. The year of Agents 🤖 https://x.com/testingcatalog/status/1974934915474137554

Every company wants to deploy agents. Every company finds it cumbersome. We’re addressing that with AgentKit! https://x.com/fidjissimo/status/1975270839592628618

Introducing AgentKit | OpenAI https://openai.com/index/introducing-agentkit/

Introducing AgentKit—build, deploy, and optimize agentic workflows. 💬 ChatKit: Embeddable, customizable chat UI 👷 Agent Builder: WYSIWYG workflow creator 🛤️ Guardrails: Safety screening for inputs/outputs ⚖️ Evals: Datasets, trace grading, auto-prompt optimization https://x.com/OpenAIDevs/status/1975269388195631492

introducing agentkit: build a high-quality agent for any vertical with our visual builder, evals, guardrails, and other tools. live demo of building a working agent in 8 minutes. https://x.com/gdb/status/1975253703180623921

Low-code is nice, but if I had to bet on a future, it’s code-based orchestration + coding agents to let anyone bridge that gap. OpenAI’s AgentKit (left img) lets you get started building various flows, like comparing docs, or a basic assistant. Once you need to encode more https://x.com/jerryjliu0/status/1975590066274902424

OpenAI Agent Embeds | OpenAI Agent Embeds https://openai.github.io/chatkit-js/

OpenAI’s new agent toolkit looks slick, but it feels like a step sideways, not forward. At first glance, it’s polished: a visual way to chain actions, decisions, and agents. Under the hood, though, it’s the same deterministic flowchart logic we’ve seen for years in tools like https://x.com/assaf_elovic/status/1975470718725890060

OpenAI’s prompt optimizer in AgentKit is GEPA. It’s gonna be everywhere… https://x.com/dbreunig/status/1975310659735986420

OpenAI plans to transform ChatGPT into an operating system with third-party apps
ChatGPT head Nick Turley revealed plans to evolve the 800-million-user platform into an app-based operating system, drawing inspiration from how web browsers became computing platforms over the past decade. This represents a major shift from ChatGPT’s current text-based interface to a more structured environment where users can access specialized applications for writing, coding, and commerce. The move could create new revenue streams through e-commerce partnerships with companies like Uber and DoorDash while giving developers direct access to ChatGPT’s massive user base.

OpenAI’s Nick Turley on transforming ChatGPT into an operating system “To turn ChatGPT into an operating system, Turley tells me he’s drawing inspiration from web browsers.” | TechCrunch https://techcrunch.com/2025/10/08/openais-nick-turley-on-transforming-chatgpt-into-an-operating-system/

OpenAI releases ChatKit Python SDK for easier API integration
OpenAI launched the ChatKit Python SDK, which lets developers serve ChatKit, OpenAI’s embeddable chat interface, from their own Python backends. The SDK streamlines wiring conversational UIs to OpenAI’s models, potentially accelerating adoption among Python developers, who represent a significant portion of the AI development community, and lowering barriers to AI integration across web applications.

ChatKit – OpenAI API https://platform.openai.com/docs/guides/chatkit

Chatkit Python SDK https://openai.github.io/chatkit-python/

OpenAI makes Codex code-writing AI broadly available to developers
OpenAI moved Codex, its AI coding agent that turns natural-language instructions into working code, from research preview to general availability. The release marks a shift from experimental tooling to supported, production-grade programming assistance, potentially transforming how software gets built by making coding accessible to non-programmers and accelerating development workflows.

Codex is now generally available | OpenAI https://openai.com/index/codex-now-generally-available/

Google launches Gemini 2.5 Computer Use model for automated web browsing
This specialized AI model can click, type, and scroll through websites like a human user, outperforming competitors on web control benchmarks while operating at lower latency. The model represents a significant step toward general-purpose AI agents that can complete complex digital tasks by directly manipulating user interfaces rather than relying on structured APIs, with Google already deploying it for internal UI testing and powering features like Project Mariner.

Introducing Gemini 2.5 Computer Use 🖥️🤖 – Control UIs with vision understanding and reasoning – Use for web and Android control – Try it now with Browserbase or locally. I’m super excited about the high-impact use cases this model unlocks. Share what you build with us! https://x.com/osanseviero/status/1975652741642096708

Introducing our new SOTA Gemini 2.5 Computer Use model, trained to take 13 different actions and navigate a browser. This is just the first step in the Gemini computer use story : ) https://x.com/OfficialLoganK/status/1975657080439906547

Introducing the Gemini 2.5 Computer Use model https://blog.google/technology/google-deepmind/gemini-computer-use-model/

Our new Gemini 2.5 Computer Use model can navigate browsers just like you do. 🌐 It builds on Gemini’s visual understanding and reasoning capabilities to power agents that can click, scroll and type for you online – setting a new standard on multiple benchmarks, with faster https://x.com/GoogleDeepMind/status/1975648789911224793

Stripe integrates payment processing directly into Google’s Gemini AI assistant
Google’s Gemini can now handle real transactions through Stripe’s payment system, marking a shift from AI assistants that only provide information to ones that can complete purchases. This integration allows users to buy products or services through conversational commands, potentially transforming how consumers interact with e-commerce by eliminating the need to navigate separate websites or apps.

And this just launched today — Stripe inside Gemini: https://x.com/emilygsands/status/1976031929184198868

Google’s CodeMender AI agent automatically fixes software security vulnerabilities at scale
Google DeepMind’s CodeMender uses AI to autonomously identify and patch security flaws in code, successfully submitting 72 fixes to open-source projects and handling codebases up to 4.5 million lines. This represents a shift from AI merely assisting developers to actually performing complex security work independently, potentially addressing the chronic shortage of cybersecurity expertise. The tool proactively rewrites vulnerable code rather than just flagging issues, demonstrating AI’s growing capability to handle specialized technical tasks that traditionally require deep human expertise.

CodeMender is a new AI agent from @GoogleDeepMind research that automates code security using Gemini Deep Think. > Upstreamed 72 security fixes to open source projects. > Patches codebases as large as 4.5 million lines. > Proactively rewrites code, eliminating entire classes of https://x.com/_philschmid/status/1975372666862510260

Excited to share early results about CodeMender, our new AI agent that automatically fixes critical software vulnerabilities. AI could be a huge boost for developer productivity and security. Amazing work from the team – congrats! https://x.com/demishassabis/status/1975551657514791272

I am proud to share the announcement about our CodeMender project at @GoogleDeepMind, an agent that can automatically fix a range of code security vulnerabilities. From only a modest-compute run, our agent submitted 72 high-quality fixes to vulnerable code in popular codebases, https://x.com/ralucaadapopa/status/1975242772467822738

Introducing CodeMender: an AI agent for code security – Google DeepMind https://deepmind.google/discover/blog/introducing-codemender-an-ai-agent-for-code-security/

Software vulnerabilities can be notoriously time-consuming for developers to find and fix. Today, we’re sharing details about CodeMender: our new AI agent that uses Gemini Deep Think to automatically patch critical software vulnerabilities. 🧵 https://x.com/GoogleDeepMind/status/1975185557593448704

Google DeepMind develops AI agents that can navigate web browsers autonomously
DeepMind created AI systems that can control web browsers to complete tasks like filling forms and clicking buttons, representing a significant step toward AI assistants that can handle complex digital workflows. This differs from chatbots by giving AI the ability to actually interact with websites and applications rather than just providing text responses. The research addresses a key challenge in making AI practically useful for everyday computer tasks.

Training & Evaluating Browser Agents – Our Journey with Google Deepmind https://www.browserbase.com/blog/evaluating-browser-agents

Amazon’s AI team achieves 90% reliability in browser automation for enterprise use
Amazon’s AGI Lab solved a critical problem preventing AI agents from reliably using websites by breaking browser interactions into atomic components and switching to open-source tools like Playwright. The breakthrough matters because previous AI browser agents failed too often for enterprise use—where a 10% failure rate across thousands of daily automated tasks creates massive business risk. Their approach revealed that simple actions like clicking buttons actually require complex sequences of events that must fire in precise order, and that human psychology around trust is as important as technical reliability.

What makes browser use hard for AI Agents? https://labs.amazon.science/blog/what-makes-browser-use-hard-for-ai-agents
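The post’s observation that a “simple” click is really an ordered sequence of events can be sketched abstractly. The event names below come from the browser’s DOM event model; the dispatcher itself is a hypothetical stand-in for illustration, not Amazon’s actual code:

```python
# Toy illustration: an agent's "click" decomposed into the ordered DOM events
# a real browser fires. Skipping or reordering any of them can break pages
# that listen for intermediate events. (Hypothetical sketch, not Amazon's code.)
CLICK_SEQUENCE = ["pointerdown", "mousedown", "focus", "pointerup", "mouseup", "click"]

def atomic_click(dispatch):
    """Fire the full click event sequence through the given dispatch callable."""
    for event in CLICK_SEQUENCE:
        dispatch(event)
    return list(CLICK_SEQUENCE)

fired = []
atomic_click(fired.append)
print(fired)  # ['pointerdown', 'mousedown', 'focus', 'pointerup', 'mouseup', 'click']
```

Treating the whole sequence as one atomic unit, rather than synthesizing a lone `click` event, is the kind of decomposition the Amazon post describes.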

Anthropic launches plugin system for Claude Code development environment
Anthropic introduced a plugin marketplace for Claude Code that lets developers package and share custom commands, agents, and integrations with a single install command. This marks a shift toward standardizing AI coding environments around shared workflows, addressing enterprise needs for consistent development practices while enabling community-driven customization of AI development tools.

Customize Claude Code with plugins | Claude https://claude.com/blog/claude-code-plugins

AI models now outperform most humans at predicting future events
Large language models have improved their forecasting accuracy by 23% in under two years, with GPT-4.5 achieving substantially better prediction scores than its predecessor. Researchers who track human forecasting abilities project that AI will match elite “superforecasters” by November 2026, marking a significant milestone in machines’ ability to anticipate real-world outcomes.

From the team tracking human forecasting abilities for years: Current models already beat most humans and “A linear extrapolation of state-of-the-art LLM forecasting performance suggests LLMs will match superforecasters in November 2026.” https://x.com/emollick/status/1976026718801736012

LLMs’ forecasting abilities are steadily improving. GPT-4 (released March 2023) achieved a difficulty-adjusted Brier score of 0.131. Nearly two years later, GPT-4.5 (released Feb 2025) scored 0.101—a substantial improvement. A linear extrapolation of state-of-the-art LLM forecasting performance suggests LLMs will match superforecasters in November 2026. https://x.com/Research_FRI/status/1975909516777537614
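The Brier score cited above is just the mean squared error between forecast probabilities and binary outcomes, so lower is better, and the ~23% improvement falls out of the two reported scores. A minimal sketch (plain Brier score, not FRI’s difficulty-adjusted variant):

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Toy example: three forecasts versus what actually happened.
print(round(brier_score([0.9, 0.2, 0.7], [1, 0, 1]), 3))  # 0.047

# Relative improvement between the two reported scores.
gpt4, gpt45 = 0.131, 0.101
print(f"{(gpt4 - gpt45) / gpt4:.0%}")  # 23%
```

A perfectly confident, perfectly correct forecaster scores 0; always guessing 0.5 scores 0.25.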

OpenAI releases framework to measure and reduce political bias in ChatGPT
The company published new research defining political bias in AI systems and created evaluation methods to address it, marking a significant step toward more neutral AI assistants as these tools become increasingly influential in public discourse and decision-making.

ChatGPT shouldn’t have political bias in any direction. Today, we’re sharing new research that defines what political bias means in LLMs, and we introduce a new evaluation framework to measure and reduce it. This has been the most meaningful work I’ve done at OpenAI, and I say… https://x.com/nataliestaud/status/1976382637104300329

Just 250 malicious documents can backdoor any large language model
Anthropic researchers found that attackers need only a fixed number of poisoned training documents—not a percentage of total data—to create backdoors in AI models from 600 million to 13 billion parameters. This challenges assumptions that larger models are harder to attack and suggests data poisoning may be more accessible to bad actors than previously thought. The study used 72 different model configurations to demonstrate that absolute document count, not relative proportion of training data, determines attack success.

A small number of samples can poison LLMs of any size \ Anthropic https://www.anthropic.com/research/small-samples-poison

New research with the UK @AISecurityInst and the @turinginst: We found that just a few malicious documents can produce vulnerabilities in an LLM—regardless of the size of the model or its training data. Data-poisoning attacks might be more practical than previously believed. https://x.com/AnthropicAI/status/1976323781938626905
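The counterintuitive part is that 250 is an absolute count, not a fraction: since training corpora scale with model size, the same handful of documents becomes an ever-smaller share of the data. A rough illustration, where the token ratios (~20 training tokens per parameter, ~500 tokens per document) are loose assumptions for illustration, not figures from the paper:

```python
# Why a fixed *count* of poisoned documents is worrying: the poisoned share
# of training data shrinks as models (and corpora) grow. Token ratios below
# are loose assumptions for illustration, not numbers from the Anthropic paper.
POISONED_DOCS = 250
TOKENS_PER_DOC = 500
TOKENS_PER_PARAM = 20  # Chinchilla-style rule of thumb

for params in (600e6, 2e9, 13e9):
    corpus = params * TOKENS_PER_PARAM
    share = POISONED_DOCS * TOKENS_PER_DOC / corpus
    print(f"{params / 1e9:>5.1f}B params: poisoned share ≈ {share:.1e}")
```

Under these assumptions the poisoned share drops from roughly one part in 100,000 at 600M parameters to about one part in 2 million at 13B, yet the paper finds the attack still succeeds.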

Google quantum scientists win Nobel Prize for quantum computing breakthrough
Michel Devoret and John Martinis, current and former Google Quantum AI researchers, shared the 2025 Nobel Prize in Physics for proving quantum mechanics works in large electrical circuits—the foundation of today’s quantum computers. Their 1980s experiments with superconducting circuits enabled Google’s recent quantum breakthroughs, including the Willow chip that solved problems impossible for classical computers. Google now counts five Nobel laureates among its ranks, with three prizes awarded in just the past two years.

BREAKING NEWS The Royal Swedish Academy of Sciences has decided to award the 2025 #NobelPrize in Physics to John Clarke, Michel H. Devoret and John M. Martinis “for the discovery of macroscopic quantum mechanical tunnelling and energy quantisation in an electric circuit.” https://x.com/NobelPrize/status/1975498493218394168

Congrats to Michel Devoret, John Martinis, and John Clarke on the Nobel Prize in Physics. 🔬🥼 Michel is chief scientist of hardware at our Quantum AI lab and John Martinis led the hardware team for many years. Their pioneering work in quantum mechanics in the 1980s made recent… https://x.com/sundarpichai/status/1975590130690781463

Congratulations to Michel Devoret, Google Quantum AI’s Chief Scientist of Quantum Hardware, who was awarded the 2025 Nobel Prize in Physics today. Google now has five Nobel laureates among our ranks, including three prizes in the past two years. https://blog.google/inside-google/company-announcements/googler-michel-devoret-awarded-the-nobel-prize-in-physics/

AstraZeneca partners with Algen Biotechnologies in $555 million AI drug discovery deal
The pharmaceutical giant will use Algen’s AI platform to identify new drug targets and accelerate development timelines, marking one of the largest AI partnerships in drug discovery. This deal signals growing confidence that AI can meaningfully compress the typical 10-15 year drug development process, with AstraZeneca betting hundreds of millions on computational biology over traditional lab-based approaches.

AstraZeneca, Algen Biotechnologies pen $555M AI pact https://www.fiercebiotech.com/biotech/astrazeneca-algen-biotechnolgies-pen-555m-ai-pact-immunology-targets

OpenAI’s Sora video app hits 1 million downloads faster than ChatGPT
Despite being invite-only and limited to North America, Sora reached the milestone in under five days compared to ChatGPT’s seven-day record. The app’s rapid adoption signals massive consumer appetite for AI video generation, though OpenAI is already implementing content restrictions and revenue-sharing with rights holders as creators flood social media with deepfakes and character-based videos.

3 quick thoughts on the Sora app 1. OpenAI was smart to feature their team in the Sora launch video memes. High-fidelity output + showing that you buy what you sell + having a sense of humor deflects some slop-dealer hate. 2. “Cameo” feature does what the Ghibli template did:… https://x.com/anuatluru/status/1973125101047451830

New cameo controls are rolling out in today’s Sora update: the restrictions section gives you agency in blocking generations that you don’t want 🍅 https://x.com/turtlesoupy/status/1974969525566415295

sora hit 1M app downloads in <5 days, even faster than chatgpt did (despite the invite flow and only targeting north america!)! team working hard to keep up with surging growth. more features and fixes to overmoderation on the way! https://x.com/billpeeb/status/1976099194407616641

Sora hit 1M downloads faster than ChatGPT | TechCrunch https://techcrunch.com/2025/10/09/sora-hit-1m-downloads-faster-than-chatgpt/

Sora update #1 – Sam Altman https://blog.samaltman.com/sora-update-number-1

Video generation with Sora – OpenAI API https://platform.openai.com/docs/guides/video-generation

I think people are still unprepared for a world where you cannot trust any video content, despite years of warning. Even when Google & OpenAI include watermarks, those can be easily removed, and open weights AI video models without guardrails are coming.. https://x.com/emollick/status/1976004133296685165

The challenge: create the most over-the-top Hallmark movie clip that can fit into 10 seconds. I managed to cram in a humble neighborhood baker, a prince, a cruel rival princess, and the holiday season in this one. https://x.com/emollick/status/1974589833520771146

The nerfing of Sora was inevitable. IP holders just needed time to respond. But now the job is done; Sora is #1 on the iOS app store, flying past Gemini. On the plus side, AI cameos with your social graph is still a lot of fun. https://x.com/bilawalsidhu/status/1974459828308467866

Huh, Sora 2 knows a lot of things: “Ethan Mollick parachuting into a volcano, explains the three forms of legitimation from DiMaggio, Paul; Powell, Walter. (April 1983). “The iron cage revisited: institutional isomorphism and collective rationality in organizational fields”” https://x.com/emollick/status/1974203274342641880

Sora 2 is incredibly impressive as a video generator but pushed into a narrow niche: 1) Optimized for viral short form video, both in UX & output 2) Built to be one-and-done, when most video gen is selecting among variants 3) Makes fun stuff the first time, at the cost of control https://x.com/emollick/status/1973939293803733074

openai-cookbook/examples/sora/sora2_prompting_guide.ipynb at 16686d05abf16db88aef8815ebde5c46c9a1282a · openai/openai-cookbook https://github.com/openai/openai-cookbook/blob/16686d05abf16db88aef8815ebde5c46c9a1282a/examples/sora/sora2_prompting_guide.ipynb#L7

People have been asking me “Jake why are there all these AI videos of you on the internet?!” There’s a method to the madness… I’m a proud OpenAI investor @antifundvc @geoffreywoo and have been advising Sora team @markchen90 @billpeeb this year and agreed to become the first https://x.com/jakepaul/status/1976411343025487977

Sam presenting the grand plan for Sora 2 https://x.com/bilawalsidhu/status/1974547020640919952

Sora 2 cameos feel like worldcoin lite https://x.com/bilawalsidhu/status/1974172685350678962

Sora 2 in the API. Used by Mattel for instant sketch to toy concept. https://x.com/gdb/status/1975262920931217552

Figure unveils third-generation humanoid robot designed for mass production
Figure 03 introduces major hardware advances including palm-mounted cameras, tactile sensors detecting three grams of pressure, and wireless charging through foot coils, while being engineered specifically for high-volume manufacturing at up to 12,000 units annually. The robot represents a shift from prototype to commercial product, with safety features like soft textiles and foam padding for home use, plus a new manufacturing facility called BotQ designed to scale production to 100,000 robots over four years.

Introducing Figure 03 https://www.figure.ai/news/introducing-figure-03

Introducing Figure 03 https://x.com/Figure_robot/status/1976272678618308864

Figure 03 has a bunch of optional looks. https://x.com/TheHumanoidHub/status/1975606356767089079

Figure 03 has articulated toes similar to Optimus. It also has inductive charging in the feet. https://x.com/TheHumanoidHub/status/1975590713309229429

I’m so proud of the Figure team – everyone’s been completely locked in this past month and cranking like never before. One of the hardest challenges, yet no whining; everyone is aggressively optimistic https://x.com/adcock_brett/status/1975054634214703443

The Figure 03 hand has a camera in the palm. It’s unclear whether the fingers have tactile sensing. https://x.com/TheHumanoidHub/status/1975601561985548781

What I’m seeing at Figure looks straight out of a sci-fi movie. What’s coming feels like the year 2050. This week, everything changes – it’s the week you’ve all been waiting for https://x.com/adcock_brett/status/1975321542948233245

Figure’s humanoid robots hit five months on BMW production line
Figure’s robots have operated 10 hours daily for over five months on BMW’s X3 assembly line in South Carolina, marking what the companies claim is the world’s first sustained deployment of humanoid robots in automotive manufacturing. This milestone demonstrates that humanoid robots can handle the physical demands and consistency requirements of real factory work, potentially opening the door for broader industrial adoption beyond traditional stationary robotic arms.

This week marks over 5 months of Figure’s humanoid robots operating on BMW’s X3 body shop production line in Spartanburg, SC., running 10 hours daily on every production day. https://x.com/TheHumanoidHub/status/1975222489635860550

This week, Figure has passed 5 months running on the BMW X3 body shop production line We have been running 10 hours per day, every single day of production! It is believed that Figure and BMW are the first in the world to do this with humanoid robots https://x.com/adcock_brett/status/1975197913178587172

Walmart now ships $21,600 humanoid robots with free delivery
The retail giant’s direct sale of Unitree’s G1 robots marks mainstream commercialization of humanoid technology, making advanced robotics accessible to businesses and consumers through familiar retail channels rather than specialized industrial suppliers.

Walmart is now shipping the Unitree G1 humanoid robot directly within the US. Only the basic trim is available, priced at $21,600. Free shipping, and you can order a batch of six in one shot. https://x.com/TheHumanoidHub/status/1975637835643535648

Reflection AI raises $2 billion to build open-source frontier models
The startup aims to create powerful AI systems that match leading proprietary models like GPT-4 but remain freely accessible to developers and researchers. This represents one of the largest funding rounds specifically dedicated to open-source AI development, potentially challenging the dominance of closed systems from OpenAI and Google.

Today we’re sharing the next phase of Reflection. We’re building frontier open intelligence accessible to all. We’ve assembled an extraordinary AI team, built a frontier LLM training stack, and raised $2 billion. Why Open Intelligence Matters: Technological and scientific… https://x.com/reflection_ai/status/1976304405369520242

Deloitte refunds Australian government after AI errors plague $290,000 report
Deloitte Australia will partially refund a government contract after delivering a report filled with AI-generated fabrications, including fake court quotes and nonexistent academic citations. The incident highlights growing risks as consulting firms increasingly use AI tools without adequate oversight, potentially compromising the accuracy of high-stakes government advisory work that shapes policy decisions.

Deloitte Australia to partially refund $290,000 report filled with suspected AI-generated errors | AP News https://apnews.com/article/australia-ai-errors-deloitte-ab54858680ffc4ae6555b31c8fb987f3

AI Visuals and Charts: Week Ending October 10, 2025

No entries found.

Top 15 Links of The Week – Organized by Category

Agents & Copilots

Zendesk says its new AI agent can solve 80% of support issues | TechCrunch https://techcrunch.com/2025/10/08/zendesk-says-its-new-ai-agent-can-solve-80-of-support-issues/

How it works https://www.qawolf.com/how-it-works

Anthropic

Today we’re announcing Claude Code plugins! https://x.com/The_Whole_Daisy/status/1976332882378641737

New on the Anthropic Engineering Blog: Most developers have heard of prompt engineering. But to get the most out of AI agents, you need context engineering. We explain how it works: https://x.com/AnthropicAI/status/1973098580060631341

🎓 @ShunyuYao14 (姚顺宇), Special Prize winner at Tsinghua Physics, has left Anthropic and joined Google DeepMind. He joined Anthropic on Oct 1, 2024, and worked on the model later known as Claude 3.7 Sonnet — a pivotal point in his journey from physics to AI. 💡 Why leave? His https://x.com/ZhihuFrontier/status/1975871339383660594

Chinese researcher Shunyu Yao left Anthropic to join DeepMind. “~40% of the reason: I strongly disagree with the anti-china statements Anthropic has made.” Dario’s first experience with AI was at Baidu. He learned about scaling laws there. Strange that he made those wild public https://x.com/Yuchenj_UW/status/1975969899102208103

Building AI for cyber defenders \ Anthropic https://www.anthropic.com/research/building-ai-cyber-defenders

Last week we released Claude Sonnet 4.5. As part of our alignment testing, we used a new tool to run automated audits for behaviors like sycophancy and deception. Now we’re open-sourcing the tool to run those audits. https://x.com/AnthropicAI/status/1975248654609875208

Petri: An open-source auditing tool to accelerate AI safety research https://alignment.anthropic.com/2025/petri/

Audio

Agent Workflows | ElevenLabs Documentation https://elevenlabs.io/docs/agents-platform/customization/agent-workflows

Google

We’re proud to announce that Genie 3 has been named one of @TIME’s Best Inventions of 2025. Genie 3 is our groundbreaking world model capable of generating interactive, playable environments from text or image prompts. Find out more → https://x.com/GoogleDeepMind/status/1976311787013480758

International AI

Z.ai Chat – Free AI powered by GLM-4.6 & GLM-4.5 https://chat.z.ai/

OpenAI

Model – OpenAI API https://platform.openai.com/docs/models/gpt-5-pro

GPT-5 Pro for catching subtle errors in academic work: https://x.com/gdb/status/1974018754657837083

@DSPyOSS I used DSPy and GEPA to optimize one workflow and switched it from GPT-5 to Grok-4-fast. I now save 20x in API costs with the same results. Without GEPA, Grok wasn’t able to do the workload. https://x.com/JacksonAtkinsX/status/1976248661081501766
