About This Week’s Covers
This week’s cover is inspired by the life and tragic death of chess grandmaster Daniel Naroditsky. Daniel is the reason I got into chess, after I read Can’t Hurt Me by David Goggins.
Goggins inspired me to run ultra-marathons (alone at night, year-round, unassisted), which sounds cool, but Goggins also emphasizes that it’s important to do things that scare us. Most people interpret this as a call to undertake some sort of physical feat of strength or mental-toughness exercise, but the thing I was most scared of was chess. Fear of failure.
First, this is a lot of steps… quite possibly the most I’ll ever do in one day.
72,142 steps!

When I was trying to figure out how to play chess, I started watching YouTube videos, and that’s how I found Daniel Naroditsky. Daniel had this incredible talent, similar to Neil deGrasse Tyson, where he was one of the best in the world yet still could translate his thought process into something accessible to laypeople. It made watching him feel almost miraculous.
Daniel would play games where he starts with a fresh Chess.com account, no Elo rating, and climbs all the way up to expert level. These are called Speed Runs. Along the way, he explains his thinking and the moves he makes. He would even go back through the games afterward and show alternate paths a game could have taken, explaining why each position was different and how those differences could lead to dramatically different outcomes. Here’s a playlist of the first series I watched, the one that gave me a love of chess.
Daniel’s most important trait, however, was his kindness and patience. He was a generous and humble person, beloved by everyone. He played against gamers and all sorts of personalities online, and he always brought humility and gentleness to the games, which touched people deeply. Even the most hardcore first-person shooter gamers, or rough-around-the-edges online trolls who normally talked a lot of smack, would be gentle around Daniel, wouldn’t curse, and would display their best behavior. Naroditsky simply elevated everyone around him to be a better person.
I started playing chess several years ago, and I play bullet chess. Bullet is the nickname for two-minute games (one minute per side). I usually play in between sets when I’m at the gym. It’s basically the perfect rest interval!
I’m averaging ten bullet games of chess per day, which, after several years, adds up to a staggering 11,401 bullet games since I joined Chess.com.
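As a quick back-of-the-envelope check (assuming the ten-games-a-day average held steady the whole time), the tally works out to a bit over three years of daily play:

```python
# Sanity check on the bullet-game tally: at roughly ten games a day,
# 11,401 games implies a bit over three years of daily play.
total_games = 11_401
games_per_day = 10

days = total_games / games_per_day
years = days / 365
print(f"{days:.0f} days ≈ {years:.1f} years")  # ~1140 days ≈ 3.1 years
```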
Feel free to scroll past these tributes, but if you’re interested in learning more about Daniel, I’ve embedded some videos from his peers:
In honor of Daniel, I did not create an AI-generated image for the cover. I also couldn’t figure out the font he used for his YouTube thumbnails, so I gave Gemini a screenshot of one of his title cards and asked it to change the text for me. I then brought that into Photoshop to create the main cover.

It’s amazing how easily Gemini 2.5 was able to make my graphic. I tried asking both GPT and Gemini the name of the font… but they struggled. Rather than recreating it in Photoshop, figuring out the fonts and adding the noise and speckles by hand, it’s now just a prompt!

For the category cover images, I used my Python script and intentionally kept the prompt very loose. I simply told it I wanted a chess theme. Sometimes AI imagery is more creative when you don’t give it many constraints. Overall, I think these are strong images. Some of them look a little stereotypically “AI,” but others came out really well, and I’m sharing my favorite six below.
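My script itself isn’t in this post, but the idea is simple enough to sketch. The code below is a hypothetical reconstruction, not my real script: the OpenAI client call, the model name, and the function names are all assumptions. The point is the prompt-building step, which deliberately says almost nothing beyond the theme.

```python
# A minimal sketch of a loose-prompt cover-image script (hypothetical).
# The whole trick is under-specification: give the model the theme and the
# category name, and nothing else, so it has room to be creative.

CATEGORIES = ["HuggingFace", "Finance", "Agents", "Amazon", "Benchmarks", "AI Inn of Court"]

def loose_prompt(category: str, theme: str = "chess") -> str:
    """Intentionally loose: just the theme and the category, no constraints."""
    return f"A cover image for the '{category}' section of an AI newsletter, with a {theme} theme."

def generate_covers() -> None:
    # Hypothetical generation loop; requires the `openai` package and an API key.
    from openai import OpenAI
    client = OpenAI()
    for cat in CATEGORIES:
        img = client.images.generate(model="gpt-image-1", prompt=loose_prompt(cat))
        print(cat, img.data[0].url)
```

The looseness is the design choice: a tightly specified prompt tends to converge on stereotypical “AI” imagery, while a one-line theme lets each category image interpret chess its own way.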
Because there’s always so much news, my weekly humanities reading often gets buried. I’m going to share a small snippet of this week’s humanities reading here, in memory of Grandmaster Daniel Naroditsky, who honestly changed my life!
An excerpt from this week’s humanities reading, “Chess,” by Jorge Luis Borges, which has fun connections to chess, life, and even diffusion-model prompting (who prompts the prompter!?)
Chess
Weak king, biased bishop, embittered queen, straight tower and wily pawn, over the black and white of the road they seek and wage armed battle.
They do not know that the appointed hand of the player governs their fate, they do not know that an adamantine rigor subjects their will and their journey.
The player too is prisoner (the sentence is Omar’s) of that other board, the black nights and the white days.
God moves the player, and the player moves the piece. What god behind God began the weaving of dust and time and dream and the throes of death?
-Jorge Luis Borges
Sound familiar…?
Top six category cover images:
- HuggingFace: it’s a community, and the image shows an outdoor group of players at a chess club
- Finance: the board is a stock ticker
- Agents: the knight moves trailing wires of light, almost like a spider
- Amazon: the factory floor is the board and the robot pickers are the pieces
- Benchmarks: I gave it no direction, yet it generated a leaderboard complete with real chess engine names!
- AI Inn of Court: the courtroom is a chess board






This Week By The Numbers
Total Organized Headlines: 529
- AGI: 8 stories
- AI Inn of Court: 38 stories
- Accounting and Finance: 5 stories
- Agents and Copilots: 175 stories
- Alibaba: 13 stories
- Alignment: 17 stories
- Amazon: 10 stories
- Anthropic: 66 stories
- Apple: 8 stories
- Audio: 7 stories
- Augmented Reality (AR/VR): 22 stories
- Autonomous Vehicles: 4 stories
- Benchmarks: 25 stories
- Business and Enterprise: 48 stories
- Chips and Hardware: 35 stories
- Cohere: 1 story
- DeepSeek: 28 stories
- Education: 11 stories
- Ethics/Legal/Security: 49 stories
- Figure: 12 stories
- Google: 49 stories
- HuggingFace: 11 stories
- Images: 11 stories
- International: 50 stories
- Internet: 30 stories
- Law: 4 stories
- Llama: 1 story
- Locally Run: 1 story
- Manus: 1 story
- Meta: 13 stories
- Microsoft: 12 stories
- Mistral: 1 story
- Moonshot: 3 stories
- Multimodal: 68 stories
- NVIDIA: 10 stories
- Open Source: 85 stories
- OpenAI: 52 stories
- Perplexity: 5 stories
- Podcasts/YouTube: 12 stories
- Publishing: 38 stories
- Qwen: 12 stories
- RAG: 1 story
- Robotics Embodiment: 58 stories
- Science and Medicine: 20 stories
- Security: 10 stories
- Technical and Dev: 162 stories
- Video: 47 stories
- X: 11 stories
- Zai: 9 stories
This Week’s Executive Summaries
This week includes 528 links, 58 of which contribute to the executive summaries. We’ll start with five top stories, and then go through the news by category, starting with Agents, then Science, Business, Chips and Servers, Education, Ethics and Alignment, Google, OpenAI, an interesting podcast, Qwen, Robots, Video, and Amazon.

This Week’s Top Stories

OpenAI Launches Web Browser
The top story this week is that OpenAI released a browser called ChatGPT Atlas. This is a web browser with ChatGPT built in. It’s meant to be an always-on component of web browsing that’s available within the window at any moment to understand what you’re trying to achieve and help you complete tasks without having to leave the website or do any copying or pasting.
By default, your ChatGPT memory is built in, so conversations can refer to your previous chats and remember details as you go. There’s also a privacy mode for when you want it. One powerful feature: Atlas retains your browsing history as you use it, so you can search that history conversationally by describing what you want to find.
Atlas also has an agent mode that lets it take over and automate tasks while you’re browsing. If there’s complicated information on a website, you can begin a dialogue with ChatGPT, which can reference the website to ground your conversation. ChatGPT can also jump in as you work and help you answer emails.
The agent integration has the potential to be pretty strong. For example, OpenAI describes a scenario where you want to plan a dinner party and already have a recipe in mind. You can give the recipe to ChatGPT, and it will find a grocery store, add all the ingredients to your cart, and have them delivered to your house. ChatGPT will ask in advance before it opens new tabs or clicks within your browser.
There’s a lot of potential here. However, I’m interested to see whether people are simply addicted to the chat window itself. Personally, I would rather have the chat window become more of an operating interface, as opposed to opening a web browser separate from the chat. I assume that over time all of these things will merge together. But right now, Atlas is a standalone application that’s separate from ChatGPT’s traditional web-based interaction or its app.
https://openai.com/index/introducing-chatgpt-atlas/ https://chatgpt.com/atlas
Google Achieves First Verifiable Quantum Advantage on Hardware
The second top story comes out of Google and its quantum computing division. Using their Willow quantum chip, Google was able to demonstrate the first-ever algorithm to achieve “verifiable quantum advantage on hardware.”
This marks the first time a quantum computer has successfully run a verifiable algorithm on hardware, meaning the result can be repeated on the same quantum computer and consistently land on the same answer. The algorithm runs 13,000 times faster than a classical supercomputer and was able to compute the structure of a molecule. Google published its results in Nature, and they’ve named the algorithm Quantum Echoes. https://blog.google/technology/research/quantum-echoes-willow-verifiable-quantum-advantage/
Meta Lays Off 600 Employees From Its AI Unit
The third top story is that Meta laid off 600 employees within its AI unit. There are still almost 3,000 people in the division. Recently, Meta paid $14.3 billion for Scale AI as part of an ‘acquihire’ for their new chief AI officer, Alexandr Wang.
This isn’t a reflection that Meta is not taking AI seriously, but rather a reflection of its drive for efficiency. https://www.cnbc.com/2025/10/22/meta-layoffs-ai.html
Anthropic Announces Major Investment in Google TPUs
The fourth top story is an announcement by Anthropic that they plan to expand their use of Google’s proprietary chip, the TPU, through Google’s cloud services. Anthropic plans to scale up to one million TPUs, representing tens of billions of dollars in investment and over a gigawatt of capacity. Anthropic has diversified its servers and currently uses Google TPUs, Amazon Trainium, and NVIDIA GPUs.
https://www.anthropic.com/news/expanding-our-use-of-google-cloud-tpus-and-services
Google Veo Video Demonstrations
First, Ethan Mollick uses Google Gemini to mine information from within a video using multimodal reasoning and discovery. For example, Mollick asks when a clock appears in the video, and Gemini gives him the timestamps. He then asks whether there are moments when the people in the video seem happy, and Gemini again provides timestamps.
I demonstrated this two weeks ago, when I gave Gemini a video of my podcast appearance on Love Conquers Fear and asked it to tell me all the different books that appeared or were discussed, and when they appeared in the video. Gemini gave me the names, the timestamps, and the context of the discussion.
Google shared a demonstration of their Veo model, showing precision editing driven entirely through prompts. A woman is alone in a video, and the user is able to add another person, add a cat, add steampunk goggles on the woman, or have the woman wear a wig made of flowers. It’s an incredible demonstration for anyone who has used Premiere or Photoshop. Imagine how difficult this kind of work would have been in the past.
Mollick posted an example of Veo understanding physics, even though it’s never been trained on physics. His prompt describes three toy ships: one made of iron, one of wood, and one made of loosely packed sugar cubes, all falling into a pool of water. Veo is able to render the three toy boats in a very convincing way.
Mollick also shared perhaps the most incredible example of Veo’s ability to handle complexity: he prompts Veo to display a marble statue of a toucan, with honey pouring down over the beak, until the beak cracks and falls off. It’s spectacular.
Agents
Microsoft Windows 11 AI OS: Continued Hype and Copilot Announcements
Last week, Microsoft announced that Windows 11 would be their first step toward an AI operating system. All of the large-action-model-style interfaces are starting to come into play, but last week it initially looked more like an Amazon Alexa-style experience, where you can talk to it and ask questions, even with a wake phrase: “Hey, Copilot.”
That said, it was a really cool resource. For example, if you’re watching a YouTube video of a podcaster, you could ask what microphone the podcaster is using and where you can buy it.
This week, the new Copilot unveiled agentic features. The operating system can take a task and open a separate desktop environment that you can watch while it works, or you can minimize it and keep doing what you want to do.
Hitting close to Brett Hurt‘s theme of Love Conquers Fear, Microsoft AI CEO Mustafa Suleyman says he is “betting on optimism in a time of cynicism”. The product release included an essay by Suleyman that reads more like philosophy than a product announcement.
“We’re betting on optimism in a time of cynicism. Instead of tech that demands more attention, we’re making tech that gives you back time for the things that matter. Instead of AI that replaces human judgment, we’re building AI that empowers your own – helping you make better decisions, spark your creativity, deepen your connections.
Here’s the simple idea I keep coming back to: technology should work in service of people. Not the other way around. Ever.” -Microsoft AI CEO Mustafa Suleyman
Copilot Groups lets up to 32 people collaborate in real time for brainstorming, working together, and studying.
There’s a new Clippy-style character, Mico, whose name is derived from Microsoft Copilot.
Copilot now has long-term memory, and you can connect it to services like Outlook, Gmail, Google Drive, and Google Calendar.
There’s also a health feature. Copilot can help you find doctors, analyze your health, and connect with grounded sources like Harvard Health.
Copilot Learn Live is an education feature where Copilot acts as a voice-enabled tutor.
Copilot Mode in Edge is an AI browser that I’m never going to use… just kidding…but also not kidding.
Human-centered AI
Anthropic Announces Claude Code on The Web
Claude Code on the web lets you code without opening the command line or the terminal. You can connect your GitHub repositories, describe what you need, and Claude Code handles the implementation.
Sessions run in isolated environments, and you can track progress and steer Claude along the way, adjusting the course as it works through a task.
Because it’s in the cloud, you can run multiple tasks at once across different repositories from a single interface. Using the cloud integration, you can also ask a lot of questions about how your projects work and how repositories are mapped.
https://claude.com/blog/claude-code-on-the-web
Claude Crushes Software Engineering Benchmarks
From Scale AI: “We launched SWE-Bench Pro last month to incredible feedback, and we’ve now updated the leaderboard with the latest models and no cost caps.
SoTA models now break 40% pass rate. Congrats to @Anthropic for sweeping the top spots!
🥇Claude 4.5 Sonnet 🥈Claude 4 Sonnet 🥉Claude 4.5 Haiku”
https://scale.com/leaderboard/swe_bench_pro_public
Y Combinator Announces Agentic Point of Sale System for Restaurants and Retail
From Y Combinator: “Zavo is building the first agentic point of sale for restaurants and retail. Payments, POS, and AI agents in one platform to build the future of autonomous commerce. Over 400 businesses already use Zavo to accept payments and manage operations.”
https://www.ycombinator.com/launches/OcF-zavo-agentic-point-of-sale-for-restaurants-and-retail
https://www.zavopay.com/
Chips and Servers
NVIDIA Celebrates the First Blackwell Wafer Manufactured in the US
NVIDIA unveiled its first Blackwell wafer manufactured in the United States. The wafer is the base material for NVIDIA’s AI chips and was produced at TSMC’s semiconductor manufacturing facility in Phoenix, Arizona. It’s part of the Blackwell platform, which is not only more powerful, but also consumes less energy and costs 25x less.
This move helps protect NVIDIA from volatile tariffs and geopolitics. It’s also cool that it’s made in America.
https://www.engadget.com/big-tech/nvidia-shows-off-its-first-blackwell-wafer-manufactured-in-the-us-192836249.html
Education
Incredible example from Ethan Mollick:
“I took the surviving syllabus of W. H. Auden’s 1941 “Hardest Class in the Humanities” (6,000 pages of reading, memorization of poems, etc.) & turned it into an annotated site with all the readings. (Would have taken hours, instead it was 4 prompts)”
Here’s the website the AI built (must see link!)
https://68f4202753e83cc5fbf8172e–tiny-tarsier-c997a9.netlify.app/
https://www.theparisreview.org/blog/2018/04/11/a-homework-assignment-from-w-h-auden/
Ethics, Security, and Alignment
Strong, Complex Statement from Anthropic’s CEO Dario Amodei
Lately, without cause, Anthropic has been picked on (by tech bros and the government) for being decelerationist and overly protective. Anthropic has a strong commitment to ethics, alignment, and safety, and merely for talking about those commitments lately, it has taken a lot of heat for not being fully on board with the accelerationist movement.
Anthropic CEO Dario Amodei put out a statement that Anthropic is built on one foundation: AI should be a force for human progress, not peril.
He offered some soft rhetorical diplomacy toward JD Vance, saying that he agrees we should advance applications that help people, like medicine and disease prevention, while minimizing harmful ones.
Dario goes on to say that Anthropic is the fastest-growing software company in history, which I did not know, with revenue growing from $1 billion to $7 billion over the last nine months. He then says there are products they will not build and risks they will not take, even if those choices would make them money.
Moving straight back to the diplomacy issue, Dario hits it head-on and says that, despite Anthropic’s track record of trying to communicate often and transparently, there’s been an increase in inaccurate claims about Anthropic’s policies.
Over the months, as I compile these newsletter links, accelerationists have been dogpiling on Anthropic, even though I’ve never seen Anthropic as decelerationist. They simply announce their findings whenever they run security tests, and at the same time, they keep releasing the best models.
We just saw this above (this week) with the Scale AI benchmarking, where Claude swept the software engineering benchmark.
We saw this with OpenAI’s GDPval benchmark, where Claude beat GPT (the third time this year Claude has topped an OpenAI benchmark)…
Dario starts “setting the record straight” by listing four key points about Anthropic’s alignment with the Trump administration around AI policy:
1) Anthropic works closely and often with the federal government.
2) Anthropic publicly praised President Trump’s AI Action Plan.
3) Anthropic has hired a bipartisan group of policy experts to help guide them, including a bipartisan advisory council.
4) Anthropic disagreed with the proposed amendment in the One Big Beautiful Bill, which would have imposed a 10-year moratorium on state-level AI laws. That provision was ultimately voted down by Republicans and Democrats in a 99-to-1 bipartisan vote in the Senate.
Dario advocates his preference for a national AI standard, saying that his first choice is a federal standard. However, he pivots and hedges that Congress is taking too long to act. Anthropic therefore supports a bill being designed in California, because most of the leading AI labs are headquartered there.
If you remember, almost exactly a year ago, Gavin Newsom vetoed a bill that was seen as a little draconian:
FLASHBACK JULY 2024: California’s AI Regulation Bill Sparks Controversy Amid Tech Industry Pushback
California State Senator Scott Wiener is at the center of the AI regulation debate with his “Safe and Secure Innovation for Frontier Artificial Intelligence Models” bill (SB 1047). The bill mandates safety testing for AI models costing over $100 million and requires the ability to shut down these models in case of safety concerns. Notably, Meta’s Llama 3.1 was released this week, is an open-source model, and exceeds this cost threshold. While tech giants like Andreessen Horowitz and Y Combinator have criticized the bill for potentially stifling innovation, Wiener argues that it is a necessary step to ensure responsible AI development without imposing overly stringent controls. He emphasizes that the bill does not require licensing or strict liability but aims to mitigate catastrophic risks. The bill has passed California’s state assembly and now awaits Governor Gavin Newsom’s approval. Despite opposition from Silicon Valley, the legislation enjoys broad support across California, reflecting a cautious but optimistic approach to balancing AI innovation with public safety. A touchy subject, to say the least.
https://ethanbholland.com/2024/07/27/ai-news-43-week-ending-07-26-2024-with-executive-summary-top-97-links-and-helpful-visuals/
FLASHBACK AUGUST 2024: California’s AI Regulation Bill Weakened After Amendments from Anthropic and Industry Pressure
California’s bill aimed at regulating AI to prevent large-scale disasters (SB 1047) has softened after pushback from AI firms, including Anthropic. For example, the Attorney General can no longer sue companies for unsafe practices before a catastrophe, and AI developers are required only to submit safety statements instead of legally binding certifications. The bill eliminates plans for a new government agency but expands the Board of Frontier Models’ role within the existing Government Operations Agency. Despite these changes, the bill still holds AI developers liable for catastrophic damages. The revised bill will face a final vote in California’s Assembly before potentially reaching Governor Newsom’s desk for approval. Here’s a summary of Anthropic’s key points:
- Pros: encourages safety protocols, deters downstream harm through liability clarification, advances the science of AI risk reduction.
- Cons: pre-harm enforcement through auditing creates potential overreach, the attorney general can enforce the bill before harm occurs (more potential overreach), and notice periods for incident reports are too short.
- Suggestions: include flexibility and avoid being overly prescriptive, prioritize preventing catastrophe, avoid restrictions that inadvertently hinder safety.
https://ethanbholland.com/2024/08/24/ai-news-47-week-ending-08-23-2024-with-executive-summary-top-xxx-links-and-helpful-visuals/
FLASHBACK SEPTEMBER 2024:
California Gov. Newsom vetoes AI safety bill that divided Silicon Valley
https://www.npr.org/2024/09/20/nx-s1-5119792/newsom-ai-bill-california-sb1047-tech
This new bill led by Anthropic’s suggestions is positioned as a more balanced approach than these previous ones.
The new bill requires frontier model developers to publicly disclose their safety protocols, but it also exempts any company with revenue below $500 million, focusing only on the largest frontier companies.
Dario pushes back on the idea that he’s trying to slow down the AI startup ecosystem and goes further on offense, saying that the biggest risk to American AI leadership is that the United States is giving China access to U.S.-made chips that China can’t manufacture itself.
Dario says Anthropic is the only AI company that restricts selling AI services to China-controlled companies, and that the company has lost significant short-term revenue in an effort to avoid providing powerful AI platforms that could benefit China’s military and intelligence services.
Last, Amodei cites benchmarks that show Anthropic’s models are less politically biased than most other models.
https://www.anthropic.com/news/statement-dario-amodei-american-ai-leadership
SAG-AFTRA, OpenAI, Bryan Cranston Collaborate to Ensure Voice and Likeness Protections in Sora 2
“Actor Bryan Cranston’s voice and likeness were able to be generated in some outputs without consent or compensation when OpenAI’s Sora 2 was initially launched in an invite-only release two weeks ago. While from the start it was OpenAI’s policy to require opt-in for the use of voice and likeness, OpenAI expressed regret for these unintentional generations. OpenAI has strengthened guardrails around replication of voice and likeness when individuals do not opt-in. ”
https://www.sagaftra.org/sag-aftra-openai-bryan-cranston-collaborate-ensure-voice-and-likeness-protections-sora-2
TechCrunch: “Netflix goes ‘all in’ on generative AI as entertainment industry remains divided”
“In its quarterly earnings report released on Tuesday afternoon, Netflix wrote in its letter to investors that it is “very well positioned to effectively leverage ongoing advances in AI.”
Netflix isn’t planning to use generative AI as the backbone of its content but believes the technology has potential as a tool to make creatives more efficient.
“‘It takes a great artist to make something great,’ Netflix CEO Ted Sarandos said on Tuesday’s earnings call. ‘AI can give creatives better tools to enhance the overall TV/movie experience for our members, but it doesn’t automatically make you a great storyteller if you’re not.’” https://techcrunch.com/2025/10/21/netflix-goes-all-in-on-generative-ai-as-entertainment-industry-remains-divided/
Google Maps Integration with the Gemini API
Google announced a Google Maps integration with the Gemini API, which allows you to connect maps and search together in a conversational experience. There’s a web demo, as well as an interactive demo inside Google AI Studio, that’s worth checking out.
https://blog.google/technology/developers/grounding-google-maps-gemini-api/
https://aistudio.google.com/apps/bundled/chat_with_maps_live?showPreview=true&showAssistant=true
OpenAI
OpenAI has an army of ex-investment bankers making financial models to train ChatGPT
“Bloomberg is reporting that OpenAI is beefing up ChatGPT’s financial chops to target the deep pockets of the banking industry. According to the report, “Project Mercury” has lined up over 100 former investment bankers getting paid $150 an hour to help teach OpenAI’s models how to do the grueling work of junior bankers, including tweaking PowerPoint slides and building financial models in Microsoft Excel.”
https://sherwood.news/tech/openai-has-an-army-of-ex-investment-bankers-making-financial-models-to-train/ https://www.bloomberg.com/news/articles/2025-10-21/openai-looks-to-replace-the-drudgery-of-junior-bankers-workload
Qwen
Computer Use Through Watching Videos At Scale
“It makes perfect sense to let agents understand, imitate, and learn how humans use computers from videos! We present VideoAgentTrek, which builds strong computer-use agents through video pretraining and agentic tuning. This approach has already proven effective in the training of Qwen3-VL.”
VideoAgentTrek: Computer Use Pretraining from Unlabeled Videos https://videoagenttrek.github.io/
Airbnb CEO Brian Chesky Is Bullish on Qwen
“We’re relying a lot on Alibaba’s Qwen model. It’s very good. It’s also fast and cheap… We use OpenAI’s latest models, but we typically don’t use them that much in production because there are faster and cheaper models.” https://x.com/natolambert/status/1980657338726887662
Robotics
Figure Working on Real-Time Speech-to-Speech
“We’re building real-time speech-to-speech, which will be the default UI between humans and robots. F.03 has a 4x more powerful speaker with an improved microphone for performance and clarity”
“I think we’ll be able to do general-purpose work with a humanoid by just thru speech and have it do everything you want it to do in unseen places, like a home it’s never been in, next year.” https://x.com/TheHumanoidHub/status/1978865452777423114
China Unveils Grenade Drone (slightly older story)
“It can carry grenades, fold into a backpack, and integrate with standard infantry gear…High-lift propellers for heavy payloads…Foldable arms for portability…Swarm compatibility and remote targeting support… This is not a prototype or a one-off experiment. It is part of a mass-production initiative for light combat drones.” https://x.com/IlirAliu_/status/1980673433546682574
Amazon
Amazon Creating Special AR/VR Glasses for Delivery Drivers
Amazon is developing smart glasses allowing delivery drivers to work hands-free https://www.aboutamazon.com/news/transportation/smart-glasses-amazon-delivery-drivers
Podcasts and Media
Andrej Karpathy on Dwarkesh Podcast: “We’re summoning ghosts, not building animals”
Andrej Karpathy was recently on the Dwarkesh podcast, in part as a response to the recent Richard Sutton interview, which was also spectacular. Andrej is so important to me that I’m going to include his own notes from the appearance here in my summary, because I think he’s one of the few people who is simply worth paying attention to whenever he speaks.
Karpathy’s initial response: https://karpathy.bearblog.dev/animals-vs-ghosts/
Deep dive of the two: https://ethanbholland.com/2025/10/04/ai-news-105-week-ending-october-03-2025-with-65-executive-summaries-top-20-links-and-2-helpful-visuals/
Dwarkesh Summary of Karpathy:
Culture:
> “Why can’t an LLM write a book for the other LLMs? Why can’t other LLMs read this LLM’s book and be inspired by it, or shocked by it?”
Self-play:
> “It’s extremely powerful. Evolution has a lot of competition driving intelligence and evolution. AlphaGo is playing against itself and that’s how it learns to get really good at Go. There’s no equivalent of self-play in LLMs. Why can’t an LLM, for example, create a bunch of problems that another LLM is learning to solve? Then the LLM is always trying to serve more and more difficult problems.”
I asked Karpathy why LLMs still aren’t yet able to build up culture the way humans do.
> “The dumber models remarkably resemble a kindergarten student. [The smartest models still feel like] elementary school students though. Somehow, we still haven’t graduated enough where [these models] can take over. My Claude Code or Codex, they still feel like this elementary-grade student. I know that they can take PhD quizzes, but they still cognitively feel like a kindergarten.”
> “I don’t think they can create culture because they’re still kids. They’re savant kids. They have perfect memory. They can convincingly create all kinds of slop that looks really good. But I still think they don’t really know what they’re doing. They don’t really have the cognition across all these little checkboxes that we still have to collect.” https://x.com/dwarkesh_sp/status/1979259041013731752
Karpathy on Karpathy:
My pleasure to come on Dwarkesh last week, I thought the questions and conversation were really good.
I re-watched the pod just now too. First of all, yes I know, and I’m sorry that I speak so fast :). It’s to my detriment because sometimes my speaking thread out-executes my thinking thread, so I think I botched a few explanations due to that, and sometimes I was also nervous that I’m going too much on a tangent or too deep into something relatively spurious. Anyway, a few notes/pointers:
AGI timelines. My comments on AGI timelines look to be the most trending part of the early response. The “decade of agents” is a reference to this earlier tweet https://x.com/karpathy/status/1882544526033924438 Basically my AI timelines are about 5-10X pessimistic w.r.t. what you’ll find in your neighborhood SF AI house party or on your twitter timeline, but still quite optimistic w.r.t. a rising tide of AI deniers and skeptics. The apparent conflict is not a real one: imo we simultaneously 1) saw a huge amount of progress in recent years with LLMs while 2) there is still a lot of work remaining (grunt work, integration work, sensors and actuators to the physical world, societal work, safety and security work (jailbreaks, poisoning, etc.)) and also research to get done before we have an entity that you’d prefer to hire over a person for an arbitrary job in the world. I think that overall, 10 years should otherwise be a very bullish timeline for AGI; it’s only in contrast to present hype that it doesn’t feel that way.
Animals vs Ghosts. My earlier writeup on Sutton’s podcast https://x.com/karpathy/status/1973435013875314729 . I am suspicious that there is a single simple algorithm you can let loose on the world that learns everything from scratch. If someone builds such a thing, I will be wrong and it will be the most incredible breakthrough in AI. In my mind, animals are not an example of this at all – they are prepackaged with a ton of intelligence by evolution and the learning they do is quite minimal overall (example: a Zebra at birth). Putting our engineering hats on, we’re not going to redo evolution. But with LLMs we have stumbled on an alternative approach to “prepackage” a ton of intelligence in a neural network – not by evolution, but by predicting the next token over the internet. This approach leads to a different kind of entity in the intelligence space. Distinct from animals, more like ghosts or spirits. But we can (and should) make them more animal-like over time, and in some ways that’s what a lot of frontier work is about.
On RL. I’ve critiqued RL a few times already, e.g. https://x.com/karpathy/status/1944435412489171119 . First, you’re “sucking supervision through a straw”, so I think the signal/flop is very bad. RL is also very noisy because a completion might have lots of errors that get encouraged (if you happen to stumble into the right answer), and conversely brilliant insight tokens that get discouraged (if you happen to screw up later). Process supervision and LLM judges have issues too. I think we’ll see alternative learning paradigms. I am long “agentic interaction” but short “reinforcement learning” https://x.com/karpathy/status/1960803117689397543. I’ve seen a number of papers pop up recently that are imo barking up the right tree along the lines of what I called “system prompt learning” https://x.com/karpathy/status/1921368644069765486 , but I think there is also a gap between ideas on arxiv and actual, at-scale implementation at an LLM frontier lab that works in a general way. I am overall quite optimistic that we’ll see good progress on this dimension of remaining work quite soon, and e.g. I’d even say ChatGPT memory and so on are primordial deployed examples of new learning paradigms.
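Karpathy’s “sucking supervision through a straw” point can be made concrete with a toy sketch: in outcome-based RL, one scalar reward for the whole completion is broadcast to every token, so the errors in a lucky rollout get reinforced right along with the good steps; process supervision scores each step on its own merits. The data and function names below are invented for illustration — this is not any lab’s training code.

```python
# Toy contrast: outcome-based reward vs process supervision.
# A "rollout" is a list of per-step correctness flags (hypothetical data).

def outcome_advantages(steps_correct, final_correct):
    # Outcome RL: one scalar for the whole trajectory, broadcast to all steps.
    reward = 1.0 if final_correct else -1.0
    return [reward] * len(steps_correct)

def process_advantages(steps_correct, final_correct):
    # Process supervision: a per-step judge rewards each step individually.
    return [1.0 if ok else -1.0 for ok in steps_correct]

# A "lucky" rollout: two wrong intermediate steps, but a correct final answer.
steps = [True, False, False, True]

# Outcome RL reinforces every step, including the two errors: [1.0, 1.0, 1.0, 1.0]
print(outcome_advantages(steps, final_correct=True))

# Process supervision penalizes exactly the bad steps: [1.0, -1.0, -1.0, 1.0]
print(process_advantages(steps, final_correct=True))
```

The broadcast in `outcome_advantages` is the “straw”: however long the rollout, only one bit-ish of supervision flows back, which is why Karpathy calls the signal/flop so poor.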
Cognitive core. My earlier post on “cognitive core”: https://x.com/karpathy/status/1938626382248149433 , the idea of stripping down LLMs, of making it harder for them to memorize, or actively stripping away their memory, to make them better at generalization. Otherwise they lean too hard on what they’ve memorized. Humans can’t memorize so easily, which now looks more like a feature than a bug by contrast. Maybe the inability to memorize is a kind of regularization. Also my post from a while back on how the trend in model size is “backwards” and why “the models have to first get larger before they can get smaller” https://x.com/karpathy/status/1814038096218083497
Time travel to Yann LeCun 1989. This is the post that I did a very hasty/bad job of describing on the pod: https://x.com/karpathy/status/1503394811188973569 . Basically – how much could you improve Yann LeCun’s results with the knowledge of 33 years of algorithmic progress? How constrained were the results by each of algorithms, data, and compute? It reads as a case study in disentangling those constraints.
nanochat. My end-to-end implementation of the ChatGPT training/inference pipeline (the bare essentials) https://x.com/karpathy/status/1977755427569111362
On LLM agents. My critique of the industry is more in overshooting the tooling w.r.t. present capability. I live in what I view as an intermediate world where I want to collaborate with LLMs and where our pros/cons are matched up. The industry lives in a future where fully autonomous entities collaborate in parallel to write all the code and humans are useless. For example, I don’t want an Agent that goes off for 20 minutes and comes back with 1,000 lines of code. I certainly don’t feel ready to supervise a team of 10 of them. I’d like to go in chunks that I can keep in my head, where an LLM explains the code that it is writing. I’d like it to prove to me that what it did is correct; I want it to pull the API docs and show me that it used things correctly. I want it to make fewer assumptions and ask/collaborate with me when not sure about something. I want to learn along the way and become better as a programmer, not just get served mountains of code that I’m told works. I just think the tools should be more realistic w.r.t. their capability and how they fit into the industry today, and I fear that if this isn’t done well we might end up with mountains of slop accumulating across software, and an increase in vulnerabilities, security breaches, etc. https://x.com/karpathy/status/1915581920022585597
Job automation. How the radiologists are doing great https://x.com/karpathy/status/1971220449515516391 and what jobs are more susceptible to automation and why.
Physics. Children should learn physics in early education not because they go on to do physics, but because it is the subject that best boots up a brain. Physicists are the intellectual embryonic stem cell https://x.com/karpathy/status/1929699637063307286 I have a longer post that has been half-written in my drafts for about a year, which I hope to finish soon. https://x.com/karpathy/status/1979644538185752935
This Week’s Humanities Reading
In memory and honor of Daniel Naroditsky, this week’s reading is Chess by Jorge Luis Borges:
Chess
I
In their serious corner the players rule their slow pieces. The board delays them till dawn in their strict ambit, where two colors hate each other.
Within, magical severities infuse the figures: Homeric tower, light horse, armed queen, last king, oblique bishop and assailant pawns.
When the players have gone, when time has eaten them, the rite has certainly not stopped.
This war was lit in the East, whose amphitheater today is all the world. And as the other, this game is infinite.
II
Weak king, biased bishop, embittered queen, straight tower and wily pawn, over the black and white of the road they seek and wage armed battle.
They do not know that the appointed hand of the player governs their fate, they do not know that an adamantine rigor subjects their will and their journey.
The player too is prisoner (the sentence is Omar’s) of that other board, the black nights and the white days.
God moves the player and the player moves the piece. What God behind God began the weaving of dust and time and dream and the throes of death?
Full Executive Summaries with Links, Generated by Claude Sonnet 4.5
OpenAI launches ChatGPT Atlas browser to challenge Google Chrome
OpenAI released Atlas, an AI-powered web browser that integrates ChatGPT directly into browsing and can remember user activity to provide contextual assistance. The move directly challenges Google Chrome’s dominance by potentially capturing user data that currently fuels Google’s advertising business, with Atlas featuring an AI agent mode that can complete tasks automatically while users browse. This represents OpenAI’s shift from chatbot maker to operating system competitor, targeting Chrome’s 3+ billion users with ChatGPT’s 500 million weekly active user base.
1/ The browser just changed. Meet ChatGPT Atlas – a new way to bring ChatGPT wherever you go online. Rolling out today on macOS: https://x.com/nickaturley/status/1980694337643315475
Agent mode in Atlas completes tasks faster as you browse the web. Available in preview for Plus, Pro, and Business users. https://x.com/OpenAI/status/1980685612538822814
Atlas is an early experience, and we’ll be listening closely to your feedback to guide what comes next. Rolling out today to everyone on MacOS. Windows, iOS, and Android are coming soon. https://x.com/OpenAI/status/1980685615340614032
ChatGPT Atlas can remember what you’ve searched, visited, and asked about — giving ChatGPT better context for more accurate answers. You can also ask it to open, close, or revisit any of your tabs anytime. https://x.com/OpenAI/status/1981782134655520991
ChatGPT Atlas https://chatgpt.com/atlas
ChatGPT Atlas is here! Our new browser has ChatGPT built in so it can help you across the web and, if you want, remember what you’ve done online and use that context for future requests. More of my thoughts on why we built this here: https://x.com/fidjissimo/status/1980682244185608392
Exclusive: OpenAI to release web browser in challenge to Google Chrome | Reuters https://www.reuters.com/business/media-telecom/openai-release-web-browser-challenge-google-chrome-2025-07-09/
Introducing ChatGPT Atlas | OpenAI https://openai.com/index/introducing-chatgpt-atlas/
Meet our new browser—ChatGPT Atlas. Available today on macOS: https://x.com/OpenAI/status/1980685602384441368
Open “Ask ChatGPT” and ChatGPT can see the page you’re on to give instant, accurate answers—no tab-switching required. https://x.com/OpenAI/status/1981098271901962439
Our new AI-first web browser, ChatGPT Atlas, is here for macOS. Please send feedback! Availability on other platforms to follow. https://x.com/sama/status/1980690768391201180
The AI browser war has officially begun with OpenAI’s release of Atlas. Chrome: ~4B users. ChatGPT: 800M weekly. Every AI lab wants to own the interface, to be the default intelligence. The browser is basically an operating system. The shift from “chatbot” to “OS” is here. https://x.com/Yuchenj_UW/status/1980685683707842974
the chatgpt browser, Atlas, is here: https://x.com/gdb/status/1980700967030124730
Yesterday we launched ChatGPT Atlas, our new web browser. In Atlas, ChatGPT agent can get things done for you. We’re excited to see how this feature makes work and day-to-day life more efficient and effective for people. ChatGPT agent is powerful and helpful, and designed to be… https://x.com/cryps1s/status/1981037851279278414
You can also use incognito mode when you don’t want ChatGPT to remember what you are doing in the browser. https://x.com/omarsar0/status/1980688230904144086
Google’s Willow chip achieves first verifiable quantum computing breakthrough
Google’s quantum computer ran the “Quantum Echoes” algorithm 13,000 times faster than classical supercomputers, marking the first time any quantum system has delivered verifiable, repeatable results that surpass traditional computing. The breakthrough enables precise measurement of molecular structures for drug discovery and materials science, moving quantum computing from theoretical demonstrations to practical applications.
New breakthrough quantum algorithm published in @Nature today: Our Willow chip has achieved the first-ever verifiable quantum advantage. Willow ran the algorithm – which we’ve named Quantum Echoes – 13,000x faster than the best classical algorithm on one of the world’s fastest… https://x.com/sundarpichai/status/1981013746698100811
The Quantum Echoes algorithm breakthrough https://blog.google/technology/research/quantum-echoes-willow-verifiable-quantum-advantage/
Today in @Nature, we published a breakthrough demonstration of verifiable quantum advantage using a measurement known as out-of-time-order correlator (OTOC), or Quantum Echoes. Performed on our Willow chip, it paves a path toward real-world applications → https://x.com/GoogleQuantumAI/status/1981016219340648778
Meta cuts 600 AI jobs while protecting expensive new hires
Meta laid off 600 employees from its AI division while sparing workers at TBD Labs, the unit led by newly hired chief AI officer Alexandr Wang. The cuts highlight CEO Mark Zuckerberg’s bet on expensive external talent over legacy employees, as the company streamlines what insiders called a “bloated” AI operation. This restructuring follows Meta’s $14.3 billion investment in Scale AI and comes as the company pours billions into AI infrastructure to compete with OpenAI and Google.
Meta lays off 600 from ‘bloated’ AI unit as Wang cements leadership https://www.cnbc.com/2025/10/22/meta-layoffs-ai.html
Meta’s Alexandr Wang reorgs superintelligence lab https://www.axios.com/2025/10/22/meta-superintelligence-tbd-ai-reorg
Anthropic secures massive Google Cloud deal worth tens of billions
Anthropic will access up to one million Google TPUs in a deal worth tens of billions, bringing over a gigawatt of computing power online in 2026. This represents one of the largest cloud computing agreements in AI history, reflecting Anthropic’s explosive growth to 300,000 business customers and the massive infrastructure needed to train frontier AI models. The deal helps Anthropic diversify beyond Amazon while maintaining partnerships across multiple chip providers.
Anthropic, Google in Talks on Multibillion-Dollar Cloud Deal – Bloomberg https://www.bloomberg.com/news/articles/2025-10-21/anthropic-google-in-talks-on-cloud-deal-worth-tens-of-billions
Expanding our use of Google Cloud TPUs and Services \ Anthropic https://www.anthropic.com/news/expanding-our-use-of-google-cloud-tpus-and-services
The expansion, worth tens of billions of dollars, dramatically increases our compute resources as we push the boundaries of AI research and product development. Read more: https://x.com/AnthropicAI/status/1981460119742533848
Today, we announced that we plan to expand our use of Google TPUs, securing approximately one million TPUs and more than a gigawatt of capacity in 2026. https://x.com/AnthropicAI/status/1981460118354219180
AI systems can now analyze video content and identify emotions directly
Major AI models like Gemini can answer questions about visual elements in videos that aren’t captured in transcripts, including reading facial expressions and body language. This capability remains surprisingly underutilized despite opening possibilities for automated content moderation, accessibility tools, and video analysis applications across industries.
I am continually surprised about how few applications take advantage of the fact that AI systems can work with video. For example, I can ask Gemini questions about what happens in a video (and not mentioned in a transcript) and get coherent answers including identifying emotion https://x.com/emollick/status/1980695990790418889
Google’s Veo adds precise video editing with object insertion and removal
Google enhanced its AI video generator with surgical editing tools that can add or remove specific objects while keeping the rest of the video unchanged. This addresses a key limitation of AI video tools, which typically require generating entirely new clips for any modifications. The precision editing capability could make AI video creation more practical for professional workflows where iterative changes are essential.
Veo is getting new precision editing capabilities that let you easily add or remove elements from a scene – all while preserving the integrity of your original video. 🎥 https://x.com/GoogleDeepMind/status/1980261047836508213
Google’s Veo 3.1 generates videos simulating complex physics scenarios
The AI video model can create realistic footage of different materials behaving distinctly in water—iron sinking, wood floating, sugar dissolving—suggesting these systems understand physical properties beyond simple visual patterns. This demonstrates that video AI may be developing rudimentary physics reasoning, not just copying existing footage, though the simulations remain imperfect.
AI video models may not be complete world models, but they are oddly capable of fairly sophisticated (if flawed) “simulations” of novel situations. Veo 3.1: “three toy ships, one made of iron, the other of wood, and one out of loosely packed sugar, fall into a pool of water” https://x.com/emollick/status/1980126684306424155
AI video models now generate complex physics interactions with surprising accuracy
OpenAI’s latest video generation model demonstrates unprecedented understanding of physical dynamics, accurately simulating how honey would flow down a marble statue and cause realistic cracking effects. This represents a major leap beyond previous AI video tools that struggled with basic physics, suggesting these systems are developing intuitive understanding of how materials interact in the real world rather than just copying visual patterns.
This is not perfect, but it also doesn’t seem like a model trained on video should be able to get so many details of the dynamics right: “honey pours down a marble statue of a toucan, the nose cracks and falls off” https://x.com/emollick/status/1980128284294938661
Microsoft launches AI-powered Windows 11 with voice control and screen vision
Microsoft’s Copilot Fall Release transforms Windows 11 PCs into AI-powered computers that respond to “Hey Copilot” voice commands and can see what’s on your screen to provide guided assistance. The update introduces collaborative AI features like shared group chats, personalized memory that recalls your preferences across conversations, and AI agents that can take actions like booking hotels or creating websites from local files. This represents a shift from traditional point-and-click computing to conversational interaction, with Microsoft positioning it as making every Windows 11 device an “AI PC” rather than requiring specialized hardware.
(7) Copilot on Windows 11 | Meet the Computer You Can Talk To – YouTube https://www.youtube.com/watch?v=7Nbf1fqxcCM
All of today’s @Copilot announcements boil down to one core idea: we’re betting on humanist AI. An AI that always puts humans first. – Copilot Groups – AI browser – our new character Mico – memory updates – Copilot for health + more in this morning’s event https://x.com/mustafasuleyman/status/1981390345578697199
Human-centered AI | Microsoft Copilot Blog https://www.microsoft.com/en-us/microsoft-copilot/blog/2025/10/23/human-centered-ai/
Making every Windows 11 PC an AI PC | Windows Experience Blog https://blogs.windows.com/windowsexperience/2025/10/16/making-every-windows-11-pc-an-ai-pc/
This is Microsoft’s new agentic Copilot feature for Windows 11 in action. It will take your task and complete it in a separate desktop environment, and you can watch it while it works or minimize and get on with your own task. https://x.com/zacbowden/status/1978822883217461388
Today, we’re one step closer to AI as an operating system. A computer you can talk to, that can see what you see, and take action – all with your permission, all more intuitive than ever. Vision now GA globally + more on today’s @Windows blog: https://x.com/mustafasuleyman/status/1978808627008847997
Until now, browsers have required you to do all the work—type, click, juggle tabs. Today, Copilot Mode in Edge, your AI browser, changes that. We’re taking the next step in browsing by introducing even more AI innovation—so your browser can anticipate, assist, and accelerate what… https://x.com/yusuf_i_mehdi/status/1981426387958583717
Anthropic launches Claude Code on the web for cloud-based programming tasks
Anthropic released Claude Code on the web, allowing developers to assign coding tasks that run on cloud infrastructure instead of local machines. The service enables parallel task execution across multiple GitHub repositories, automatic pull request creation, and mobile coding through iOS, targeting routine fixes and backend development where isolated cloud environments provide security advantages over local execution.
Claude Code on the web \ Anthropic https://www.anthropic.com/news/claude-code-on-the-web
We’re so excited to launch Claude Code on the web and iOS app today! This has become a daily driver for many of us on the Claude Code team. Here are a few of our favorite ways to use it: https://x.com/_catwu/status/1980338889958257106
Claude models dominate coding benchmark with 40% success rate
Anthropic’s Claude AI swept the top three positions on SWE-Bench Pro, a challenging test where AI systems must solve real software engineering problems from GitHub repositories. The 40% pass rate represents a significant milestone for AI coding capabilities, suggesting these models can now successfully complete two in five of the authentic programming tasks that human developers face in practice.
We launched SWE-Bench Pro last month to incredible feedback, and we’ve now updated the leaderboard with the latest models and no cost caps. SoTA models now break 40% pass rate. Congrats to @Anthropic for sweeping the top spots! 🥇Claude 4.5 Sonnet 🥈Claude 4 Sonnet 🥉Claude 4.5 https://x.com/scale_AI/status/1980685992987431368
Zavo launches AI-powered point of sale system for restaurants
Y Combinator startup Zavo combines payments, point-of-sale, and AI agents into one platform for restaurants and retail businesses. The system includes AI agents that handle finance, operations, and customer management tasks automatically. Over 400 businesses already use the platform, which aims to reduce the complexity of managing multiple disconnected business tools.
Zavo (@zavopay) is building the first agentic point of sale for restaurants and retail. Payments, POS, and AI agents in one platform to build the future of autonomous commerce. Over 400 businesses already use Zavo to accept payments and manage operations. https://www.ycombinator.com/launches/OcF-zavo-agentic-point-of-sale-for-restaurants-and-retail?x=84
NVIDIA produces first advanced AI chip wafers on US soil
NVIDIA manufactured its first Blackwell AI chip wafers at TSMC’s Arizona facility, marking a historic shift toward domestic production of America’s most critical semiconductors. This milestone reduces dependence on overseas manufacturing amid rising geopolitical tensions and tariffs. The Blackwell chips promise 25x better cost and energy efficiency than previous generations, with major tech companies like Amazon and Google already committed to adopting the architecture.
NVIDIA shows off its first Blackwell wafer manufactured in the US https://www.engadget.com/big-tech/nvidia-shows-off-its-first-blackwell-wafer-manufactured-in-the-us-192836249.html
AI recreates legendary 1941 literature course in four prompts
A researcher used AI to transform W.H. Auden’s notoriously difficult 6,000-page humanities syllabus into a fully annotated website, completing in minutes what would have taken hours of manual work. This demonstrates AI’s growing capability to handle complex academic content organization and annotation tasks that previously required extensive human expertise and time investment.
I took the surviving syllabus of W. H. Auden’s 1941 “Hardest Class in the Humanities” (6,000 pages of reading, memorization of poems, etc.) & turned it into an annotated site with all the readings. (Would have taken hours, instead it was 4 prompts) Here: https://x.com/emollick/status/1979689783485161728
Anthropic CEO defends company’s AI policies amid political criticism
Dario Amodei responded to “inaccurate claims” about Anthropic’s positions, emphasizing the company’s $200 million defense contract and support for Trump’s AI initiatives. The statement comes as Anthropic faces scrutiny over supporting California’s AI safety bill SB 53, which the company argues only affects the largest AI developers while protecting startups. Independent studies show Anthropic’s models are less politically biased than competitors, contradicting claims of partisan leanings.
A statement from Dario Amodei on Anthropic’s commitment to American AI leadership \ Anthropic https://www.anthropic.com/news/statement-dario-amodei-american-ai-leadership
SAG-AFTRA strikes first major AI voice protection deal with OpenAI
The actors’ union partnered with OpenAI to create safeguards for performers’ voices and likenesses in the upcoming Sora 2 video generator, marking the first comprehensive agreement between Hollywood talent and a major AI company. This deal could set the template for how the entertainment industry navigates AI’s ability to replicate human performances, with Bryan Cranston serving as a prominent advocate for the protections.
SAG-AFTRA, OpenAI, Bryan Cranston Collaborate to Ensure Voice and Likeness Protections in Sora 2 | SAG-AFTRA https://www.sagaftra.org/sag-aftra-openai-bryan-cranston-collaborate-ensure-voice-and-likeness-protections-sora-2
Netflix declares it’s ‘all in’ on AI tools for filmmakers
Netflix is positioning itself as Hollywood’s AI pioneer by using generative AI for specific production tasks like building collapses and actor de-aging, rather than replacing human creativity entirely. This matters because Netflix’s approach—AI as enhancement tool rather than replacement—could set the industry standard while other studios remain cautious about job displacement concerns. The company has already deployed AI in shows like “The Eternaut” and “Happy Gilmore 2,” signaling a pragmatic path forward amid broader Hollywood resistance to the technology.
Netflix goes ‘all in’ on generative AI as entertainment industry remains divided | TechCrunch https://techcrunch.com/2025/10/21/netflix-goes-all-in-on-generative-ai-as-entertainment-industry-remains-divided/
Netflix is betting that AI augmentation beats AI replacement, and they might be fighting the last war. There’s basically two ways generative AI eats Hollywood: 1. The Netflix path – slip AI tools into the existing pipeline. Make VFX 10x faster, de-age actors without breaking https://x.com/bilawalsidhu/status/1980903106943885694
Google Maps data now powers Gemini API for location-aware AI apps
Developers can now connect Gemini’s reasoning with real-time data from 250 million places through Google’s new Maps grounding tool. This enables AI applications to provide detailed travel itineraries, hyper-local recommendations, and place-specific answers using current business hours, reviews, and location data. The feature works alongside Google Search grounding to create more contextually aware responses than either tool alone.
Grounding for Google Maps now available in the Gemini API https://blog.google/technology/developers/grounding-google-maps-gemini-api/
Introducing grounding with Google Maps in the Gemini API, bringing data about 250 million places and Gemini together to create all new experiences 🗺️! So powerful to connect things like maps + search together in a single experience : ) https://x.com/OfficialLoganK/status/1979286216953733227
OpenAI trains ChatGPT to replace junior investment banker tasks
OpenAI’s “Project Mercury” employs over 100 former investment bankers at $150/hour to teach AI models financial modeling and PowerPoint creation, targeting the lucrative banking industry as consumer subscriptions plateau at 70% of its $13 billion revenue. This marks a strategic shift toward high-value B2B applications that could automate entry-level finance work. The initiative suggests OpenAI sees specialized professional services as its next major growth opportunity beyond general consumer AI assistance.
OpenAI Looks to Replace the Drudgery of Junior Bankers’ Workload – Bloomberg https://www.bloomberg.com/news/articles/2025-10-21/openai-looks-to-replace-the-drudgery-of-junior-bankers-workload
OpenAI has an army of ex-investment bankers making financial models to train ChatGPT – Sherwood News https://sherwood.news/tech/openai-has-an-army-of-ex-investment-bankers-making-financial-models-to-train/
Andrej Karpathy argues LLMs lack cultural accumulation and self-improvement loops
Former OpenAI researcher Karpathy told Dwarkesh Patel that current AI systems can’t write books for other AIs to read and build upon, unlike humans who developed culture and knowledge transfer. He calls this a fundamental limitation preventing true agent capabilities, estimating we’re still a decade away from AGI despite rapid progress in coding assistants and other narrow applications.
.@karpathy says that LLMs currently lack the cultural accumulation and self-play that propelled humans out of the savannah: Culture: > “Why can’t an LLM write a book for the other LLMs? Why can’t other LLMs read this LLM’s book and be inspired by it, or shocked by it?” Self-play… https://x.com/dwarkesh_sp/status/1980333945385562176
Andrej Karpathy — “We’re summoning ghosts, not building animals” – YouTube https://www.youtube.com/watch?v=lXUZvyajciY
My pleasure to come on Dwarkesh last week, I thought the questions and conversation were really good. I re-watched the pod just now too. First of all, yes I know, and I’m sorry that I speak so fast :). It’s to my detriment because sometimes my speaking thread out-executes my… https://x.com/karpathy/status/1979644538185752935
On Dwarkesh Patel’s Podcast With Andrej Karpathy https://thezvi.substack.com/p/on-dwarkesh-patels-podcast-with-andrej
The most interesting part for me is where @karpathy describes why LLMs aren’t able to learn like humans. As you would expect, he comes up with a wonderfully evocative phrase to describe RL: “sucking supervision bits through a straw.” A single end reward gets broadcast across https://x.com/dwarkesh_sp/status/1979259041013731752
AI agents learn computer skills by watching YouTube tutorials
Researchers created VideoAgentTrek, which trains AI agents to use computers by automatically extracting 1.52 million interaction steps from 39,000 unlabeled YouTube tutorial videos. The system converts raw screen recordings into structured action sequences without human annotation, then uses a two-stage training process to teach agents to navigate real applications and operating systems. This approach achieved 15.8% success on OSWorld benchmarks, demonstrating that agents can learn complex computer tasks from the same tutorial videos humans watch.
It makes perfect sense to let agents understand, imitate, and learn how humans use computers from videos! We present VideoAgentTrek, which builds strong computer-use agents through video pretraining and agentic tuning. This approach has already proven effective in the training of… https://x.com/huybery/status/1981728838024560669
VideoAgentTrek: Computer Use Pretraining from Unlabeled Videos https://videoagenttrek.github.io/
Airbnb CEO says Chinese AI model Qwen beats OpenAI in production
Brian Chesky revealed Airbnb relies heavily on Alibaba’s Qwen model over OpenAI because it’s faster and cheaper for real-world applications. This signals a major shift as even Silicon Valley companies increasingly choose Chinese AI models for cost-effective deployment, challenging OpenAI’s dominance in enterprise markets.
Airbnb CEO Brian Chesky: “We’re relying a lot on Alibaba’s Qwen model. It’s very good. It’s also fast and cheap… We use OpenAI’s latest models, but we typically don’t use them that much in production because there are faster and cheaper models.” The valley is built on Qwen? https://x.com/natolambert/status/1980657338726887662
Figure CEO claims humanoid robots will handle general tasks through speech alone by next year
Figure’s CEO Brett Adcock boldly predicts their humanoid robots will perform any household task in unfamiliar environments using only voice commands within 12 months, while claiming a 1-2 year lead over competitors. The company is developing real-time speech-to-speech communication as the primary interface and has upgraded their F.03 model with 4x more powerful speakers and better microphones. If achieved, this would represent a major leap from today’s limited, pre-programmed robotic assistants to truly adaptable household helpers.
Figure CEO Brett Adcock: “I think we’ll be able to do general-purpose work with a humanoid by just thru speech and have it do everything you want it to do in unseen places, like a home it’s never been in, next year.” “We’re multiple, 1-2 years, beyond anybody else in the world” / X https://x.com/TheHumanoidHub/status/1978865452777423114
We’re building real-time speech-to-speech, which will be the default UI between humans and robots F.03 has a 4x more powerful speaker with improved microphone for performance and clarity 👇 https://x.com/adcock_brett/status/1980301303172694209
China deploys first mass-produced military drone designed for infantry combat
China has begun manufacturing foldable grenade-carrying drones specifically engineered for battlefield use, marking a shift from repurposed civilian quadcopters to purpose-built military hardware. This represents the first mass-production of combat drones designed from scratch for infantry operations, potentially accelerating the militarization of drone technology globally.
🇨🇳 China just rolled out a mass-produced grenade drone. Foldable props. Infantry-ready. The battlefield just changed. Unlike most repurposed quadcopters, this model is designed from the ground up for combat. It can carry grenades, fold into a backpack, and integrate with https://x.com/IlirAliu_/status/1980673433546682574
Amazon develops smart glasses for delivery drivers using AI navigation and hazard detection
Amazon is testing AI-powered smart glasses that give delivery drivers hands-free access to package scanning, turn-by-turn walking directions, and hazard alerts through a heads-up display. The glasses eliminate the need for drivers to constantly check their phones, potentially improving safety and efficiency for the millions of daily deliveries. Hundreds of drivers helped design the system, which uses computer vision and Amazon’s mapping technology to guide workers from their vehicles to customers’ exact doorsteps.
Amazon is developing smart glasses allowing delivery drivers to work hands-free https://www.aboutamazon.com/news/transportation/smart-glasses-amazon-delivery-drivers
There’s Only One Lonely AI Visual: Week Ending October 24, 2025
Sora 2; “a movie trailer whose genre you can never quite figure out because it keeps drawing on cliches from many different genres through a series of fast cuts, make it over the top, featuring movie guy voice” https://x.com/emollick/status/1980285871002657026
Top 13 Links of The Week – Organized by Category
Agents/Copilots
Lets assume vibe coding gets good enough soon for non-coders to produce workable tools to solve their problems, though not enterprise-level stuff. What skills should we teach people in class to take advantage of these capabilities? Right now, intro courses aren’t geared for this / X https://x.com/emollick/status/1979627762903392362
Tried the OpenAI browser for 20 minutes. Quit and went back to Chrome. “Agent mode” is slop. Most of the time I just want to yell: “Stop thinking and click that fking button!!” The models are not there yet. We’ve got a whole “decade of AI agents” ahead. / X https://x.com/Yuchenj_UW/status/1980846874904219932
TLDR: OpenAI Atlas > Perplexity Comet in an agent mode head to head. Here is my use case: I have a very real, very tedious use case, which is a manual task that I do every day. 1. I go to the school website to look at each of my daughter’s classes 2. I look at her grades 3. I https://x.com/raizamrtn/status/1980695747227210213
Anthropic
Claude Code subagents are all you need. Some will complain on # of tokens. However, the output this spits out will save you days. The code quality is mindblowing! Agentic search works exceptionally well. The subagents run in parallel. ChatGPT’s deep research is no match! https://x.com/omarsar0/status/1978235329237668214
We launched a sandbox within Claude Code that allows you to define exactly which directories and network hosts your agent can access. Type /sandbox to enable it. https://x.com/trq212/status/1980380866657526047
Google Skills: A new home for building AI skills https://blog.google/outreach-initiatives/education/google-skills/
Google revamps AI Studio with new features for vibe coding https://www.testingcatalog.com/google-revamps-ai-studio-with-new-features-for-vibe-coding/
Media
Andrej Karpathy — AGI is still a decade away https://www.dwarkesh.com/p/andrej-karpathy
The @karpathy interview 0:00:00 – AGI is still a decade away 0:30:33 – LLM cognitive deficits 0:40:53 – RL is terrible 0:50:26 – How do humans learn? 1:07:13 – AGI will blend into 2% GDP growth 1:18:24 – ASI 1:33:38 – Evolution of intelligence & culture 1:43:43 – Why self https://x.com/dwarkesh_sp/status/1979234976777539987
MetaAI
Datasets you need to build an AI JARVIS — Meta dropped 500 hours of 3D motion data spanning everything from individual gestures to multi-person conversations and co-living scenarios, complete with motion tracking, annotations, and audio tracks. https://x.com/bilawalsidhu/status/1980719297669525925
Perplexity
How to get 20,000 visitors per month with AI SEO: 1. Find 1000 keyword variations using Perplexity’s MCP 2. Claude Code builds and generates all pages 3. Each gets ~20 visits/month All automated. All targeted. Zero ad spend. (h/t @boringmarketer) https://x.com/startupideaspod/status/1957499676850270489
Publishing
Uber Giving Some US Drivers Option to Earn Money From Tasks Like Uploading Menus – Bloomberg https://www.bloomberg.com/news/articles/2025-10-16/uber-giving-some-us-drivers-option-to-earn-money-from-tasks-like-uploading-menus
TechPapers
Technological development was slow for 9,288 generations not because past humans were dumb, but prior to books & the scientific method, innovation happened at the level of societies, not people. So tech evolved gradually, not through leaps of genius, but slow cultural adaption https://x.com/emollick/status/1979616432355946961