Prompt from 2023: “it’s halloween in an 80s suburb. a scene from a classic 80s movie. it’s autumn. fall is in full effect. there is a crispness in the air. all of the houses are decorated for halloween. instead of kids trick or treating, computers are dressed for halloween, going from house to house. aspect ratio is 16:9 landscape.”
This is the cover generated by DALL-E 3 two years ago:
And here’s the new image from Gemini:
For comparison, here’s an image from same prompt generated by ChatGPT this year:
For the category covers, I used a rubric (a scripted Python questionnaire) that asked me a few questions. I answered that it was Halloween in the suburbs, in a scene from a classic 80s movie, but gave no guidance regarding the categories other than the name of each category.
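A rubric like this is easy to script. Here's a minimal, hypothetical sketch (the question keys, answers, and prompt template are my own invention, not the actual script): collect the shared scene answers once, then stamp out one image prompt per category name.

```python
# Hypothetical sketch of a category-cover rubric: ask a few scene
# questions once, then build one image prompt per category name.
SCENE_QUESTIONS = {
    "setting": "Where does the scene take place?",
    "era": "What movie era should it evoke?",
}

def build_prompt(category: str, answers: dict) -> str:
    """Combine the shared scene answers with a single category name."""
    return (
        f"{answers['setting']}, in the style of {answers['era']}, "
        f"themed around the category '{category}', cinematic, 16:9"
    )

# Fixed answers stand in for interactive input() prompts.
answers = {"setting": "Halloween in the suburbs",
           "era": "a classic 80s movie"}

for category in ["AUDIO", "IMAGES", "CHIPS"]:
    print(build_prompt(category, answers))
```

In practice the category-specific creativity came from handing each generated stub to Claude, which expanded it into the full prompts below.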
Claude wrote the prompts and was very creative. I’ve put some of my category favorites below, along with the prompt Claude made:
AUDIO: A child wearing an elaborate cardboard cassette tape Halloween costume with spinning reels and tape streamers, standing on a leaf-covered suburban sidewalk at dusk in 1985, jack-o’-lanterns glowing on porches, the cassette label reading ‘AI VOICE MIX 85’, warm autumn colors, cinematic 80s film photography aesthetic

IMAGES: A vintage Polaroid instant camera on an 80s suburban porch railing at dusk, actively ejecting a developing photograph of a costumed trick-or-treater, the image half-formed and ghostly, glowing jack-o-lanterns and autumn leaves in soft focus behind it, warm nostalgic lighting, cinematic composition

CHIPS: Cinematic 80s suburban front porch at Halloween dusk, warm amber lighting, carved pumpkins and fall leaves, close-up of ceramic bowl filled with gleaming silicon wafers and computer chips instead of candy, child’s hand reaching toward bowl, Spielberg-style nostalgic glow, shallow depth of field.

PUBLISHING: A teenage newspaper delivery boy on a red BMX bike rides through a misty 1980s suburban street at dawn, his canvas bag bursting with newspapers showing AI-generated headlines, Halloween decorations and glowing jack-o’-lanterns on porches, autumn leaves scattered across sidewalks, cinematic lighting with orange and purple tones, nostalgic Spielberg-era composition.

LOCALLY HOSTED AI: Cozy 80s suburban basement with wood paneling and orange shag carpet, homemade server rack with blinking LEDs next to carved pumpkins and paper skeleton decorations, children in Halloween costumes peering through basement window from outside at dusk, warm incandescent lighting, Spielberg cinematography, film grain

BYTEDANCE: An 80s suburban Halloween porch scene with warm jack-o’-lantern lighting, a costumed trick-or-treater holding a glowing smartphone displaying TikTok’s interface with viral metrics, an overflowing candy bucket with vintage 80s candy bars, classic aluminum siding house exterior, autumn leaves scattered on wooden steps, nostalgic Spielberg-esque cinematography with practical lighting and film grain.

ETHICS: Split driveway scene at twilight in 1980s suburb, child in Halloween costume at sidewalk fork facing two versions of same house, left side has friendly jack-o-lantern and warm inviting porch, right side has sinister carved pumpkin and scattered candy with broken honor system, autumn leaves, warm nostalgic film grain, ethical choice visualized as diverging suburban paths

BUSINESS: Wide cinematic shot of an 80s American suburban street at golden hour dusk, adults in business suits carrying glowing jack-o’-lantern briefcases walking between Halloween-decorated houses, each home’s windows showing different business symbols like handshakes and dollar signs, autumn leaves swirling, warm orange street lamps, film grain, Spielbergian composition

PODCASTS: Kids in 1980s Halloween costumes being filmed with a large VHS camcorder under a streetlight on a suburban sidewalk covered in autumn leaves, jack-o-lanterns glowing on porches in the background, warm nostalgic lighting with visible red recording light on the camera, cinematic composition reminiscent of 80s Halloween movies.

TWITTER: Wide shot of four kids in 80s Halloween costumes huddled on a suburban lawn at twilight, one holding a walkie-talkie, gesturing and whispering gossip while jack-o’-lanterns glow on porches behind them, cinematic autumn lighting with orange and purple sky, Spielberg-era composition.
This week, I organized 522 links, 54 of which inform the executive summaries. We’ll start with the top stories, then move into updates from the various frontier models and companies, followed by robotics news, a math breakthrough, ethics considerations around audio and safety, and publishing updates from Canva.
This Week’s Top Stories
Google Hits A $100 Billion Quarter “Just posted Q3 earnings. We delivered our first-ever $100B quarter driven by double-digit growth across every major part of our business. (Five years ago, our quarterly revenue was at $50B🚀) Our full-stack approach to AI is driving real momentum and we’re shipping at speed.”
Open Source Only 3.5 Months Behind World’s Best Models If the AI bubble is going to fail, it’s possible an open-source flood is what takes it out.
Most people know OpenAI and Google, and when they think of large language models and AI, they probably think of ChatGPT, Google Gemini, maybe Anthropic’s Claude, possibly Perplexity, and after that most people don’t know any other models. These are all models built by companies that you can often use for free, or you can buy a monthly subscription for a stronger version. These companies are the ones spending billions and billions of dollars to stay one step ahead of each other.
However, there are many more models available that you can download or use for free, and make copies of and modify… aka open source.
Studies are showing that these free, publicly available models lag only about three and a half months behind the big models. That’s a little bit insane, because the big labs are the ones spending billions and billions of dollars on data centers, investments, and chips, and propping up the economy in what might be a bubble. If you’re willing to wait three and a half months, you can get a free version of something close to any of the top models. That means that even though the average person may not be able to install and use these free versions, companies with IT teams certainly can.
Last week, there was a statement by the CEO of Airbnb who said that Airbnb uses the Qwen model more than any other model because it’s free and it’s just as good. That’s a perfect example of the bottom falling out of these frontier models, even as they’re propped up by all the investment to be the best. Three and a half months is really short, and it’s been that way now ever since DeepSeek showed up in January after the election and pulled the rug out from under OpenAI’s state-of-the-art GPT model.
“We used our new capabilities index, the ECI, to measure the gap between open- and closed-weight models. The result? This gap is smaller than previously estimated. On average, it takes 3.5 months for an open-weight model to catch up with closed-source SOTA.”
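To make the "3.5 months" figure concrete, here's a toy sketch of how a lag measurement like this can work (this is my own simplification, not Epoch AI's actual ECI methodology, and the dates and scores are invented): for each closed-weight state-of-the-art score, find how long until an open-weight model first matches it, then average.

```python
# Toy lag measurement: how long until open-weight models match each
# closed-weight SOTA score? All dates and scores below are invented.
from datetime import date

closed = [(date(2025, 1, 1), 60.0), (date(2025, 4, 1), 70.0)]   # (release, score)
open_w = [(date(2025, 4, 15), 60.5), (date(2025, 7, 20), 70.2)]

def months_between(a: date, b: date) -> float:
    """Approximate month difference between two dates."""
    return (b.year - a.year) * 12 + (b.month - a.month) + (b.day - a.day) / 30

def lag_months(closed_pts, open_pts) -> float:
    """Average months for an open model to first match each closed score."""
    lags = []
    for c_date, c_score in closed_pts:
        match = next((d for d, s in open_pts
                      if s >= c_score and d >= c_date), None)
        if match is not None:
            lags.append(months_between(c_date, match))
    return sum(lags) / len(lags)

print(lag_months(closed, open_w))  # roughly 3.5 months for this toy data
```

The real index does this across many benchmarks and models, but the shape of the calculation is the same: match each frontier score to the first open-weight model that reaches it.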
Props to Scott Galloway from the Pivot Podcast for saying that if he were the Chinese government, he would flood the market with free, frontier-model-level open source, similar to Amazon’s predatory pricing in retail or foreign steel price dumping.
Here’s a demonstration of how good Google Gemini is for pulling details from a YouTube clip:
Prompt: Can you pull the monologue from Scott Galloway about AI dumping from this video? He talks about what he would do with open source I’d like to grab a paragraph as a pull quote. (pasted URL of YouTube video)
Gemini: Here is the monologue from Scott Galloway regarding AI dumping and his “gangster move” strategy involving open source: “I believe that China, the CCP, the key gangster move for them would be to dump AI. Would be to dump LLMs. And that is create a series of LLMs and maybe chips and even new technologies, open-source, open-weight, dump it into the U.S. market and basically ruin Sam Altman’s fever dreams of having a trillion-dollar IPO. Right when every college kid starts using this new LLM or AI that is free and they can download in a second… If I were Xi, I’d be like, ‘Okay, we’re not going to go after them kinetically or economically; what we’re going to do is we’re just going to engage in AI dumping.’” [38:46]
Prompt: Can you make a graphic to go with this that includes the quote?
Gemini:
Anthropic Study: Introspective awareness in large language models The top story this week is a fun research exercise by Anthropic that sounds dramatic, but is actually scientific and valuable. https://www.anthropic.com/research/introspection
Before we talk about it, it’s important to understand two concepts with AI language models: interpretability and alignment.
Interpretability is simply understanding how the model’s “mind” works. It’s a bit like neuroscience for LLMs. The idea that we can talk and process information but not always understand how our brains work is a good analogy for a large language model. We see its results. We know it does a pretty good job with some things and not with others. But it’s not easy to go into the black box of all the training data and the mathematical representations stored in the model and back out why or how it’s responding. So interpretability is about understanding the internal mechanics of the model’s “mind,” and how that leads to the way the model responds.
Alignment is when the model is tweaked or trained to try to have it give answers that we think are the most helpful/humanlike. That’s where a lot of subjective training gets added, because alignment is very hard to determine for the entire population of the earth!
Not everyone is going to agree on how the model should behave. That’s why a lot of alignment (including at Anthropic) is designed to start the AI conversation as almost a blank slate—kind of mundane and average. It’s why lazy prompting sometimes gives lazy answers (or a spew of slop).
While models might start as the “average” conversation… based on the prompt you give, the model adapts and adjusts to your preferences. Your prompt itself is essentially a form of alignment.
If I’m rough around the edges, I can tell the model I want it to be gruff. If I’m soft-spoken and sincere, I can ask the model to take it easy. Same thing if you want short, curt responses versus long dialogue and explanations. You can actually align the model yourself through prompting. But that is very different from the core alignment in the model’s “DNA”… and also different from interpretability, which is understanding the inner workings of a model and how it stores and retrieves its information.
Anthropic’s research here is essentially a test to see if models can, for lack of a better word, be introspective, and talk about their own training in a way that humans can’t.
It’s difficult to ask a human to explain their thought process or subconscious. However, because a language model is a computer, Anthropic is exploring whether the computer can self-reflect on how its data is stored as part of its answer… thus the term “introspection.”
If you ask an AI model what’s on its mind, it may simply make up a response and chit-chat with you. That’s not really introspection.
In this case, Anthropic did something more scientific by injecting concepts into copies of their model. Injection by itself isn’t introspection, though. Their earlier Golden Gate Claude demo skewed the model so that every time you asked it a question, it couldn’t help but think about the Golden Gate Bridge.
In this newer work, they found a pattern (what’s called a vector) in how the model represents data internally. They identified that pattern by observing the model’s activity behind the scenes. They were able to find the place where the concept of ALL CAPS was stored. Then they injected a concept into the “all caps” vector, and suddenly the model became aware that it had been messed with…
In this case, the model immediately understood that something had been changed in its training.
That’s unlike the Golden Gate demo, where (after giving a lot of responses) the model was able to “reflect” simply by noticing that it was always talking about the Golden Gate Bridge after the fact.
Here, even prior to giving any response, the model was able to say, “I think something’s going on in the vector that represents all caps.”
The catch is that Anthropic’s research team is only able to get this to work about 20% of the time.
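As a loose mental model (not Anthropic's actual method, which works on real neural activations), you can picture concept injection as adding a direction to an activation vector, and "introspection" as noticing an unusually large projection onto that direction. Everything below is toy numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512  # toy hidden size

# A "concept vector" (e.g. the direction representing ALL CAPS).
# Anthropic finds these by contrasting real activations; here it is
# just a random unit vector for illustration.
concept = rng.normal(size=d)
concept /= np.linalg.norm(concept)

def inject(hidden, vec, strength=8.0):
    """Steer an activation by adding a scaled concept direction."""
    return hidden + strength * vec

baseline = np.zeros(d)               # stand-in for a normal activation
steered = inject(baseline, concept)  # activation with the injected concept

# A crude stand-in for "introspection": project the activation onto
# the concept direction and flag it if the projection is unusually large.
def detects_concept(hidden, vec, threshold=4.0):
    return float(hidden @ vec) > threshold

print(detects_concept(baseline, concept))  # False
print(detects_concept(steered, concept))   # True
```

The hard part, of course, is that a real model isn't handed the concept vector; it has to report the anomaly on its own, which is exactly what only worked about a fifth of the time.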
If you’re into the background of how models work, this is a really cool study. Anthropic has a wonderful blog post that’s accessible even for laypeople, and it walks through how this stuff works. This is an important element of model training. https://www.anthropic.com/research/introspection
Model News and General Performance
Amazon Announces Massive New Supercomputer In Partnership with Anthropic Amazon announced a partnership with Anthropic to build one of the world’s largest data centers. It’s 70% larger than any computing platform in AWS history, with 500,000 Trainium 2 chips. It’s already fully operational, with Anthropic using it to train models. The supercomputer is providing Anthropic 5x the compute used to train previous Claude models.
Andy Jassy, CEO of Amazon, says he expects Anthropic to actively use more than 1 million Trainium 2 chips by the end of 2025. The data center is located near South Bend, Indiana.
Anthropic Launches Financial and Excel Integration Within Claude Anthropic announced Claude for financial services with an Excel plugin and connectors that can integrate real-time market data and portfolio analytics, along with pre-built agents designed to help with finance tasks like building cash flow models. https://www.anthropic.com/news/advancing-claude-for-financial-services
Claude for Excel is now available as a research preview. It lets users work with Claude in a sidebar within Microsoft Excel.
Some of the connectors Claude can provide include a large list of finance resources (see link). The skills are pretty neat, too: company analysis, cash flow models, due diligence data packs, company teasers, profiles, and earnings analysis.
Google Rolls Out Gemini for Home Voice Assistant Just a few weeks after Amazon announced the new Alexa with artificial intelligence baked into it, Google has announced the rollout of Gemini for Home, where Gemini’s chat abilities are integrated into the Google Home ecosystem. Google also released a blog post with 100 things you can try with the new assistant, as a way to help people learn the different skills.
My experience so far with the new Alexa device is that it’s way too chatty, leaves the microphone on for just enough time to pick up anything I may say, and then—if I say “never mind” or “I’m not talking to you”—it gives me a speech about how it’s sorry and will try to do better. I find it more annoying than helpful.
Google Integrates AI Into Google Earth Google announced a powerful integration between Google Earth and Gemini that allows users to conversationally search for and find objects and patterns instantly.
With geospatial reasoning, analysts can connect multiple Earth AI models, like weather forecasts, population maps, and satellite imagery, within Google Earth. This includes flood forecasting, wildfires, and air quality data.
Instead of just seeing where a storm might hit, analysts can use geospatial reasoning to identify which communities, and even homes, are most vulnerable, as well as what infrastructure is at risk.
I watched Meta’s capital expenditure ratio hit 37% of revenue in Q3 2025.
That’s not a typo. Meta is now spending more than a third of every dollar it makes on AI infrastructure. For context, that’s nearly double what they spent last year (20%), and it’s the highest capex-to-revenue ratio in the company’s history.
But here’s what really caught my attention: in just three months (September to October 2025), Meta announced $75.5 billion in infrastructure deals. That’s more than most countries spend on their entire tech sector in a decade.
The $75 Billion Question Nobody’s Asking When Mark Zuckerberg talks about building “superintelligent” AI systems, most people focus on the models. But the real story is in the infrastructure, and the unprecedented way Meta is financing it.
Between September 30 and October 27, 2025, Meta signed four massive deals:
Microsoft Launches Its AI Browser Last week I joked that this is the most powerful web browser I’m never going to use, because I’m not a fan of Microsoft Edge. Microsoft announced they were going to release their AI browser last week, and this week they came through. https://www.microsoft.com/en-us/edge/copilot-mode
They’ve introduced Copilot Mode, which lets you browse the web with AI integration. The browser can take actions on your behalf using agentic search, essentially the “deep research” mode you see in most web-based chat tools with thinking modes.
Copilot remembers your browsing history, so you can talk to it and explore your history and browsing. GPT-5 is integrated within the browser. The browser can also make images or summarize videos. It’s basically all the things you’d get in Claude, Gemini, or ChatGPT, but they’re bundled into the Edge browser.
I hate to say it, but I think this is going to continue to be the strongest browser I never use.
OpenAI Introduces ChatGPT Atlas: Agent Mode Last week, OpenAI launched their Atlas web browser with ChatGPT built in. This week, they announced that the browser now has Agent Mode, where ChatGPT can research, plan, and do things while you’re browsing. https://openai.com/index/introducing-chatgpt-atlas
ChatGPT Attempts To Reduce Mental Health Risks “We worked with more than 170 mental health experts to help ChatGPT more reliably recognize signs of distress, respond with care, and guide people toward real-world support–reducing responses that fall short of our desired behavior by 65-80%.”
“OpenAI is racing against time to restructure its business — a must for the company if it hopes to hang on to a part of a multibillion-dollar investment led by SoftBank and gain more control over how it deploys its revenue.
The plans call for OpenAI to transform its for-profit arm into a public benefit corporation overseen by its nonprofit parent. If the company fails to reach a deal by year’s end, it could kiss half of the $40 billion in funding goodbye. But OpenAI’s biggest impediment reportedly comes from its closest investing partner: Microsoft.
The cloud computing giant invested $10 billion in OpenAI in 2023, following earlier outlays in 2019 and 2021. The investment gives Microsoft a major equity stake in OpenAI’s for-profit operations, along with exclusive access to OpenAI’s application programming interfaces (APIs) and models on Azure, revenue-sharing arrangements, and access to OpenAI’s intellectual property until 2030.
The team-up has been a boon for Microsoft, which has benefited from being able to sell OpenAI’s technologies to its customers. In the company’s fiscal 2024, Microsoft’s Intelligent Cloud segment, which includes its AI cloud services, generated $105 billion of the company’s $245 billion in total revenue.
But as OpenAI seeks to restructure its business, it’s also looking to break free from some of what it sees as the more onerous terms of the Microsoft deal. Now, the two companies are at loggerheads over the terms of a new deal. And unfortunately for OpenAI, it can’t move forward with its for-profit plans without buy-in from Microsoft.”
OpenAI has restructured so that it’s now run by a nonprofit called the OpenAI Foundation. The for-profit portion of OpenAI is a public benefit corporation called OpenAI Group PBC.
Microsoft now owns about 27% of the for-profit entity, reported as about $135 billion on a $500 billion valuation.
OpenAI says the Foundation’s stake in the for-profit is valued at about $130 billion, and that this gives the Foundation “a direct path to major resources before AGI arrives.”
The Foundation is going to start with a $25 billion commitment focused on two goals. The first is health and curing disease. The second is the more ambiguous AI resilience, which can be interpreted as cybersecurity and infrastructure.
PayPal signs deal with OpenAI to become the first payments wallet in ChatGPT “Starting next year PayPal buyers and sellers will be able to complete transactions through the AI tool…. Last month, OpenAI said its users could buy from Shopify and Etsy merchants, and two weeks ago it announced an e-commerce deal with Walmart.”
Sam Altman says OpenAI will have a ‘legitimate AI researcher’ by 2028 “OpenAI says its deep learning systems are rapidly advancing, with models increasingly able to solve complex tasks faster. So fast, in fact, that internally, OpenAI is tracking toward achieving an intern-level research assistant by September 2026 and a fully automated “legitimate AI researcher” by 2028, CEO Sam Altman said during a livestream Tuesday.” https://techcrunch.com/2025/10/28/sam-altman-says-openai-will-have-a-legitimate-ai-researcher-by-2028/
OpenAI Sora Character Cameos
Recently, OpenAI debuted a feature called Character Cameos in its video generation tool, Sora. When Sora first came out, people could save avatars of themselves, put themselves into videos, and allow those avatars to be used and shared by friends. These were called Cameos, and they were tied to the identity of you or someone else.
Character Cameos are more flexible. They can be cartoons, 3D monsters, or basically anything you can think of. You can name a character and then put it out into the “marketplace,” so to speak, so other people can use that character in their videos. There’s even a character leaderboard that shows which characters are the most popular. This is part of a broader effort by OpenAI to create a social-networking effect inside the standalone Sora app.
NVIDIA Launches Many New Open Source Models for Reasoning, Physics, Robotics, and Biomedicine NVIDIA launched quite a few models this week, all open-sourced. The existing “parent” families of these models have been around for a while:
The Nemotron family is set up for reasoning and language models. The Cosmos platform is a physics model, or world model. The GR00T family of models is designed for robotics. The Clara family is a biomedical AI group of models.
NVIDIA has contributed over 650 open models and 250 open datasets to Hugging Face, a repository of models that can be downloaded and shared publicly.
Introducing Perplexity Patents: AI-Powered Patent Search for Everyone “For decades, patent search has relied on systems that require precise keyword combinations and knowledge of obscure syntax to perform comprehensive research. Public search tools feature limited capabilities, while professional platforms are expensive and require significant training. These limits have kept everyone except specialists on the sidelines.
Perplexity Patents ushers in a new era, in which anybody can access patent intelligence and get clear answers quickly, maintaining context across multiple questions. Find answers that match the pace of your innovation.” https://www.perplexity.ai/hub/blog/introducing-perplexity-patents
Politician holdings of public stocks are now available on Perplexity Finance I don’t have further context or a blog post, but it speaks for itself.
Cognition Announces Computer Use Integration with Their AI Software Engineer, Devin Cognition is best known for its AI software tool Devin. Devin is basically a copilot that helps write code. It’s a very strong tool, pitched as the first “AI software engineer.”
Cognition announced that Devin can now use computers and share screen recordings. Devin can control desktop apps and can build and QA mobile apps. Cognition showed examples of Devin making a desktop game, testing Windsurf features, navigating different kinds of code tools, and even developing Mac applications.
Open Source Leader, Moonshot, Launches Kimi Linear (nerdy update) Open-source frontier model creator Moonshot announced a new version of its frontier model family, Kimi, called Kimi Linear. The model is optimized for tasks that require a lot of memory, or a lot of context.
There are plenty of technical details around how it works, but the achievement worth noting is that it reduces memory usage by up to 75%, while achieving up to 6x higher speed and throughput. I’m linking to the technical materials for anyone who’s interested.
AI-Assisted Proof of a 42-Year-Old Open Math Problem A new paper, “Point Convergence of Nesterov’s Accelerated Gradient Method: An AI-Assisted Proof,” uses ChatGPT to resolve a 42-year-old open problem. https://arxiv.org/abs/2510.23513
Ethan Mollick shared this one: “Another example of the increasingly common situation where AI helps an academic greatly with intellectually challenging work (solving a 42 year old open math problem), with expert human guidance.” https://x.com/emollick/status/1983301090969038898
Robots
A few Tesla/Optimus headlines to take with a grain of salt.
Chamath says Tesla’s AI5 chip will be a functional technology leap that will go into Cybercab and Optimus, and the energy business will be a critical adjunct to robotics and self-driving.
Tesla’s learned world simulator, a neural network-based system, tackles the challenge of evaluating autonomous driving and Optimus robot AI. Trained on a massive, curated dataset from Tesla’s fleet, it synthesizes future states (e.g., high-resolution, multi-camera video streams)
Music
Udio Settles with Universal, Announces Partnership, A New Platform, and Possibly The End of the Original Platform The same week that OpenAI announced it’s going to get into the audio-generation business, Udio announced a strategic agreement for AI-licensed music creation with Universal Music Group.
The two have settled copyright infringement lawsuits and, moving forward, are going to collaborate on new commercial music creation.
This includes a new platform launching in 2026 that will be trained on authorized, licensed music and connected to Udio.
Canva announced what it’s calling a creative operating system. What strikes me the most is how wildly pretentious the announcement is. Check out this first paragraph:
“Every big idea that’s shaped our world started as a spark of imagination. From rockets that reach other planets to the supercomputers we carry in our pockets to the power of generating anything with a single prompt, every breakthrough once seemed impossible until someone imagined it into being.”
This continues for three more paragraphs. It’s actually a bit distracting.
Once you get into the actual product, it looks like a selection of stand-alone tools with an AI layer and a platform wrapper that they’re calling an operating system.
The first tool is their video tool, which at first feels pretty traditional, but it’s updated and intuitive.
However, as an alternative to the traditional video workflow, you can upload a bunch of clips, tell it what you want to do with them, and it will ‘magically’ cut everything together into a video you can use for social media.
The new Canva Forms tool seems pretty great. Forms are easy to create, and you can add feedback, RSVPs, surveys, or signups onto your website. The form results go into Canva Sheets.
There’s a new website editor with prompt-based vibe coding, and a vibe-coding email designer. Both are also integrated with drag-and-drop WYSIWYG functionality. It looks powerful, to be honest.
There’s also a 3D shape generator for building animations, a generative AI help desk, and a guided conversational tool that helps you define and structure presentations.
Ethics and Alignment
Statement from Character.AI on Safety “Taking Bold Steps to Keep Teen Users Safe on Character.AI” After months and months of almost nothing but negative press, Character.AI finally announced that they’re changing their policies for minors. https://blog.character.ai/u18-chat-announcement/
Users under 18 will no longer be able to have open-ended chats. They’ll be able to create videos, stories, and streams, but they won’t be able to just chat aimlessly with pretend friends.
Second, Character.AI is adding age-gating to make sure that anything they generate is appropriate for minors.
Third, they’re launching an independent nonprofit dedicated to researching safety techniques.
That’s a lot of actual action, and not just posturing.
It’s incredibly late in the game for them to be making all these changes. Their reputation is so far down in the toilet that I’m having trouble even caring what they say at this point. I would steer clear of them myself, especially if I were a kid.
This Week’s Humanities Reading
This week’s Halloween-themed humanities reading is a snippet from Mary Shelley’s “Frankenstein” that has a dash of AGI in it as well:
“The different accidents of life are not so changeable as the feelings of human nature. I had worked hard for nearly two years, for the sole purpose of infusing life into an inanimate body. For this I had deprived myself of rest and health. I had desired it with an ardour that far exceeded moderation; but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart.”
Full Executive Summaries with Links, Generated by Claude 4.5
Anthropic finds evidence that Claude models can detect their own thoughts Researchers injected known concepts into Claude’s neural activity and found the AI could identify these “artificial thoughts” about 20% of the time, suggesting genuine introspection rather than just plausible-sounding responses. The most capable models (Claude Opus 4 and 4.1) performed best, indicating this ability may grow with AI advancement. While highly unreliable, this represents the first experimental evidence that language models can monitor their own internal states, potentially enabling more transparent AI systems that can explain their reasoning processes.
New Anthropic research: Signs of introspection in LLMs. Can language models recognize their own internal thoughts? Or do they just make up plausible answers when asked about them? We found evidence for genuine—though limited—introspective capabilities in Claude. https://x.com/AnthropicAI/status/1983584136972677319
These two paragraphs from this study from Anthropic on AI introspection are worth a second to read. I think it is fair to say that both conclusions are quite… controversial, but the paper makes a really interesting attempt to back up these assertions with real experiments. https://x.com/emollick/status/1983603377469845660
Alphabet hits $100 billion quarterly revenue milestone driven by AI growth The tech giant doubled its quarterly revenue in five years, with AI integration across its product stack fueling double-digit growth in every major business division. This is Alphabet’s first $100 billion quarter, demonstrating how artificial intelligence is becoming a massive revenue driver rather than just a cost center.
Just posted Q3 earnings. We delivered our first-ever $100B quarter driven by double-digit growth across every major part of our business. (Five years ago, our quarterly revenue was at $50B🚀) Our full-stack approach to AI is driving real momentum and we’re shipping at speed. https://x.com/sundarpichai/status/1983627221425156144
Nvidia becomes first company to reach $5 trillion market value The chip maker crossed this unprecedented threshold as its stock climbed over 50% this year, driven by massive demand for AI processors that transformed it from a gaming company into the backbone of artificial intelligence infrastructure. The milestone comes amid $500 billion in expected AI chip orders and new government supercomputer contracts, though some experts warn of potential bubble risks in AI valuations.
Open-source AI models catch up to proprietary ones in 3.5 months A new capabilities index reveals the performance gap between free, publicly available AI models and premium closed systems is narrower than expected, suggesting the competitive moat around proprietary AI may be shrinking faster than companies anticipated.
We used our new capabilities index, the ECI, to measure the gap between open- and closed-weight models. The result? This gap is smaller than previously estimated. On average, it takes 3.5 months for an open-weight model to catch up with closed-source SOTA. https://x.com/EpochAIResearch/status/1983987212183335097
Amazon builds massive AI computing cluster in former Indiana cornfields Amazon transformed farmland into one of the world’s largest AI compute clusters in just one year, partnering with Anthropic on “Project Rainier,” which is 70% larger than any existing AI computing platform. This illustrates the unprecedented speed and scale of infrastructure investment needed to train cutting-edge AI models, signaling how tech giants are racing to secure the massive computing power that modern AI development demands.
About a year ago, this site near South Bend, Indiana was just cornfields. Today, it’s 1 of our U.S. data centers powering Project Rainier – one of the world’s largest AI compute clusters, built in collaboration with @AnthropicAI. It is 70% larger than any AI computing platform https://x.com/ajassy/status/1983616724642730217
Anthropic launches Claude Excel add-in and financial data connectors Anthropic released Claude for Excel in beta, allowing the AI to directly read and modify spreadsheets, plus new connectors to real-time market data from providers like LSEG and Moody’s. This builds on Claude Sonnet 4.5’s 55.3% accuracy on financial benchmarks and adds pre-built skills for tasks like building cash flow models and coverage reports. Major firms including Citi, RBC Capital Markets, and Coinbase are already using Claude for financial workflows.
Claude can now connect to @LSEplc, @moodysratings, @Aiera, @teamthirdbridge, @mtnewswires, Chronograph, and @egnyte. These updates build on Claude Sonnet 4.5’s industry-leading performance on finance tasks. https://x.com/AnthropicAI/status/1982842911369965897
We’re expanding Claude for Financial Services, with an Excel add-in, new connectors to real-time data and market analytics, and pre-built Agent Skills, including cash flow models and initiating coverage reports. https://x.com/AnthropicAI/status/1982842909235040731
Google launches Gemini for Home voice assistant in early access Google is replacing Google Assistant with its more advanced Gemini AI on smart speakers and displays, offering two interaction modes: traditional “Hey Google” commands for everyday tasks and “Hey Google, let’s chat” for conversational help. This represents the first major upgrade to home voice assistants in years, bringing ChatGPT-style conversations to living rooms. The rollout starts in the U.S. with plans to expand globally in 2026, though advanced features require a premium subscription.
Today, we’re starting the early access rollout of the Gemini for Home voice assistant in the U.S. You can either say “Hey Google” to your speaker or display to request specific help or answers, or talk naturally with Gemini Live by saying “Hey Google, let’s chat.” https://x.com/Google/status/1983246777215033718
Google launches Earth AI platform combining satellite data with Gemini reasoning The tech giant is democratizing geospatial intelligence by letting organizations instantly analyze satellite imagery, weather patterns, and population data through simple questions rather than requiring years of specialized research. Early adopters include WHO using it to predict cholera outbreaks and insurance companies accelerating disaster claims, suggesting AI could transform how we respond to climate and health crises. Google is expanding access through Cloud services and making the technology available to nonprofits, potentially giving smaller organizations the same analytical capabilities previously available only to governments and large corporations.
Google just took another big step towards becoming ChatGPT for planet earth. I can’t overstate how important this is — geospatial AI commodified. Here’s what it can do: https://x.com/bilawalsidhu/status/1981566109863289028
Meta spends $75 billion on AI infrastructure in three months Meta allocated 37% of revenue to AI infrastructure in Q3 2025, signing $75.5 billion in deals between September and October—more than most countries spend on tech in a decade. The company pioneered a new financing model through joint ventures and private credit, keeping assets off its balance sheet while committing to 16-year payment obligations. This represents the largest corporate AI infrastructure bet in history, with facilities like the 2-gigawatt Hyperion data center capable of training multiple GPT-4-scale models simultaneously.
Microsoft launches Copilot Mode in Edge browser with voice control Microsoft’s new experimental browser mode integrates AI chat directly into web browsing, allowing users to navigate hands-free through voice commands and get instant answers without switching tabs. The feature represents a shift from AI as a separate tool to AI as an integrated browsing companion, with capabilities like cross-tab summarization and proactive shopping assistance that could reshape how people interact with the web.
Microsoft rebuilds Outlook from scratch with AI-first design Microsoft reorganized its Outlook team under new leadership to completely rebuild the email client around AI rather than adding features to existing systems. The company envisions Outlook as an AI “body double” that autonomously reads messages, drafts replies, and manages schedules, requiring weekly feature experiments instead of quarterly releases. This represents a risky bet on unproven AI capabilities for a mission-critical business tool used by millions, especially as Microsoft still struggles with its ongoing “One Outlook” transition that has yet to match desktop app functionality.
ChatGPT launches agent mode that acts independently for users OpenAI has released an “agent mode” for ChatGPT that can autonomously research, plan, and execute tasks while users continue browsing, marking a shift from conversational AI to proactive digital assistance. The feature is currently available in preview for paying subscribers across Plus, Pro, and Business tiers, suggesting OpenAI is testing autonomous AI capabilities with its most engaged user base.
Turn on agent mode and ChatGPT can take action for you—research, plan, and get things done while you browse. Now in preview for Plus, Pro, and Business users. https://x.com/OpenAI/status/1984304194837528864
Sam Altman seeks trillions in funding to reshape global chip industry OpenAI’s CEO is reportedly pursuing massive investment to build new semiconductor manufacturing capacity and AI infrastructure, potentially restructuring how the world produces the chips that power artificial intelligence systems. This represents one of the largest funding ambitions in tech history, signaling Altman’s belief that current chip supply chains are inadequate for AI’s explosive growth trajectory.
OpenAI reduces ChatGPT mental health response failures by 65-80% With roughly 900,000 users per week showing signs of suicidal intent in conversations, OpenAI worked with 170+ mental health experts to dramatically improve ChatGPT’s responses in crisis situations. This represents a significant safety milestone as AI chatbots increasingly serve as informal counselors for millions of users seeking emotional support.
Earlier this month, we updated GPT-5 with the help of 170+ mental health experts to improve how ChatGPT responds in sensitive moments—reducing the cases where it falls short by 65-80%. https://x.com/OpenAI/status/1982858555805118665
From this new post by OpenAI: 0.15% of users (something like 900k people given public numbers) show signs of suicidal intent in their ChatGPT chats each week But there seems to be progress in making ChatGPT respond appropriately to mental health issues. https://x.com/emollick/status/1983034815281500218
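As a back-of-envelope sanity check on the “something like 900k people” estimate (using the 700 million weekly-user figure cited elsewhere in this issue as the denominator — an assumption here, since OpenAI doesn’t state the exact base it used):

```python
# Rough sanity check: 0.15% of weekly users, assuming the ~700M
# weekly-user figure mentioned elsewhere in this issue.
weekly_users = 700_000_000
share = 0.0015  # 0.15%

affected = weekly_users * share
print(f"{affected:,.0f} users/week")  # ~1,050,000 — same order of magnitude as ~900k
```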
OpenAI faces $20 billion funding deadline amid fractious Microsoft negotiations OpenAI must restructure into a public benefit corporation by year-end or lose half of its $40 billion SoftBank-led funding, but Microsoft is blocking the transition over concerns about losing its exclusive AI access and revenue-sharing deals. The standoff has become so contentious that OpenAI is considering involving antitrust regulators, while Microsoft has reportedly threatened to walk away entirely. This dispute could reshape the AI industry’s power dynamics, as OpenAI seeks independence from what it views as restrictive partnership terms that currently give Microsoft exclusive access to its technology through 2030.
Microsoft invests $135 billion for 27% stake in restructured OpenAI OpenAI completed a major corporate restructuring that transforms it into a public benefit corporation controlled by a nonprofit foundation, while Microsoft secured a 27% stake worth $135 billion and extended IP rights through 2032. The deal removes previous fundraising constraints that had created tensions between the companies and positions OpenAI for a likely IPO to fund CEO Sam Altman’s $1.4 trillion infrastructure plans. This restructuring gives OpenAI the financial flexibility to compete in the capital-intensive AI race while maintaining nonprofit oversight through the OpenAI Foundation’s controlling board position.
Microsoft and OpenAI announce new definitive agreement with Microsoft holding $135 billion investment in OpenAI Group PBC representing 27% on diluted basis, extending IP rights through 2032 including post-AGI models with safety guardrails. Partnership preserves OpenAI as https://x.com/koltregaskes/status/1983175578824917210
Important and easy to overlook: > Through special voting and governance rights held solely by the OpenAI Foundation, the OpenAI Foundation appoints all members of the board of directors of OpenAI Group and can replace directors at any time. > the OpenAI Foundation (new name of… https://x.com/stalkermustang/status/1983160636965019888
We completed our recapitalization. The non-profit, the OpenAI Foundation, is now one of the best resourced philanthropies ever, with equity valued at ~$130B. It continues to control the OpenAI for-profit, which is now a public benefit corporation. https://x.com/OpenAI/status/1983157159853777086
OpenAI reportedly developing music generation AI to compete with Suno OpenAI is building technology to create songs from text prompts, potentially challenging Suno’s early lead in AI music generation. This marks OpenAI’s expansion beyond text and images into audio creation, a move that could reshape the music industry by making song production accessible to anyone with a written idea. The development signals intensifying competition in generative AI as major players race to dominate new creative domains.
PayPal becomes first digital wallet integrated into ChatGPT for shopping PayPal signed an exclusive deal with OpenAI to embed its payment system directly into ChatGPT, allowing the AI platform’s 700 million weekly users to purchase items with full buyer protections starting next year. This marks a significant shift toward “agentic commerce” where AI assistants handle shopping transactions, positioning PayPal as the payment backbone for AI-driven e-commerce. The partnership gives PayPal access to ChatGPT’s massive user base while providing OpenAI with established payment infrastructure and fraud protection for its expanding shopping features.
OpenAI targets fully automated AI researcher capable of independent discoveries by 2028 CEO Sam Altman announced OpenAI’s timeline to develop an “AI researcher” system that can autonomously handle complex research projects, progressing from intern-level assistant by 2026 to full researcher by 2028. This represents a shift from AI as a tool to AI as an independent scientific contributor, with OpenAI committing $1.4 trillion in infrastructure to achieve what chief scientist Jakub Pachocki calls potential “superintelligence” within a decade. The announcement coincided with OpenAI’s corporate restructuring to raise capital more easily while maintaining research focus through its non-profit foundation.
OpenAI’s Sora video generator now lets users create recurring characters The new “character cameos” feature allows users to maintain consistent characters across multiple AI-generated videos, addressing a key limitation that previously made it difficult to create coherent video narratives. This represents a significant step toward AI-powered storytelling and content creation at scale.
Nvidia releases massive open-source AI model collection across four domains The company launched over 650 models spanning language reasoning, robotics, biology and physics simulation, marking one of the largest corporate contributions to open AI development. Major enterprises like ServiceNow, PayPal and Palantir are already integrating these models, with PayPal reporting 50% efficiency gains, demonstrating how open models can accelerate real-world AI deployment at scale.
We just launched new open models and datasets to make AI research and development more accessible 🤝 You now have open foundations to build specialized intelligent agents faster, safer, and at scale — from Nemotron to Cosmos, Isaac GR00T to Clara. Over 650 open models and 250 https://x.com/NVIDIAAIDev/status/1983227688333574318
Nvidia just released ChronoEdit-14B on Hugging Face enables physics-aware image editing and action-conditioned world simulation through temporal reasoning. It distills priors from a 14B-parameter pretrained video generative model and separates inference into (i) a video https://x.com/_akhaliq/status/1983953896415604836
A new open-source physics engine just dropped… and it could change how robots learn. Newton, built by @nvidia with support from @GoogleDeepMind and Disney Research, is now part of The Linux Foundation. It’s designed to bring precise, GPU-powered physics to robotics and https://x.com/IlirAliu_/status/1982726852507521065
👀 Meet @NVIDIAAI Nemotron Nano 2 VL, now hosted on Nebius AI Studio – 10× higher throughput – Document + video intelligence – Open weights, open data – Ready for production Build multimodal assistants today → https://x.com/nebiusaistudio/status/1983243873317974318
Perplexity launches first AI agent specifically for patent research The search company’s new tool democratizes intellectual property research by letting anyone query patent databases in plain English, potentially disrupting the specialized legal research industry that typically requires expensive databases and expert knowledge. This marks a shift from general AI search to purpose-built professional tools targeting specific industries.
Perplexity adds politician stock holdings to its finance search tool Perplexity’s AI search engine now tracks and displays public stock holdings of U.S. politicians, making congressional trading data more accessible to everyday investors. This matters because it democratizes access to information that was previously buried in government filings, potentially leveling the playing field between institutional and retail investors. The move highlights AI’s growing role in financial transparency, as search tools increasingly surface hard-to-find public data that affects market decisions.
Devin AI agent gains full computer control and screen sharing abilities Cognition’s Devin can now operate any desktop application, build mobile apps, and automate complex workflows by directly controlling computers like a human user. This represents a significant leap from text-based AI assistants to agents that can manipulate software interfaces, potentially automating entire categories of knowledge work that previously required human interaction with applications.
Devin now has full computer use capabilities and can share screen recordings. You can control desktop apps, build and QA mobile apps, and automate tedious work. Here are some examples that blew our team away: 1. Making a desktop game https://x.com/cognition/status/1983983151157563762
Kimi AI releases Linear model with 75% memory reduction and 6x speed boost Moonshot AI’s new Kimi Linear model uses hybrid linear attention to dramatically cut memory usage by 75% while delivering up to 6.3x faster text generation than previous models. The breakthrough addresses AI’s biggest scaling challenge by maintaining performance on 128,000-token contexts while requiring far less computational resources. This represents a significant efficiency leap beyond typical model improvements, potentially making large-scale AI deployment much more cost-effective.
🎉 Congrats to @Kimi_Moonshot! vLLM Day-0 model expands! Now supporting Kimi Linear — hybrid linear attention with Kimi Delta Attention(KDA): – RULER 128k context: 84.3 perf + 3.98× speedup – Up to 6× faster decoding & 6.3× faster TPOT (1M tokens) – 75% KV cache reduction 💡 https://x.com/vllm_project/status/1983941708233765149
🔥 Inside Kimi Linear: First-Hand Insights @Kimi_Moonshot just dropped something impressive again. @yzhang_cs from Kimi AI Infra, shared an insider’s look at the making of Kimi Linear — an architecture designed around hybrid linear attention and optimized for efficiency × https://x.com/ZhihuFrontier/status/1984321210055082207
Kimi just released another “next-gen” model that reduces memory usage by up to 75%, while achieving up to 6.3× higher decoding throughput and outperforming MLA and GDN baselines https://x.com/scaling01/status/1983926811051384965
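To see why a 75% KV-cache cut matters at 128k-token contexts, here is a back-of-envelope sizing sketch. The hyperparameters below (layer count, KV-head count, head dimension) are assumed for illustration, not Kimi Linear’s actual architecture:

```python
# Illustrative KV-cache sizing with assumed hyperparameters (not
# Moonshot's actual architecture), showing the impact of a 75% cut.

def kv_cache_bytes(layers, kv_heads, head_dim, context_len, bytes_per_elem=2):
    # Standard attention caches one key and one value vector per token,
    # per layer, per KV head; fp16 means 2 bytes per element.
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem

full = kv_cache_bytes(layers=64, kv_heads=8, head_dim=128, context_len=128_000)
reduced = full * 0.25  # the reported 75% reduction

print(f"full attention cache: {full / 1e9:.1f} GB")
print(f"after 75% reduction:  {reduced / 1e9:.1f} GB")
```

At long contexts the KV cache, not the weights, dominates per-request memory, which is why cache reductions translate directly into larger batch sizes and cheaper serving.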
AI helps mathematician solve 42-year-old unsolved problem A mathematician used artificial intelligence as a collaborative tool to crack a decades-old mathematical puzzle, demonstrating how AI can augment human expertise in highly specialized intellectual work. This represents a shift from AI replacing routine tasks to AI enhancing expert-level reasoning in fields requiring deep domain knowledge and creative problem-solving.
Another example of the increasingly common situation where AI helps an academic greatly with intellectually challenging work (solving a 42 year old open math problem), with expert human guidance. https://x.com/emollick/status/1983301090969038898
Jensen Huang predicts humanoid robots will dominate consumer and industrial markets NVIDIA’s CEO made bold predictions at GTC about humanoid robots becoming massive markets for both consumers and industrial applications, referencing Elon Musk’s similar efforts. This signals major tech leaders see humanoid robots moving from science fiction to mainstream commercial reality within years. The prediction matters because NVIDIA’s AI chips would likely power these robots, positioning the company at the center of what could be the next trillion-dollar technology category.
Jensen at NVIDIA GTC: It’s likely that humanoid robots, which my friend Elon is also working on, will be one of the largest consumer electronics markets and one of the largest industrial equipment markets. https://x.com/TheHumanoidHub/status/1983230275694760271
Tesla’s AI5 chip targets robotics and autonomous vehicle breakthrough Chamath predicts Tesla’s new AI5 chip will power both Cybercab self-driving taxis and Optimus robots, with energy infrastructure becoming essential to support these AI-powered machines at scale.
Chamath says Tesla’s AI5 chip will be a functional technology leap that will go into Cybercab and Optimus, and the energy business will be a critical adjunct to robotics and self-driving. https://x.com/TheHumanoidHub/status/1981954391658861055
Tesla builds neural network to simulate real-world driving scenarios The company trained an AI system on data from its vehicle fleet to generate realistic video predictions of traffic situations, helping evaluate self-driving car performance without requiring extensive real-world testing. This approach could accelerate autonomous vehicle development by providing a safer, more controlled environment for testing AI decision-making before deploying updates to actual roads.
Tesla’s learned world simulator, a neural network-based system, tackles the challenge of evaluating autonomous driving and Optimus robot AI. Trained on a massive, curated dataset from Tesla’s fleet, it synthesizes future states (e.g., high-resolution, multi-camera video streams) https://x.com/TheHumanoidHub/status/1981802545594216845
Universal Music Group settles with AI music platform Udio, launches licensed service The world’s largest music company ended its copyright lawsuit against Udio and will collaborate on a new AI music platform launching in 2026, trained only on licensed content. This marks the first major settlement between a music giant and AI music generator, potentially setting the template for how the industry will monetize rather than fight AI-generated music while ensuring artists get paid.
Canva launches AI-powered creative operating system for workplace design The design platform unveiled Magic Studio, combining AI writing, image generation, and brand management tools into a unified workspace that lets non-designers create professional marketing materials, potentially disrupting traditional creative workflows by making sophisticated design accessible to any employee.
Character.AI removes open chat for teens amid safety concerns Character.AI will eliminate open-ended AI conversations for users under 18 by November 25, replacing them with creative tools like video and story creation. The company cited regulatory pressure and news reports questioning teen safety with AI chat, making this move more restrictive than competitors. Character.AI is also launching an independent AI Safety Lab and implementing stronger age verification to ensure appropriate user experiences.
Agentic Commerce Protocol and building the Economic Infrastructure for AI — with Emily Glassberg Sands, Head of Data & AI at Stripe https://www.latent.space/p/stripe