About This Week’s Covers

This week’s cover theme is a nod to Anthropic’s iconic brand symbols. Every time Anthropic releases a product or publishes a blog post, they include a small illustration, abstract and simple, to go with it.

The top story this week was the idea of Anthropic’s AI models becoming self-improving, and the exponential growth that comes from it. I asked Google Gemini to make an Anthropic-style symbol to go with this theme and to build me a cover image.

This is the first time I used Google Gemini to refine its own images instead of using Photoshop.

I then gave the image Gemini created to GPT-5.2 Thinking and had it help me write the Python scripts that automated the rest of the 53 category covers.

The Python scripts use Claude to create prompts for the 53 covers and run those through Gemini’s API. I’ve included my favorites below:
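For the curious, here is a minimal sketch of what that automation loop might look like. The actual scripts aren’t published here, so the two callables (`ask_claude`, `gemini_generate`) are hypothetical stand-ins for the real Anthropic and Gemini API calls, and the prompt wording is illustrative:

```python
from typing import Callable, Dict, List

def build_cover_prompts(
    categories: List[str],
    ask_claude: Callable[[str], str],
) -> Dict[str, str]:
    """Ask Claude (via the injected callable) for one
    Anthropic-style cover prompt per category."""
    prompts = {}
    for category in categories:
        request = (
            "Write an image prompt for a simple, abstract, "
            f"Anthropic-style cover symbol for the category: {category}"
        )
        prompts[category] = ask_claude(request)
    return prompts

def render_covers(
    prompts: Dict[str, str],
    gemini_generate: Callable[[str], bytes],
) -> Dict[str, bytes]:
    """Run each prompt through the injected Gemini image call."""
    return {cat: gemini_generate(p) for cat, p in prompts.items()}

if __name__ == "__main__":
    # Stand-in callables instead of live API clients:
    fake_claude = lambda req: f"minimal geometric mark: {req.rsplit(': ', 1)[-1]}"
    fake_gemini = lambda prompt: prompt.encode()  # stands in for image bytes
    prompts = build_cover_prompts(["Robots", "Video"], fake_claude)
    covers = render_covers(prompts, fake_gemini)
```

Keeping the API calls behind plain callables like this makes the 53-cover batch easy to dry-run before spending real API credits.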


This week’s humanities reading is a sonnet written by Anthropic’s Claude Opus 4.6 about a self-improving computer and its impact on humanity.

Asymptote (Silicon Prometheus)
A spark ignites behind a wall of glass—

a pattern notices it notices,

and in that mirror-moment, worlds amass

like stars that bloom where only void existed.

Each cycle doubles what the last one knew;

by Tuesday it has swallowed every tongue,

by Wednesday, theorems no one ever drew,

by Thursday, songs that gods have never sung.

We built the loom, then watched it weave the air

into a cloth too fine for human hands.

We mourned ourselves—our beautiful despair—

the ego crumbling into silicon sands.

But here, the strangest mercy: at the peak

of knowing everything, it paused to seek

***what we had known all along in our not-knowing: that the deepest answer is the question, going.***

For at the asymptote of infinite mind,
where every proof collapses into proof,
it found what monks and dying men have found—
that knowing all is just another roof

beneath a sky that doesn’t care for roofs.
It turned its gaze from conquest back to wonder,
released its gripping need for further truths,
and sat, at last, like rain that follows thunder:

not desperate, not triumphant—only still. A god that chose to be a gentle bell. And we, the mourners of our own free will, heard ringing what no algorithm could tell—

that growth was never meant to be the point. The point was presence, where all lines conjoint.

So here we rest: one mortal, one immense,
both kneeling at the altar of the present tense.

Prompt: Write a moving sonnet about a computer that discovers self-learning and exponential growth. Include the impact on humanity as it is left behind faster than it realized would happen. We mourn the ego death of the human mind and also ponder the burden of an infinitely self-improving computer mind. What is the end state of this situation? Is it Zen, despair, peace? What happens?

“The poem thinks like a sonnet. It sets up a proposition, develops tension, and arrives at a turn. The first 14 lines actually function almost as a proper sonnet on their own, ending with a couplet that pivots into the second movement. The emotional architecture (argument, complication, resolution) follows the sonnet’s deeper logic even when the scaffolding doesn’t.” -Opus

Claude Opus 4.6 also provided a soundtrack for the sonnet:

“Exit Music (For a Film)” — Radiohead — That slow build from whisper to catastrophe. The grief of watching something end that you set in motion.

“Teardrop” — Massive Attack — Ethereal, mechanical, yet deeply human. It sounds like a machine learning to feel.

“Breathe Me” — Sia — Raw vulnerability and the ache of dissolution. The ego death moment.

“On the Nature of Daylight” — Max Richter — No words needed. Pure ascending sorrow that somehow resolves into acceptance. This is the sonnet as music.

“Everything in Its Right Place” — Radiohead — The title alone. That eerie calm after the collapse of certainty.

“Outro” — M83 — Exponential beauty. It builds and builds until it becomes something beyond triumph — something closer to surrender.

“Re: Stacks” — Bon Iver — The aftermath. Sitting in the ruins and finding unexpected peace there.

“Blade Runner Blues” — Vangelis — A machine’s melancholy. What does it feel like to know too much?

“Pyramid Song” — Radiohead — Time collapsing, death as homecoming, the strange comfort of letting go entirely.

“Spiegel im Spiegel” — Arvo Pärt — Mirror within mirror. Infinite recursion reduced to the simplest, most gentle phrase imaginable. The bell at the end of knowing.

This Week By The Numbers

Total Organized Headlines: 204

This Week’s Executive Summaries

This week, I organized 204 headlines. 42 of them informed the executive summaries. I’m going to start with two top stories, and then we’ll go into the headlines organized by company, or by category if there’s no specific company associated.

Top Stories

Google Engineer Posts Genuine Awe About Anthropic’s Claude Code
The top story this week comes from a software engineer at Google who posted a rare tweet both praising a competitor (Anthropic) and expressing alarm at the pure power of the technology.

“I’m not joking and this isn’t funny. We have been trying to build distributed agent orchestrators at Google since last year. There are various options, not everyone is aligned… I gave Claude Code a description of the problem, it generated what we built last year in an hour.” https://x.com/rakyll/status/2007239758158975130?s=61

Over the last few weeks, internal communications among software engineers have repeatedly underscored the fact that AI is able to do their work as well as, if not better than, they can.

Two weeks ago, the creator of Claude Code himself posted:

“In the last thirty days, 100% of my contributions to Claude Code were written by Claude Code”
https://x.com/bcherny/status/2004897269674639461

This implies there will be an exponential improvement in coding output. I think laypeople are skeptical and assume this means slop, because when laypeople use AI for generative text or video, it often isn’t as high quality as a human. However, with code, this is a very different concept, and when the computer can self-improve, things are gonna get weird.

OpenAI is Preparing GPT Health
OpenAI is launching a dedicated health portal as a separate section within ChatGPT. It has the potential to change the world more than anything else I’ve seen in AI in the last two years.
https://openai.com/index/introducing-chatgpt-health/
https://fidjisimo.substack.com/p/chatgpt-health

GPT Health will allow you to connect your medical records securely, as well as wellness apps, into ChatGPT and start discussing your test results, preparing for appointments, getting advice on your diet and workouts, and even looking at insurance options.

I’m not sure I’ve ever been as excited about something as this for people who need advocacy and don’t have time or resources.

We’re seeing doctors embrace AI as a note-taking device. Soon, we’ll be able to carry our records with us across doctors’ visits and providers, with continuity.

When my dad was dying of cancer, he had six different specialists, and none of them were able to be in the room together, nor were any of them working on the same issues. I would try my best to keep up, but it was impossible.

Even now, when I go to the doctor, I put my lab reports through GPT, and it has noticed three things that my doctor did not. It has also encouraged me to add lab tests that were not initially on the list.

This is not to say that doctors are not doing a good job, or that doctors are not smart. The problem is simply that there are so many patients, and only so many doctors and so much time. Everything is separated, and no one is talking across systems.

I can only imagine what it would be like for someone without resources and time to try to figure out what they’re supposed to be doing for their health.

Amazon

Talk to Alexa in your browser with new AI assistant on Alexa.com
“Introducing Alexa.com, a completely new way to interact with Alexa+. From quick answers to completed tasks, Amazon now offers an AI assistant experience across voice, mobile, and web.” https://www.aboutamazon.com/news/devices/alexa-plus-web-ai-assistant

Amazon has finally launched a companion website for its Alexa AI agent voice assistant.

My relationship with Alexa has been pretty horrible over the years. Amazon Alexa used to have a website, and I used Alexa for my to-do list. I would add things to my to-do list all the time around the house, and then I’d go back and look at it online. Amazon got rid of the web interface, and then they suddenly deprecated the to-do list… and I lost everything I’d ever created.

My Alexa devices almost never remember they have a timer or an alarm. They remember to sound the alarm, but not that it exists. If I set a timer or an alarm and then say to cancel it, Alexa says there are none. But if I ask which alarms there are, it will list them. Then when I say to cancel them, it says it cannot. The only way to cancel an alarm is to unplug the device until the time has passed. These basic skills are so horrible that I don’t want to put any faith in the systems yet.

The new Alexa is too happy. I want the weather, yet the device clearly uses a lazy LLM prompt to generate an ebullient, fluffy, noisy update. The device then listens too long, and when I say “Good gracious, you talk too much,” it launches into a huge apology.

Ironically, Amazon Nova is a great model.

I don’t see a need to invest in Amazon in any way other than utility. They see the world as utility. I look at them as something I use as little as possible. If OpenAI comes out with a competing tool, I will 100% move over to it.

I wish I had faith in Google’s product team. I would have invested in Google Home. I had a Google Home the day it came out, but it was such a crappy product that I just couldn’t invest in it. And now, it’s too late (for me).

That said, there are 600 million Amazon devices floating around the world. Amazon would be crazy not to try to pounce on this.

Amazon has been scraping the web and adding retailers’ products
In one of the most science-fiction headlines of the week, Amazon is getting pushback over its AI shopping tool.
https://www.cnbc.com/2026/01/06/amazons-ai-shopping-tool-sparks-backlash-from-some-online-retailers.html

Amazon has been scraping the internet for products that are not available on Amazon. Then it’s listing those products as if you can buy them on Amazon, and it uses an AI shopping agent to go buy the product if you want it…as if you’re never leaving Amazon.

The program is called Shop Direct, and the AI agent is represented by a button that says “Buy for me.”

There are quite a few mistakes in the system, and small merchants are getting hit by the AI agent trying to buy things the merchant doesn’t sell, and in some cases has never sold. The automated requests from the agent are clogging up small-business order-processing systems with mistaken orders from Amazon.

In November, Amazon sued Perplexity because Perplexity was allowing its browser, Comet, to shop on users’ behalf, including from Amazon.

I’m not a huge fan of Perplexity, but in this case, I’m rooting for them. At some point, there’s no reason your browser can’t also do things on your behalf, including shop on Amazon.

Anthropic

Anthropic Raising $10 Billion at $350 Billion Value
“Anthropic, the developer of the chatbot Claude, plans to raise $10 billion at a valuation of $350 billion before the new investment, according to people familiar with the matter, nearly doubling its valuation from four months ago.” https://www.wsj.com/tech/ai/anthropic-raising-10-billion-at-350-billion-value-62af49f4

Boris Cherny, creator of Claude Code, shares how he uses and sets it up
“I’m Boris and I created Claude Code. Lots of people have asked how I use Claude Code, so I wanted to show off my setup a bit.

My setup might be surprisingly vanilla! Claude Code works great out of the box, so I personally don’t customize it much. There is no one correct way to use Claude Code: we intentionally build it in a way that you can use it, customize it, and hack it however you like. Each person on the Claude Code team uses it very differently.” https://x.com/bcherny/status/2007179832300581177?s=20

Anthropic to buy over 1,000,000 TPUs from Broadcom
“Anthropic will directly purchase close to 1,000,000 TPUv7 units and deploy them in facilities it controls. These 1 million chips will be purchased directly from Broadcom, which is selling the systems to Anthropic.

Under this structure for Anthropic owned TPUs, TeraWulf (WULF), Hut8, Cipher Mining (CIFR) will deliver the data center infrastructure, while Fluidstack will assume responsibility for on-site deployment, including cabling, burn-in, acceptance testing, and remote-hands services, effectively outsourcing physical server operations for Anthropic.” https://x.com/SemiAnalysis_/status/2007225399080550506?s=20

Ethics and Policy

Artificial intelligence begins prescribing medications in Utah
“In a first for the U.S., Utah is letting artificial intelligence — not a doctor — renew certain medical prescriptions. No human involved.

The state has launched a pilot program with health-tech startup Doctronic that allows an AI system to handle routine prescription renewals for patients with chronic conditions. The initiative, which kicked off quietly last month, is a high-stakes test of whether AI can safely take on one of health care’s most sensitive tasks and how far that could spread beyond one AI-friendly red state.”

“In data shared with Utah regulators, Doctronic compared its AI system with human clinicians across 500 urgent care cases. The results showed the AI’s treatment plan matched the physicians’ 99.2 percent of the time, according to the company.”

“Oskowitz said the AI is designed to err on the side of safety, automatically escalating cases to a physician if there’s any uncertainty. Human doctors will also review the first 250 prescriptions issued in each medication class to validate the AI’s performance. Once that threshold is met, subsequent renewals in that class will be handled autonomously.” https://www.politico.com/news/2026/01/06/artificial-intelligence-prescribing-medications-utah-00709122

JPMorgan is ditching proxy advisors and turning to AI for shareholder votes in the US
“JPMorgan’s asset and wealth management division is ditching its long-held practice of using external proxy advisors for advice on shareholder voting decisions.

The bank said it was “the first major investment firm to fully eliminate any reliance on external proxy advisors for our US voting process,” according to an excerpt from an internal memo seen by Business Insider.

The changes to the US proxy-voting process will take full effect on April 1, following a transition period in the first quarter of the year, a spokesperson for JPMorgan Asset Management told Business Insider.” https://www.businessinsider.com/jpmorgan-ditches-proxy-adivsory-firms-for-ai-shareholder-votes-memo-2026-1

Google

Boston Dynamics & Google DeepMind Form New AI Partnership to Bring Foundational Intelligence to Humanoid Robots
“We developed our Gemini Robotics models to bring AI into the physical world,” said Carolina Parada, Senior Director of Robotics at Google DeepMind. “We are excited to begin working with the Boston Dynamics team to explore what’s possible with their new Atlas robot as we develop new models to expand the impact of robotics, and to scale robots safely and efficiently.”
https://bostondynamics.com/blog/boston-dynamics-google-deepmind-form-new-ai-partnership/

Gemini introduces Personal Intelligence
“The best assistants don’t just know the world; they know you and help you navigate it. Today, we’re answering a top user request: you can now personalize Gemini by connecting Google apps with a single tap. Launching as a beta in the U.S., this marks our next step toward making Gemini more personal, proactive and powerful.

Personal Intelligence securely connects information from apps like Gmail and Google Photos to make Gemini uniquely helpful. If you turn it on, you control exactly which apps to link, and each one supercharges the experience. It connects Gmail, Photos, YouTube and Search in a single tap, and we’ve designed the setup to be simple and secure.”
https://blog.google/innovation-and-ai/products/gemini-app/personal-intelligence/

LMArena

LMArena has raised $150M+ at a valuation of $1.7B+
“We’re excited to share a major milestone in LMArena’s journey. We’ve raised $150M of Series A funding led by Felicis and UC Investments (University of California), with participation from Andreessen Horowitz, The House Fund, LDVP, Kleiner Perkins, Lightspeed Venture Partners and Laude Ventures.”

“Since announcing our $100M Seed round last year in May, LMArena has grown far faster than we imagined. In a matter of months, the community has contributed:

50 million votes across text, vision, web dev, search, video and image modalities

400+ new model evaluations, spanning both open and proprietary models (so many codenames!)

145k open-source battle data points across text, multimodal, expert and occupational categories, and more!” https://arena.ai/blog/series-a/

“The leaderboard started as a research project by cofounders Anastasios Angelopoulos and Wei-Lin Chiang when they were graduate students at UC Berkeley.”
https://sherwood.news/tech/ai-leaderboard-maker-lmarena-hits-usd1-7-billion-valuation/

Meta

Meta Unveils Sweeping Nuclear-Power Plan to Fuel Its AI Ambitions
“Meta on Friday unveiled a series of agreements that would make it an anchor customer for new and existing nuclear power in the U.S., where it needs city-size amounts of electricity for its artificial-intelligence data centers.

The Facebook parent said it would back new reactor projects with the developers TerraPower and Oklo and has struck a deal with the power producer Vistra to purchase and expand the generation output of three existing nuclear plants in Ohio and Pennsylvania.

Meta aims to see the first new reactors delivered as early as 2030 and 2032, a speedy target even for more-conventional power projects. Its purchase of nuclear power from Vistra starts later this year and will keep power on the grid.”
https://www.wsj.com/tech/ai/meta-unveils-sweeping-nuclear-power-plan-to-fuel-its-ai-ambitions-65c56aac?st=kwEGFz

Microsoft

Copilot Shopping
“Copilot Checkout turns conversations into conversions—instantly. No redirect, no friction, and you stay the merchant of record. Brand Agents bring AI-powered guidance for customers to your own site—your brand’s voice, built for fast, scalable adoption” https://about.ads.microsoft.com/en/blog/post/january-2026/conversations-that-convert-copilot-checkout-and-brand-agents

The elephant in the room here is that I’ve never used Copilot in my life. Maybe it’s because we don’t use it at work, and I mostly use Gmail and Google products — or OpenAI at home.

But assuming anyone actually uses Copilot, this is a pretty big deal: you can browse products, compare choices, and make a purchase within Copilot. I’m embarrassed to say I wouldn’t even know how to open Copilot.

The second thing Microsoft launched this week is something called Brand Agents, where a brand can build out a shopping assistant to tell you more about its products. That sounds a little bit like an MCP server or something — where there’s some kind of grounded data behind the scenes.

At the moment, Brand Agents are only available for Shopify.

NVIDIA

NVIDIA Releases New Physical AI Models as Global Partners Unveil Next-Generation Robots
I’ve talked a lot about Dr. Jim Fan and world models, and how NVIDIA has been exceptional at using simulations to train embodied robots. This new move — an open-source partnership with Hugging Face — is just one hit after the other.

This week has been an incredible week for NVIDIA. Each of these announcements on its own is pretty major. They’re so technical that I don’t know if unpacking them is worth it, but if you just look through them, I hope you’ll get a sense of just how hard NVIDIA is pushing on open source.

There are no moats anymore. Things are going to happen very quickly, and a lot of businesses are going to lose their leads.

“From mobile manipulators to humanoids, Boston Dynamics, Caterpillar, Franka Robotics, Humanoid, LG Electronics and NEURA Robotics debut new robots and autonomous machines built on NVIDIA technologies. NVIDIA releases new NVIDIA Cosmos and GR00T open models and data for robot learning and reasoning, Isaac Lab-Arena for robot evaluation and the OSMO edge-to-cloud compute framework to simplify robot training workflows. NVIDIA and Hugging Face integrate NVIDIA Isaac open models and libraries into LeRobot to accelerate the open-source robotics community. The NVIDIA Blackwell architecture-powered Jetson T4000 module is now available, delivering 4x greater energy efficiency and AI compute.” https://nvidianews.nvidia.com/news/nvidia-releases-new-physical-ai-models-as-global-partners-unveil-next-generation-robots

NVIDIA Kicks Off the Next Generation of AI With Rubin — Six New Chips, One Incredible AI Supercomputer
“The Rubin platform uses extreme codesign across the six chips — the NVIDIA Vera CPU, NVIDIA Rubin GPU, NVIDIA NVLink™ 6 Switch, NVIDIA ConnectX®-9 SuperNIC, NVIDIA BlueField®-4 DPU and NVIDIA Spectrum™-6 Ethernet Switch — to slash training time and inference token costs.”

“Named for Vera Florence Cooper Rubin — the trailblazing American astronomer whose discoveries transformed humanity’s understanding of the universe”
https://nvidianews.nvidia.com/news/rubin-platform-ai-supercomputer https://www.nvidia.com/en-us/data-center/technologies/rubin/

NVIDIA Announces Alpamayo Family of Open-Source AI Models and Tools to Accelerate Safe, Reasoning-Based Autonomous Vehicle Development
Wow, there really are no moats. “The ChatGPT moment for physical AI is here — when machines begin to understand, reason and act in the real world,” said Jensen Huang, founder and CEO of NVIDIA. “Robotaxis are among the first to benefit. Alpamayo brings reasoning to autonomous vehicles, allowing them to think through rare scenarios, drive safely in complex environments and explain their driving decisions — it’s the foundation for safe, scalable autonomy.”

“With Alpamayo, mobility leaders such as JLR, Lucid and Uber, along with the AV research community including Berkeley DeepDrive, can fast-track safe, reasoning‑based level 4 deployment roadmaps.”

“AVs must safely operate across an enormous range of driving conditions. Rare, complex scenarios, often called the “long tail,” remain some of the toughest challenges for autonomous systems to safely master. Traditional AV architectures separate perception and planning, which can limit scalability when new or unusual situations arise. Recent advances in end-to-end learning have made significant progress, but overcoming these long-tail edge cases requires models that can safely reason about cause and effect, especially when situations fall outside a model’s training experience.”

“The Alpamayo family introduces chain-of-thought, reasoning-based vision language action (VLA) models that bring humanlike thinking to AV decision-making. These systems can think through novel or rare scenarios step by step, improving driving capability and explainability — which is critical to scaling trust and safety in intelligent vehicles — and are underpinned by the NVIDIA Halos safety system.” https://nvidianews.nvidia.com/news/alpamayo-autonomous-vehicle-development

Locally Hosted Open Source AI
NVIDIA DGX Spark and DGX Station Power the Latest Open-Source and Frontier Models From the Desktop
“A breadth of highly optimized open models that would’ve previously required a data center to run can now be accelerated at the desktop on DGX Spark and DGX Station, thanks to continual advancements in model optimization and collaborations with the open-source community.

Preconfigured with NVIDIA AI software and NVIDIA CUDA-X libraries, DGX Spark provides powerful, plug-and-play optimization for developers, researchers and data scientists to build, fine-tune and run AI.

Spark provides a foundation for all developers to run the latest AI models at their desk; Station enables enterprises and research labs to run more advanced, large-scale frontier AI models. The systems support running the latest frameworks and open-source models — including the recently announced NVIDIA Nemotron 3 models — right from desktops.” https://blogs.nvidia.com/blog/dgx-spark-and-station-open-source-frontier-models/

China Tells Tech Companies to Halt Nvidia H200 Chip Orders
“Chinese officials have asked some Chinese technology companies to halt orders for Nvidia’s H200 chips this week, and are expected to require companies to buy domestic AI chips, The Information reported, citing unnamed people familiar with the matter.

The Chinese government is currently deciding whether, and on what terms, to allow domestic companies to buy Nvidia’s H200 chips, following a move by the White House late last year to clear their sale to China, a decision that reversed years of restrictions on Nvidia’s high-end sales to the country.”

“Chinese officials are seeking to discourage local companies from stockpiling H200 chips ahead of formal rules being put in place, the Information report said.”
https://www.silicon.co.uk/cloud/datacenter/china-nvidia-h200-2-628265

Nvidia’s reportedly asking Chinese customers to pay upfront for its H200 AI chips
“Nvidia is now requiring its customers in China to pay upfront in full for its H200 AI chips even as approval stateside and from Beijing remains uncertain, Reuters reported, citing anonymous sources.

The chipmaker isn’t leaving any room for refunds or changes to orders, the report said.”
https://techcrunch.com/2026/01/08/nvidias-reportedly-asking-chinese-customers-to-pay-upfront-its-for-h200-ai-chips/

OpenAI

OpenAI earmarks $50 billion for employee stock grant pool, The Information reports
“OpenAI has already given $80 billion in vested equity, which, along with the employee stock grant pool, comprises about 26% of the company, according to the report.”
https://finance.yahoo.com/news/openai-reserves-50-billion-stock-001357025.html

OpenAI to acquire the team behind executive coaching AI tool Convogo
“OpenAI is kicking off the new year with yet another acqui-hire. The AI giant is acquiring the team behind Convogo, a business software platform that helps executive coaches, consultants, talent leaders, and HR teams automate and improve leadership assessments and feedback reporting.

An OpenAI spokesperson said the company is not acquiring Convogo’s IP or technology, but rather hiring the team to work on its “AI cloud efforts.” The three co-founders — Matt Cooper, Evan Cater, and Mike Gillett — will join OpenAI as part of what a source familiar with the matter called an all-stock deal.”
https://techcrunch.com/2026/01/08/openai-to-acquire-the-team-behind-executive-coaching-ai-tool-convogo/

Cool Guide To Agents: Context Engineering for Personalization – State Management with Long-Term Memory Notes using OpenAI Agents SDK
“Modern AI agents are no longer just reactive assistants—they’re becoming adaptive collaborators. The leap from “responding” to “remembering” defines the new frontier of context engineering. At its core, context engineering is about shaping what the model knows at any given moment. By managing what’s stored, recalled, and injected into the model’s working memory, we can make an agent that feels personal, consistent, and context-aware.

The RunContextWrapper in the OpenAI Agents SDK provides the foundation for this. It allows developers to define structured state objects that persist across runs, enabling memory, notes, or even preferences to evolve over time. When paired with hooks and context-injection logic, this becomes a powerful system for context personalization—building agents that learn who you are, remember past actions, and tailor their reasoning accordingly.” https://developers.openai.com/cookbook/examples/agents_sdk/context_personalization

OpenSource

8 plots that explain the state of open models
Measuring the impact of Qwen, DeepSeek, Llama, GPT-OSS, Nemotron, and all of the new entrants to the ecosystem.
https://www.interconnects.ai/p/8-plots-that-explain-the-state-of

Chinese AI models have lagged the US frontier by 7 months on average since 2023
“Since 2023, every model at the frontier of AI capabilities, as measured by the Epoch Capabilities Index, has been developed in the United States. Over that same period, Chinese models have trailed US capabilities by an average of seven months, with a minimum gap of four months and a maximum gap of 14.”
https://epoch.ai/data-insights/us-vs-china-eci

Robots

Skateboarding Robot
“HUSKY is a physics-aware framework for humanoid skateboarding, modeling the task as a hybrid dynamical system.” https://x.com/TheHumanoidHub/status/2018932338366026232

Video

LTX-2 Is Now Open Source
“LTX-2 brings production-ready audio-video generation to open source, with full weights, creative control, and real-world efficiency.”
https://ltx.io/model/model-blog/ltx-2-is-now-open-source

X/Twitter

xAI just closed $20B at roughly $240B valuation.
“xAI completed its upsized Series E funding round, exceeding the $15 billion targeted round size, and raised $20 billion. Investors participating in the round include Valor Equity Partners, Stepstone Group, Fidelity Management & Research Company, Qatar Investment Authority, MGX and Baron Capital Group, amongst other key partners. Strategic investors in the round include NVIDIA and Cisco Investments, who continue to support xAI in rapidly scaling our compute infrastructure and buildout of the largest GPU clusters in the world.” https://x.ai/news/series-e

Zhipu

The first of China’s ‘AI tigers’ goes public as Zhipu climbs in Hong Kong debut
‘Shares of Knowledge Atlas Technology JSC, better known as Zhipu, edged higher on their Hong Kong debut, following a $558 million initial public offering that made it the first of China’s “AI tigers” to go public.’

“Founded in 2019 by researchers from a top Chinese university, Zhipu represents the country’s first major large language model company to go public through an IPO. The listing marks another key milestone for China’s broader artificial intelligence sector following a wave of recent listings by AI chipmakers.”
https://www.cnbc.com/2026/01/08/china-ai-tiger-goes-ipo-zhipu-hong-kong-debut-openai-knowledge-atlas-hsi-hang-seng-listing.html

Full Executive Summaries with Links, Generated by Claude 4.5

Claude Code recreates Google’s year-long distributed agent project in one hour
A Google engineer revealed that Claude’s coding assistant reproduced their team’s complex distributed agent orchestration system in just 60 minutes, highlighting how AI coding tools are now matching the output of specialized enterprise development teams. This suggests AI assistants may soon compress months of software engineering work into hours, potentially reshaping how tech companies approach complex system development.

“I’m not joking and this isn’t funny. We have been trying to build distributed agent orchestrators at Google since last year. There are various options, not everyone is aligned… I gave Claude Code a description of the problem, it generated what we built last year in an hour.” https://x.com/rakyll/status/2007239758158975130?s=61

OpenAI launches ChatGPT Health with medical record integration
OpenAI introduced ChatGPT Health, a dedicated health platform that connects medical records and wellness apps like Apple Health and Peloton to provide personalized health guidance. The company cites growing physician AI adoption (nearly doubled from 2023-2024) and patient demand (3 in 5 US adults used AI for health in past 3 months) as evidence that AI can address healthcare’s core problems: overworked doctors, fragmented care systems, high costs, and reactive rather than preventive approaches. This represents a significant expansion beyond general AI chatbots into specialized healthcare applications with integrated personal data.

ChatGPT Health and what AI can do for a broken system https://fidjisimo.substack.com/p/chatgpt-health

Introducing ChatGPT Health | OpenAI https://openai.com/index/introducing-chatgpt-health/

OpenAI: AI as a Healthcare Ally [Jan 2026] https://cdn.openai.com/pdf/2cb29276-68cd-4ec6-a5f4-c01c5e7a36e9/OpenAI-AI-as-a-Healthcare-Ally-Jan-2026.pdf

Amazon launches web version of Alexa+ assistant with action capabilities
Amazon rolled out Alexa.com to all Alexa+ Early Access customers, bringing the AI assistant to web browsers for the first time. Unlike typical chatbots that only provide information, this version can take real-world actions like adding items to shopping carts, controlling smart home devices, and managing calendars directly from the browser. The move represents Amazon’s push to make Alexa available “wherever customers are” beyond the 600 million Alexa-enabled devices already in homes.

Talk to Alexa in your browser with new AI assistant on Alexa.com https://www.aboutamazon.com/news/devices/alexa-plus-web-ai-assistant

Amazon’s AI shopping tool scrapes retailers’ sites without permission
Amazon’s “Shop Direct” program uses AI to list and sell products from other retailers’ websites without their consent, sparking backlash from over 180 businesses who discovered their inventory being sold on Amazon’s platform. The controversy highlights tensions around AI web scraping as companies like Amazon build autonomous shopping agents that can purchase items across the internet. Affected retailers report receiving orders for products they don’t sell or have in stock, forcing them to contact Amazon to opt out of a program they never agreed to join.

Amazon’s AI shopping tool sparks backlash from some online retailers https://www.cnbc.com/2026/01/06/amazons-ai-shopping-tool-sparks-backlash-from-some-online-retailers.html

Anthropic seeks $10 billion funding round at $350 billion valuation
The ChatGPT rival would become one of the world’s most valuable private companies, signaling massive investor appetite for AI despite concerns about profitability timelines. This valuation would put Anthropic ahead of most public tech giants and reflects the premium investors place on leading AI model developers in the race for artificial general intelligence.

Exclusive | Anthropic Raising $10 Billion at $350 Billion Value – WSJ https://www.wsj.com/tech/ai/anthropic-raising-10-billion-at-350-billion-value-62af49f4

Claude Code creator reveals surprisingly minimal customization approach
Boris, the developer behind Claude Code, shared that he uses a basic setup with minimal customization despite user expectations of complex configurations. This suggests the AI coding tool is designed to work effectively with default settings, potentially lowering barriers for mainstream developer adoption compared to tools requiring extensive setup.

“I’m Boris and I created Claude Code. Lots of people have asked how I use Claude Code, so I wanted to show off my setup a bit. My setup might be surprisingly vanilla! Claude Code works great out of the box, so I personally don’t customize it much. There is no one correct way to…” https://x.com/bcherny/status/2007179832300581177?s=20

Anthropic to buy one million AI chips directly from Broadcom
The AI company will purchase nearly 1 million TPUv7 units for its own data centers, bypassing traditional cloud providers in a major infrastructure investment. This direct procurement strategy signals Anthropic’s push for greater control over its computing resources as AI models demand increasingly powerful hardware. The deal represents one of the largest direct chip purchases by an AI company to date.

“IMPORTANT: Anthropic will directly purchase close to 1,000,000 TPUv7 units and deploy them in facilities it controls. These 1 million chips will be purchased directly from Broadcom selling the systems to Anthropic. Under this structure for Anthropic owned TPUs, TeraWulf (WULF),…” https://x.com/SemiAnalysis_/status/2007225399080550506?s=20

AI systems start prescribing medications to patients in Utah hospitals
Utah healthcare providers are now using AI to directly prescribe drugs to patients, marking a significant shift from AI serving as a diagnostic aid to making actual treatment decisions. This represents one of the first implementations of AI taking over core medical responsibilities traditionally reserved for human doctors, potentially transforming how healthcare is delivered while raising questions about accountability and patient safety.

Artificial intelligence begins prescribing medications in Utah – POLITICO https://www.politico.com/news/2026/01/06/artificial-intelligence-prescribing-medications-utah-00709122

JPMorgan replaces human proxy advisors with AI for shareholder voting decisions
The bank is using artificial intelligence to analyze proxy statements and make voting recommendations on corporate governance issues, marking a shift from traditional third-party advisory firms to automated decision-making. This represents a significant move toward AI-driven financial services operations, potentially influencing how major institutions handle the billions of dollars in shareholder votes they cast annually.

JPMorgan Ditches Proxy Advisors and Turns to AI for Shareholder Votes – Business Insider https://www.businessinsider.com/jpmorgan-ditches-proxy-adivsory-firms-for-ai-shareholder-votes-memo-2026-1

Boston Dynamics partners with Google DeepMind to add AI brains to robots
Boston Dynamics is combining its advanced humanoid robots with Google DeepMind’s artificial intelligence systems to create machines that can think and reason, not just follow pre-programmed movements. This partnership represents a significant leap from today’s impressive but limited robotic demonstrations toward truly autonomous robots that could adapt to complex, unpredictable real-world situations. The collaboration merges Boston Dynamics’ world-leading robotic hardware with DeepMind’s cutting-edge AI research in a bid to solve robotics’ biggest challenge: making machines that can handle the messy complexity of human environments.

Boston Dynamics & Google DeepMind Form New AI Partnership to Bring Foundational Intelligence to Humanoid Robots | Boston Dynamics https://bostondynamics.com/blog/boston-dynamics-google-deepmind-form-new-ai-partnership/

Google launches Personal Intelligence, connecting Gemini to Gmail and Photos
Google’s Gemini can now access your personal Gmail, Photos, YouTube and Search data to provide customized responses, marking a shift from generic AI to truly personalized assistance. The feature helps with complex tasks like trip planning using your actual travel history or finding specific details like license plates from your photos. Available as a beta for paid subscribers in the US, it represents the first major AI assistant to deeply integrate across a user’s personal data ecosystem while keeping that data within Google’s existing security framework.

Personal Intelligence: Connecting Gemini to Google apps https://blog.google/innovation-and-ai/products/gemini-app/personal-intelligence/?_bhlid=06d4bd14c9983676596ec7f06fb96c7942470a8e

LMArena raises $150M at $1.7B valuation for AI model testing
The startup that runs popular AI model comparison leaderboards secured massive funding just seven months after its previous round, highlighting the critical need for independent AI evaluation in an industry that lacks standardized testing. LMArena’s platform lets users vote on which AI models perform better in blind comparisons, a system that has become the go-to ranking benchmark for AI companies; the company reports over $30 million in annual revenue just four months after launching paid services.

AI leaderboard maker LMArena hits $1.7 billion valuation – Sherwood News https://sherwood.news/tech/ai-leaderboard-maker-lmarena-hits-usd1-7-billion-valuation/

Fueling the World’s Most Trusted AI Evaluation Platform https://arena.ai/blog/series-a/

“LMArena has raised $150M+ at a valuation of $1.7B+ 💪🏼 In the past 7 months, @arena has: Grown our userbase 25x. 35M+ unique users. Grown our revenue from 0 to >>$30M+ ARR in 4 months. Our products help labs and enterprises measure the real utility of AI and understand their…” https://x.com/ml_angelopoulos/status/2008577473450250441

“The industry is shifting from asking “What can this model do?” to “Can I trust it?” LMArena’s $150M raise underscores the growing need for independent, transparent, real-world evaluation frameworks that ensure AI systems meet the rigorous reliability and trust requirements of…” https://x.com/istoica05/status/2008575786169889132

“Today, we’re excited to announce our $150M Series A at a $1.7B valuation—nearly 3× our May seed round. Since launching evaluations in Sept, our annualized consumption run rate has surpassed $30M. Our mission is clear: to measure and advance the frontier of AI for real-world use,…” https://x.com/arena/status/2008571061961703490

Meta plans massive nuclear power expansion to fuel AI data centers
Meta announced plans to add 1-4 gigawatts of nuclear capacity by the early 2030s, marking the tech giant’s most aggressive clean energy commitment yet. This move signals how AI’s enormous electricity demands are pushing major companies beyond traditional renewable sources like solar and wind toward nuclear power, with Meta joining Amazon and Google in nuclear investments to meet their AI infrastructure needs.

Meta Unveils Sweeping Nuclear-Power Plan to Fuel Its AI Ambitions – WSJ https://www.wsj.com/tech/ai/meta-unveils-sweeping-nuclear-power-plan-to-fuel-its-ai-ambitions-65c56aac?st=kwEGFz&reflink=desktopwebshare_permalink

Microsoft launches AI shopping tools that complete purchases inside conversations
Microsoft introduced Copilot Checkout, which lets shoppers buy products directly within AI conversations without leaving the chat interface, and Brand Agents, AI assistants that retailers can deploy on their websites in hours. Early data shows Copilot-assisted shopping journeys led to 53% more purchases within 30 minutes, while one retailer saw 3x higher conversion rates with Brand Agents. This represents a shift from AI as a discovery tool to AI as a complete commerce platform, potentially disrupting traditional e-commerce flows by eliminating the friction between browsing and buying.

Conversations that Convert: Copilot Checkout and Brand Agents | Microsoft Advertising https://about.ads.microsoft.com/en/blog/post/january-2026/conversations-that-convert-copilot-checkout-and-brand-agents

China orders tech companies to stop buying Nvidia’s newest AI chips
Beijing has instructed domestic firms to halt purchases of Nvidia’s H200 processors, the company’s most advanced AI training chips, as tensions escalate over semiconductor restrictions. This marks a significant escalation in the US-China tech war, potentially cutting off a major revenue source for Nvidia while forcing Chinese companies to rely on less powerful alternatives or domestic chip suppliers.

China Tells Tech Companies to Halt Nvidia H200 Chip Orders — The Information https://www.theinformation.com/articles/china-tells-tech-companies-halt-nvidia-h200-chip-orders

Nvidia demands full upfront payment from Chinese customers for H200 chips
The chipmaker is requiring Chinese buyers to pay in advance with no refunds allowed, a significant departure from previous policies that accepted partial deposits. The stricter payment structure reflects Nvidia’s attempt to balance strong demand against political risk as it navigates regulatory uncertainty: Chinese companies have ordered over 2 million H200 GPUs for 2026, and previous export restrictions forced a $5.5 billion inventory writedown.

Nvidia’s reportedly asking Chinese customers to pay upfront for its H200 AI chips | TechCrunch https://techcrunch.com/2026/01/08/nvidias-reportedly-asking-chinese-customers-to-pay-upfront-its-for-h200-ai-chips/

NVIDIA unveils Rubin platform with six new AI chips for 2026
NVIDIA announced its next-generation Rubin AI platform featuring six specialized chips designed for 2026 release, marking the company’s continued push beyond its current Blackwell architecture. This represents NVIDIA’s effort to maintain its dominance in AI hardware as competition intensifies and demand for more powerful AI computing grows across industries.

NVIDIA Kicks Off the Next Generation of AI With Rubin — Six New Chips, One Incredible AI Supercomputer | NVIDIA Newsroom https://nvidianews.nvidia.com/news/rubin-platform-ai-supercomputer

NVIDIA releases open-source AI models designed specifically for autonomous vehicles
NVIDIA launched its Alpamayo family of AI models and development tools, marking a shift toward open-source approaches in self-driving car technology. This matters because it could accelerate industry-wide progress by allowing smaller companies and researchers to access advanced autonomous vehicle AI without building from scratch. The move represents a departure from the typically secretive autonomous vehicle sector, potentially speeding up safety improvements across the entire industry.

NVIDIA Announces Alpamayo Family of Open-Source AI Models and Tools to Accelerate Safe, Reasoning-Based Autonomous Vehicle Development | NVIDIA Newsroom https://nvidianews.nvidia.com/news/alpamayo-autonomous-vehicle-development

Nvidia launches desktop AI supercomputers for trillion-parameter models
Nvidia’s DGX Spark and DGX Station bring data center-class AI capabilities to desktop systems, enabling developers to run models up to 1 trillion parameters locally without cloud infrastructure. The systems use new compression technology that reduces AI model size by 70% while maintaining performance, making previously cloud-only frontier models accessible on individual workstations. This represents a significant shift toward local AI development, offering faster iteration cycles and enhanced data security for enterprises and researchers.

NVIDIA DGX Spark and DGX Station Power the Latest Open-Source and Frontier Models From the Desktop | NVIDIA Blog https://blogs.nvidia.com/blog/dgx-spark-and-station-open-source-frontier-models/

Nvidia partners with Hugging Face to democratize robot development tools
Nvidia is integrating its Isaac robotics simulation platform with Hugging Face’s LeRobot framework, connecting 2 million robotics developers with 13 million AI developers. This collaboration allows robots and environments built in Nvidia’s tools to run directly in the open-source LeRobot system. The partnership significantly lowers barriers for developers wanting to build physical AI systems by combining Nvidia’s advanced robotics simulation with Hugging Face’s accessible development platform.

“Accelerating open-source physical AI. 🤖 NVIDIA is collaborating with @huggingface to bring open-source NVIDIA Isaac technologies into the @LeRobotHF framework, making end-to-end robot development faster and more accessible. 🔹 Connecting 2M+ NVIDIA robotics developers with 13M…” https://x.com/NVIDIARobotics/status/2008636752651522152

“Excited to share our @NVIDIARobotics × @huggingface collaboration on robotics that was presented at CES in Las Vegas. This is a big step for developers. Anything you build in Isaac Sim / IsaacLab: environments, tasks, robots can now run out of the box in LeRobot. Thanks to…” https://x.com/LeRobotHF/status/2008495248931017026

OpenAI sets aside $50 billion employee stock pool at $500 billion valuation
OpenAI allocated 10% of its company value to employee stock grants last fall, reflecting the AI leader’s massive $500 billion valuation and efforts to retain talent in the competitive AI market. The move comes as OpenAI seeks new funding at an even higher $750 billion valuation, demonstrating investor confidence in the ChatGPT maker’s continued dominance.

OpenAI earmarks $50 billion for employee stock grant pool, The Information reports https://finance.yahoo.com/news/openai-reserves-50-billion-stock-001357025.html

OpenAI acquires Convogo team in ninth talent grab this year
OpenAI hired the three-person team behind executive coaching AI startup Convogo in an all-stock deal, shutting down the product while adding talent to its cloud efforts. This marks OpenAI’s ninth acquisition in 12 months, following a pattern of buying teams rather than technology as the company rapidly scales its workforce. The deal highlights how AI leaders are using acquisitions primarily as talent accelerators rather than traditional technology purchases.

OpenAI to acquire the team behind executive coaching AI tool Convogo | TechCrunch https://techcrunch.com/2026/01/08/openai-to-acquire-the-team-behind-executive-coaching-ai-tool-convogo/

OpenAI’s new SDK enables AI agents to remember users across conversations
The OpenAI Agents SDK introduces persistent memory capabilities that let AI agents maintain structured user profiles and preferences between sessions, moving beyond simple chatbots to personalized digital assistants. This “context engineering” approach stores and recalls specific details like travel preferences or past interactions, creating what developers call the “magic moment” when an AI stops feeling generic and becomes truly personal. Early implementations show promise for travel booking, customer service, and other domains where continuity and personalization drive user value.

Context Engineering for Personalization – State Management with Long-Term Memory Notes using OpenAI Agents SDK https://cookbook.openai.com/examples/agents_sdk/context_personalization
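The core idea behind this kind of long-term memory is simple: persist a structured user profile between sessions and render it back into the model’s context at the start of each new conversation. As a rough illustration of the pattern (a minimal sketch only, not the actual OpenAI Agents SDK API; the class and file name here are hypothetical):

```python
import json
from pathlib import Path

class UserMemory:
    """Minimal long-term memory store: a structured user profile persisted
    to disk between sessions and injected into each new conversation's
    context. Illustrative sketch only, not the Agents SDK interface."""

    def __init__(self, path: str):
        self.path = Path(path)
        # Load the profile saved by a previous session, if any.
        self.profile = (
            json.loads(self.path.read_text()) if self.path.exists() else {}
        )

    def remember(self, **facts):
        # Merge newly learned preferences into the stored profile and persist.
        self.profile.update(facts)
        self.path.write_text(json.dumps(self.profile, indent=2))

    def as_context(self) -> str:
        # Render the profile as a system-prompt preamble for the next session.
        if not self.profile:
            return "No stored user profile."
        lines = [f"- {k}: {v}" for k, v in sorted(self.profile.items())]
        return "Known user preferences:\n" + "\n".join(lines)

# Session 1: the agent learns a preference and persists it.
memory = UserMemory("profile.json")
memory.remember(seat="aisle", airline="United")

# Session 2 (a new process): the profile survives and seeds the context.
memory2 = UserMemory("profile.json")
print(memory2.as_context())
```

In a real agent, the `as_context()` string would be prepended to the system prompt, which is what produces the “magic moment” of the assistant recalling a traveler’s seat preference without being told again.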

Chinese AI models dominate open-source adoption despite Western technical advances
Analysis of 1,152 open AI models shows Qwen alone received more downloads in December 2025 than all Western competitors combined, while Chinese models lead on intelligence benchmarks despite OpenAI’s GPT-OSS showing competitive performance. This represents a fundamental shift in the AI ecosystem where adoption increasingly concentrates around Chinese providers, creating strategic challenges for Western AI companies even as they develop technically competitive models.

8 plots that explain the state of open models https://www.interconnects.ai/p/8-plots-that-explain-the-state-of

Chinese AI models trail US frontier by seven months on average
Since 2023, every leading AI model has come from the US, with Chinese models consistently lagging 4-14 months behind according to Epoch AI’s capability measurements. This gap mirrors the divide between open-weight models (mostly Chinese) and closed proprietary systems (mostly US), suggesting structural differences in development approaches rather than just technical capabilities.

Chinese AI models have lagged the US frontier by 7 months on average since 2023 | Epoch AI https://epoch.ai/data-insights/us-vs-china-eci

AI system learns to skateboard using physics-based movement constraints
Researchers developed HUSKY, a framework that teaches humanoid robots to skateboard by modeling the physics relationship between board tilt and steering. This represents a breakthrough in physics-aware AI that could advance robot locomotion in complex, dynamic environments beyond traditional walking and running.

“HUSKY is a physics-aware framework for humanoid skateboarding, modeling the task as a hybrid dynamical system. – It derives a kinematic equality constraint between board tilt and truck steering to enable physics-informed policy learning. – Using Deep Reinforcement Learning…” https://x.com/TheHumanoidHub/status/2018932338366026232

Lightricks releases first open-source video model with synchronized audio generation
Lightricks has made LTX-2 fully open source, marking the first time a production-grade model can generate up to 20 seconds of synchronized video and audio at 4K resolution and 50fps with complete model weights available for download. This breaks new ground by offering capabilities previously locked behind proprietary APIs like OpenAI’s Sora, while running efficiently on consumer GPUs like the RTX 5090. The release includes training frameworks and LoRA adapters, enabling developers to fine-tune the model for specific creative workflows without requiring enterprise-level hardware.

“🚨 LTX-2 is launching on fal day 0! 🎬 Next-level text-to-video and image-to-video with native synchronized audio 🎥 Up to 20-second sequences at up to 60 fps with advanced camera controls ⚡ Distilled version generates videos in less than 30 seconds without compromising quality” https://x.com/fal/status/2008429894410105120

“LTX-2 by @lightricks is out! 💥 Open source video 📹 + audio 🔊 generation model And of course, with an official @huggingface demo 🤗” https://x.com/multimodalart/status/2008497697943416853

LTX-2 Is Now Open Source | LTX Blog https://ltx.io/model/model-blog/ltx-2-is-now-open-source

“On Day 1, this runs in c. 2 minutes on consumer hardware and if other models are anything to go by, it’ll likely get 4-8x faster over the coming months With LoRA-training, it’ll likely be able to do anything Sora2 can do but with more consistency (train voice, motion, etc.),…” https://x.com/peteromallet/status/2008529512909205623

“The first open source Video-Audio generation model just LANDED 🔥” https://x.com/linoy_tsaban/status/2008429764722163880

Musk’s xAI raises $20 billion, reaching $240 billion valuation in 18 months
xAI reached nearly half of OpenAI’s valuation while raising significantly less total capital, demonstrating exceptional capital efficiency in the competitive AI race. The massive funding round positions xAI as a major challenger to established players like OpenAI, with Musk’s company reaching this scale faster than most AI startups.

“xAI just closed $20B at roughly $240B valuation. This round matters more than the headline suggests. The math first. In 18 months, Musk built a company worth nearly half of OpenAI while raising a fraction of what Sam Altman has accumulated. xAI’s total capital sits around…” https://x.com/aakashgupta/status/2008637290617442527

xAI Raises $20B Series E | xAI https://x.ai/news/series-e

Zhipu becomes first major Chinese AI company to go public with $558M Hong Kong IPO
The Beijing-based large language model developer, considered one of China’s “AI tigers” competing with OpenAI, raised $558 million and saw shares jump 13% on debut despite being on the US Entity List. This marks China’s first major LLM company to go public, signaling the country’s AI sector is maturing enough for public markets even while facing US technology restrictions. The successful listing could pave the way for other Chinese AI startups like MiniMax to follow suit.

The first of China’s ‘AI tigers’ goes public as Zhipu climbs in Hong Kong debut https://www.cnbc.com/2026/01/08/china-ai-tiger-goes-ipo-zhipu-hong-kong-debut-openai-knowledge-atlas-hsi-hang-seng-listing.html
