Image created with gemini-3.1-flash-image-preview with claude-opus-4.7. Image prompt: Using the provided reference image, preserve every element exactly — the marigold-orange backdrop, the seated woman’s closed-eyes smile and purple-white windbreaker, the tattooed singer in the red beanie and layered red vest, the lighting and framing — but replace only the black handheld microphone at his mouth with a polished square silicon CPU chip held the same way between his fingers, its mirrored metal heat spreader and gold contact pins catching the warm key light with photographic realism and seamless integration. After generating the image, overlay the text “Chips” in the upper-left corner of the frame in large, bold, all-caps ITC Avant Garde Gothic Pro Medium (or a near-identical geometric sans-serif if unavailable), pure white (#FFFFFF), with no date, subtitle, drop shadow, or outline. The text should be substantial in scale — taking up a meaningful portion of the upper-left area — with comfortable margin from the top and left edges, set against the negative space of the orange backdrop so it does not overlap or obscure the singer, the seated woman, or the replaced object.
Five companies — Google, Microsoft, Meta, Amazon, and Oracle — now control about two-thirds of the world’s compute, up slightly from ~60% at the start of 2024. Many AI labs (including OpenAI and Anthropic) depend almost entirely on these hyperscalers for access to their compute.
https://x.com/EpochAIResearch/status/2044154042541301870
I asked Jensen: “2 out of the top 3 models in the world, Claude and Gemini, were trained on TPU. What does that mean for Nvidia going forward?” After a long technical back and forth about what the right accelerator for AI looks like (see full episode), Jensen lays down the
https://x.com/dwarkesh_sp/status/2044468295957635392
Five hyperscalers now own over two-thirds of global AI compute
https://epochai.substack.com/p/five-hyperscalers-now-own-over-two
Banger paper from NVIDIA. Agentic reasoning needs models that are not just capable, but efficient at long-context inference. The agent model layer is moving toward open, long-context, high-throughput architectures. This paper introduces Nemotron 3 Super, an open 120B parameter
https://x.com/dair_ai/status/2044452957023047943
NVIDIA Launches Ising, the World’s First Open AI Models to Accelerate the Path to Useful Quantum Computers | NVIDIA Newsroom
https://nvidianews.nvidia.com/news/nvidia-launches-ising-the-worlds-first-open-ai-models-to-accelerate-the-path-to-useful-quantum-computers
We’ve been developing a multi-agent system that builds and maintains complex software autonomously. Recently, we partnered with NVIDIA to apply it to optimizing CUDA kernels. In 3 weeks, it delivered a 38% geomean speedup across 235 problems.
https://x.com/cursor_ai/status/2044136953239740909
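The "38% geomean speedup across 235 problems" is a geometric mean of per-kernel speedup ratios, the standard way to average benchmark ratios so that no single outlier dominates. A minimal sketch of how such a figure is computed (the speedup values below are hypothetical, not from the NVIDIA/Cursor work):

```python
import math

def geomean_speedup(speedups):
    """Geometric mean of per-problem speedup ratios.

    A result of 1.38 corresponds to a 38% geomean speedup:
    the n-th root of the product of the individual ratios.
    Logs are used to avoid overflow on long benchmark lists.
    """
    return math.exp(sum(math.log(s) for s in speedups) / len(speedups))

# Hypothetical per-kernel speedups for illustration only.
ratios = [1.20, 1.55, 1.42, 1.35]
print(f"geomean speedup: {geomean_speedup(ratios):.3f}")
```

Unlike an arithmetic mean, the geometric mean is symmetric under inversion: a kernel that got 2x faster and one that got 2x slower average out to exactly 1.0.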
Today, we released Lyra 2.0, a framework for generating persistent, explorable 3D worlds at scale, from NVIDIA Research. Generating large-scale, complex environments is difficult for AI models. Current models often “forget” what spaces look like and lose track of movement over
https://x.com/NVIDIAAIDev/status/2044445645109436672
Microsoft Secures Former OpenAI “Stargate” Site in Norway for AI Infrastructure | TheEnergyMag
https://theenergymag.com/news/market-news/microsoft-secures-former-open-ai-stargate-site-in-norway-for-ai-infrastructure
OpenAI to spend more than $20 billion on Cerebras chips, receive stake, The Information reports
https://finance.yahoo.com/sectors/technology/articles/openai-spend-more-20-billion-013150907.html
OpenAI Stargate Execs to Join Meta’s New Compute Unit — The Information
https://www.theinformation.com/briefings/openai-stargate-execs-join-metas-new-compute-unit
OpenAI Stargate People Move To Meta Amid Data Center Boom
https://www.forbes.com/sites/johnwerner/2026/04/15/openai-stargate-people-move-to-meta-amid-data-center-boom/
Distilled recap of the back-and-forth with Jensen on export controls: Dwarkesh: Wouldn’t selling Nvidia chips to China enable them to train models like Claude Mythos with cyber offensive capabilities that would be threats to American companies and national security? Jensen:
https://x.com/dwarkesh_sp/status/2044483393941848131
Jensen regrets that when Anthropic and OpenAI first needed billions to scale, Nvidia wasn’t in a position to invest. So these labs went to hyperscalers like Microsoft, Google, and Amazon instead, and in return committed to using their compute. “I’m not going to make that same
https://x.com/dwarkesh_sp/status/2044498492450869624
Compute constraints are a double bind. On the inference side, you need to either (a) raise prices, (b) ration use, and/or (c) serve worse models; this hurts current growth. On the training side, you can’t train the next generation of models to stay competitive; this hurts future growth
https://x.com/emollick/status/2044226087610114356
Every year someone names a new bottleneck for AI compute scaling. @dylan522p on why power isn’t gonna be the big one over the next few years: fundamentally there’s many different ways to generate power (rather than just one company that can produce the EUV tools needed for the
https://x.com/dwarkesh_sp/status/2042256222242349146
Interesting: “Currently, 38% of Americans live within 5 miles of at least one operational data center… Living near a data center doesn’t have much of an effect on public opinion about the facilities.” From now on, it looks like most DCs will be rural.
https://x.com/emollick/status/2043904225944514728
Six months ago, there was a lot of focus on the idea that there would be a massive glut of unused computing power which could cause a recession as AI use plateaued. The “compute bubble” belief was absolutely everywhere. The degree to which this was wrong deserves some notice
https://x.com/emollick/status/2043690650550329710
The world is transitioning to a compute-powered economy. The field of software engineering is currently undergoing a renaissance, with AI having dramatically sped up software engineering even over just the past six months. AI is now on track to bring this same transformation to
https://x.com/gdb/status/2043831031468568734
With a Mac Mini and Personal Computer, you get to have the ambient software-hardware integration running in your home, but accessible on any interface (phone) to invoke. A lot of people wanted AI-native hardware. But what actually matters is a great orchestra conductor. That
https://x.com/AravSrinivas/status/2044845689306550694
AI data center startup Fluidstack in talks for $1B round at $18B valuation months after hitting $7.5B, says report | TechCrunch
Europe has a problem that nobody wants to say out loud: the continent keeps producing exceptional researchers who then spend their careers building infrastructure they’ll never own: The labs, the compute, the platforms… they end up somewhere else. And that is a bad problem. A
https://x.com/IlirAliu_/status/2044356335551074322
Meta commits to 1 GW with Broadcom, Hock Tan to leave board
https://www.cnbc.com/2026/04/14/meta-commits-to-one-gigawatt-of-custom-chips-with-broadcom-as-hock-tan-agrees-to-leave-board.html
Jensen Huang – TPU competition, why we should sell chips to China, & Nvidia’s supply chain moat
https://www.dwarkesh.com/p/jensen-huang
The Jensen Huang episode. 0:00:00 – Is Nvidia’s biggest moat its grip on scarce supply chains? 0:16:25 – Will TPUs break Nvidia’s hold on AI compute? 0:41:06 – Why doesn’t Nvidia become a hyperscaler? 0:57:36 – Should we be selling AI chips to China? 1:35:06 – Why doesn’t Nvidia
https://x.com/dwarkesh_sp/status/2044456498441708013
It was a pleasure to sit down with @FidlerSanja, VP of AI Research at NVIDIA, leading the company’s Spatial Intelligence Lab, who is actively building the next major frontier of AI – physical AI. During GTC, where her lab introduced AlpaDream, we discussed: • If Transformers are
https://x.com/TheTuringPost/status/2042512295742656776
Rethinking AI TCO: Why Cost per Token Is the Only Metric That Matters
https://blogs.nvidia.com/blog/lowest-token-cost-ai-factories/
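The cost-per-token framing in the NVIDIA piece reduces to a simple amortization: divide the all-in hourly cost of a serving deployment by its token throughput. An illustrative sketch (the dollar and throughput figures are made up, not NVIDIA's numbers):

```python
def cost_per_million_tokens(hourly_cost_usd, tokens_per_second):
    """Illustrative TCO metric: amortized $ per 1M tokens served.

    hourly_cost_usd   -- all-in hourly cost (hardware depreciation,
                         power, cooling, networking) for the deployment
    tokens_per_second -- sustained generation throughput at utilization
    """
    tokens_per_hour = tokens_per_second * 3600
    return hourly_cost_usd / tokens_per_hour * 1_000_000

# Hypothetical example: a $90/hour rack sustaining 50,000 tokens/sec.
print(f"${cost_per_million_tokens(90.0, 50_000):.3f} per 1M tokens")
```

The metric makes the trade-offs in the headline concrete: a faster but pricier accelerator wins on TCO only if its throughput gain outpaces its cost premium.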
Figure and Hark just took an entire data center of NVIDIA B200s – every rack in the building. Figure will be using these to predict physics and Hark will train next-generation multi-modal models
https://x.com/adcock_brett/status/2042675641037000868
What are world models actually? @FidlerSanja, VP of AI Research at NVIDIA, leading the company’s Spatial Intelligence Lab, explains in our interview. If you want to learn about the major next frontier in AI, watch the full conversation:
https://x.com/TheTuringPost/status/2043962055531868554
Elon just announced the tape-out of the in-house-designed AI5 chip, a huge step forward for the Optimus brain. It’s going to be “magnitudes better” than the current generation. “Tape-out” is a key milestone in semiconductor chip design. It marks the end of the design phase, when
https://x.com/TheHumanoidHub/status/2044531761510814204
Grading and foundation support work is progressing rapidly for the ’10 million/year’ Optimus factory in Austin.
https://x.com/TheHumanoidHub/status/2043058797166637067