Image created with gemini-3.1-flash-image-preview and claude-opus-4.7. Image prompt: Using the provided reference image, preserve every element exactly — the marigold-orange backdrop, the seated woman with closed eyes and faint smile in her purple-and-white windbreaker, the tattooed singer in the red beanie and layered red vest, the lighting, framing, and depth of field — but replace only the black handheld microphone with a sleek green-and-black NVIDIA GPU graphics card held lengthwise to his mouth, its dual cooling fans and gold PCIe connector pins catching the warm key light, gripped at the same angle and scale as the original mic with seamless photographic realism. After generating the image, overlay the text “NVIDIA” in the upper-left corner of the frame in large, bold, all-caps ITC Avant Garde Gothic Pro Medium (or a near-identical geometric sans-serif if unavailable), pure white (#FFFFFF), with no date, subtitle, drop shadow, or outline. The text should be substantial in scale — taking up a meaningful portion of the upper-left area — with comfortable margin from the top and left edges, set against the negative space of the orange backdrop so it does not overlap or obscure the singer, the seated woman, or the replaced object.
I asked Jensen: “2 out of the top 3 models in the world, Claude and Gemini, were trained on TPU. What does that mean for Nvidia going forward?” After a long technical back and forth about what the right accelerator for AI looks like (see full episode), Jensen lays down the
https://x.com/dwarkesh_sp/status/2044468295957635392
Banger paper from NVIDIA. Agentic reasoning needs models that are not just capable, but efficient at long-context inference. The agent model layer is moving toward open, long-context, high-throughput architectures. This paper introduces Nemotron 3 Super, an open 120B parameter
https://x.com/dair_ai/status/2044452957023047943
NVIDIA Launches Ising, the World’s First Open AI Models to Accelerate the Path to Useful Quantum Computers | NVIDIA Newsroom
https://nvidianews.nvidia.com/news/nvidia-launches-ising-the-worlds-first-open-ai-models-to-accelerate-the-path-to-useful-quantum-computers
We’ve been developing a multi-agent system that builds and maintains complex software autonomously. Recently, we partnered with NVIDIA to apply it to optimizing CUDA kernels. In 3 weeks, it delivered a 38% geomean speedup across 235 problems.
https://x.com/cursor_ai/status/2044136953239740909
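For context on the metric quoted above: a geometric-mean (“geomean”) speedup aggregates per-problem speedups multiplicatively rather than arithmetically, so a handful of large wins can't dominate the average. A minimal sketch of the calculation, using made-up per-problem numbers (this is purely illustrative, not Cursor's or NVIDIA's code):

```python
import math

# Hypothetical per-problem speedups: speedup_i = baseline_runtime_i / optimized_runtime_i
speedups = [1.05, 1.62, 1.10, 2.40, 0.97]  # placeholder values for illustration

# Geometric mean: nth root of the product, computed in log space for numerical stability.
geomean = math.exp(sum(math.log(s) for s in speedups) / len(speedups))

print(f"geomean speedup: {geomean:.2f}x ({(geomean - 1) * 100:.0f}% faster)")
```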
Today, we released Lyra 2.0, a framework for generating persistent, explorable 3D worlds at scale, from NVIDIA Research. Generating large-scale, complex environments is difficult for AI models. Current models often “forget” what spaces look like and lose track of movement over
https://x.com/NVIDIAAIDev/status/2044445645109436672
Distilled recap of the back-and-forth with Jensen on export controls: Dwarkesh: Wouldn’t selling Nvidia chips to China enable them to train models like Claude Mythos with cyber offensive capabilities that would be threats to American companies and national security? Jensen:
https://x.com/dwarkesh_sp/status/2044483393941848131
Jensen regrets that when Anthropic and OpenAI first needed billions to scale, Nvidia wasn’t in a position to invest. So these labs went to hyperscalers like Microsoft, Google, and Amazon instead, and in return committed to using their compute. “I’m not going to make that same
https://x.com/dwarkesh_sp/status/2044498492450869624
Jensen Huang – TPU competition, why we should sell chips to China, & Nvidia’s supply chain moat
https://www.dwarkesh.com/p/jensen-huang
The Jensen Huang episode. 0:00:00 – Is Nvidia’s biggest moat its grip on scarce supply chains? 0:16:25 – Will TPUs break Nvidia’s hold on AI compute? 0:41:06 – Why doesn’t Nvidia become a hyperscaler? 0:57:36 – Should we be selling AI chips to China? 1:35:06 – Why doesn’t Nvidia
https://x.com/dwarkesh_sp/status/2044456498441708013
It was a pleasure to sit down with @FidlerSanja, VP of AI Research at NVIDIA, leading the company’s Spatial Intelligence Lab, who is actively building the next major frontier of AI – physical AI. During GTC, where her lab introduced AlpaDream, we discussed: • If Transformers are
https://x.com/TheTuringPost/status/2042512295742656776
Rethinking AI TCO: Why Cost per Token Is the Only Metric That Matters
https://blogs.nvidia.com/blog/lowest-token-cost-ai-factories/
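As a rough illustration of the cost-per-token framing in that post: cost per token falls out of dividing the amortized hourly cost of an inference deployment by its sustained token throughput. The numbers below are placeholders, not NVIDIA's figures:

```python
# Hypothetical inputs -- swap in your own deployment's numbers.
gpu_hourly_cost_usd = 3.00   # amortized cost per GPU-hour (capex + power + hosting)
gpus_in_deployment = 8       # GPUs serving one model replica
tokens_per_second = 12_000   # sustained aggregate output throughput of the replica

hourly_cost = gpu_hourly_cost_usd * gpus_in_deployment
tokens_per_hour = tokens_per_second * 3600

cost_per_million_tokens = hourly_cost / tokens_per_hour * 1_000_000
print(f"${cost_per_million_tokens:.2f} per million output tokens")
```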
Figure and Hark just took an entire data center of NVIDIA B200s – every rack in the building. Figure will be using these to predict physics and Hark will train next-generation multi-modal models.
https://x.com/adcock_brett/status/2042675641037000868
What are world models actually? @FidlerSanja, VP of AI Research at NVIDIA, leading the company’s Spatial Intelligence Lab, explains in our interview. If you want to learn about the major next frontier in AI, watch the full conversation:
https://x.com/TheTuringPost/status/2043962055531868554