Image created with gemini-3.1-flash-image-preview and claude-sonnet-4-5. Image prompt: Using the provided reference image, preserve the deep midnight navy car hood, shallow depth-of-field sky background, chrome pedestal base, dramatic upward camera angle, and automotive advertisement lighting exactly as shown. Replace only the Mercedes star with a single vertical circular silicon wafer (8-inch diameter) mounted on the same chrome pedestal, rendered in polished silicon with visible microscopic circuit etchings and subtle rainbow iridescence from oxide layers, photorealistic and proportional to a luxury hood ornament. Add bold white sans-serif text reading CHIPS across the upper portion of the image.

Karpathy’s Autoresearch is bottlenecked by a single GPU. We removed the bottleneck. We gave the agent access to our K8s cluster with H100s and H200s and let it provision its own GPUs. Over 8 hours: • ~910 experiments instead of ~96 sequentially • Discovered that scaling model
https://x.com/skypilot_org/status/2034681533051855173
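The throughput claim above is mostly parallelism arithmetic. Here's a minimal sketch of sequential vs. fanned-out experiment runs — `run_experiment` is a hypothetical stub standing in for a real GPU job submission, not the actual Autoresearch or SkyPilot API:

```python
import concurrent.futures
import time

def run_experiment(config):
    """Hypothetical stub for one training run; a real version would submit a GPU job."""
    time.sleep(0.01)  # stand-in for GPU work
    return {"config": config, "loss": 1.0 / (1 + config)}

configs = list(range(32))

# Sequential baseline: one experiment at a time, like a single GPU.
start = time.perf_counter()
seq = [run_experiment(c) for c in configs]
seq_time = time.perf_counter() - start

# Parallel: fan out across workers, like self-provisioned GPUs on a cluster.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
    par = list(pool.map(run_experiment, configs))
par_time = time.perf_counter() - start

print(f"sequential: {seq_time:.2f}s, parallel: {par_time:.2f}s")
```

With 16 workers and 32 uniform jobs, wall time drops roughly 16x — the same shape as ~910 experiments vs. ~96 in the tweet's 8-hour window.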

Exclusive | Jeff Bezos in Talks to Raise $100 Billion for AI Manufacturing Fund – WSJ https://www.wsj.com/tech/jeff-bezos-aims-to-raise-100-billion-to-buy-revamp-manufacturing-firms-with-ai-618a3cfe

What is a computer in the future? According to Jensen: “In the future, the computer is really a manufacturing system for tokens. And the number of computers in the world built for token manufacturing is still very small. It’s small because most of the systems we have shipped so
https://x.com/TheTuringPost/status/2033983885131059636

Announced in Jensen’s keynote today: LangChain frameworks have crossed 1B downloads. We’re excited to join the NVIDIA Nemotron Coalition to help shape the open models that power these agents. ➡️ Read the announcement: https://t.co/CWlbAzhlXy ➡️ Check out the docs:
https://x.com/LangChain/status/2033788913937195132

🙌 Andrej Karpathy’s lab has received the first DGX Station GB300 — a Dell Pro Max with GB300. 💚 We can’t wait to see what you’ll create @karpathy! 🔗 https://t.co/8ct5QZ3frS @DellTech
https://x.com/NVIDIAAIDev/status/2034291235041554871

Jensen is cementing the idea that Nvidia-powered AI is now the backbone of every major industry. He said robotics alone will be a $50 trillion industry.
https://x.com/TheHumanoidHub/status/2033619022508659118

Jensen: “Nvidia is the first vertically integrated but horizontally open company.” This strategy positions Nvidia as the backbone of robotics without stifling innovation. Vertical integration ensures cutting-edge performance on each layer of the AI stack. Horizontal openness
https://x.com/TheHumanoidHub/status/2033622691408974133

The First Healthcare Robotics Dataset and Foundational Physical AI Models for Healthcare Robotics https://huggingface.co/blog/nvidia/physical-ai-for-healthcare-robotics

Announcing NVIDIA DLSS 5, an AI-powered breakthrough in visual fidelity for games, coming this fall. DLSS 5 infuses pixels with photorealistic lighting and materials, bridging the gap between rendering and reality. Learn More → https://x.com/NVIDIAGeForce/status/2033617732147810782

DLSS 5 is completely mind-blowing. The neural rendering model with photoreal lighting and materials is a generational step up in visual fidelity. Gaming with DLSS 5 feels like future tech, but it’s possible now. It is truly incredible. 🤯
https://x.com/GeForce_JacobF/status/2033615891045454112

DLSS 5 might be the moment where the anti AI pendulum starts swinging back. Many in the 3D community who were against generative AI are now pushing back on the “everything is AI slop” crowd. The pendulum swung too far and they can feel it. Nice to see the rebalancing.
https://x.com/bilawalsidhu/status/2034281398052274666

Here’s everything we know about Nvidia’s “greatest leap in graphics since real-time ray tracing” You can see Digital Foundry’s jaw drop in this reaction after they just saw DLSS 5.0: – Will ship in Fall of 2026! – Demo ran 4k on 2 5090’s but is already running on single GPU in
https://x.com/Grummz/status/2033641075806769382

GR00T is moving away from VLM-based backbones in favor of integrated world models. Jensen Huang teased GR00T N2 during his keynote: NVIDIA’s next-gen foundation model built on DreamZero research. Utilizing a new world-action model architecture, it succeeds at novel tasks in
https://x.com/TheHumanoidHub/status/2034279221372321940

What if a robot could simulate the physical world from a single image? [📍Bookmark Paper & GitHub for later] PointWorld-1B from Stanford and NVIDIA is a large 3D world model that predicts how an entire scene will move, given RGB-D input and robot actions. The key idea is
https://x.com/IlirAliu_/status/2032895393407660380

Breaking: $1 trillion in revenue for NVIDIA in 2027 Jensen Huang: “One year after last GTC, right here where I stand… I see, going down so much, through 2027. At least… one trillion dollars, you know? Now, does it make any sense? I’m certain computer demand will be much
https://x.com/TheTuringPost/status/2033622628385362068

Jensen just said NVIDIA’s $1T projection for 2025-27 covers only Blackwell and Rubin to keep it consistent with the previous projection. He mentioned he could have included Groq in that number: “so if I would’ve included that, theoretically, not actually, but theoretically,
https://x.com/TheHumanoidHub/status/2033990614824665421

Nvidia targets data center revenue of $1+ trillion for 2025-2027. That’s already quite ridiculous, with the physical AI world only in its zeroth inning. $NVDA
https://x.com/TheHumanoidHub/status/2033627322331660784

A breakthrough in real-time video generation. As a research preview developed with @NVIDIA and shared at @NVIDIAGTC this week, we trained a new real-time video model running on Vera Rubin. HD videos generate instantly, with time-to-first-frame under 100ms. Unlocking an entirely
https://x.com/runwayml/status/2034284298769985914#m

NVIDIA GTC 2026 Keynote: Everything That Happened in 12 Minutes – YouTube https://www.youtube.com/watch?v=X2i_8O75_Os

Building the industrial scale compute infrastructure for AI is one of the most exciting challenges of our time – it’s about building a new economic foundation that empowers people to do more and helps businesses move faster. Am thrilled to be a part of this revolution, thank you
https://x.com/sk7037/status/2032122869338292469

The AI supply chain has the craziest value cascade of any industry in the world. thinks that over the next five years, the biggest bottleneck to deploying AI will be EUV machines. ASML sells EUV machines for $300-400 million. You need about three and a half machines, so $1.2
https://x.com/dwarkesh_sp/status/2032528369028370806
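The truncated dollar figure above pencils out from the quoted numbers. A quick check, using the midpoint of the quoted $300-400M price range:

```python
# Midpoint EUV machine price and machine count from the quoted thread.
price_per_machine = 350e6   # midpoint of $300-400M
machines = 3.5

capex = machines * price_per_machine
print(f"${capex / 1e9:.1f}B per fab-worth of EUV capacity")  # ≈ $1.2B
```

That matches the "$1.2" the excerpt cuts off at, which is presumably $1.2 billion.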

The Need for an Independent AI Grid – AMP PBC https://amppublic.com/

When you run a @PyTorch model on a GPU, the actual work is executed through kernels. These are low-level, hardware-specific functions designed for GPUs (or other accelerators). If you profile a model, you’ll see a sequence of kernel launches. Between these launches, the GPU can
https://x.com/ariG23498/status/2034107361733054814#m
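The idle gaps between launches are why kernel fusion matters. A toy cost model makes the point — the overhead and compute numbers here are assumed for illustration, not measured PyTorch figures:

```python
def wall_time_us(n_kernels, launch_overhead_us, kernel_compute_us):
    """Toy model: each kernel launch pays a fixed CPU-side overhead before compute."""
    return n_kernels * (launch_overhead_us + kernel_compute_us)

# Assumed numbers: ~5us launch overhead, 100 tiny elementwise kernels
# of ~2us compute each, vs. one fused kernel doing the same 200us of compute.
unfused = wall_time_us(n_kernels=100, launch_overhead_us=5, kernel_compute_us=2)
fused = wall_time_us(n_kernels=1, launch_overhead_us=5, kernel_compute_us=200)

print(unfused, fused)  # 700 vs 205: launch overhead dominates the unfused case
```

When kernels are small, the fixed per-launch cost swamps the compute — which is what a profiler trace of many back-to-back tiny launches shows.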

Assuming Jensen is building on the $500B over roughly five quarters, then the $1T by end of 2027 spans roughly nine quarters.
https://x.com/BenBajarin/status/2033623321540235661?s=20
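Back-of-envelope on those run rates — this is my reading of the tweet's assumptions, not official guidance:

```python
# Assumed framing: $500B booked over ~5 quarters, $1T projected over ~9 quarters.
prior_total, prior_quarters = 500e9, 5
projected_total, projected_quarters = 1e12, 9

prior_rate = prior_total / prior_quarters              # $100B/quarter
projected_rate = projected_total / projected_quarters  # ~$111B/quarter

print(f"${prior_rate / 1e9:.0f}B/q -> ${projected_rate / 1e9:.0f}B/q")
```

If both figures hold, the implied quarterly run rate rises only modestly, which is what makes the $1T framing a continuation rather than a step change.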

caption this
https://x.com/swyx/status/2033666752836759687?s=20

Column: Jensen Huang doesn’t need a new chip. He needs a new moat. https://www.cnbc.com/2026/03/19/column-jensen-huang-doesnt-need-a-new-chip-he-needs-a-new-moat.html

I’ll be at GTC this week, here with my friend Devang (@TheHumanoidHub), looking forward to seeing great sessions on robotics. Waiting for Jensen right now. One session I’m especially curious about today: ‘From Concept to Production: Humanoid Robotics at Scale’.
https://x.com/IlirAliu_/status/2033590948312322225

Jensen Huang just said something kind of wild: That much-cited $ 1 trillion AI infrastructure opportunity? It only covers Blackwell + Rubin through 2027. That’s not the whole stack. Not racks. Not storage. Not networking. Not the rest of the system. Just that slice alone.
https://x.com/TheTuringPost/status/2033981870141231215

Live from Jensen’s keynote remarks at GTC: “The inflection point of inference has arrived. AI now has to think. In order to think, it has to inference. AI now has to do. In order to do, it has to inference. AI has to read. In order to do so, it has to inference. It has to
https://x.com/baseten/status/2033622003018830198

You know Jensen is a tech rockstar when 20,000 people fill up an NHL arena to watch him.
https://x.com/TheHumanoidHub/status/2033601338970673422

P-EAGLE from @AmazonScience and @NVIDIAAIDev removes the sequential bottleneck in speculative decoding — all K draft tokens generated in a single forward pass. 📈 Up to 1.69x speedup over vanilla EAGLE-3 on NVIDIA B200, with 5-25% gains sustained at high concurrency (c=64). How
https://x.com/vllm_project/status/2033634407634927624
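For context on what P-EAGLE speeds up: here's a toy greedy sketch of the draft-then-verify loop that vanilla speculative decoding uses. Note the hedge — P-EAGLE's actual contribution is producing all K draft tokens in one forward pass, which this sequential toy deliberately does not model; it only shows the accept/reject mechanics.

```python
def greedy_speculative_step(target_next, draft_next, prefix, k):
    """One toy speculative-decoding step (greedy, deterministic).

    target_next / draft_next: callables mapping a token sequence to the next
    token under each model. The draft proposes k tokens; the target checks
    them and keeps the longest agreeing prefix, then appends its own next
    token, so every step yields at least one token.
    """
    # Draft proposes k tokens sequentially (the cheap model).
    proposal, seq = [], list(prefix)
    for _ in range(k):
        t = draft_next(seq)
        proposal.append(t)
        seq.append(t)

    # Target verifies the proposals: accept while they match its own choices.
    accepted, seq = [], list(prefix)
    for t in proposal:
        if target_next(seq) == t:
            accepted.append(t)
            seq.append(t)
        else:
            break
    # Target always contributes one token past the accepted prefix.
    accepted.append(target_next(seq))
    return prefix + accepted

# Toy models over integer tokens: the draft agrees with the target
# except at every 4th position, where it guesses wrong.
target = lambda seq: (len(seq) * 7) % 10
draft = lambda seq: target(seq) if len(seq) % 4 != 3 else -1

out = greedy_speculative_step(target, draft, prefix=[0], k=4)
print(out)  # [0, 7, 4, 1] -- three tokens emitted in one target "pass"
```

Each step emits multiple tokens per target-model invocation; the speedup numbers in the tweet come from making the drafting side of this loop parallel as well.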

Finishing a video episode of Attention Span about super interesting announcement from #NVIDIAGTC
https://x.com/TheTuringPost/status/2033568823396430101

New Scaling Law? What “Agentic Scaling” Is – Inside NVIDIA’s Biggest Idea at GTC 2026
https://x.com/TheTuringPost/status/2033689291419734102

NVIDIA’s Nemotron 3 is an architectural response to two pressures: – Long-context cost as agentic interactions scale – Repeated reasoning cost from invoking full models for small subtasks Nemotron 3 proposes several design decisions to solve this: ▪️ Hybrid architecture:
https://x.com/TheTuringPost/status/2034668980892479993
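The long-context pressure is the familiar quadratic-vs-linear gap. An illustrative FLOP comparison — toy formulas and constants, not Nemotron's actual compute accounting:

```python
def attention_flops(seq_len, d):
    """Toy: full self-attention scales quadratically in sequence length."""
    return 2 * seq_len * seq_len * d

def linear_flops(seq_len, d):
    """Toy: a linear sequence model (SSM-style) scales linearly in length."""
    return 2 * seq_len * d * d

d = 128  # assumed model width, for illustration only
for n in (1_000, 100_000):
    ratio = attention_flops(n, d) / linear_flops(n, d)
    print(f"seq_len={n}: attention/linear FLOP ratio = {ratio}")
```

The ratio grows as `seq_len / d`, so the longer agentic interactions run, the more a hybrid stack that shifts work onto linear layers pays off.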

NemoClaw – NVIDIA’s contribution to the emerging OpenClaw ecosystem and one of the biggest announcements at NVIDIA GTC It’s a framework for long-running autonomous agents. ▪️ The idea: Install OpenClaw together with Nemotron models and OpenShell (NVIDIA’s new security runtime)
https://x.com/TheTuringPost/status/2034389444875428043

💚🤗💚 Jensen showing @huggingface during GTC keynote, where @NVIDIAAI dropped amazing new open models, datasets and blogs! Some of my favorites, links in comments: 🧠 Nemotron 3 Super 120A12B – Reasoning LLM 🏥 Open-H-Embodiment – Healthcare Robotics Dataset 🩻
https://x.com/jeffboudier/status/2033959279510884631

Jensen Huang: “It is now one of the recruiting tools in Silicon Valley. How many tokens comes along with my job?” @NVIDIAGTC
https://x.com/TheTuringPost/status/2033639746128515518

NVIDIA’s strategy in one picture @NVIDIAGTC
https://x.com/TheTuringPost/status/2033620574694752678

Robotics research is accelerating fast, especially around simulation. Factory deployment still isn’t. The gap between simulation and real production lines remains one of the biggest bottlenecks in manufacturing automation. That’s why @ABBRobotics’s partnership with @NVIDIA
https://x.com/IlirAliu_/status/2033381389232689529

Second day! “Technology Behind Robotic Characters”, session at @nvidia GTC. Moritz Baecher on how @Disney Imagineering builds believable physical AI: Many robotics teams struggle to move from digital animation to stable physical movement. Their approach bridges that gap. The
https://x.com/IlirAliu_/status/2033980181413827053

With legendary @Scobleizer and @wschenk #nvidiagtc @NVIDIAGTC
https://x.com/TheTuringPost/status/2033574233360699881

And 2.3 years later we have DLSS on steroids
https://x.com/bilawalsidhu/status/2033752195095535801

DLSS 5 casually solved the fancy coat of paint part of this vision
https://x.com/bilawalsidhu/status/2034131183353643289

DLSS 6 mode on about to take greyboxed 3D assets to final render. AI video-to-video foreshadowed this; many said it could never happen in real time. Yet here we are.
https://x.com/bilawalsidhu/status/2033898489952841763

So proud of DLSS5: Fully generative neural rendering, in real-time, in real games. Mind-blowing realism. A whole new generation of real-time graphics. A decade of continuous research and development. Coming soon to PCs everywhere. 💚
https://x.com/ctnzr/status/2033613807105544666

Jensen Huang’s view on autonomous vehicles is pretty straightforward: the “automotive is less than 1% of your business” number misses what is actually happening. NVIDIA is selling three computers: – training systems – simulation and synthetic data – the AV system in the car
https://x.com/TheTuringPost/status/2033992848203514225

Been so much fun cooking OpenShell and NemoClaw with the @NVIDIAAI folks! 🙏🦞 Huge step towards secure agents you can trust. What’s your OpenClaw strategy?
https://x.com/steipete/status/2033641463104323868

GTC 2026 News | NVIDIA Newsroom https://nvidianews.nvidia.com/online-press-kit/gtc-2026-news

Jensen says he can’t think of a company building robots that isn’t working with Nvidia.
https://x.com/TheHumanoidHub/status/2033642974492659894

NVIDIA GTC 2026: Live Updates on What’s Next in AI | NVIDIA Blog https://blogs.nvidia.com/blog/gtc-2026-news/

Developers used to argue about programming languages; now they argue about harnesses. NemoClaw is NVIDIA’s answer to your OpenClaw safety woes — zero permissions by default, sandboxed subagents, private inference enforced at the infra layer. Here’s a guide on how to start:
https://x.com/baseten/status/2034649896523874356

Go from “hello world” to “hello claw!” 🦞 We’re hosting a Build-A-Claw extravaganza in the #NVIDIAGTC Park Mon-Thur where you can BYOD or buy a DGX Spark on-site and our NVIDIA experts will help you install @OpenClaw. See you there! 🙌 Full details 👉 https://x.com/NVIDIAAIDev/status/2032847578404888907

We’re going live at #NVIDIAGTC in 30 minutes. ⏱️ Join us for GTC Live at 8 a.m. PT as we get ready for Jensen Huang’s keynote 11 a.m. Featuring industry leaders from: @bfl_ml, @Cadence, @CaterpillarInc, @cohere, @CoreWeave, @DellTech, @EdisonSci, @FireworksAI_HQ, @IBM,
https://x.com/nvidia/status/2033551362210865371

🚀 Live from @NVIDIAGTC, we’re releasing Holotron-12B! Developed with @nvidia, it’s a high-throughput, open-source, multimodal model engineered specifically for the age of computer-use agents. Get started today! 🤗Hugging Face: https://t.co/SyAuqLIacS 📖Technical Deep Dive:
https://x.com/hcompany_ai/status/2033851052714320083

AI is already redesigning chip design itself! And the biggest bottleneck left is validation. Here is Bill Dally describing to @JeffDean how @nvidia uses AI to design chips: “We’re already using AI across multiple parts of the chip design process, and it’s delivering real
https://x.com/TheTuringPost/status/2034413469542588613

How NVIDIA Dynamo 1.0 Powers Multi-Node Inference at Production Scale | NVIDIA Technical Blog https://developer.nvidia.com/blog/nvidia-dynamo-1-production-ready/

With Nemotron 3 Nano 4B in the NVIDIA Nemotron 3 family, llama.cpp users get a compact model for action-taking conversational personas, available across NVIDIA GPU-enabled systems and @NVIDIA_AI_PC
https://x.com/ggerganov/status/2033947673825337477

The frontier has increasingly shifted to hybrid models – from Qwen to Kimi-Linear and now with NVIDIA’s Nemotron-3 Super – that rely on a strong linear sequence model. Today we release Mamba-3, the most powerful linear model to date.
https://x.com/tri_dao/status/2033948569502413245
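A linear sequence model in its most stripped-down form is just a recurrence over a fixed-size state. This scalar toy is illustrative only — it is not Mamba-3's actual state-space update — but it shows the property hybrid stacks lean on: constant cost per token regardless of context length.

```python
def linear_scan(a, b, xs):
    """Minimal linear sequence model: h_t = a*h_{t-1} + b*x_t (scalar toy).

    Unlike attention, the state h is a fixed-size summary of the whole
    history, so each step costs O(1) no matter how long the context is.
    """
    h, ys = 0.0, []
    for x in xs:
        h = a * h + b * x
        ys.append(h)
    return ys

# An impulse input decays geometrically through the state: the model's
# "memory" of a token fades at rate a.
ys = linear_scan(a=0.5, b=1.0, xs=[1, 0, 0, 0])
print(ys)  # [1.0, 0.5, 0.25, 0.125]
```

The research question for models like the ones named above is how expressive that state update can be made while keeping the linear-cost scan.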

NVIDIA thanks all its partners: the message? There is no way around NVIDIA. NVIDIA is the center of the revolution.
https://x.com/kimmonismus/status/2033615181415387610

Straight from NVIDIA GTC: Jensen Huang just unveiled a new vision for AI infrastructure For the first time, Rubin GPUs+Groq LPUs are paired: > 35× higher inference throughput > 10× more revenue from trillion-parameter models Architecture & why it’s needed
https://x.com/TheTuringPost/status/2033700480975520097

Thank you Jensen and NVIDIA! She’s a real beauty! I was told I’d be getting a secret gift, with a hint that it requires 20 amps. (So I knew it had to be good). She’ll make for a beautiful, spacious home for my Dobby the House Elf claw, among lots of other tinkering, thank you!!
https://x.com/karpathy/status/2034321875506196585

Greetings, Earthlings: Philip Johnston of Starcloud on Data Centers in Space | Sequoia Capital https://sequoiacap.com/podcast/greetings-earthlings-philip-johnston-of-starcloud-on-data-centers-in-space/
