Image created with gemini-3.1-flash-image-preview and claude-sonnet-4-5. Image prompt: Using the provided reference image, preserve the deep midnight navy car hood, shallow depth-of-field windshield and sky background, chrome pedestal base, dramatic upward camera angle, and automotive advertisement lighting exactly as shown. Replace only the Mercedes star with a single photorealistic chrome robotic hand (articulated fingers with visible joints, thumb and forefinger nearly touching in precise gesture) mounted on the same pedestal at realistic hood ornament scale, rendered in polished mirror-finish metal matching the original star’s material quality. Add bold white sans-serif display text reading ROBOTS across the upper portion of the image.

🙌 Andrej Karpathy’s lab has received the first DGX Station GB300 — a Dell Pro Max with GB300. 💚 We can’t wait to see what you’ll create @karpathy! 🔗 https://t.co/8ct5QZ3frS @DellTech
https://x.com/NVIDIAAIDev/status/2034291235041554871

Jensen is cementing the idea that Nvidia-powered AI is now the backbone of every major industry. He said robotics alone will be a $50 trillion industry.
https://x.com/TheHumanoidHub/status/2033619022508659118

Jensen: “Nvidia is the first vertically integrated but horizontally open company.” This strategy positions Nvidia as the backbone of robotics without stifling innovation. Vertical integration ensures cutting-edge performance on each layer of the AI stack. Horizontal openness
https://x.com/TheHumanoidHub/status/2033622691408974133

The First Healthcare Robotics Dataset and Foundational Physical AI Models for Healthcare Robotics https://huggingface.co/blog/nvidia/physical-ai-for-healthcare-robotics

Announcing NVIDIA DLSS 5, an AI-powered breakthrough in visual fidelity for games, coming this fall. DLSS 5 infuses pixels with photorealistic lighting and materials, bridging the gap between rendering and reality. Learn More → https://x.com/NVIDIAGeForce/status/2033617732147810782

DLSS 5 is completely mind-blowing. The neural rendering model with photoreal lighting and materials is a generational step up in visual fidelity. Gaming with DLSS 5 feels like future tech, but it’s possible now. It is truly incredible. 🤯
https://x.com/GeForce_JacobF/status/2033615891045454112

DLSS 5 might be the moment where the anti-AI pendulum starts swinging back. Many in the 3D community who were against generative AI are now pushing back on the “everything is AI slop” crowd. The pendulum swung too far and they can feel it. Nice to see the rebalancing.
https://x.com/bilawalsidhu/status/2034281398052274666

Here’s everything we know about Nvidia’s “greatest leap in graphics since real-time ray tracing.” You can see Digital Foundry’s jaw drop in this reaction after they just saw DLSS 5.0: – Will ship in Fall of 2026! – Demo ran 4K on two 5090s but is already running on a single GPU in
https://x.com/Grummz/status/2033641075806769382

GR00T is moving away from VLM-based backbones in favor of integrated world models. Jensen Huang teased GR00T N2 during his keynote; NVIDIA’s next-gen foundation model built on DreamZero research. Utilizing a new world-action model architecture, it succeeds at novel tasks in
https://x.com/TheHumanoidHub/status/2034279221372321940

What if a robot could simulate the physical world from a single image? [📍Bookmark Paper & GitHub for later] PointWorld-1B from Stanford and NVIDIA is a large 3D world model that predicts how an entire scene will move, given RGB-D input and robot actions. The key idea is
https://x.com/IlirAliu_/status/2032895393407660380
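
The interface implied here (RGB-D and actions in, predicted 3D scene motion out) is easy to sketch. The snippet below is a hypothetical illustration of that idea only; none of the names, shapes, or the zero-motion stub come from the PointWorld-1B paper.

```python
# Illustrative sketch of a "predict scene motion from RGB-D + actions" loop.
# All names and shapes are assumptions, not the PointWorld-1B API.
import numpy as np

def backproject(depth: np.ndarray, intrinsics: np.ndarray) -> np.ndarray:
    """Lift an HxW depth map into an (H*W, 3) point cloud in the camera frame."""
    h, w = depth.shape
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    x = (u.reshape(-1) - cx) * z / fx
    y = (v.reshape(-1) - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

class WorldModelStub:
    """Stand-in for the learned model: points + action -> predicted point flow."""
    def predict_flow(self, points, rgb, action):
        return np.zeros_like(points)  # a real model would run a network here

model = WorldModelStub()
depth = np.ones((480, 640), dtype=np.float32)  # placeholder RGB-D frame
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float32)
points = backproject(depth, K)
for action in np.zeros((5, 7), dtype=np.float32):  # e.g. 7-DoF arm commands
    points = points + model.predict_flow(points, rgb, action)
```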

Breaking: $1 trillion revenue for NVIDIA in 2027 Jensen Huang: “One year after last GTC, right here where I stand… I see, going down so much, through 2027. At least… one trillion dollars, you know? Now, does it make any sense? I’m certain computer demand will be much
https://x.com/TheTuringPost/status/2033622628385362068

Jensen just said NVIDIA’s $1T projection for 2025-27 covers only Blackwell and Rubin to keep it consistent with the previous projection. He mentioned he could have included Groq in that number: “so if I would’ve included that, theoretically, not actually, but theoretically,
https://x.com/TheHumanoidHub/status/2033990614824665421

Nvidia targets data center revenue of $1+ trillion for 2025-2027. That’s already quite ridiculous, with the physical AI world only in its zeroth inning. $NVDA
https://x.com/TheHumanoidHub/status/2033627322331660784

A breakthrough in real-time video generation. As a research preview developed with @NVIDIA and shared at @NVIDIAGTC this week, we trained a new real-time video model running on Vera Rubin. HD videos generate instantly, with time-to-first-frame under 100ms. Unlocking an entirely
https://x.com/runwayml/status/2034284298769985914#m

NVIDIA GTC 2026 Keynote: Everything That Happened in 12 Minutes – YouTube https://www.youtube.com/watch?v=X2i_8O75_Os

DoorDash’s New Paid Tasks Turn Couriers Into AI and Robot Trainers – Bloomberg https://www.bloomberg.com/news/articles/2026-03-19/doordash-s-new-paid-tasks-turn-couriers-into-ai-and-robot-trainers

💚🤗💚 Jensen showing @huggingface during GTC keynote, where @NVIDIAAI dropped amazing new open models, datasets and blogs! Some of my favorites, links in comments: 🧠 Nemotron 3 Super 120A12B – Reasoning LLM 🏥 Open-H-Embodiment – Healthcare Robotics Dataset 🩻
https://x.com/jeffboudier/status/2033959279510884631

Jensen Huang: “It is now one of the recruiting tools in Silicon Valley. How many tokens comes along with my job?” @NVIDIAGTC
https://x.com/TheTuringPost/status/2033639746128515518

NVIDIA’s strategy in one picture @NVIDIAGTC
https://x.com/TheTuringPost/status/2033620574694752678

Robotics research is accelerating fast, especially around simulation. Factory deployment still isn’t. The gap between simulation and real production lines remains one of the biggest bottlenecks in manufacturing automation. That’s why @ABBRobotics’s partnership with @NVIDIA
https://x.com/IlirAliu_/status/2033381389232689529

Second day! “Technology Behind Robotic Characters”, session at @nvidia GTC. Moritz Baecher on how @Disney Imagineering builds believable physical AI: Many robotics teams struggle to move from digital animation to stable physical movement. Their approach bridges that gap. The
https://x.com/IlirAliu_/status/2033980181413827053

With legendary @Scobleizer and @wschenk #nvidiagtc @NVIDIAGTC
https://x.com/TheTuringPost/status/2033574233360699881

And 2.3 years later we have DLSS on steroids
https://x.com/bilawalsidhu/status/2033752195095535801

DLSS 5 casually solved the fancy coat of paint part of this vision
https://x.com/bilawalsidhu/status/2034131183353643289

DLSS 6 mode on: about to take greyboxed 3D assets to final render. AI video-to-video foreshadowed this; many said it could never happen in real time. Yet here we are.
https://x.com/bilawalsidhu/status/2033898489952841763

So proud of DLSS 5: Fully generative neural rendering, in real time, in real games. Mind-blowing realism. A whole new generation of real-time graphics. A decade of continuous research and development. Coming soon to PCs everywhere. 💚
https://x.com/ctnzr/status/2033613807105544666

Jensen Huang’s view on autonomous vehicles is pretty straightforward: the “automotive is less than 1% of your business” number misses what is actually happening. NVIDIA is selling three computers: – training systems – simulation and synthetic data – the AV system in the car
https://x.com/TheTuringPost/status/2033992848203514225

Been so much fun cooking OpenShell and NemoClaw with the @NVIDIAAI folks! 🙏🦞 Huge step towards secure agents you can trust. What’s your OpenClaw strategy?
https://x.com/steipete/status/2033641463104323868

GTC 2026 News | NVIDIA Newsroom https://nvidianews.nvidia.com/online-press-kit/gtc-2026-news

Jensen says he can’t think of a company building robots that isn’t working with Nvidia.
https://x.com/TheHumanoidHub/status/2033642974492659894

NVIDIA GTC 2026: Live Updates on What’s Next in AI | NVIDIA Blog https://blogs.nvidia.com/blog/gtc-2026-news/

Developers used to argue about programming languages; now they argue about harnesses. NemoClaw is NVIDIA’s answer to your OpenClaw safety woes — zero permissions by default, sandboxed subagents, private inference enforced at the infra layer. Here’s a guide on how to start:
https://x.com/baseten/status/2034649896523874356
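
The “zero permissions by default” claim is the easiest part to picture: a deny-by-default tool gate in front of the agent. The Python sketch below illustrates only that pattern; it is not NemoClaw’s or OpenClaw’s actual API.

```python
# Deny-by-default tool gate, sketched in Python.
# Illustrative only; not NemoClaw's or OpenClaw's real interface.
from typing import Callable, Dict, Set

class ToolGate:
    def __init__(self) -> None:
        self._tools: Dict[str, Callable] = {}
        self._allowed: Set[str] = set()  # empty set: zero permissions by default

    def register(self, name: str, fn: Callable) -> None:
        self._tools[name] = fn

    def grant(self, name: str) -> None:
        self._allowed.add(name)  # explicit, per-tool opt-in

    def call(self, name: str, *args, **kwargs):
        if name not in self._allowed:
            raise PermissionError(f"tool '{name}' has not been granted")
        return self._tools[name](*args, **kwargs)

gate = ToolGate()
gate.register("read_file", lambda path: open(path).read())
# gate.call("read_file", "notes.txt")  # raises PermissionError before a grant
gate.grant("read_file")                # runs only after an explicit grant
```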

Go from “hello world” to “hello claw!” 🦞 We’re hosting a Build-A-Claw extravaganza in the #NVIDIAGTC Park Mon-Thur where you can BYOD or buy a DGX Spark on-site and our NVIDIA experts will help you install @OpenClaw. See you there! 🙌 Full details 👉 https://x.com/NVIDIAAIDev/status/2032847578404888907

We’re going live at #NVIDIAGTC in 30 minutes. ⏱️ Join us for GTC Live at 8 a.m. PT as we get ready for Jensen Huang’s keynote at 11 a.m. Featuring industry leaders from: @bfl_ml, @Cadence, @CaterpillarInc, @cohere, @CoreWeave, @DellTech, @EdisonSci, @FireworksAI_HQ, @IBM,
https://x.com/nvidia/status/2033551362210865371

🚀 Live from @NVIDIAGTC, we’re releasing Holotron-12B! Developed with @nvidia, it’s a high-throughput, open-source, multimodal model engineered specifically for the age of computer-use agents. Get started today! 🤗Hugging Face: https://t.co/SyAuqLIacS 📖Technical Deep Dive:
https://x.com/hcompany_ai/status/2033851052714320083

AI is already redesigning chip design itself! And the biggest bottleneck left is validation. Here is Bill Dally describing to @JeffDean how @nvidia uses AI to design chips: “We’re already using AI across multiple parts of the chip design process, and it’s delivering real
https://x.com/TheTuringPost/status/2034413469542588613

How NVIDIA Dynamo 1.0 Powers Multi-Node Inference at Production Scale | NVIDIA Technical Blog https://developer.nvidia.com/blog/nvidia-dynamo-1-production-ready/

With Nemotron 3 Nano 4B in the NVIDIA Nemotron 3 family, llama.cpp users get a compact model for action-taking conversational personas, available across NVIDIA GPU-enabled systems and @NVIDIA_AI_PC
https://x.com/ggerganov/status/2033947673825337477
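
One plausible way to try a compact GGUF build locally is through the llama-cpp-python bindings over llama.cpp. The model filename below is an assumption for illustration; substitute whichever GGUF conversion is actually published.

```python
# Hedged sketch: running a compact GGUF model via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="nemotron-3-nano-4b.Q4_K_M.gguf",  # hypothetical filename
    n_gpu_layers=-1,  # offload every layer to the GPU when one is available
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize today's GTC keynote."}]
)
print(out["choices"][0]["message"]["content"])
```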

The frontier has increasingly shifted to hybrid models – from Qwen to Kimi-Linear and now with NVIDIA’s Nemotron-3 Super – that rely on a strong linear sequence model. Today we release Mamba-3, the most powerful linear model to date.
https://x.com/tri_dao/status/2033948569502413245
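
The appeal of a linear sequence model is that per-token state stays fixed-size, unlike attention’s KV cache, which grows with context length. A toy, data-gated linear recurrence (not Mamba-3’s actual parameterization) shows the shape of the computation:

```python
# Toy gated linear recurrence: O(1) state per token instead of a growing
# KV cache. Illustrative of the model family, not Mamba-3 itself.
import numpy as np

def linear_scan(x: np.ndarray, a: np.ndarray, b: np.ndarray, c: np.ndarray) -> np.ndarray:
    """x: (T, d) inputs; a, b: (T, d) per-step gates; c: (d,) readout."""
    T, d = x.shape
    h = np.zeros(d)
    y = np.empty(T)
    for t in range(T):
        h = a[t] * h + b[t] * x[t]  # decay the old state, inject the new input
        y[t] = c @ h                # readout stays a fixed-size dot product
    return y

T, d = 16, 8
rng = np.random.default_rng(0)
y = linear_scan(rng.normal(size=(T, d)),
                np.full((T, d), 0.9),  # constant decay gate for the toy
                np.ones((T, d)),
                rng.normal(size=d))
```

Real implementations parallelize this loop with an associative scan across the sequence; the toy keeps the sequential form because it makes the fixed-size state obvious.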

NVIDIA thanks all its partners: the message? There is no way around NVIDIA. NVIDIA is the center of the revolution.
https://x.com/kimmonismus/status/2033615181415387610

Straight from NVIDIA GTC: Jensen Huang just unveiled a new vision for AI infrastructure For the first time, Rubin GPUs+Groq LPUs are paired: > 35× higher inference throughput > 10× more revenue from trillion-parameter models Architecture & why it’s needed
https://x.com/TheTuringPost/status/2033700480975520097

Thank you Jensen and NVIDIA! She’s a real beauty! I was told I’d be getting a secret gift, with a hint that it requires 20 amps. (So I knew it had to be good). She’ll make for a beautiful, spacious home for my Dobby the House Elf claw, among lots of other tinkering, thank you!!
https://x.com/karpathy/status/2034321875506196585

A preview of the animatronic Olaf coming to Disneyland Paris.
https://x.com/TheHumanoidHub/status/2033077902930219388

BREAKING: @sundayrobotics has just raised a MASSIVE $165M Series B at a $1.15 billion valuation! It has happened! @Coatue led the round, with @BainCapVC, TigerGlobal, @benchmark, and FidelityInv also participating. Thomas Laffont joins the board! This has been a long time
https://x.com/IlirAliu_/status/2032150560850206743

How Disney Research brought the animated character Olaf to life, achieving an accurate, stylized gait alongside robust balance, low noise, and thermal safety. Sets new standard for animated-to-physical robotic characters.
https://x.com/TheHumanoidHub/status/2033085648794755195

I’ve been walking this floor for two days. Let me show you what I saw. @ABBRobotics plugging Omniverse into RobotStudio: 99% sim-to-real accuracy. @LightwheelAI building the simulation infrastructure layer underneath it all: SimReady assets, synthetic data pipelines, and
https://x.com/IlirAliu_/status/2034344312927117438

Latent Encoder-Decoder codebase. Fully open-sourced! You can train and visualize the latent space. [📍 Save it to find it later when you need it] Thanks for sharing, Xueyan Zou (@xyz2maureen). Code: https://t.co/Orsx4fPDGb Paper: https://t.co/frNUfxbzHd —– Weekly robotics
https://x.com/IlirAliu_/status/2032017770972389572
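
For the flavor of the workflow (train an encoder-decoder, then plot the latent space), a minimal PyTorch sketch follows. It illustrates the idea only and is not the linked codebase:

```python
# Minimal encoder-decoder with a 2-D latent you can scatter-plot directly.
# Illustrative only; not the open-sourced repo linked above.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 2))
dec = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 784))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

x = torch.rand(256, 784)  # placeholder batch; use real images in practice
for _ in range(100):
    loss = nn.functional.mse_loss(dec(enc(x)), x)  # reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()

# A 2-D latent needs no dimensionality reduction to visualize:
# import matplotlib.pyplot as plt
# z = enc(x).detach()
# plt.scatter(z[:, 0], z[:, 1]); plt.show()
```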

LATENT learns tennis skills for humanoid robots from human motion data. The robot can sustain multi-shot rallies, handle ball speeds of 15+ m/s, and showed a 90.9% success rate for the forehand. No onboard cameras or vision models; it relies on external MoCap for
https://x.com/TheHumanoidHub/status/2033074800999150065

Mood
https://x.com/IlirAliu_/status/2033937267648340434

Over the past decade humanoid robots have improved dramatically. Reinforcement learning, better electric actuators, and vision-language-action models now allow robots like Atlas or Digit to walk, plan tasks, and manipulate objects. Yet surprisingly simple things like stairs,
https://x.com/IlirAliu_/status/2033103386887835652

Renault Group has deployed the Calvin-40, a humanoid robot developed by a French startup, Wandercraft, at its Douai factory to haul car tires. Renault has taken a stake in Wandercraft and plans to deploy 350 more Calvin robots over 18 months.
https://x.com/TheHumanoidHub/status/2032150269199597985

RL-trained in-hand skills, such as grasp stability and object rotation, are used to assist teleoperation. Teleoperation data is then used to train a VLA that fuses force and tactile feedback to generate actions. Humanoid hardware: Sharpa Paper: https://x.com/TheHumanoidHub/status/2033223493517754828
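
The fusion step described here (a policy conditioning on vision plus force and tactile signals) can be sketched as projection and concatenation ahead of an action head. Every shape and name below is an assumption for illustration, not the paper’s model:

```python
# Illustrative multimodal fusion head: vision + force + tactile -> action.
# Shapes and names are assumptions, not the cited paper's architecture.
import torch
import torch.nn as nn

class ForceTactileVLA(nn.Module):
    def __init__(self, d_vis=512, d_force=6, d_tactile=64, d_act=24):
        super().__init__()
        self.force_proj = nn.Linear(d_force, 128)      # e.g. 6-axis wrist F/T
        self.tactile_proj = nn.Linear(d_tactile, 128)  # fingertip taxel array
        self.head = nn.Sequential(
            nn.Linear(d_vis + 128 + 128, 256), nn.ReLU(),
            nn.Linear(256, d_act),                     # hand + arm action
        )

    def forward(self, vis_emb, force, tactile):
        fused = torch.cat(
            [vis_emb, self.force_proj(force), self.tactile_proj(tactile)], dim=-1)
        return self.head(fused)

policy = ForceTactileVLA()
act = policy(torch.zeros(1, 512), torch.zeros(1, 6), torch.zeros(1, 64))
```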

South Korean researchers built a paper-thin robot that squeezes through 3mm gaps and lifts 70x (!) its weight. The flexible robotic “sheet” mimics myosin, the protein that powers muscle contractions in your body. Inside the sheet are dozens of microscopic air chambers stacked
https://x.com/rowancheung/status/2032127322426654965

This robotic hand can weave itself together in minutes. Allonic developed a process that “braids” robot bodies around a 3D-printed skeleton in a single automated step. The tech draws from the textile industry, using braided fibers instead of traditional mechanical joints and
https://x.com/rowancheung/status/2033939752362381818

A surgeon just removed a man’s prostate …and he was 1,500 miles away. From an office in London, Dr. Prokar Dasgupta controlled a 4-armed robot at a hospital in Spain. The robot was fitted with a 3D camera and connected to London via fiber optic cable with a backup 5G
https://x.com/rowancheung/status/2033578058586968362
