Image created with Ideogram 3.0. Image prompt: Lower-East-Side street-corner photograph reminiscent of a late-80s album cover: weathered red-brick tenement with exterior fire-escapes, canvas awning shading racks of vintage clothes; above the awning, a hand-painted board reads ‘Robots SPORTSWEAR’; a hanging blade sign in cursive script reads ‘Robots Boutique’; a retro tin toy robot waves cheerfully from the shop window; warm golden-hour light, subtle 35mm film grain, muted yet punchy color palette, gritty NYC vibe.
NVIDIA offers two blueprints for synthetic data generation: ⦿ Isaac GR00T-Mimic: Uses a physics engine to amplify human motion data in simulation. ⦿ GR00T-Dreams (announced yesterday): Fine-tunes a video generation AI model to create new motion videos from a single image. https://x.com/TheHumanoidHub/status/1924538121687073167
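The GR00T-Mimic idea above — amplifying a small set of human demonstrations in simulation — can be sketched in miniature. The function name, the noise scheme, and the parameters below are illustrative assumptions, not NVIDIA's implementation; a real pipeline would replay each variant in a physics engine and keep only the ones that still succeed.

```python
import numpy as np

def amplify_demo(demo, n_variants=100, noise_std=0.01, seed=0):
    """Generate trajectory variants from one human demonstration
    by perturbing its waypoints.

    demo: (T, D) array of end-effector waypoints.
    Returns: (n_variants, T, D) array of perturbed trajectories.
    """
    rng = np.random.default_rng(seed)
    T = demo.shape[0]
    # Smoothly-varying noise envelope: zero at the start and end so
    # the grasp and release poses stay pinned to the demonstration.
    envelope = np.sin(np.linspace(0, np.pi, T))[:, None]
    noise = rng.normal(0.0, noise_std, size=(n_variants, *demo.shape))
    return demo[None] + envelope[None] * noise

# One 50-step, 3-DOF demo amplified into 100 simulated variants.
demo = np.linspace([0, 0, 0], [0.3, 0.1, 0.2], 50)
variants = amplify_demo(demo)
print(variants.shape)  # (100, 50, 3)
```

The envelope is one simple choice for keeping task-critical endpoints fixed while diversifying the motion in between.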
Jensen just announced NVIDIA’s Isaac GR00T N1.5 and GR00T-Dreams blueprint at COMPUTEX 2025: ⦿ Isaac GR00T N1.5 is the first update to NVIDIA’s open, generalized, fully customizable foundation model for humanoid reasoning and skills. ⦿ “Human demonstrations aren’t scalable — https://x.com/TheHumanoidHub/status/1924332201862414495
JUST IN🚨: Nvidia open-sourced Physical AI reasoning models that understand physical common sense and generate appropriate embodied decisions 👀 https://x.com/reach_vb/status/1924525937443365193
NVIDIA released new vision reasoning model for robotics: Cosmos-Reason1-7B 🤖 > first reasoning model for robotics 😱 > based on Qwen 2.5-VL-7B, use with @huggingface transformers or vLLM 🤗 > comes with SFT & alignment dataset and a new benchmark 👏 https://x.com/mervenoyann/status/1924817927561183498
New video of fully autonomous Optimus. Performing many new tasks – instructed via natural language. All the tasks are done by a single neural net – learned directly from human videos. “This breakthrough allows us to learn new tasks much faster.” https://x.com/TheHumanoidHub/status/1925052725714419889
This will likely produce a business ~100x bigger than Apple’s market cap (or more). What’s clear now is we’re in the right decade for humanoid robotics – this will feel like the future jumped ahead by 50 years. https://x.com/adcock_brett/status/1923406193743081596
NVIDIA has published a paper on DREAMGEN – a powerful 4-step pipeline for generating synthetic data for humanoids that enables task and environment generalization. – Step 1: Fine-tune a video generation model using a small number of human teleoperation videos – Step 2: Prompt https://x.com/TheHumanoidHub/status/1925255036965408887
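From the public description of DREAMGEN, the pipeline's control flow can be stubbed out as below. The function names and signatures are illustrative assumptions; each stage stands in for a large training or inference job.

```python
def finetune_video_model(teleop_videos):
    """Step 1: adapt a pretrained video generator to the robot's
    domain using a small set of human-teleoperation clips."""
    return {"adapted_on": len(teleop_videos)}

def generate_dream_videos(video_model, prompts):
    """Step 2: prompt the fine-tuned model to render the robot doing
    new tasks in new environments."""
    return [f"video:{p}" for p in prompts]

def label_pseudo_actions(videos):
    """Step 3: recover per-frame action labels for each generated
    video, yielding synthetic 'neural trajectories'."""
    return [(v, f"actions-for:{v}") for v in videos]

def train_policy(neural_trajectories):
    """Step 4: train the robot policy on the synthetic trajectories."""
    return {"trained_on": len(neural_trajectories)}

model = finetune_video_model(["clip-01", "clip-02", "clip-03"])
dreams = generate_dream_videos(model, ["wipe table", "open drawer"])
policy = train_policy(label_pseudo_actions(dreams))
print(policy)  # {'trained_on': 2}
```

The point of the structure is that only step 1 consumes scarce human teleoperation data; steps 2–4 scale with compute.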
Sam Altman: On that first day, when you’re just walking down the street and seven humanoid robots walk past you, it’s going to feel very sci-fi. And I don’t think that’s too far off from, like, a visceral, “oh, man, this is going to do a lot of things people used to do.” https://x.com/TheHumanoidHub/status/1924868956017590511
Sundar Pichai on the robotics opportunity: ⦿ He’s impressed by recent humanoid progress – so much so that he sometimes has to look closely to tell if a robot video is real or fake. ⦿ Google initially moved too early into the application layer, but now sees the combination of https://x.com/TheHumanoidHub/status/1923278278275760383
Elon on Optimus in today’s CNBC interview ⦿ Currently, Optimus is being trained from demonstrations collected by humans wearing mocap suits with cameras on their heads – performing primitive tasks such as opening doors, picking up objects, and dancing. This is needed to https://x.com/TheHumanoidHub/status/1924981814311133509
Optimus can now learn from first-person video. Many new skills are emerging that can be instructed via natural language. Next step: expand this to learning from third-person videos (random internet videos) and push reliability via self-play (reinforcement learning). https://x.com/TheHumanoidHub/status/1925057174092579253
Humanoids are the iPhone of AGI. https://x.com/adcock_brett/status/1923577489403806156
New SOTA open-source depth estimation: Marigold IID 🌼 > normal maps, depth maps of scenes & faces > get albedo (true color) and BRDF (texture) maps of scenes, they even release a depth-to-3D printer format demo 😮 link to all models and demos on the next one ⤵️ https://x.com/mervenoyann/status/1923318140965990814
Elon: “The only things that matter in the long term are vehicle autonomy and Optimus – those overwhelmingly dominate the future financial success of Tesla.” https://x.com/TheHumanoidHub/status/1924914655681773726
Tesla shared some information on Optimus in a session held for Morgan Stanley clients: ⦿ Tesla targets a $20k cost for Optimus. ⦿ Commercialization could begin by mid-2026. Current production in Fremont is manual, at a rate of ‘a dozen or so at a time’. ⦿ Morgan Stanley https://x.com/TheHumanoidHub/status/1923509958802538565
The U.S. Secretary of Transportation, Sean Duffy, at Tesla Giga Texas today – discussing the future of autonomous transportation with Optimus robots in the background. https://x.com/TheHumanoidHub/status/1924990626485174721
SharpaWave, a 22-DOF dexterous hand, by Singapore-based startup Sharpa. ⦿ Features over 1,000 tactile pixels per fingertip and 0.005 N pressure sensitivity. ⦿ Delivers 20 N fingertip strength and more than 4 Hz hand movement speed across all gestures. https://x.com/TheHumanoidHub/status/1925045971312116127
AGIBOT has unveiled a Nezha-inspired X2-N humanoid robot. https://x.com/TheHumanoidHub/status/1923250726874251747
California-based Foundation Robotics shipped its production humanoid to its first customer. The company says it will be working over the coming months to ensure the deployed fleet can “perform full shifts.” https://x.com/adcock_brett/status/1924133871987106009
Humanoid robots will pick up on the nuances of environmental and contextual sounds to make smarter decisions. https://x.com/TheHumanoidHub/status/1924518352799859036
I guess drone wars season 2 is going to be very interesting https://x.com/bilawalsidhu/status/1924227551662071955
Our proprietary actuators are one of the features that make Phantom special. They enable us to build robots that are smooth, powerful, and safe to be around. Learn more about our design in this video. https://x.com/sankaet/status/1915469431771455839
Researchers at CMU Robotics presented DexWild, a data collection system to gather human hand data across environments. In tests, it enabled AI policies to generalize to unseen scenes 3.8x better than training with robot data only. https://x.com/adcock_brett/status/1924133894367953174
The Gen-3 Optimus hand and forearm – that’s some neat packaging work in the forearm. https://x.com/TheHumanoidHub/status/1925266236449202227
Training robots for the open world needs diverse data But collecting robot demos in the wild is hard! Presenting DexWild 🙌🏕️ Human data collection system that works in diverse environments, without robots 💪🦾 Human + Robot Cotraining pipeline that unlocks generalization 🧵👇 https://x.com/_tonytao_/status/1922275638032957673
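The human + robot cotraining idea above can be sketched as a simple batch mixer. The 75/25 mixing ratio and all names here are illustrative assumptions, not DexWild's actual settings; the point is that one policy sees both data sources, each tagged with its embodiment.

```python
import random

def cotrain_batches(human_demos, robot_demos, batch_size=8,
                    human_frac=0.75, n_batches=1000, seed=0):
    """Yield mixed batches for human + robot cotraining.

    Each demo is tagged with its source so the model (or a loss
    weighting) can tell the two embodiments apart.
    """
    rng = random.Random(seed)
    n_human = round(batch_size * human_frac)
    for _ in range(n_batches):
        batch = [("human", rng.choice(human_demos)) for _ in range(n_human)]
        batch += [("robot", rng.choice(robot_demos))
                  for _ in range(batch_size - n_human)]
        rng.shuffle(batch)
        yield batch

# Toy usage: each batch of 8 mixes 6 human-collected and 2 robot demos.
human = [f"h{i}" for i in range(100)]
robot = [f"r{i}" for i in range(10)]
first = next(cotrain_batches(human, robot))
print(sum(src == "human" for src, _ in first))  # 6
```

Skewing the mix toward cheap, diverse human data while keeping some robot data in every batch is what lets the policy generalize without losing the robot's action grounding.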
Unitree humanoid robots in Hangzhou are training for the world’s first MMA-style “Mech Combat Arena.” Four teams will control the robots with remotes in real-time competitive combat. The event will be held in late May and broadcast live on Chinese TV. https://x.com/TheHumanoidHub/status/1923087269914706414
What if robots could dream inside a video generative model? Introducing DreamGen, a new engine that scales up robot learning not with fleets of human operators, but with digital dreams in pixels. DreamGen produces massive volumes of neural trajectories – photorealistic robot https://x.com/DrJimFan/status/1924819887139987855
This is a major milestone for Figure and humanoids in general – doing useful work at a real production line for extended periods. Figure recently completed a 20-hour run of back-to-back shifts on the BMW X3 line! They’ve been running 10-hour shifts for several weeks now. https://x.com/TheHumanoidHub/status/1925234858730852805
Palo Alto-based K-Scale Labs is building open-source humanoid robot hardware and software for developers. The K-Bot humanoid stands 4′7″, weighs 77 lb, and integrates with their open-source software and ML stack. Price: $8,999. Pre-order for $100 – deliveries begin in July 2025. https://x.com/TheHumanoidHub/status/1924143417103409327
Figure’s culture of getting stuff done https://x.com/adcock_brett/status/1923927435940331858
On Friday, Figure completed a 20-hour run of back-to-back shifts on the BMW X3 production line! We’ve been running 10-hour shifts for several weeks now and as far as we know, Figure and BMW are the first in the world to do this with humanoid robots https://x.com/adcock_brett/status/1925216191733502270
New video of AGIBOT’s Lingxi X2 robot showcasing dynamic movements, fall recovery, quiet operation, and vision-based perception/planning. https://x.com/TheHumanoidHub/status/1923259033160384676
X2C: A Dataset Featuring Nuanced Facial Expressions for Realistic Humanoid Imitation https://lipzh5.github.io/X2CNet/
Elon, November 2023: “Optimus will figure out how to do things by watching videos.” https://x.com/TheHumanoidHub/status/1925067964027699466
Jensen: The humanoid robot is likely the only robot that will work – because technology needs scale, and most robots we’ve had so far are too low volume to drive the flywheel of technology improvements. The humanoid robot is likely to be the next multi-trillion-dollar industry. https://x.com/TheHumanoidHub/status/1924341417662672972
Tsinghua University researchers detailed HuB, a unified framework to help humanoids handle extreme balancing tasks. It integrates reference motion refinement, balance-aware policy learning, and robustness training to improve sim-to-real consistency. https://x.com/adcock_brett/status/1924133916971020739




