Image created with gemini-2.5-flash-image, with the prompt written by claude-sonnet-4-5. Image prompt: A weathered bronze robotic hand with visible articulated joints rests among fragments of crumbling white marble and classical stone ruins, dramatic chiaroscuro lighting with warm golden rays illuminating the metallic surface against deep shadows, desaturated stone grays, high detail on surface patina and erosion, minimalist composition, neoclassical architectural fragments in soft focus background.

.@BostonDynamics and @GoogleDeepMind are teaming up to put real brains behind those robotic biceps. Unveiled at #CES2026, this partnership fuses DeepMind’s Gemini Robotics AI with the next-gen Atlas humanoid. DeepMind’s vision-language-action models let robots perceive… https://x.com/TheTuringPost/status/2009354215429427389

Demis Hassabis on X: “Can’t wait to get our hands on the awesome new Atlas robots from @BostonDynamics and combine them with our state-of-the-art Gemini Robotics models!” https://x.com/demishassabis/status/2009420116312625334

Demis is aligning his forces ahead of the breakthrough moment in robotics. https://x.com/TheHumanoidHub/status/2009460541832745442

The World Model as NEO’s Cognitive Core: 1X has revealed a major AI development where the NEO humanoid can translate any natural-language prompt into robotic action. It demonstrates this capability even for novel tasks, objects, and environments not found in its robot dataset. https://x.com/TheHumanoidHub/status/2010767070997131645

– NEO stands at a glass sliding door
– receives a command to close it
– “dreams” the execution using a World Model (left)
– the real NEO then “copies” the dream into physical reality (right)
https://x.com/TheHumanoidHub/status/2010831583318593896
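1X has not published NEO’s architecture, but the “dream, then copy” pattern is commonly sketched as a video world model paired with an inverse-dynamics model that recovers the action realizing each imagined transition. Here is a toy one-dimensional version of that loop; every class is a hypothetical stand-in, not 1X’s API:

```python
class ToyWorldModel:
    """Stand-in world model: 'dreams' a straight-line state trajectory to a target."""
    def rollout(self, state, target, steps):
        return [state + (target - state) * i / steps for i in range(steps + 1)]

class ToyInverseDynamics:
    """Stand-in inverse-dynamics model: the action is just the state delta."""
    def infer(self, prev, nxt):
        return nxt - prev

class ToyRobot:
    def __init__(self, state=0.0):
        self.state = state
    def observe(self):
        return self.state
    def execute(self, action):
        self.state += action

def dream_then_act(world_model, inverse_dynamics, robot, target, horizon=8):
    """Dream a trajectory from the current observation, then copy it step by step."""
    dream = world_model.rollout(robot.observe(), target, steps=horizon)
    for prev, nxt in zip(dream, dream[1:]):
        robot.execute(inverse_dynamics.infer(prev, nxt))
```

The real systems replace the toy state with camera frames and the delta rule with a learned model, but the control flow (imagine first, act second) is the same.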

Six months ago, on the first-ever episode of The Humanoid Hub, Eric Jang discussed World Models as something that wasn’t being taken seriously as the core of AI systems. Today, 1X introduced a robotics policy that turns that vision into reality, converting video generation from… https://x.com/TheHumanoidHub/status/2010828834418147827

Very cool demo of Cosmos Transfer 2.5 at the @NVIDIARobotics booth during CES 2026. Developers can take a single recorded robot movement and generate hundreds of variations using natural language prompts. Thanks Spencer Huang and Edith Llontop for the demo! https://x.com/TheHumanoidHub/status/2011117953585148244
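Cosmos Transfer does this with generative video models conditioned on language. A much cruder numeric analogue of the same idea, fanning one recorded joint trajectory out into many variants via random time-warping and noise, can be sketched like this (all parameter names and values are illustrative assumptions):

```python
import numpy as np

def augment_trajectory(traj, n_variants=100, noise=0.01,
                       scale_range=(0.9, 1.1), seed=0):
    """Generate variants of one recorded joint trajectory (shape (T, D))
    by random time-scaling plus additive noise."""
    rng = np.random.default_rng(seed)
    T, D = traj.shape
    variants = []
    for _ in range(n_variants):
        s = rng.uniform(*scale_range)                 # random playback speed
        t_new = np.linspace(0.0, T - 1, int(round(T * s)))
        warped = np.stack(
            [np.interp(t_new, np.arange(T), traj[:, d]) for d in range(D)],
            axis=1,
        )
        warped += rng.normal(0.0, noise, warped.shape)  # sensor-like jitter
        variants.append(warped)
    return variants
```

Generative augmentation can vary scene, lighting, and objects, not just timing and noise, which is why it scales so much further than hand-written perturbations like these.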

Skild AI is laying its cards on the table (partially, anyway). Teleoperation data lacks diversity and is limited by a 1:1 human operator time-scale. To address this, Skild pre-trained its model using internet-scale video data (already widely available in the form of… https://x.com/TheHumanoidHub/status/2010967800119210369

Skild AI is now worth $14B, about three times the $4.5B valuation from the last funding round, reported to have closed six months ago. The company has just announced raising $1.4B in Series C funding at a $14B valuation led by SoftBank, with participation from NVIDIA, Macquarie… https://x.com/TheHumanoidHub/status/2011509889667846609

Figure 03 is capable of wireless inductive charging. Charging coils in the robot’s feet allow it to simply step onto a wireless stand and charge at 2 kW. In a home setting, this means the robot can automatically dock and recharge itself as needed throughout the day. There was… https://x.com/adcock_brett/status/2009307039386960087

Met with Spencer Huang, Product Lead for @NVIDIARobotics, at CES 2026. We discussed scalable robotics policy evaluation, world models, and his outlook for the year ahead. https://x.com/TheHumanoidHub/status/2009707137807855977

🎙️ In this episode, I talk with Prof. Dr. Marco Huber, Professor for Cognitive Production Systems at the University of Stuttgart. Marco shares his journey from a middle-class upbringing with no academic role models to becoming a leading figure in applied AI for manufacturing. We… https://x.com/IlirAliu_/status/2009265584987365728

Absolute madman trying to catch a falling terminator. https://x.com/TheHumanoidHub/status/2009821387096240198

AME-2 is a unified RL framework for agile locomotion. It employs an attention-based map encoder to identify main terrain features and a lightweight mapping pipeline to manage noise and occlusions. The system achieves state-of-the-art agility and zero-shot generalization across… https://x.com/TheHumanoidHub/status/2011500545689665788
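AME-2’s exact encoder isn’t described in the tweet, but attention pooling over terrain features is a standard building block: score each patch embedding against a query and return a weighted summary. A minimal NumPy sketch, where the matrices `Wk`, `Wv` and the query would normally be learned parameters:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # subtract max for numerical stability
    return e / e.sum()

def attention_pool(patches, query, Wk, Wv):
    """Single-query attention over terrain patches (N, D):
    weight each patch by its relevance to the query, return a summary feature."""
    keys = patches @ Wk                         # (N, d) keys
    values = patches @ Wv                       # (N, d) values
    scores = keys @ query / np.sqrt(len(query)) # scaled dot-product scores
    weights = softmax(scores)                   # attention distribution
    return weights @ values                     # weighted summary
```

In a locomotion policy, the summary feature would be concatenated with proprioception and fed to the actor network, letting the policy attend to the terrain cells that matter for the next footstep.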

Atlas ended CES 2026 on a high note. https://x.com/TheHumanoidHub/status/2010028200097886472

Awesome-Robotics Repository (5.5k+ stars): a comprehensive collection of robotics resources, including courses from Udacity, Coursera, edX, and MIT; books covering everything from probabilistic robotics to ROS programming; and software libraries like Gazebo, ROS, and Webots… https://x.com/IlirAliu_/status/2009627983040921755

Can. You. Catch. It❓ A robot that catches balls in mid-air. [📍 Bookmark for the GitHub] This team built FrankaBallCatcher, a 7-DoF Franka Emika Panda arm that tracks a flying ball and intercepts it in real time using computer vision. • Ball is visible for only about 1… https://x.com/IlirAliu_/status/2011361660766740728
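The project’s code is on GitHub; the core estimation step can be sketched independently of it. With only a brief glimpse of the ball, the catcher must fit a ballistic model to a few 3-D observations and solve for where the trajectory crosses a catch plane. Function names and the constant-gravity, drag-free model below are my assumptions, not the project’s actual API:

```python
import numpy as np

def fit_ballistic(ts, positions, g=9.81):
    """Least-squares fit of p(t) = p0 + v0*t - 0.5*g*t^2 (gravity on z only)
    from timestamped 3-D ball observations."""
    ts = np.asarray(ts, dtype=float)
    P = np.array(positions, dtype=float)
    P[:, 2] += 0.5 * g * ts**2                  # remove known gravity term from z
    A = np.stack([np.ones_like(ts), ts], axis=1)  # design matrix for [p0, v0]
    coef, *_ = np.linalg.lstsq(A, P, rcond=None)
    return coef[0], coef[1]                     # p0 (3,), v0 (3,)

def intercept_at_height(p0, v0, z_plane, g=9.81):
    """Time and point where the predicted trajectory crosses z = z_plane,
    taking the later (descending) root of the quadratic."""
    a, b, c = -0.5 * g, v0[2], p0[2] - z_plane
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                             # trajectory never reaches the plane
    t = (-b - np.sqrt(disc)) / (2 * a)
    p = p0 + v0 * t + np.array([0.0, 0.0, -0.5 * g * t * t])
    return t, p
```

The real system additionally has to hand the intercept point to a whole-arm motion planner and re-fit as new observations arrive, all within the ball’s flight time.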

Last week at CES: Robots! More Robots! And Jensen Huang says they will have human-level capabilities THIS year. We went to see if robots were aware of that. We ignore the hype to look at the machines actually doing the work ⬇️ https://x.com/TheTuringPost/status/2010878204895216104

Learn mobile robotics from one of the strongest academic groups in the field. The University of Freiburg offers a full course on mobile robotics by Prof. Dr. Wolfram Burgard, one of the pioneers of probabilistic robotics and SLAM. Topics covered: • Kinematics and wheeled… https://x.com/IlirAliu_/status/2009187324656062865

Mohi Khansari has been promoted to Head of Robot Learning at 1X! He has been the chief architect behind Redwood AI (1X’s vision-language model) and is a veteran roboticist who previously led the imitation-learning effort at Everyday Robots (Google X) and was a key tech lead at Cruise. https://x.com/TheHumanoidHub/status/2009768053547118819

Robots assembling real products without scripts or human demos just became real. Fabrica is a dual-arm system from MIT that can plan and execute multi-part assembly tasks end-to-end. • Plans the full assembly hierarchy: precedence, sequence, grasp, motion, even fixture… https://x.com/IlirAliu_/status/2010789663875727406
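Fabrica’s planner itself isn’t shown in the thread, but the precedence half of the problem is a dependency graph: any feasible assembly order is a topological sort of the precedence constraints. A minimal sketch using the standard library, with an invented four-part example:

```python
from graphlib import TopologicalSorter

# Hypothetical precedence constraints for illustration: each part maps to
# the parts that must already be in place before it can be added.
precedence = {
    "base": set(),
    "bracket": {"base"},
    "shaft": {"base"},
    "gear": {"bracket", "shaft"},
}

def assembly_sequence(precedence):
    """Return one feasible assembly order; raises CycleError if the
    constraints are contradictory (i.e., no valid assembly exists)."""
    return list(TopologicalSorter(precedence).static_order())
```

The hard part Fabrica adds on top is deriving those constraints automatically from part geometry, and interleaving them with grasp, motion, and fixture planning.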

Robots don’t just need better reasoning. They need instinct. [📍 All code is open-sourced] Project-Instinct is a full-stack toolkit for instinct-level whole-body control on legged and humanoid robots. • One unified stack from training to real robot deployment • Perceptive… https://x.com/IlirAliu_/status/2011152090077298977

Sharpa demonstrating impressive hand dexterity at CES 2026 – fully autonomous. https://x.com/TheHumanoidHub/status/2009341117209432243

Sharpa’s North robot autonomously playing ping-pong against a human at CES 2026. https://x.com/TheHumanoidHub/status/2009882725591986479

That sound isn’t a robot breathing heavily, it’s the motors. Adam humanoid by PNDbotics. https://x.com/TheHumanoidHub/status/2011516508284080338

This is the difference to @physical_int https://x.com/IlirAliu_/status/2011044970703274178

You know how ChatGPT got scary-good by reading the entire internet? Robotics couldn’t do that. With language models, you can scrape Wikipedia, Reddit, every book ever written. Trillions of words. All digital. All ready to train on. But robots? They needed to physically DO… https://x.com/IlirAliu_/status/2010999007024226461

ZWHAND from Shenzhen ZWHAND, showcased at CES 2026. Features 17 tiny motors, 20 active degrees of freedom, and pressure sensors on the fingertips. https://x.com/TheHumanoidHub/status/2009355063953875250

If you think Physical AI is “just train a policy and ship it,” you’re about to waste months. If you work on robotics, this one is worth bookmarking‼️ The bottleneck is not the robot. It’s the world. Again: you do not fail because your model is dumb. You fail because the real… https://x.com/IlirAliu_/status/2009553931102257357

.@Lindon_Gao, cofounder and CEO of Dyna Robotics, is working on general-purpose foundation models that perform diverse physical tasks at commercial scale. Here’s our chat at CES 2026, while their robot autonomously folded laundry in the background. https://x.com/TheHumanoidHub/status/2009383852259541030

Robots that plan by imagining the future. Large Video Planner (LVP-14B) from MIT is a robot foundation model built on video generation instead of vision-language-action. Instead of predicting actions directly, it generates a short video of how a human hand or robot gripper… https://x.com/IlirAliu_/status/2010274518619587023

Synchronized robots, tested in 3D, to boost uptime and cut integration errors: This setup uses RobotStudio to simulate dual-robot coordination with precise axis timing and collision checks. Every move is validated in a digital twin before going live. – Full axis… https://x.com/IlirAliu_/status/2011514447521550610

What if a robot could simulate the physical world from a single image? [📍 Bookmark Paper & GitHub for later] PointWorld-1B from Stanford and NVIDIA is a large 3D world model that predicts how an entire scene will move, given RGB-D input and robot actions. The key idea is… https://x.com/IlirAliu_/status/2009912186462724113
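Setting the model itself aside, the RGB-D input side of such a system is standard geometry: a depth map plus pinhole camera intrinsics back-projects into the 3-D point cloud a world model would reason over. A minimal sketch of that back-projection (the intrinsics values in the usage are made up):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W) into a 3-D point cloud (H*W, 3)
    using pinhole intrinsics: focal lengths fx, fy and principal point cx, cy."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx                   # invert the pinhole projection
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```

A 3-D world model then has to predict how every one of those points moves under a candidate robot action, which is what makes the prediction problem so much larger than 2-D video generation.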

Chinese humanoid robotics company LimX Dynamics has unveiled COSA (Cognitive Operating System of Agents). COSA is described as a unified “brain-body” architecture that allows the robot to think and act simultaneously in the real world. It integrates: – high-level cognition… https://x.com/TheHumanoidHub/status/2011530396102770916

A flapping-wing drone that actually handles its own vibrations. BionicBird’s X-Fly is not a propeller drone. It is an ornithopter with a six-axis gyro and precision G-sensors designed specifically to deal with the chaotic dynamics of wing flapping. • Sensor fusion tuned for… https://x.com/IlirAliu_/status/2010064939361132899

Modern drone production. An assembly line in China shows how far drone manufacturing has been industrialized. Conveyor systems move the airframes between stations. Each worker performs a narrowly defined step, closer to poka-yoke than to classic workshop assembly. • Highly… https://x.com/IlirAliu_/status/2009702506205438164

Is your robot policy World-Model pilled? Jim Fan at NVIDIA is betting big on it. He argues that VLM-based VLAs are fundamentally misaligned for robotics because they prioritize high-level semantics over the granular physical details required for dexterity. “A video world model… https://x.com/TheHumanoidHub/status/2011176025733075315

Advanced Robotics: UC Berkeley. This course is taught by Pieter Abbeel; it reviews reinforcement learning and continues into applications in robotics. If you work on robotics… this one is worth bookmarking‼️ MDPs: Exact Methods, Discretization of Continuous State Spaces… https://x.com/IlirAliu_/status/2010427245400121446
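The “MDPs: Exact Methods” portion of the course centers on value iteration: repeatedly back up a value function until it converges, then read off the greedy policy. A compact tabular version (the two-state MDP used to exercise it below is my own toy example, not from the course):

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Exact value iteration on a tabular MDP.
    P: (A, S, S) transition probabilities P[a, s, s'];
    R: (A, S) expected immediate rewards R[a, s].
    Returns the optimal value function and a greedy policy."""
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * P @ V          # (A, S): one Bellman backup per action
        V_new = Q.max(axis=0)          # act greedily
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```

Exact methods like this only work on small discrete state spaces, which is exactly why the course moves on to discretization of continuous spaces and then to sampling-based RL.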
