Image created with gemini-2.5-flash-image via claude-sonnet-4-5. Image prompt: Minimalist editorial illustration in Anthropic style showing a simple hand-drawn window frame icon with thick black wobbly strokes on warm off-white background, geometric shapes floating inside frame suggesting portal to virtual space, vertical split composition with warm tan right panel containing bold black ‘AR/VR’ typography, flat vector aesthetic with subtle paper grain texture, 16:9 landscape format

“A conversation with the Project Genie team on the path to launching the most powerful world model to date, where they expect to see value from these models in the short term, what comes next, and more : ) Featuring: @jparkerholder @shlomifruchter and @drivascos” https://x.com/OfficialLoganK/status/2018420009115017310

“Fun to turn paintings into scenes I can walk around in using Genie 3: here are the works of Giorgio de Chirico, Munch, Turner, and the Bayeux Tapestry. I can move freely around the scenes, and, yes, they are a little weird, but it’s real-time dynamic image creation by the AI.” https://x.com/emollick/status/2017852070620025250

“Giving the world’s first photograph, the View from the Window at Le Gras, from 1822, to Genie 3.” https://x.com/emollick/status/2018494862178316725

“Much debate over Genie vs 3D engines. You can have both – the control of 3D scene graphs + the creativity of generative AI. Wrote this in 2024 breaking down the vision. The models are almost there. Now just imagine if Unreal / Unity productized this.” https://x.com/bilawalsidhu/status/2018119240612536587

“The capabilities of Genie 3 are strange and surprising. Sometimes NPCs are animated & move around & react, but I can’t seem to control that happening. Sometimes objects have physical properties like stretching or tearing. The world in the world model peeks through at odd times.” https://x.com/emollick/status/2017805206419918883

“Took an old photo of a WWI battlecruiser, gave it to Genie 3, and prompted it to let me play as a torpedo boat at the Battle of Jutland. Considering this is a research preview, astonishing how fast this has come. An AI dynamically generating the world with no game engine…” https://x.com/emollick/status/2018198584508760108

“Accelerating open-source physical AI. 🤖 NVIDIA is collaborating with @huggingface to bring open-source NVIDIA Isaac technologies into the @LeRobotHF framework, making end-to-end robot development faster and more accessible. 🔹 Connecting 2M+ NVIDIA robotics developers with 13M…” https://x.com/NVIDIARobotics/status/2008636752651522152

“Hard to know which X articles are valuable, but this is a good summary of the significance of world modeling by a distinguished scientist and robotics expert at NVIDIA” https://x.com/emollick/status/2018774863734075878

“Excited to share our @NVIDIARobotics × @huggingface collaboration on robotics that was presented at CES in Las Vegas. This is a big step for developers. Anything you build in Isaac Sim / Isaac Lab (environments, tasks, robots) can now run out of the box in LeRobot. Thanks to…” https://x.com/LeRobotHF/status/2008495248931017026

“📢 New paper from GEAR team @NVIDIARobotics We released DreamZero, a World Action Model that turns video world models into zero-shot robot policies. Built on a pretrained video diffusion backbone, it jointly predicts future video frames and actions. 🌐” https://x.com/yukez/status/2019096072690553112

Project AVA: 3D Hologram AI Desk Companion | Razer United States https://www.razer.com/concepts/project-ava

“Playing as Godot, finally arriving. Just as Beckett intended, thanks to AI.” https://x.com/emollick/status/2018213227503534572

“I tested Google’s world model Genie 3… Then DeepMind told me everything 00:00 – Intro & Authoring Workflow 00:27 – Genie 3 Playtesting & Demos 05:33 – Interview w/ Google DeepMind (Genie 3 co-lead @jparkerholder and Sr. PM Diego Rivas) 06:54 – Wildest emergent behaviors” https://x.com/bilawalsidhu/status/2018487746508018051

“Tired of teleoperation? One human video → 1,000s of robot demos. (📍GitHub) Scaling Robot Data Without Dynamics Simulation or Robot Hardware Real2Render2Real (R2R2R) is a new way to scale robot data without physics simulation or hardware. You take a phone scan + a single…” https://x.com/IlirAliu_/status/2017884655869976975

Discover more from Ethan B. Holland
