Image created with gemini-3.1-flash-image-preview and claude-opus-4.7. Image prompt: {"category":"World Models","short_title":"Snow Globe Serenade","concept_description":"The singer holds a small crystal-clear snow globe to his lips, inside which a tiny detailed miniature world drifts — rolling hills, a little house, swirling particles suspended in liquid. The globe stands in for the microphone, suggesting AI world models as entire simulated realities being whispered into existence. The transparent sphere catches the warm orange backdrop light, glowing like a captured planet held in his tattooed hand.","image_prompt":"Using the provided reference image, preserve every element exactly — the marigold-orange backdrop, the seated woman with closed eyes and contented smile in her purple windbreaker, the tattooed singer in the red beanie and layered red vest leaning in mid-serenade — but replace ONLY the black microphone in his hand with a small crystal snow globe containing a tiny detailed miniature landscape with hills and a little house and suspended swirling particles, held to his mouth in the exact same grip and position as the original microphone, photorealistic with the glass catching warm studio light."} After generating the image, overlay the text "World Models" in the upper-left corner of the frame in large, bold, all-caps ITC Avant Garde Gothic Pro Medium (or a near-identical geometric sans-serif if unavailable), pure white (#FFFFFF), with no date, subtitle, drop shadow, or outline. The text should be substantial in scale — taking up a meaningful portion of the upper-left area — with comfortable margin from the top and left edges, set against the negative space of the orange backdrop so it does not overlap or obscure the singer, the seated woman, or the replaced object.
Today, we released Lyra 2.0 from NVIDIA Research, a framework for generating persistent, explorable 3D worlds at scale. Generating large-scale, complex environments is difficult for AI models: current models often “forget” what spaces look like and lose track of movement over…
https://x.com/NVIDIAAIDev/status/2044445645109436672
📢📢A double launch today! We’re releasing a paper analyzing the rapidly growing trend of “open-world evaluations” for measuring frontier AI capabilities. We’re also launching a new project, CRUX (Collaborative Research for Updating AI eXpectations), an effort to regularly…
https://x.com/random_walker/status/2044841045867778365
Meet @HappyOysterAI from Alibaba ATH, an open‑ended world model built for real‑time world creation and interaction. Be part of the first wave and see what you can build. 🌍✨ #AlibabaAI #HappyOyster
https://x.com/AlibabaGroup/status/2044634595937882394?s=20
Most Physical AI models recognize patterns. They don’t understand the world. That’s why they fail on edge cases. BADAS 2.0 is a V-JEPA2 world model trained by @getnexar on real-world videos. We used the model to find what it didn’t understand, then trained on that. It…
https://x.com/eranshir/status/2044759951340388611
Must-read research of the week ▪️ Neural Computers ▪️ The Illusion of Stochasticity in LLMs ▪️ Learning is Forgetting: LLM Training as Lossy Compression ▪️ A Frame is Worth One Token: Efficient Generative World Modeling with Delta Tokens ▪️ INSPATIO-WORLD: A Real-Time 4D World…
https://x.com/TheTuringPost/status/2044113565771280775
We’re open-sourcing HY-World 2.0, a multimodal world model that generates, reconstructs, and simulates interactive *3D worlds* from text, images, and videos. Outputs can be integrated into game engines and embodied simulation pipelines. Key highlights: 🔹 One-click world…
https://x.com/TencentHunyuan/status/2044604754836505076?s=20
Genie3 generates videos. We generate 3D worlds you can actually use. Launching tomorrow — Tencent #HYWorld 2.0, an engine-ready World Model🚀 This isn’t a video. It’s a real 3D scene, all generated & editable. One image in. A whole 3D world out. 🔥Open-source tomorrow
https://x.com/DylanTFWang/status/2043952886166761519
What are world models, actually? @FidlerSanja, VP of AI Research at NVIDIA and head of the company’s Spatial Intelligence Lab, explains in our interview. If you want to learn about the major next frontier in AI, watch the full conversation:
https://x.com/TheTuringPost/status/2043962055531868554
[2604.13036] Lyra 2.0: Explorable Generative 3D Worlds
https://arxiv.org/abs/2604.13036




