I got early access to Project Genie from @GoogleDeepMind ✨ It’s unlike any realtime world model I’ve tried – you generate a scene from text or a photo, and then design the character who gets to explore it. I tested dozens of prompts. Here are the standout features 👇 https://x.com/venturetwins/status/2016919922727850333
HOLY FUCK Genie 3 is the craziest thing I’ve tried in a long time Just… wow. Watch this. https://x.com/mattshumer_/status/2017058981286396001
Project Genie is an impressive demonstration of what world models can do. But there’s a difference between seeing the future and being able to build with it today. This is what running locally looks like https://x.com/overworld_ai/status/2017298592919392717
Here’s how it works: 🔵 Design your world and character using text and visual prompts. 🔵 Nano Banana Pro makes an image preview that you can adjust. 🔵 Our Genie 3 world model generates the environment in real-time as you move through. 🔵 Remix existing worlds or discover new… https://x.com/GoogleDeepMind/status/2016919762924949631
Project Genie is a prototype web app powered by Genie 3, Nano Banana Pro + Gemini that lets you create your own interactive worlds. I’ve been playing around with it a bit and it’s…out of this world:) Rolling out now for US Ultra subscribers. https://x.com/sundarpichai/status/2016979481832067264
5/ Building responsibly 🛡️ Building AI responsibly is core to our mission. As an experimental @GoogleLabs prototype, Project Genie is still in development. This means you might encounter 60-second generation limits, control latency, or physics that don’t always perfectly adhere… https://x.com/Google/status/2016972686208225578
Project Genie: AI world model now available for Ultra users in U.S. https://blog.google/innovation-and-ai/models-and-research/google-deepmind/project-genie/
Thrilled to launch Project Genie, an experimental prototype of the world’s most advanced world model. Create entire playable worlds to explore in real-time just from a simple text prompt – kind of mindblowing really! Available to Ultra subs in the US for now – have fun exploring! https://x.com/demishassabis/status/2016925155277361423
Introducing Project Genie: An experimental research prototype powered by Genie 3, our world model, that lets you prompt an interactive world into existence — and then step inside 🌎 https://x.com/Google/status/2016926928478089623
Project Genie is rolling out for AI Ultra members in the USA. It’s an experimental tool that allows you to create and explore infinite virtual worlds, and I’ve never seen anything like this. It’s still early, but it’s already unreal. Nano Banana Pro + Project Genie = My low-poly… https://x.com/joshwoodward/status/2016921839038255210
Step inside Project Genie: our experimental research prototype that lets you create, edit, and explore virtual worlds. 🌎 https://x.com/GoogleDeepMind/status/2016919756440240479
Project Genie is rolling out to @Google AI Ultra subscribers in the U.S. (18+) With this prototype, we want to learn more about immersive user experiences to advance our research and help us better understand the future of world models. See the details → https://x.com/GoogleDeepMind/status/2016919765713826171
I’ve written 250k+ lines of game engine code. Here’s why Genie 3 isn’t what people think it is: World models are something genuinely new. A third category of media we don’t have a name for yet. Near-term they’re too slow and expensive for consumers. But for training robots?… https://x.com/jsnnsa/status/2017276112561422786
xAI’s Grok Imagine takes the #1 spot in both Text to Video and Image to Video in the Artificial Analysis Video Arena, surpassing Runway Gen-4.5, Kling 2.5 Turbo, and Veo 3.1! Grok Imagine is the latest video model from @xAI, and joins an increasing roster of models such as… https://x.com/ArtificialAnlys/status/2016749756081721561
🚨BREAKING: @xAI’s first model in Video Arena debuts in the top 3! Grok-Imagine-Video ranks #3 on the Image-to-Video Arena and #4 on the Text-to-Video Arena. It is close to the top-ranked @GoogleDeepMind Veo 3.1 and @OpenAI Sora 2 Pro models. Grok-Imagine-Video offers:… https://x.com/arena/status/2016748418635616440
@xai Try New Grok Imagine here! Text to Image https://t.co/OeJMwL9hoH Image Editing https://t.co/Q7lojX41I1 Text to Video https://t.co/fAzEJABTYn Image to Video https://t.co/zTdoJQjkqk Video Editing… https://x.com/fal/status/2016746473887609118
🚨Leaderboard update: Tencent’s Hunyuan-Image-3.0-Instruct now ranks #7 in the Image Edit Arena! A new lab breaks into the top-10, closely matching Nano-Banana and Seedream-4.5. Congrats to @TencentHunyuan on the huge milestone! 👏 https://x.com/arena/status/2015846799446311337
MCP CLI + Skill 👀 Give your Agent full control over any MCP server without context bloat. 🧙 “Generate a product image with Nano Banana, upload it to Cloud Storage, and add the link to our Google Sheet”. It just works. `mcp-cli call genmedia generate_image …` https://x.com/_philschmid/status/2017246499411743029
Try Grok Imagine now: https://x.com/chaitu/status/2017297699973042412
Grok Imagine API | xAI https://x.ai/news/grok-imagine-api
Grok Imagine only gets better from here https://x.com/elonmusk/status/2016768088855769236
Realtime | Krea https://www.krea.ai/realtime
We’re helping AI to see the 3D world in motion as humans do. 🌐 Enter D4RT: a unified model that turns video into 4D representations faster than previous methods – enabling it to understand space and time. This is how it works 🧵 https://x.com/GoogleDeepMind/status/2014352808426807527