“Native image generation with Gemini 2.0 Flash in AI Studio is incredible. 🔥 Can’t believe it’s free. Many paid Android apps doing this will feel the heat now. https://x.com/rohanpaul_ai/status/1899936200682856714
Introducing Gemini with personalization https://blog.google/products/gemini/gemini-personalization/
“Introducing YouTube video 🎥 link support in Google AI Studio and the Gemini API. You can now pass in a YouTube video directly, and the model can use its native video understanding capabilities on it, with just a link! 🚢 https://x.com/OfficialLoganK/status/1899914266062577722
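The API shape behind this is worth sketching: the YouTube link rides along as a `file_data` part next to the text prompt in an ordinary generateContent request, per the public schema. A minimal sketch of the payload (the helper name and example URL are mine):

```python
import json

# Sketch of a generateContent request body for YouTube-link input: the link is
# a `file_data` part alongside the text prompt. Helper name and URL are
# illustrative assumptions, not part of the SDK.
def youtube_request(video_url: str, prompt: str) -> dict:
    """Build a generateContent payload pairing a YouTube URL with a prompt."""
    return {
        "contents": [{
            "parts": [
                {"file_data": {"file_uri": video_url}},
                {"text": prompt},
            ]
        }]
    }

body = youtube_request("https://www.youtube.com/watch?v=VIDEO_ID", "Summarize this video.")
print(json.dumps(body, indent=2))
```

The same payload works through the REST endpoint or the `google-genai` SDK, which builds this structure for you.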
“You can now start building with Gemma 3. 🛠️ Made from the same tech powering Gemini 2.0, these are our state-of-the-art open models designed to run fast and directly on devices — helping developers create AI applications wherever people need them. → https://x.com/GoogleDeepMind/status/1900549631647367268
“Google joins the smol models club with Gemma3 1B! Here’s a timeline showing the acceleration of smol (1B-2B) model releases over the past 18 months. This space is heating up fast! 🔥 https://x.com/LoubnaBenAllal1/status/1899873487231345062
“still trying to digest this 🤯 Gemma 3 is the biggest thing that happened in AI since DeepSeek R1 release https://x.com/mervenoyann/status/1899879621396750801
“Gemma 3 27B’s Intelligence vs. Size positioning is compelling compared to other smaller, open weights models We have completed our independent intelligence evaluations of Gemma 3 27B and have benchmarked an Artificial Analysis Intelligence Index of 38. While Gemma 3 27B is not https://x.com/ArtificialAnlys/status/1900579291404046696
“🎉 Congrats to @GoogleDeepMind on Gemma-3-27B, the newest and one of the strongest open models in Arena! 💠 Top 10 overall – beating out many proprietary models with only 27B parameters 💠 2nd best open model, only below DeepSeek-R1 💠 128K context window Check out their blog to https://x.com/lmarena_ai/status/1899729292617277501
“Gemma 3 is available in a range of sizes, from 1B to 27B – and comes with a 128K token context window as well as support for over 140 languages. https://x.com/GoogleDeepMind/status/1900549635267014878
Gemma 3: Google’s new open model based on Gemini 2.0 https://blog.google/technology/developers/gemma-3/
“Gemma 3 is best in class for a VLM that runs on 1 GPU. Should make RL fine tuning feasible. Also Academic researchers can apply for Google Cloud credits (worth $10,000 per award) to accelerate their Gemma 3-based research.” / X https://x.com/sirbayes/status/1900520172059815986
“I’m so happy to announce Gemma 3 is out! 🚀 🌏Understands over 140 languages 👀Multimodal with image and video input 🤯LMArena score of 1338! 📏Context window of 128k Available in AI Studio, Hugging Face, Ollama, Vertex, and your favorite OS tools 🚀Download it today! https://x.com/osanseviero/status/1899726995170210254
“Gemma 3 is here and it’s the best open non-reasoning model on LMSYS! 🚀 @GoogleDeepMind Gemma 3 is an open, multimodal (text + vision), multilingual LLM with a context of 128k tokens and comes in 4 sizes! TL;DR: 4️⃣ Four sizes with 1B, 4B, 12B, 27B as pre-trained and https://x.com/_philschmid/status/1899726907022963089
“Some thoughts about Gemma 3. The tech report (as with most labs) is not really detailed but still provides some interesting info https://x.com/nrehiew_/status/1899882552946532498
“Gemma 3 can understand videos, and it’s more powerful than you think it is ⏯️ I put together a short notebook on interleaving frames and doing video inference 📖 you’re welcome 🤝 https://x.com/mervenoyann/status/1899823530524447133
“Google is BACK!! Welcome Gemma3 – 27B, 12B, 4B & 1B – 128K context, multimodal AND multilingual! 🔥 Evals: > On MMLU-Pro, Gemma 3-27B-IT scores 67.5, close to Gemini 1.5 Pro (75.8) > Gemma 3-27B-IT achieves an Elo score of 1338 in the Chatbot Arena, outperforming larger LLaMA 3 https://x.com/reach_vb/status/1899728796586025282
“Today we launched Gemma 3, our most advanced and portable open models yet. This collection of lightweight models is designed to run fast, directly on devices like smartphones and laptops, to help devs create responsible AI apps at scale. Learn more ↓ https://x.com/Google/status/1899916049002217855
“Google DeepMind introduced two foundational models for embodied reasoning, enabling robots to comprehend, react, and take action in the physical world: ⦿ Gemini Robotics – built on Gemini 2.0. Integrates vision, language, and action for real-world dexterity. ⦿ Gemini https://x.com/TheHumanoidHub/status/1899875342221009265
“Robots must be able to interact seamlessly with humans. 🤝 When it’s interrupted or situations change, Gemini Robotics can adjust its actions on the fly. This level of steerability will empower us to better work with future robot assistants in the home, at work and beyond. https://x.com/GoogleDeepMind/status/1899839632772067355
“We’re partnering with @Apptronik to build the next generation of humanoid robots with Gemini 2.0 – and opening our Gemini Robotics-ER model to trusted testers such as Agile Robots, @AgilityRobotics, @BostonDynamics and @EnchantedTools. Find out more → https://x.com/GoogleDeepMind/status/1899839644302270671
“Meet Gemini Robotics: our latest AI models designed for a new generation of helpful robots. 🤖 Based on Gemini 2.0, they bring capabilities such as better reasoning, interactivity, dexterity and generalization into the physical world. 🧵 https://x.com/GoogleDeepMind/status/1899839624068907335
“They also accomplished tasks not seen in training, showing the ability to generalize to new scenarios. 💡 We show that on average, Gemini Robotics more than doubles performance on a comprehensive generalization benchmark – compared to other state-of-the-art https://x.com/GoogleDeepMind/status/1899839635720663463
“Google released Gemma 3 – their new Open-Source multimodal model family Here is everything you need to know in one thread: Gemma 3 27B currently ranks 9th in the LMSLOP arena beating models like o1-mini and o3-mini, DeepSeek-V3, Claude 3.7 Sonnet and Qwen2.5-Max In comparison https://x.com/scaling01/status/1899792217352331446
“Gemma3 technical report detailed analysis 💎 1) Architecture choices: > No more soft-capping, replaced by QK-Norm > Both Pre AND Post Norm > Wider MLP than Qwen2.5, ~ same depth > SWA with 5:1 and 1024 (very small, and a cool ablation in the paper!) > No MLA to save KV cache, SWA do https://x.com/eliebakouch/status/1899790607993741603
“⚙️ It goes head to head with our team to wrap a timing belt around gears – a feat that’s harder than you think ↓ https://x.com/GoogleDeepMind/status/1899839630242955536
AI Video Tool Showdown: Comparing Google Veo 2 And OpenAI Sora in 2025 https://www.forbes.com/sites/moinroberts-islam/2025/03/06/2025s-ai-video-showdown-comparing-google-veo-2-and-openai-sora/
Google DeepMind unveils new AI models for controlling robots | TechCrunch https://techcrunch.com/2025/03/12/google-deepmind-unveils-new-ai-models-for-controlling-robots/
Google releases SpeciesNet, an AI model designed to identify wildlife | TechCrunch https://techcrunch.com/2025/03/03/google-releases-speciesnet-an-ai-model-designed-to-identify-wildlife/
“The MLX community is so fast (while I was 😴): Gemma 3 comes out: – Already supported in MLX VLM – PR up for text only model in MLX LM – PR up for VLM support in MLX Swift to run on an iPhone! Thanks to @pcuenq @fleetwood___ @Prince_Canuma @DePasqualeOrg” / X https://x.com/awnihannun/status/1899822376797536701
“Try Gemma in AI Studio: https://x.com/_philschmid/status/1899726910227181889
“Also, does this make Gemma3 27B the best non-reasoning LLM???? https://x.com/reach_vb/status/1899734270328889367
“Pretty wild to me that Gemma3 4B is competitive with Gemma2 27B (8 months old)🤯 EXPONENTIAL TIMELINES https://x.com/reach_vb/status/1899732585699533138
Google upgrades Colab with an AI agent tool | TechCrunch https://techcrunch.com/2025/03/03/google-upgrades-colab-with-an-ai-agent-tool/
“…so i use images instead. look at how uniform the pareto curves of every frontier lab is…. and then look at Gemini 2.0 Flash. @GoogleDeepMind is highkey goated and this is just in text chat. In native image chat it is in a category of its own. (updated price-elo plot of https://x.com/swyx/status/1900248519451046364
“Gemma 3 support has been merged in llama.cpp https://x.com/ggerganov/status/1899749881624817971
“We wrote a blog on everything you need to know as Developer for Gemma 3: https://x.com/_philschmid/status/1899863222649331747
“1 line Code change and you can use Gemini 2.0 with the @OpenAI Agents SDK! 🚀” / X https://x.com/_philschmid/status/1900589029961109514
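For context, the usual “one line” in setups like this is the `base_url`: pointing an OpenAI-style client at the Gemini API’s OpenAI-compatibility endpoint, which Google documents. A hedged sketch (the helper function is mine; the `openai` import is optional to the sketch):

```python
# The "1 line" is typically the base_url: redirecting an OpenAI-style client
# to the Gemini API's OpenAI-compatibility endpoint. The helper below is an
# illustrative assumption, not part of either SDK.
GEMINI_OPENAI_BASE = "https://generativelanguage.googleapis.com/v1beta/openai/"

def client_kwargs(api_key: str) -> dict:
    """Keyword arguments that point any OpenAI-style client at Gemini."""
    return {"api_key": api_key, "base_url": GEMINI_OPENAI_BASE}

try:
    from openai import OpenAI  # third-party; the sketch works without it
    client = OpenAI(**client_kwargs("YOUR_GEMINI_API_KEY"))
except ImportError:
    client = None
```

Any framework built on the OpenAI client, the Agents SDK included, can then route chat completions to Gemini models by model name.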
“Excited to share that @UnslothAI now supports: • Full fine-tuning + 8bit • Nearly any model like Mixtral, Cohere, Granite, Gemma 3 • No more OOMs for vision finetuning! Blogpost with details: https://x.com/danielhanchen/status/1900592202621087944
“Gemini dropped their truly multimodal in/out image generator. Been on the EAP, here are my thoughts + some tips & tricks for “conversational image editing” Bookmark this, and try it later for free in google AI studio. Let’s get into it: https://x.com/bilawalsidhu/status/1899904526284710371
“Great talk by Google’s Bill Jia on their GenAI work, including Astra and Deep Research agents (both of which I think are very cool). https://x.com/AndrewYNg/status/1900596396140671194
Experiment with Gemini 2.0 Flash native image generation – Google Developers Blog https://developers.googleblog.com/en/experiment-with-gemini-20-flash-native-image-generation/
Google Workspace Updates: Quickly add events to Google Calendar based on your emails with Gemini in Gmail https://workspaceupdates.googleblog.com/2025/03/add-events-to-google-calendar-using-gemini-in-gmail.html
“Gemma 3 tech report review First off, the model names actually match the parameter counts 😁 1B model is text-only, 4B+ are multimodal https://x.com/vikhyatk/status/1899773905591792054
State-of-the-art text embedding via the Gemini API – Google Developers Blog https://developers.googleblog.com/en/gemini-embedding-text-model-now-available-gemini-api/
“Can anyone explain to me Google’s logic in scaling D so aggressively with model N, contra the rest of the industry? Whence Gemma-1B trained on 2T? Is it even any good for speculative decoding?” / X https://x.com/teortaxesTex/status/1899886591344095457
“completely sota for image editing! both generated and real images great 🚢 from google! i hope to see a gemma with image generation soon too ✨ https://x.com/multimodalart/status/1899881757396099231
“Super excited to ship Gemini’s native image generation into public experimental today 🙂 We’ve made a lot of progress and still have a way to go, please send us feedback! And yes, I made the image using Gemini. https://x.com/19kaushiks/status/1899856652666568732
“Nice quality of life update, we now have a Gemini API and Google AI Studio status page! Apologies for taking so long to stand this up, the team went and built this from scratch with lots of nice details (though hopefully you won’t need to ever use it), link below! 🧵 https://x.com/OfficialLoganK/status/1900252790909177999
“The Gemini app is launching a bunch of improvements! Upgraded Flash Thinking model which has much stronger reasoning capabilities, and a deeper integration into apps. Plus integrations with Deep Research and Personalization. All free to try 💎” / X https://x.com/jack_w_rae/status/1900325293447061877
“Google releases Gemma 3! ✨ The Gemma 3 (text + image) models are multimodal and come in 1B, 4B, 12B, and 27B sizes. The 27B model matches Gemini-1.5-Pro on many benchmarks. It introduces vision understanding, has a 128K context window, and multilingual support in 140+ https://x.com/danielhanchen/status/1899728162130694266
Gemma3Report.pdf https://storage.googleapis.com/deepmind-media/gemma/Gemma3Report.pdf
“✨ AI Mode expands on AI Overviews with more advanced reasoning, thinking and multimodal capabilities. We’re starting to roll it out to Google One AI Premium subscribers as an opt-in experiment in Labs. Sign up for early access → https://x.com/Google/status/1897332929136877854
“Google beats OpenAI to market with their new multimodal Gemini 2.0 Flash model I let Gemini 2.0 Flash describe the original image and then tasked it in a separate chat to recreate the image purely based on the description.” / X https://x.com/scaling01/status/1899873762222186528
“Google Deep Research is noticeably improved today! Cool to see people’s experience so far. Why is it better? A bunch of product development from the team, and the underlying model updating from 1.5 Pro –> 2.0 Flash Thinking” / X https://x.com/jack_w_rae/status/1900401734046126274
Apply now for Google for Startups Accelerator: AI for Energy https://blog.google/outreach-initiatives/sustainability/google-for-startups-accelerator-ai-energy/
“You can test Gemma 3 27B using the `google-genai` sdk! https://x.com/_philschmid/status/1899816992585945539
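The `google-genai` SDK wraps the Gemini API’s generateContent REST endpoint, so a dependency-free sketch of the same call looks like this. The model name and `GEMINI_API_KEY` environment variable are assumptions, and the network call only fires if a key is actually set:

```python
import json
import os
import urllib.request

# generateContent REST endpoint that the google-genai SDK wraps; the model
# name below is an assumption for illustration.
ENDPOINT = "https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent"

def build_request(model: str, prompt: str):
    """Return the URL and JSON payload for a generateContent call."""
    url = ENDPOINT.format(model=model)
    payload = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, payload

# Only attempt the network call if a key is present in the environment.
api_key = os.environ.get("GEMINI_API_KEY")
if api_key:
    url, payload = build_request("gemma-3-27b-it", "Say hello in three languages.")
    req = urllib.request.Request(
        f"{url}?key={api_key}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["candidates"][0]["content"]["parts"][0]["text"])
```

With the SDK installed, the equivalent is a single `client.models.generate_content(...)` call against the same model name.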
“I had to try this. Gemini 2.0 Flash Experimental with image output 🤯 https://x.com/fofrAI/status/1899927094727000126
“It has been fun to sit a few desks away from @m__dehghani and see the team’s progress in conjuring native image generation for Gemini 2 🖼️ It’s a very different experience to text-to-image models!” / X https://x.com/jack_w_rae/status/1900334465945395242
“🤯 Gemma 3 is available on Ollama! Multimodal is here for Gemma. 1B (text-only): ollama run gemma3:1b 4B: ollama run gemma3:4b 12B: ollama run gemma3:12b 27B: ollama run gemma3:27b https://x.com/ollama/status/1899742981676007791
“My Gemma-3 analysis: 1. 1B text only, 4, 12, 27B Vision + text. 14T tokens 2. 128K context length further trained from 32K 3. Removed attn softcapping. Replaced with QK norm 4. 5 sliding + 1 global attn 5. 1024 sliding window attention 6. RL – BOND, WARM, WARP Detailed analysis: https://x.com/danielhanchen/status/1899735308180267176
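Points 4 and 5 above are largely a KV-cache play: with five sliding-window layers (1024-token window) per global layer, the cache at 128k context shrinks to a fraction of an all-global baseline. A back-of-envelope sketch, ignoring per-layer head/dim differences (which the report also varies):

```python
def kv_cache_fraction(local_per_global: int = 5,
                      window: int = 1024,
                      context: int = 128 * 1024) -> float:
    """Fraction of KV cache kept vs. all-global attention, for a repeating
    block of `local_per_global` sliding-window layers plus one global layer.
    Back-of-envelope: assumes identical KV heads/dims across layers."""
    layers = local_per_global + 1
    full = layers * context                                     # every layer caches full context
    mixed = local_per_global * min(window, context) + context   # SWA layers cap at the window
    return mixed / full

print(f"{kv_cache_fraction():.1%}")  # → 17.3% of the all-global cache at 128k
```

At short contexts the savings vanish (once the context fits inside the window, every layer caches everything), which is why the ratio matters most at the 128k end.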
“Gemini Embedding Generalizable Embeddings from Gemini https://x.com/_akhaliq/status/1899674020880027752
“Google debuted AI Mode, a Search Labs experiment that turns traditional search into a conversational experience —Powered by custom Gemini 2.0 model —Runs parallel searches across diverse sources —Gives detailed yet well-reasoned responses https://x.com/rowancheung/status/1897554300907254107
“google just casually killing other models with a smaller and better model yet another Wednesday? https://x.com/mervenoyann/status/1899774973725540627
“Our model Gemini Robotics-ER allows roboticists to tap into the embodied reasoning of Gemini. 🌐 For example, if a robot came across a coffee mug, it could detect it, use ‘pointing’ to recognize parts it could interact with – like the handle – and recognize objects to avoid when https://x.com/GoogleDeepMind/status/1899839638493077892
“Gemini Robotics can solve multi-step tasks that require significant dexterity, such as folding origami 📄 packing a lunch box 🥗 and more. See it in action ↓ https://x.com/GoogleDeepMind/status/1899839627139383762
“Google DeepMind releases Gemma 3 family of open models!! 1B trained on 2T, 4B trained on 4T, 12B trained on 12T, and 27B params trained on 14T tokens vision input with 400M SigLIP, context length of 128k 27B ranks 9th on LMArena, outperforming o3-mini, DeepSeek V3, Claude 3.7 https://x.com/iScienceLuvr/status/1899729481176138122
“Wow – Google expected to DOUBLE in value after Anthropic reaches its future valuation of only $14T 🤯 https://x.com/nearcyan/status/1899624995413950910
“New 42 min deep dive: Using Claude, Gemini & Grok to build a 3D city simulation, dynamic video annotations, smart shot lists, and even an AR HUD overlay. Chapters below: 00:00 LLM Superpowers You’re Not Using 00:26 Claude 3.7 Overview 02:38 3D City Vibe Coding for Creatives https://x.com/bilawalsidhu/status/1898020814576066585
“Text Generation Inference 3.2 is out. Gemma 3 support! And a lot of improvements around tool calling and agents. And lots of work toward many-hardware/many-backend support!” / X https://x.com/narsilou/status/1899813420007919925
““The woman drives the spaceship through a space battle and engages the enemy 1:1” imo veo 2 is currently the best ai video model, i mean just look at this… https://x.com/bilawalsidhu/status/1897484382182445190
“We’re also launching ShieldGemma 2, a powerful 4B image safety checker built on the Gemma 3 foundation. 🛡️ It provides a ready-made solution for image safety, which can be further customized. Find out more → https://x.com/GoogleDeepMind/status/1900549638802813312
“google maps 3d data is perfect for quick environments in blender https://x.com/bilawalsidhu/status/1899628902361751746
“No more PaliGemma: Gemma3 goes multimodal! This is the last big thing I helped with before leaving. Basically as good as Gemini1.5 Pro! Happy to see it made it all the way through and they didn’t kick me off the author list, thanks @armandjoulin. https://x.com/giffmana/status/1899776751925920181