Image created with gemini-2.5-flash-image, prompted via claude-sonnet-4-5. Image prompt: Photorealistic 35mm cinema shot of child aged 7 viewed from side angle sitting on plush cream rug in warm-lit bedroom, surrounded by panoramic arc of TV screens showing security camera feeds and local news broadcasts with neighborhood maps, scattered paper street maps and community newsletters on floor, small window showing actual residential street outside, warm peach and lavender tones with cool blue screen glow, shallow depth of field, category title LOCAL in bold text at top, cozy intimate lighting, natural skin texture and fabric detail
After a day of gemini 3 flash in antigravity, I think I’m convinced. It’s really good to have a lightning fast and smart model for daily work. I’ve been pretty adamant that slower is ok if the model is smarter, but the models have produced just slightly too much cruft and I… https://x.com/andrew_n_carr/status/2001487412749570549
For a fast model, Gemini 3 Flash offers incredible performance, allowing us to provide frontier intelligence to everyone globally. Try the ‘fast’ mode from the model picker in the @GeminiApp – it’s shockingly speedy AND smart. Best pound-for-pound model out there ⚡️⚡️⚡️ https://x.com/demishassabis/status/2001325072343306345
For developers, it combines advanced coding skills with the low latency needed for building interactive apps. On SWE-bench Verified – a benchmark for evaluating coding agents – it outperforms not only the 2.5 series, but also Gemini 3 Pro. Watch 3 Flash give near real-time AI https://x.com/GoogleDeepMind/status/2001321765503377546
Gemini 3 Flash gives you frontier intelligence at a fraction of the cost. ⚡ Here’s how it’s built for speed and scale 🧵 https://x.com/GoogleDeepMind/status/2001321759702663544
Gemini 3 Flash is a bigger deal than Gemini 3 Pro. While 2.5 Flash is the most used model this year, it struggled with tool calling. But Gemini 3 Flash gets it. – tool calling feels natural to the model – it’s faster than turbo models + way smarter too (best for real time… https://x.com/0xdevshah/status/2001330346961604732
Gemini 3 Flash is beating 3 Pro on SWE bench verified Hmm what https://x.com/MS_BASE44/status/2001698991801798927
Gemini 3 Flash is starting to roll out in the @GeminiApp and across Google products. Learn more ↓ https://x.com/Google/status/2001746491275083925
Gemini 3 Flash punches way above its weight class, surpassing 2.5 Pro on many benchmarks, while being much cheaper, faster, and more token efficient. https://x.com/OfficialLoganK/status/2001323840459456715
Google has released Gemini 3 Flash Preview – 2x cheaper than Gemini 3 Pro Preview, with only a 2-point drop in our Intelligence Index, making it the most intelligent model for its price range @GoogleDeepMind gave us pre-release access to Gemini 3 Flash Preview. The model scores https://x.com/ArtificialAnlys/status/2001335953290670301
Gemini 3.0 Flash is an absolutely fantastic release. Consider this: It costs a quarter (1/4) of what Gemini 3.0 Pro costs and achieves similar results to the Pro model in almost all benchmarks, such as HLE and ARC-AGI 2. In other benchmarks, it even outperforms the more https://x.com/kimmonismus/status/2001326181875154983
Introducing Gemini 3 Flash: Benchmarks, global availability https://blog.google/products-and-platforms/products/gemini/gemini-3-flash/
Starting today, Gemini can serve up local results in a rich, visual format. See photos, ratings, and real-world info from @GoogleMaps, right where you need them. https://x.com/GeminiApp/status/1999631529379791121
Distributed inference in MLX on Apple silicon will be much faster in Tahoe 26.2 https://x.com/awnihannun/status/1999596403472105975
Devstral 2 is now available in MLX on Apple Silicon. Run a local SOTA coding model on your MacBook. Please update to LM Studio 0.3.35 first! Have a nice weekend! 👾 https://x.com/lmstudio/status/1999648656958296119
Introducing Real-time Transcription with Speakers! – Step change in accuracy, surpassing top cloud APIs – Faster than real-time on Mac and iPhone – Still under 3 watts when all features are enabled Available in Argmax SDK 2.0 for early access! Benchmarks and details in comments. https://x.com/argmax/status/2001296557556040028
🌎 Google’s FunctionGemma is a 270M model that’s fine-tuned by Google for function calling. Try it on Ollama’s latest v0.13.5: ollama pull functiongemma. Examples on the model page 👇👇👇 https://x.com/ollama/status/2001705006450565424
Fine-tune Google’s FunctionGemma for mobile, with agents, on Colab, locally, or on Hugging Face. Google DeepMind has just released FunctionGemma, and anyone can fine-tune it with TRL. This is the model: – uses the Gemma 3 270M architecture + adapted chat template – specifically for https://x.com/ben_burtenshaw/status/2001704049490489347
FunctionGemma – a google Collection https://huggingface.co/collections/google/functiongemma
Google is preparing for a new open source release on @huggingface Also noticed just recently that Gemma models are not available on AI Studio anymore. What do you expect? 👀 https://x.com/testingcatalog/status/2000597370707611991
I’m very excited to release Gemma Scope 2: Sparse Autoencoders, and transcoders on every layer of every Gemma 3 model: 270M to 27B, base and chat. We want to make it easier to do deep dives into interesting model behaviour, I’m excited to see what you all can do with them https://x.com/NeelNanda5/status/2002080911693643806
Introducing FunctionGemma 🤏270m model for function calling 📱can run in your phone, browser or other devices 🤖designed to be specialized for your own tasks https://x.com/osanseviero/status/2001704034667769978
Introducing T5Gemma 2, the next generation of encoder-decoder models 🚀 Built on top of Gemma 3, we were able to build compact models at 270M-270M, 1B-1B, and 4B-4B sizes. While most models today are decoder-only, T5Gemma 2 is the first (I’m aware of) multimodal, https://x.com/osanseviero/status/2001723652635541566
To build safer AI, we need to understand how models “think”. 🧠 Enter Gemma Scope 2, a new set of tools to interpret Gemma 3: our family of lightweight open models. It can help researchers trace internal reasoning, debug complex behaviors and identify risks → https://x.com/GoogleDeepMind/status/2002018669879038433
Update: Gemma 4 incoming! Let’s go, google! https://x.com/kimmonismus/status/2000537345326452790
We made 3 @UnslothAI tool calling notebooks for FunctionGemma! 1. Fine-tuning it to make it reason before tool calling 2. Multi-turn tool calling 3. Tool calling fine-tuning to enable mobile actions Guide: https://x.com/danielhanchen/status/2001713676747968906
🚨BREAKING: Leaderboard updates for Text, Vision & WebDev Gemini-3-Flash by @GoogleDeepMind is now ranked top 5 across Text, Vision, and WebDev, making it the most cost-efficient frontier model (input $0.5 and output $3/MTokens). Gemini-3-Flash highlights: 🔹 Top 5 across Text, https://x.com/arena/status/2001322123730788698
📢 New Model(s) Drop: Gemini 3 Flash Preview is now live on Yupp. The latest from @GoogleDeepMind offers frontier-level intelligence with reduced costs and more speed. Ready to test it out? It’s available on Yupp in several variants! https://x.com/yupp_ai/status/2001340530828206586
Gemini 3 Flash above GPT-5.2 on EpochAI’s ECI https://x.com/scaling01/status/2001850867620946169
Gemini 3 Flash is now available ⚡ Since introducing the Gemini 3 series last month, we’ve seen you vibe code simulations to learn about complex topics, build and design interactive websites and understand multimodal content. Now we’re introducing Gemini 3 Flash, our latest https://x.com/Google/status/2001322381533409733
Gemini 3 Flash is now available in Cursor! We’ve found it to work well for quickly investigating bugs. https://x.com/cursor_ai/status/2001326908030804293
Gemini 3 Flash is now available to all Perplexity Pro and Max subscribers. https://x.com/perplexity_ai/status/2001447398317724153
Gemini 3 Flash is now rolling out to @code developers! https://x.com/pierceboggan/status/2001327058425917795
Gemini 3 Flash is rolling out globally today. ⚡⚡⚡ Let us know how you’re using it in the replies ↓ https://x.com/GeminiApp/status/2001412101286563865
Gemini 3 Flash is the new default for vibe coding https://x.com/OfficialLoganK/status/2001352972379549721
Gemini 3 Flash Low on LisanBench – low does obviously worse than high – still inefficient reasoning, ~2x lower score for ~2x less tokens – validity ratios are absolutely abysmal https://x.com/scaling01/status/2001359254578753852
Gemini 3 Flash on the @ArtificialAnlys intelligence benchmark, the most cost per intelligence efficient model in the world!!! https://x.com/officiallogank/status/2001368440016392314
Gemini 3 Flash Preview ranking 5th on SimpleBench ahead of GPT-5.2 Pro https://x.com/scaling01/status/2002024316842512812
Gemini 3 Flash ranks #3 in the LMArena leaderboard (which is especially notable given its API pricing and its low latency). https://x.com/JeffDean/status/2001335803642024157
Gemini 3 Flash rolling out to @code now 🚀 Try it out and let us know what you think! https://x.com/code/status/2001335940934246503
Gemini 3 Flash scores higher than GPT-5.2, Opus 4.5 and Gemini 3 Pro on SWE-Bench Verified ??? https://x.com/scaling01/status/2001803023811797433
Gemini 3 Flash takes the #1 spot on Toolathlon https://x.com/scaling01/status/2001849103647674538
Gemini 3.0 Flash achieved a very impressive 161.8/190 on one of my vibe tests, the Korean Sator Square Test (KSST), placing it 2nd or 3rd among all the models I have tested so far. This is slightly higher than Gemini 3.0 Pro, and the difference is within the margin of error. https://x.com/Hangsiin/status/2001341564145250770
Going live with the team in a few to talk about Gemini 3 Flash : ) send us your questions! https://x.com/OfficialLoganK/status/2001372183663378723
How good is Gemini 3 Flash? “We ran a behind-the-scenes test with 3 Flash. Because of how much faster it was, retention went up, the number of things people were building went up, and engagement went up.” https://x.com/_philschmid/status/2001492609114456471
🗣️ “Help me build an app…” That’s all it takes. Watch Gemini 3 Flash turn a single voice prompt into a functional prototype in the @GeminiApp. https://x.com/Google/status/2002123256854425918
Congrats to the Gemini team on the great release and exceptional SWE-bench Verified numbers! 76.2% (3 Pro) vs. 78% (3 Flash), +6 task instances – a whole lot in the realm of the last quarter of SWE-bench. mini-SWE-agent + Gemini 3 Flash coming soon! https://x.com/jyangballin/status/2001336879120363639
Gemini 3 Flash across different test-time compute levels (green line below) represents a new score/cost Pareto frontier on ARC-AGI-2. Congrats to @demishassabis and @sundarpichai on the launch! https://x.com/fchollet/status/2001330643423449409
Gemini 3 Flash is out ⚡️- and we built a CLI agent powered by this latest model to perform work over your filesystem 🤖 Basically all the file capabilities within Claude Code in a lighter form factor. Shoutout to @itsclelia for the launch demo, check it out! Repo: https://x.com/jerryjliu0/status/2001335494534402521
“How can flash beat pro??” -> the answer is RL! flash is not just a distilled pro. we’ve had lots of exciting research progress on agentic RL which made its way into flash but was too late for pro. can’t wait to finally bring them to pro👀 https://x.com/ankesh_anand/status/2002017859443233017
Introducing Gemini 3 Flash ⚡️Performance close to Gemini 3 Pro, with great multimodal and tool use quality ⚡️3x faster than Gemini 2.5 Pro, while cheaper and better at most benchmarks ⚡️LMArena score of 1477 (top 3 model) The time to build is now (and yes, there’s a free tier) https://x.com/osanseviero/status/2001323721232163053
Introducing Gemini 3 Flash, our frontier intelligence model, available at scale for everyone. It excels at coding, tool calling, and is stronger than 2.5 Pro across most metrics!! ⚡️ Available in the API at $0.50 in / 1M tokens and $3.00 out / 1M tokens. https://x.com/OfficialLoganK/status/2001322275656835348
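As a rough guide to what those rates mean in practice, here is a minimal cost calculator using the per-million-token prices quoted above (the helper function itself is illustrative, not part of any official SDK):

```python
# Back-of-envelope cost helper using the quoted API rates:
# $0.50 per 1M input tokens, $3.00 per 1M output tokens.
IN_PRICE = 0.50   # USD per 1M input tokens
OUT_PRICE = 3.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one API call at the quoted rates."""
    return input_tokens / 1e6 * IN_PRICE + output_tokens / 1e6 * OUT_PRICE

# e.g. a 10k-token prompt with a 2k-token response:
print(f"${request_cost(10_000, 2_000):.4f}")  # $0.0110
```

At these prices, even a million-token prompt with a million-token response comes to $3.50, which is what "frontier intelligence at a fraction of the cost" cashes out to.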
Introducing Gemini 3 Flash! ⚡️⚡️⚡️ Frontier intelligence built for speed at a fraction of the cost. Here’s ~4 minutes of demos. https://x.com/addyosmani/status/2001324727504359745
Speed test: Gemini 3 Flash vs. Gemini 2.5 Pro ⏱️ We put our new Gemini 3 Flash model (left) up against Gemini 2.5 Pro (right) in @GoogleAIStudio, so you can watch the difference in near real-time. Watch them go head-to-head ↓ https://x.com/Google/status/2001397324551946523
Study with help from Gemini 3 Flash. Upload an audio recording of yourself explaining a difficult concept and Gemini will identify knowledge gaps, create a custom quiz, and provide instant assessments and explanations for each question. https://x.com/GeminiApp/status/2001351746338329063
Today, we’re releasing an updated Gemini 2.5 Flash Native Audio model. Now available via the Live API 🗣 https://x.com/googleaidevs/status/1999539531826036973
Watch Gemini 3 Flash vs Gemini 3 Pro playing Pokemon Crystal : ) https://x.com/OfficialLoganK/status/2001428651121025391
We’re back in a Flash ⚡ Gemini 3 Flash is our latest model with frontier intelligence built for lightning speed, and pushing the Pareto Frontier of performance and efficiency. It outperforms 2.5 Pro while being 3x faster at a fraction of the cost. With this release, Gemini 3’s https://x.com/sundarpichai/status/2001326061787942957
We’re expanding the Gemini 3 family with the launch of Gemini 3 Flash. This model: — Combines Gemini 3’s Pro-grade reasoning with Flash-level latency, efficiency, and cost — Delivers frontier-level performance on PhD-level reasoning and knowledge benchmarks — Is our most https://x.com/googleai/status/2001323069105692914
we’re going live at 11:30am PT with the team for a deep dive on gemini 3 flash hosted by @OfficialLoganK, @joshwoodward, @tulseedoshi and more post your questions below ⬇️ https://x.com/GoogleAIStudio/status/2001330099841556490
We’ve pushed out the Pareto frontier of efficiency vs. intelligence again. With Gemini 3 Flash ⚡️, we are seeing reasoning capabilities previously reserved for our largest models, now running at Flash-level latency. This opens up entirely new categories of near real-time https://x.com/JeffDean/status/2001323132821569749
With Gemini 3 Flash, you can quickly build fun, useful apps from scratch using your voice without any prior coding knowledge. Just dictate to Gemini on the go, and it can transform your unstructured thoughts into a functioning app in minutes. https://x.com/GeminiApp/status/2001760080518353261
NEW: Google releases FunctionGemma, a lightweight (270M), open foundation model built for creating specialized function calling models! 🤯 To test it out, I built a small game: use natural language to solve fun physics simulation puzzles, running 100% locally in your browser! 🕹️ https://x.com/xenovacom/status/2001703932968452365
⚡️ Gemini 3 Flash is now available on Ollama’s cloud: ollama run gemini-3-flash-preview:cloud https://x.com/ollama/status/2001372370469290280
Despite the small size, this is by far the best model we’ve released! And as always, we don’t just release the model, but we release pretty much everything you need to reproduce it. If you’re interested in math reasoning, we have two new datasets for you to try. https://x.com/igtmn/status/2000591849669693931
Local, cloud, and background agents, all in a unified experience in @code https://x.com/code/status/1999575448087396563
Nemotron 3 Nano runs nicely with mlx-lm on an M4 Max. Could be a great model for local use on Mac: MoE + hybrid attention make it fast even for very long context. Generating in realtime with 4-bit model: https://x.com/awnihannun/status/2000718403380691417
Nemotron 3 Nano for MLX is now available in LM Studio. General purpose reasoning and chat model trained from scratch by @nvidia. 30B, 3.5B active MoE runs blazingly fast on Apple Silicon 🍎🚀 https://x.com/lmstudio/status/2001015687003963730
You can now fine-tune LLMs and deploy them directly on your phone! 🚀 We collabed with PyTorch so you can export and run your trained model 100% locally on your iOS or Android device. Deploy Qwen3 on Pixel 8 and iPhone 15 Pro at ~40 tokens/sec. Guide: https://x.com/UnslothAI/status/2001305185206091917
One smartphone at the core of a robot. The idea is simple but interesting: use a used smartphone as the main controller for a hobby robot, in this case a hexapod. Instead of adding separate boards and sensors, the phone already brings a lot of what robots need: – IMU and https://x.com/IlirAliu_/status/1999917129575833795
Just dropped a new text embedding methodology. Fast as heck on CPU only and still great for document similarity analysis, clustering, and classification. How? Use a tiny ReLU network to approximate a big transformer from lexical (term frequency / bag of words) features. https://x.com/lukemerrick_/status/1999516702808375791
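The idea in that last link can be sketched in a few lines: a tiny ReLU network maps bag-of-words term counts to dense unit-norm embeddings, standing in for what a large transformer would produce. This is an illustrative sketch, not the author's released code; all names, weights, and dimensions here are hypothetical (real weights would be trained to match the transformer's embeddings).

```python
import numpy as np

# Hypothetical stand-in for trained weights: in practice W1/W2 would be
# fit so that embed() approximates a big transformer's sentence embeddings.
rng = np.random.default_rng(0)
vocab_size, hidden, dim = 1000, 256, 64
W1 = rng.normal(scale=0.02, size=(vocab_size, hidden))
W2 = rng.normal(scale=0.02, size=(hidden, dim))

def embed(counts: np.ndarray) -> np.ndarray:
    """Map term-frequency vectors of shape (batch, vocab) to unit-norm embeddings."""
    h = np.maximum(counts @ W1, 0.0)  # single ReLU hidden layer
    e = h @ W2
    return e / (np.linalg.norm(e, axis=-1, keepdims=True) + 1e-12)

# Three fake documents as sparse term-count vectors:
docs = rng.poisson(0.05, size=(3, vocab_size)).astype(float)
emb = embed(docs)
print(emb.shape)  # (3, 64)
```

Because inference is two matrix multiplies over sparse count vectors, this runs fast on CPU, which is the claimed advantage for similarity, clustering, and classification workloads.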