Image created with gemini-3.1-flash-image-preview and claude-sonnet-4-5. Image prompt: Using the provided reference image, keep the exact left-third close-crop profile composition, deep blue-purple cinematic lighting, emotional heaviness, atmospheric smoke dissolving rightward, and glitter-catching-light detail, but replace the central subject with a downcast figure wearing Google’s signature four-color eyeglasses (red yellow blue green temples), fine glitter scattered across the glass lenses and frames, maintaining the melancholic post-celebration mood and HBO prestige drama aesthetic, with ‘google’ in thin lowercase white Helvetica Neue Light overlaid on the misty right two-thirds.
Google tests Jules V2 agent capable of taking bigger tasks
https://www.testingcatalog.com/google-prepares-jules-v2-agent-capable-of-taking-bigger-tasks/
Gemma 4 E2B on iPhone 17 Pro Max in AI Edge Gallery! Using skills to query wikipedia. 🔥 App link below. [cr: @mweinbach]
https://x.com/_philschmid/status/2041171039598543064
Insane. I’m running Gemma 4 on my iPhone 16 Pro Max. Vibe coded the app in under 1h. Singularity is here.
https://x.com/enjojoyy/status/2040563245925151229
Gemma 4 E4B is impressive for an on-device LLM: GPT-4-ish quality, though expect hallucinations. Here is: “List five sociological theories starting with u and what they are. Then describe them in a rhyming verse.” It’s in real time; the last one is a bit of a stretch, but not bad!
https://x.com/emollick/status/2040851723774808310
We’ve signed an agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, coming online starting in 2027, to train and serve frontier Claude models.
https://x.com/AnthropicAI/status/2041275561704931636
I cancelled my Claude subscription. Gemma 4 is free, runs locally, and hits 80% … The gap is basically gone. Why are you still paying? 💵💰
https://x.com/AlexEngineerAI/status/2040260903053197525
GLM-5.1 by @Zai_org is now #3 in Code Arena – surpassing Gemini 3.1 and GPT-5.4, and now on par with Claude Sonnet 4.6. The first frontier level open model to break into the top 3. It’s a major +90 point jump over GLM-5, and +100 over Kimi K2.5 Thinking. Huge congrats to
https://x.com/arena/status/2042611135434891592
GLM-5.1 is here! Try it on OpenClaw 🦞🦞🦞: `ollama launch openclaw --model glm-5.1:cloud`. Claude Code: `ollama launch claude --model glm-5.1:cloud`. Chat with the model: `ollama run glm-5.1:cloud`.
https://x.com/ollama/status/2041556572334428576
OpenAI, Anthropic, Google Unite to Combat Model Copying in China – Bloomberg
https://www.bloomberg.com/news/articles/2026-04-06/openai-anthropic-google-unite-to-combat-model-copying-in-china
Google has the equivalent of roughly 5 million Nvidia H100 GPUs! Therefore, it’s no surprise that Anthropic’s needs are now benefiting Google. As I said yesterday, Google is exceptionally well-positioned: strong revenue streams, its own chips, and above all: distribution.
https://x.com/kimmonismus/status/2041464540446228484
Your inbox is your business. Gemini in @gmail helps you without holding onto your data or using your personal emails to train our foundational AI models. Gmail VP @blakebarnes explains how 👇
https://x.com/Google/status/2041546713001861185
Who owns the world’s compute? Our new Chip Ownership hub shows that Google leads, holding around 25% of all compute sold since 2022.
https://x.com/EpochAIResearch/status/2041600102654148673
Google controls the most AI computing power, driven by its custom TPUs
https://epochai.substack.com/p/google-controls-the-most-ai-computing
Generate 3D models and interactive charts with the Gemini app
https://blog.google/innovation-and-ai/products/gemini-app/3d-models-charts/
Google quietly launched an AI dictation app that works offline | TechCrunch
Google’s Gemma 4 E2B running on-device on iPhone 17 Pro Gemma 4 is built from the same research as Gemini 3, has image understanding capabilities and can reason if needed Running at ~40tk/s with MLX optimized for Apple Silicon
https://x.com/adrgrondin/status/2040512861953270226
Lots of people want Gemma 4! Google AI Edge is #8 on the iOS App Store for productivity apps.
https://x.com/OfficialLoganK/status/2040874501777317982
Gemma 2 Release – a google Collection
https://huggingface.co/collections/google/gemma-2-release
Gemma 3 Release – a google Collection
https://huggingface.co/collections/google/gemma-3-release
Gemma 4 – a google Collection
https://huggingface.co/collections/google/gemma-4
Gemma 4 is now available in the Gemini API and Google AI Studio. Use `gemma-4-26b-a4b-it` and `gemma-4-31b-it` with the same `google-genai` SDK as Gemini. 📝 Text generation with `generate_content`. 🧭 System instruction + function-calling example. 🖼️ Image understanding example.
https://x.com/_philschmid/status/2041532358969446596
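A minimal sketch of calling these models through the `google-genai` SDK. Only the model IDs and `generate_content` come from the item above; the `build_request` helper and the `GEMINI_API_KEY` guard are illustrative conveniences, not part of the official API.

```python
# Sketch, assuming the google-genai package (pip install google-genai)
# and a GEMINI_API_KEY environment variable for the live call.
import os

MODELS = ["gemma-4-26b-a4b-it", "gemma-4-31b-it"]  # IDs from the post

def build_request(model: str, prompt: str) -> dict:
    """Assemble the keyword arguments for client.models.generate_content."""
    assert model in MODELS, f"unexpected model id: {model}"
    return {"model": model, "contents": prompt}

if __name__ == "__main__":
    req = build_request("gemma-4-26b-a4b-it", "Say hello in one word.")
    if os.environ.get("GEMINI_API_KEY"):
        from google import genai  # live call only when a key is configured
        client = genai.Client()
        response = client.models.generate_content(**req)
        print(response.text)
    else:
        print(req)  # dry run: show the request that would be sent
```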
Google’s new AI can predict flash floods 24 hours before they strike. How it works: > Uses Gemini to extract confirmed flood locations and times from global news > Builds a dataset of past events that never formally existed. > That dataset feeds a neural network > The neural
https://x.com/rowancheung/status/2041172396116476371
Google’s PaperOrchestra AI Converts Lab Notes Into Publication-Ready Research Papers – Decrypt
https://decrypt.co/363837/googles-paperorchestra-ai-converts-lab-notes-into-publication-ready-research-papers
Muse Spark is notably token efficient for its intelligence level. It used 58M output tokens to run the Intelligence Index, comparable to Gemini 3.1 Pro Preview (57M) and notably lower than Claude Opus 4.6 (Adaptive Reasoning, max effort, 157M), GPT-5.4 (xhigh, 120M) and GLM-5
https://x.com/ArtificialAnlys/status/2041913045749002694
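To make the efficiency comparison concrete, the quoted output-token totals normalized against Muse Spark's 58M (figures are the post's, and the dict below is just a scratch calculation):

```python
# Output tokens (millions) used to run the Intelligence Index, per the post.
TOKENS_M = {
    "Muse Spark": 58,
    "Gemini 3.1 Pro Preview": 57,
    "Claude Opus 4.6": 157,
    "GPT-5.4 (xhigh)": 120,
}

# Ratio of each model's token usage to Muse Spark's.
ratios = {m: round(t / TOKENS_M["Muse Spark"], 2) for m, t in TOKENS_M.items()}
```

Claude Opus 4.6 works out to about 2.7x Muse Spark's token budget, GPT-5.4 to about 2.1x.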
Run Gemma 4 locally with OpenClaw 🦀 in 3 steps:
https://x.com/googlegemma/status/2041512106269319328
An open-source Python library for structured data extraction – LangExtract from Google It turns unstructured text into grounded, verifiable structured outputs using LLMs. Every extraction is mapped back to the source, fully traceable and verifiable. LangExtract: – Combines
https://x.com/TheTuringPost/status/2040097129759445439
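The core idea, "every extraction is mapped back to the source," can be illustrated without the library. This is not LangExtract's actual API, just a minimal sketch of source-grounded extraction: each extracted value carries the character span it came from, so any output can be verified against the original text.

```python
# Conceptual sketch of grounded extraction (not LangExtract's API):
# every extraction records its character offsets in the source text.
import re
from dataclasses import dataclass

@dataclass
class Extraction:
    label: str
    text: str
    start: int
    end: int

def extract_dates(source: str) -> list:
    """Extract ISO-style dates and record where each one occurs."""
    return [
        Extraction("date", m.group(), m.start(), m.end())
        for m in re.finditer(r"\d{4}-\d{2}-\d{2}", source)
    ]

doc = "Launched 2026-04-06, patched 2026-04-09."
for e in extract_dates(doc):
    # Grounding check: the recorded span really contains the extracted text.
    assert doc[e.start:e.end] == e.text
```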
Customize your Gemini agent in Colab
https://blog.google/innovation-and-ai/technology/developers-tools/colab-updates/
I am impressed by Gemma 4, there’s a lot of power for an on-device model at fast speeds. But I am not convinced you can get real agentic workflows out of a small model on device. So much depends on model judgement, self-correction, and accuracy. Small models are too weak there.
https://x.com/emollick/status/2040925197767762425
AIE Europe Day 1: Keynotes & OpenClaw/Personal Agents ft Google Deepmind, OpenAI, Vercel, & more – YouTube
Our first successful Gemma 4 Runtime in London with @swyx @patloeber @nick_kango @cormacb and others! 💎Great to go out for a run and talk about Gemma, agents, evals and more
https://x.com/osanseviero/status/2042512059049398785?s=20
There were some exceptionally cool demos from @ollama and omlx using MLX to run Qwen 3.5 and Gemma 4 on Apple silicon. The capabilities of local LLMs and the surrounding ecosystem have come a long way in the past couple years.
https://x.com/awnihannun/status/2042456446122803275
Gemma-4 finetuning 2B, 4B, 26B, 31B all work in Unsloth! We also fixed a few issues: 1. Grad accumulation no longer causes losses to explode 2. Index Error for 26B and 31B for inference 3. use_cache=False had gibberish for E2B, E4B 4. float16 audio -1e9 overflows on float16
https://x.com/danielhanchen/status/2041516671119327590
Introducing Gemma 4, our series of open-weight (Apache 2.0 licensed) models, which are byte for byte the most capable open models in the world! Gemma 4 is built to run on your hardware: phones, laptops, and desktops. Frontier intelligence with a 26B MoE and a 31B dense model!
https://x.com/OfficialLoganK/status/2039735606268314071
People underestimate the level of collaboration that needs to happen for a model such as Gemma 4 to land Before the launch, we worked with HF, VLLM, llama.cpp, Ollama, NVIDIA, Unsloth, Cactus, SGLang, Docker, CloudFlare, and so many others This ecosystem is amazing 🔥
https://x.com/osanseviero/status/2041154555530932578
Gemma 4 31B, quantized and evaluated. Instruction-following evals are live on our NVFP4 and FP8-block model cards. Results look great. Reasoning and vision evals coming later this week. NVFP4: https://t.co/GIc7y1Abkc FP8:
https://x.com/RedHat_AI/status/2040766645480628589
Gemma 4 is #1 on @huggingface!
https://x.com/ClementDelangue/status/2040911131108069692
Gemma 4 is a beast.
https://x.com/Yampeleg/status/2040495537598648357
Speculative decoding for Gemma 4 31B (EAGLE-3) A 2B draft model predicts tokens ahead; the 31B verifier validates them. Same output, faster inference. Early release. vLLM main branch support is in progress (PR #39450). Reasoning support coming soon.
https://x.com/RedHat_AI/status/2042660544797110649
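A toy sketch of the draft-and-verify idea behind EAGLE-style speculative decoding (not vLLM's or Red Hat's implementation). The draft model proposes k tokens; the verifier accepts the longest prefix it agrees with and supplies the correction at the first mismatch. With greedy decoding, the output is identical to running the verifier alone; only the wall-clock speed changes. In a real system the verifier scores all drafted tokens in one forward pass; it is shown sequentially here for clarity.

```python
# Toy greedy-decoding variant of speculative decoding.
# draft_next / verify_next stand in for the 2B draft and 31B verifier:
# each maps a token context to the next token.
def speculative_step(draft_next, verify_next, prefix, k=4):
    """Return the tokens accepted in one draft-and-verify step."""
    # 1. Cheap draft model proposes k tokens ahead.
    proposal, ctx = [], list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        proposal.append(t)
        ctx.append(t)
    # 2. Large verifier checks the proposal token by token.
    accepted, ctx = [], list(prefix)
    for t in proposal:
        v = verify_next(ctx)
        if v != t:
            accepted.append(v)   # verifier overrides the first mismatch
            break
        accepted.append(t)       # agreement: keep the drafted token
        ctx.append(t)
    return accepted
```

When draft and verifier agree, all k tokens land for roughly the cost of one verifier pass; when they disagree, progress falls back to one (corrected) token.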
Gemma 4 is the #1 trending model on @huggingface 🤗
https://x.com/GlennCameronjr/status/2040529333794824456
Ollama’s cloud is now the best place to run Gemma 4 in the cloud! Available through a subscription for developers and third-party integrations. 🦞 OpenClaw: `ollama launch openclaw --model gemma4:31b-cloud`. Claude Code: `ollama launch claude --model gemma4:31b-cloud`. Run the model
https://x.com/ollama/status/2041238722914685336
With GLM-5.1, https://t.co/nvW0zf0SAH maintains the #1 open model rank in Code Arena and is now within ~20 points of the top overall, while outperforming Claude Sonnet 4.6, Opus 4.5, GPT-5.4 High, and Gemini-3.1 Pro. Open models are now competitive at the frontier.
https://x.com/arena/status/2042643933768151485
Could not be more bullish on Google, so much good stuff cooking : ) going to be a fun next few months.
https://x.com/OfficialLoganK/status/2041692053575217220
Today we are rolling out service tiers in the Gemini API! You can now (optionally) set “flex” or “priority”. Flex saves you ~50% on API costs (with lower reliability); priority costs ~80% more but gives your requests higher priority!
https://x.com/OfficialLoganK/status/2039795986713776135
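A back-of-envelope helper for the tier multipliers quoted above: flex about 50% cheaper, priority about 80% more expensive than standard. The multipliers are the tweet's approximate figures, not an official price sheet.

```python
# Approximate cost multipliers from the announcement (not official pricing).
TIER_MULTIPLIER = {"standard": 1.0, "flex": 0.5, "priority": 1.8}

def tier_cost(standard_cost: float, tier: str) -> float:
    """Estimated cost of a workload under the chosen service tier."""
    return standard_cost * TIER_MULTIPLIER[tier]
```

For a workload that costs $100 at the standard tier, flex would land around $50 and priority around $180.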
TorchTPU: Running PyTorch Natively on TPUs at Google Scale – Google Developers Blog
https://developers.googleblog.com/torchtpu-running-pytorch-natively-on-tpus-at-google-scale/
Projects in the @GeminiApp are now live, with a fun twist…. Notebooks! Enjoy the NotebookLM inspired experience.
https://x.com/OfficialLoganK/status/2042025888053702911
… @NASA, @GoogleDeepMind and more at @YCombinator for a general-purpose robotics hackathon: each team will have a robot and compete across all AI modalities to make the coolest AI project! Most hackathons focus on either manipulation, voice or navigation… @innate_bot robots
https://x.com/IlirAliu_/status/2040850420612935821
[🧵1/12] We evaluated Gemini 3.1 Pro and its Deep Think mode on regional contests of International Mathematical Olympiad, International Collegiate Programming Contest, and International Olympiad in Informatics in 8 languages. Deep Think beats/matches competitors on all contests.
https://x.com/conglongli/status/2041519526110785657
“Can you make a robot actually do something useful” ❗️Saturday 11 April 👉 9:00 – 17:00 GMT-7 📍 San Francisco, California Registrations here:
https://t.co/cQagedveVM @innate_bot, @GoogleDeepMind, @NASA, @ycombinator
https://x.com/IlirAliu_/status/2040514676392337884
• every team gets a robot • you build across all AI modalities at once • prize goes to the most complete real-world use case General-purpose robotics leaving the lab! The team around @innate_bot is bringing NASA, Google DeepMind, Scale AI and more to Y Combinator for a
https://x.com/IlirAliu_/status/2040126511064523253