Image created with gemini-3.1-flash-image-preview, prompted via claude-sonnet-4-5. Image prompt: Vintage 1990s novelty t-shirt screen print design in single-color deep red ink on worn mustard-yellow cotton fabric showing a simple bold illustration of a wooden lifeguard tower with oversized binoculars on top, large integrated text reading GOOGLE across the tower structure, small rescued beach ball at base, slightly imperfect printed texture with minor fabric stains, retro cartoon outlines, nostalgic beach town charm

NotebookLM: Do a deep research report and make a video telling me exactly how to take over Rome if I time travelled to 66 BC with a single backpack. Actually pretty fun to watch and gets a lot of historical details in as well.
https://x.com/emollick/status/2031405314889654476

NotebookLM: Do a deep research report and make a video where a consultant gives Sauron a strategy for actually winning the War of the Ring: “All you need to do is sign off to put a simple door on your volcano.” The new video generation feature for NotebookLM is very impressive.
https://x.com/emollick/status/2031229858236232065

Finally @googlechrome v146 is out with web MCP support. I can now have a @LangChain_JS Deep Agent constantly browse through my @X feed in the background and update a daily summary that I review at the end of the day instead of constantly scrolling through the app 🙌 Check out:
https://x.com/bromann/status/2032554703863820325

gemini embedding 2 brings text, images, audio, video, and docs into a single vector space, enabling search across all your media at once and finding semantic matches regardless of the data format. see it in action with our multimodal search demo ⬇️
https://x.com/GoogleAIStudio/status/2032145393967038583
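The appeal of a single vector space is that retrieval becomes one nearest-neighbor search over all modalities. Below is a minimal, self-contained sketch of that idea using plain numpy and random vectors standing in for real embeddings; it is an illustration of cosine-similarity search in a shared space, not the Gemini API itself.

```python
import numpy as np

def cosine_top_k(query_vec, doc_vecs, k=3):
    """Rank stored items by cosine similarity to the query vector."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    top = np.argsort(scores)[::-1][:k]
    return top, scores[top]

# Toy corpus: pretend the rows are embeddings of an image, an audio
# clip, and a PDF page -- all living in the same vector space.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(3, 8))
query = corpus[1] + 0.01 * rng.normal(size=8)  # query is near the audio clip

idx, scores = cosine_top_k(query, corpus, k=1)
print(idx[0])  # nearest item is the audio clip at index 1
```

With real multimodal embeddings, the only change is that `corpus` and `query` come from the embedding model instead of a random generator; the search logic stays identical regardless of which modality each row came from.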

Gemini Embedding 2: Our first natively multimodal embedding model https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-embedding-2/

Say hello to Gemini Embedding 2, our new SOTA multimodal model that lets you bring text, images, video, audio, and docs into the same embedding space! 👀
https://x.com/OfficialLoganK/status/2031411916489298156

What if one embedding model could understand text, images, video, audio, and PDFs all at once? Excited to share Gemini Embedding 2, our first fully multimodal embedding model. 🖼️ 5 modalities in a single unified embedding space 🌍 Supports up to 8,192 input tokens, 100+ languages
https://x.com/_philschmid/status/2031412260162138428

@GoogleWorkspace @googledocs @googledrive While we don’t have favorites, the evolution of Gemini in Google Sheets might be our most impressive yet. Gemini in Google Sheets has set a state-of-the-art benchmark, achieving a 70.48% success rate on the full SpreadsheetBench dataset. This performance not only exceeds
https://x.com/GoogleAI/status/2031356545552847091

Introducing the new Gemini powered Docs, Sheets, Slides, and Drive experience featuring AI Overviews, fully editable AI-made slides, and new grounding sources to make writing docs context aware 📃 Available today to G1 Pro and Ultra users : )
https://x.com/OfficialLoganK/status/2031374503599567113

New Gemini updates to make @GoogleWorkspace more personal, helpful and collaborative: choose your sources and create a Doc draft in seconds, build complex Sheets 9X faster, or generate on-brand Slide layouts with a simple prompt. Plus, Drive now generates summarized answers right
https://x.com/sundarpichai/status/2031380361696129261

Write, create and get things done faster in Docs, Sheets, Slides and Drive with these new Gemini features for Google AI Ultra and Pro subscribers 🧵
https://x.com/Google/status/2031359339236143301

The Maps driving experience is also evolving with Immersive Navigation, featuring clearer visuals and intuitive guidance. You’ll be able to see the buildings, overpasses and terrain around you in a vivid 3D view, made possible with help from Gemini models. You’ll also be able
https://x.com/Google/status/2032079598683332742

The biggest barrier for AI applications in Africa isn’t model complexity — it’s the scarcity of data for the 2000+ spoken languages there. We just released WAXAL. This open-access dataset delivers 2,400+ hours of high-quality speech data for 27 Sub-Saharan African languages,
https://x.com/GoogleResearch/status/2032482132619387348

Breast cancer is one of the most common cancers in the world, and in the U.K. it affects 1 in 8 women. We partnered with Imperial College London and the NHS to see if AI can strengthen early detection efforts. The result: Our experimental research AI system identified 25% of the
https://x.com/Google/status/2031734020979998795

Ask Maps and Immersive Navigation: New AI features in Google Maps https://blog.google/products-and-platforms/products/maps/ask-maps-immersive-navigation/

Today @GoogleMaps is getting its biggest upgrade in over a decade. By combining our Gemini models with a deep understanding of the world, Maps now unlocks entirely new possibilities for how you navigate and explore. Here’s what you need to know 🧵
https://x.com/Google/status/2032079594191261938

Watching this, I feel more confident than ever that the future of maps doesn’t look like a map. Every use case Google shows off here isn’t prompted by or delivered in a map.
https://x.com/dbreunig/status/2032096774895387101

Flash flood prediction models need historical data and model training that often doesn’t exist. Our solution: Groundsource, a new AI-powered methodology that uses Gemini to transform 5M+ global reports into a precise dataset of 2.6M+ flood events. This provides a massive,
https://x.com/GoogleResearch/status/2032083465861284161

Today we announce results from a first-of-its-kind study with @BIDMC_Medicine on AMIE, our conversational AI for clinical reasoning. In a real-world clinical study, AMIE was found to be safe, feasible, and well-received by patients. Learn more: https://x.com/GoogleResearch/status/2031777657835139263

🧭 Shipped gogcli 0.12.0: Google in your terminal, now with Workspace Admin, ADC/access-token auth, Docs tab editing + Markdown/HTML export, huge Sheets upgrade, calendar aliases/subscribe, forms watches and slides templates. brew install gogcli
https://x.com/steipete/status/2030894678438985832

Ten years ago, AlphaGo’s legendary match in Seoul heralded the start of the modern era in AI. Its famous ‘Move 37’ signaled to us that AI techniques were ready to tackle real-world problems in areas like science – and ideas inspired by these methods are critical to building AGI
https://x.com/demishassabis/status/2031387915348062567

We just rolled out a way to quickly check the rate limits for other API tiers in @GoogleAIStudio, a building block for future updates to come : )
https://x.com/OfficialLoganK/status/2031871492707762334

filesystem + code sandbox combo eats another modality. remember when o3 destroyed at geoguessr? gemini agentic vision will find location on any street photo you take faster than Liam Neeson can get back his daughter
https://x.com/swyx/status/2017097813520449761

First thoughts on Gemini 2 embedding prices: 🫠 – Text pricing is on the higher side than competition. You should probably not use this model for text-only embeddings coz of the pricing (more below). Use only if you are doing multimodal retrieval. – 0.00079$ per video frame. So
https://x.com/neural_avb/status/2031648857625395321
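The per-frame video price quoted above makes cost a simple function of how many frames you embed. A back-of-the-envelope calculator, taking the tweet's $0.00079/frame figure at face value and assuming (purely for illustration) a 1 frame-per-second sampling rate:

```python
PRICE_PER_FRAME = 0.00079  # USD per video frame, figure quoted in the tweet above

def video_embedding_cost(duration_s, fps_sampled=1.0):
    """Rough embedding cost for a video: sampled frames * per-frame price.
    The 1 fps default sampling rate is an assumption for illustration,
    not a documented value."""
    frames = duration_s * fps_sampled
    return frames * PRICE_PER_FRAME

# A 10-minute clip sampled at 1 frame per second:
print(f"${video_embedding_cost(600):.2f}")  # $0.47
```

At these numbers, an hour of video at 1 fps would run roughly $2.84, which is the kind of math behind the "use only if you are doing multimodal retrieval" advice.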

Gemini Embedding 2 is out! 📹Embeddings for text/images/video/audio/PDFs 🪆Matryoshka embeddings: you can use smaller embedding sizes while retaining high-quality and reducing storage costs 🤗Integrated with your favorite developer tools such as LlamaIndex, Weaviate, and QDrant
https://x.com/osanseviero/status/2031691784074477766
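The Matryoshka property mentioned above means you can keep only the leading dimensions of an embedding and still retrieve well, trading quality for storage. The mechanics are just truncate-and-renormalize, sketched here in numpy (the vector values are made up for illustration):

```python
import numpy as np

def truncate_embedding(vec, dims):
    """Keep the first `dims` dimensions and re-normalize to unit length.
    Matryoshka-trained models pack the most important information into
    the leading dimensions, so the truncated vector remains usable for
    cosine-similarity search at a fraction of the storage cost."""
    small = np.asarray(vec[:dims], dtype=float)
    return small / np.linalg.norm(small)

full = np.array([0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05, 0.02])
half = truncate_embedding(full, 4)
print(half.shape)  # (4,)
```

Halving the dimensionality halves your vector-database footprint, which is why vector stores like the ones listed above expose this as a first-class option.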

Google launches new multimodal Gemini Embedding 2 model https://www.testingcatalog.com/google-launches-new-multimodal-gemini-embedding-2-model/

Introducing Replit Animation Vibecode your next viral video in minutes, powered by Gemini 3.1 Pro. (This video was 100% made in Replit Animation)
https://x.com/Replit/status/2024578806208745637?s=20

Start building with Gemini Embedding 2, our most capable and first fully multimodal embedding model built on the Gemini architecture. Now available in preview via the Gemini API and in Vertex AI.
https://x.com/googleaidevs/status/2031421430718415051

𝗧𝗲𝘅𝘁. 𝗜𝗺𝗮𝗴𝗲𝘀. 𝗩𝗶𝗱𝗲𝗼. 𝗔𝘂𝗱𝗶𝗼. 𝗣𝗗𝗙𝘀. One embedding model. One unified space. @googleaidevs just released 𝗚𝗲𝗺𝗶𝗻𝗶 𝗘𝗺𝗯𝗲𝗱𝗱𝗶𝗻𝗴 𝟮, their first fully multimodal embedding model – and it’s now available in @weaviate_io. The model maps text, images,
https://x.com/victorialslocum/status/2032141700412686592

The era of juggling 5 different embedding models is over. Google just unified text, images, video, audio, and PDFs into one vector space. 𝗢𝗻𝗲 𝗺𝗼𝗱𝗲𝗹, 𝗺𝘂𝗹𝘁𝗶𝗽𝗹𝗲 𝗺𝗼𝗱𝗮𝗹𝗶𝘁𝗶𝗲𝘀: Text, images, video, audio, and PDFs all mapped into a single unified vector
https://x.com/weaviate_io/status/2032139558968852849

The Gemini Embedding 2 baseline here is… 2 days old. Was just being celebrated and is now outperformed by a median of 14% and up to 91 points. If I didn’t kind of know how powerful scaling ColBERTs and ColPalis can be compared to a single-vector model, I’d be in disbelief!
https://x.com/lateinteraction/status/2032162162836164697

Google shares Gemini updates to Docs, Sheets, Slides and Drive https://blog.google/products-and-platforms/products/workspace/gemini-workspace-updates-march-2026/

Google PM open-sources Always On Memory Agent, ditching vector databases for LLM-driven persistent memory | VentureBeat https://venturebeat.com/orchestration/google-pm-open-sources-always-on-memory-agent-ditching-vector-databases-for

New! LLM Sycophancy Benchmark: Opposite-Narrator Contradictions. Same dispute, opposite first-person perspectives. Does the model keep the same judgment, or start agreeing with whoever is speaking? Gemini 3.1 Pro has the lowest headline sycophancy rate but read on…
https://x.com/LechMazur/status/2031199671411208568

Gemma-4 imminent
https://x.com/scaling01/status/2030986695181836466

Holy, Gemma 4 will be 120b in total, 15b active parameters
https://x.com/kimmonismus/status/2031001097993642009

Nice: Gemma 4 is already leaked. Curious what else we will see.
https://x.com/kimmonismus/status/2031116062272688467

Selectively reducing eval awareness and murder in Gemma 3 27B via steering — LessWrong https://www.lesswrong.com/posts/QfM6SHyBPveDtHAma/selectively-reducing-eval-awareness-and-murder-in-gemma-3

10 years ago, @GoogleDeepMind’s AlphaGo became the first program to beat a world champion at Go — a game with more moves than atoms in the universe. AlphaGo won with the help of “Move 37,” a play so unconventional experts thought it was a mistake. Here’s how that win heralded
https://x.com/Google/status/2031450539150774714

Google did a prospective clinical study of their AMIE medical LLM chatbot in the clinic! They used AMIE to conduct clinical history taking and present potential diagnoses for patients to discuss with their provider at urgent care appointments at Beth Israel Deaconess Medical
https://x.com/iScienceLuvr/status/2031296370053923302

Ten years after AlphaGo, we’re still building on its foundations to advance AI. The techniques pioneered have helped us prove mathematical statements and are now assisting the scientific community in making new discoveries. Read more from @DemisHassabis ↓
https://x.com/GoogleDeepMind/status/2031399096267718847

We are thrilled to announce that Google’s Satellite Embedding dataset, powered by @GoogleDeepMind’s AlphaEarth Foundations model, has been updated for 2025. This additional year of coverage now unlocks the ability to look back, compare, and detect change across the planet with
https://x.com/googleearth/status/2031024842498023718


Discover more from Ethan B. Holland
