Image created with gemini-2.5-flash-image, prompted via claude-sonnet-4-5. Image prompt: Photorealistic wide shot of six completed Ionic limestone columns with classical entablature carved ‘RAG’ in Roman serif, golden hour light on Mizzou quad, through the colonnade a warmly-lit classical library interior is visible with organized rows of scrolls and leather volumes on archival shelves, depth and clarity showing retrieval-ready knowledge behind the monument gateway, no people, natural shadows, beige limestone and warm amber library glow.

Google Gemini’s Deep Research can look into your emails, drive, and chats | The Verge https://www.theverge.com/ai-artificial-intelligence/814878/google-ai-gemini-deep-research-personalized

Introducing the File Search Tool in Gemini API https://blog.google/technology/developers/file-search-gemini-api/

Now, Gemini’s Deep Research can pull in info from @Gmail, @GoogleDrive, and Chat when you connect your @GoogleWorkspace account to give you more context-aware reports. To try it, just select “Deep Research” in Gemini on desktop and choose your sources. Coming to mobile soon. https://x.com/GeminiApp/status/1986472318873555058

We’ve launched the File Search Tool, a fully managed RAG system integrated into the Gemini API that simplifies grounding models with your private data to deliver more accurate, verifiable responses. Pricing: $0.15 per 1M tokens for indexing; storage and query-time embedding generation are free. https://x.com/_philschmid/status/1986506204240347520
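File Search handles chunking, embedding, and indexing server-side, so the sketch below is only a dependency-free toy illustration of the underlying retrieve-then-ground pattern it automates. All names here (`chunk`, `score`, `retrieve`, the sample doc) are hypothetical, and a lexical-overlap score stands in for real embedding similarity:

```python
from collections import Counter

def chunk(text, size=50):
    """Split a document into fixed-size word chunks (a managed RAG system does this for you)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query, chunk_text):
    """Toy lexical-overlap score standing in for embedding similarity."""
    q = Counter(query.lower().split())
    c = Counter(chunk_text.lower().split())
    return sum((q & c).values())

def retrieve(query, chunks, k=2):
    """Return the top-k chunks used to ground the model's answer."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

docs = ["Refunds are processed within 5 business days after approval. "
        "Contact support with your order number to start a refund."]
chunks = [c for d in docs for c in chunk(d, size=8)]
context = retrieve("how long do refunds take", chunks, k=1)
prompt = f"Answer using only this context:\n{context[0]}"
```

With a managed tool, everything above collapses to an upload call plus a grounded generation request; the win is not writing or operating this pipeline yourself.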

You can now deploy any ML model, RAG, or Agent as an MCP server. And it takes just 10 lines of code. Here’s a breakdown, with code (100% private). https://x.com/_avichawla/status/1985595667079971190
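The idea is that MCP servers expose capabilities as tools invoked over JSON-RPC (`tools/call`), so wrapping a model means registering it as a tool. Real servers should use the official MCP SDKs rather than hand-rolled dispatch; this is a dependency-free sketch of the tool-call handling only, with a hypothetical `classify_sentiment` stub standing in for any model or RAG pipeline:

```python
import json

# Hypothetical model stub: replace with any ML model, RAG chain, or agent.
def classify_sentiment(text: str) -> str:
    return "positive" if "great" in text.lower() else "negative"

# Tool registry: the MCP SDKs build this for you from decorated functions.
TOOLS = {"classify_sentiment": classify_sentiment}

def handle(request_json: str) -> str:
    """Dispatch an MCP-style JSON-RPC 'tools/call' request to a registered tool."""
    req = json.loads(request_json)
    name = req["params"]["name"]
    args = req["params"]["arguments"]
    result = TOOLS[name](**args)
    return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                       "result": {"content": [{"type": "text", "text": result}]}})

resp = handle(json.dumps({"jsonrpc": "2.0", "id": 1,
                          "method": "tools/call",
                          "params": {"name": "classify_sentiment",
                                     "arguments": {"text": "This is great"}}}))
```

Because the wrapper is this thin, any MCP-aware client (an agent, an IDE, a chat app) can discover and call the model without bespoke integration code.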

In a world where AI chatbots are increasingly common, most still suffer from the same limitation: They’re text in, text out. But what if your AI could dynamically decide not just what to say, but how to show it? Elysia, our open source agentic RAG app, dynamically decides… https://x.com/weaviate_io/status/1986463667160822206

Super happy to see the next iteration of ViDoRe: ViDoRe v3 is built on human-created examples, covers more realistic RAG scenarios (including open-ended and multi-hop queries), and should be your new default benchmark for multimodal retrieval! 👀 Congrats to the team! https://x.com/tonywu_71/status/1986047154620633370

Key Results/Insights: LIGHT consistently outperforms both long-context LLMs and RAG baselines across all conversation lengths, with improvements growing as context increases: +49-60% at 100K-1M tokens, and dramatic +107-156% gains at 10M tokens where no model natively supports… https://x.com/omarsar0/status/1985348807849300249
