Image created with OpenAI GPT-Image-1. Image prompt: over-the-top 1990s pro-wrestling promo poster, notebook-sketch ring featuring “Retrieval Ripper” yanking documents from a filing cabinet to clothesline foe; paper tornado, grainy print texture, vivid neon titles

AudioRAG is becoming real! Just built a demo with ColQwen-Omni that does semantic search on raw audio, no transcription needed. Drop in a podcast, ask your question, and it finds the exact chunks where the answer appears. You can also get a written answer. What’s exciting: it skips … https://x.com/fdaudens/status/1946226098905169967
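The retrieval pattern behind this can be sketched loosely: embed each chunk, embed the question the same way, and rank chunks by cosine similarity. The `embed` below is a toy bag-of-words stand-in so the example runs; the actual demo uses the ColQwen-Omni encoder on raw audio chunks, with no text in the loop.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words embedding. A real audio-RAG system would call a
    multimodal encoder (e.g. ColQwen-Omni) on raw audio chunks instead."""
    return Counter(w.strip(".,!?") for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank every chunk against the query and keep the k best.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Stand-ins for audio chunks; in the real demo each entry would be a
# slice of raw podcast audio, not a text summary.
chunks = [
    "guest discusses retrieval latency",
    "host reads a sponsor message",
    "deep dive on cooking pasta",
]
print(top_k("what did they say about retrieval?", chunks, k=1))
```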

Now it’s possible to do RAG with any-to-any models 🔥 Learn how to search a video dataset and generate answers using OmniEmbed, an all-modality retriever, and Qwen2.5-Omni, an any-to-any model, in this notebook 🤝 https://x.com/mervenoyann/status/1947285360926494911
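The pipeline shape here is retrieve-then-generate: one shared embedding space for retrieval (the OmniEmbed role) and a generator conditioned on the retrieved item (the Qwen2.5-Omni role). A minimal sketch, with both model calls replaced by trivial stand-ins; the video IDs, descriptions, and function names are all illustrative:

```python
# Two-stage any-to-any RAG sketch: embed and retrieve, then generate.
VIDEOS = {
    "clip_01": "a chef dices onions and sears steak",
    "clip_02": "a drone flies over a mountain lake",
}

def omni_embed(item: str) -> set[str]:
    # Stand-in for a multimodal embedding: a real retriever would map
    # text, audio, or video frames into one dense vector space.
    return set(item.lower().split())

def retrieve(query: str) -> str:
    # Pick the video whose embedding best matches the query embedding.
    q = omni_embed(query)
    return max(VIDEOS, key=lambda vid: len(q & omni_embed(VIDEOS[vid])))

def generate(query: str, video_id: str) -> str:
    # Stand-in for the any-to-any generator, conditioned on the hit.
    return f"Answer to {query!r} grounded in {video_id}"

best = retrieve("how is the steak cooked?")
print(generate("how is the steak cooked?", best))
```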

Cohere Labs – Catalyst Grants https://cohere.com/research/grants

How do RAG systems retrieve the right context? In this clip from our new Retrieval Augmented Generation course, you’ll get a high-level look at how retrievers use both keyword and semantic search, along with metadata filtering, to find relevant documents, and why hybrid search … https://x.com/DeepLearningAI/status/1948488412996006073
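The three ingredients named in the clip compose naturally: filter by metadata first, then blend a keyword score with a semantic score. A toy sketch, assuming a weighted sum for the blend; real systems would use BM25 for the keyword side and a dense embedding model for the semantic side, and often reciprocal-rank fusion instead of a weighted sum:

```python
import math
from collections import Counter

DOCS = [
    {"text": "vector databases store embeddings", "year": 2024},
    {"text": "keyword search uses inverted indexes", "year": 2024},
    {"text": "legacy full-text search tutorial", "year": 2019},
]

def keyword_score(query: str, text: str) -> float:
    # Fraction of query terms that appear verbatim (BM25 stand-in).
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q)

def semantic_score(query: str, text: str) -> float:
    # Bag-of-words cosine as a placeholder for embedding similarity.
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    dot = sum(q[w] * t[w] for w in q)
    return dot / (math.sqrt(sum(v * v for v in q.values())) *
                  math.sqrt(sum(v * v for v in t.values())))

def hybrid_search(query, docs, min_year=None, alpha=0.5):
    # Metadata filter first, then a weighted blend of both scores.
    pool = [d for d in docs if min_year is None or d["year"] >= min_year]
    return sorted(pool,
                  key=lambda d: alpha * keyword_score(query, d["text"])
                              + (1 - alpha) * semantic_score(query, d["text"]),
                  reverse=True)

results = hybrid_search("keyword search", DOCS, min_year=2020)
print(results[0]["text"])
```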

LLMs often botch private-graph QA because a single bad link breaks the path. BYOKG-RAG (“bring your own knowledge graph”) fixes that by having the model suggest entities, paths, and Cypher queries, then letting specialised tools retrieve real graph chunks and feed them back for one … https://x.com/rohanpaul_ai/status/1945822543182709243
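The key move is that the model only *proposes* graph artifacts; deterministic tools verify them against the actual graph and return grounded context. A loose sketch of that loop, with the graph, the canned proposal, and every name below purely illustrative (BYOKG-RAG itself runs Cypher against a real knowledge graph):

```python
# Tiny edge list standing in for a private knowledge graph.
GRAPH = {
    ("Alice", "WORKS_AT"): "Acme",
    ("Acme", "LOCATED_IN"): "Berlin",
}

def llm_propose(question: str) -> dict:
    # Stand-in for a model call: proposed entities and a candidate path,
    # deliberately including one bad entity and one bad hop that
    # verification should drop.
    return {"entities": ["Alice", "Acme", "Globex"],
            "path": [("Alice", "WORKS_AT"), ("Acme", "HEADQUARTERED_IN")]}

def known_entity(name: str) -> bool:
    return any(name == head or name == tail
               for (head, _rel), tail in GRAPH.items())

def verify(proposal: dict) -> dict:
    # Keep only entities and hops that exist in the real graph; the
    # grounded hops would then be fed back to the model for the answer.
    entities = [e for e in proposal["entities"] if known_entity(e)]
    hops = [(h, r, GRAPH[(h, r)])
            for h, r in proposal["path"] if (h, r) in GRAPH]
    return {"entities": entities, "grounded_hops": hops}

context = verify(llm_propose("Where does Alice work?"))
print(context)
```

The wrong hop (`HEADQUARTERED_IN`) is silently discarded rather than hallucinated into the answer, which is the failure mode the thread describes.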

RT @bibryam: RAG Patterns, plus more resources: https://x.com/rachel_l_woods/status/1944206536424739230
