Image created with gemini-3.1-flash-image-preview (prompt drafted with claude-sonnet-4-5). Image prompt: Using the provided reference image, preserve the square faceted glass perfume bottle with warm amber liquid, crystal stopper, white background, soft shadow, and glass refractions exactly as shown. Replace the label text with ‘RAG’ in the same elegant black serif typography. Add a delicate sterling silver chain draped around the bottle neck holding a tiny dainty pendant shaped like a miniature filing cabinet with one drawer slightly open, rendered in high-fashion jewelry aesthetic: small, precise, and refined like a Tiffany charm.

33 hours of audio transcribed in 12 minutes! @CohereLabs just released Cohere Transcribe, a 2B-parameter open-source ASR model. 66 episodes of 1940s CBS Suspense from @internetarchive, transcribed on an A100 via @huggingface Jobs + a Buckets mount: 161x realtime! Script + all transcripts are public
https://x.com/vanstriendaniel/status/2037548103272632497
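For anyone curious how a run like this is structured, here is a minimal sketch of batch transcription. It assumes Cohere Transcribe loads through the standard transformers ASR pipeline; the model id in the comment is a placeholder, not a confirmed Hub repo name.

```python
def transcribe_batch(asr, audio_paths):
    """Apply an ASR callable to each audio file; returns {path: transcript}."""
    return {path: asr(path)["text"] for path in audio_paths}

# Hypothetical usage -- check the actual Hub repo id before running:
# from transformers import pipeline
# asr = pipeline("automatic-speech-recognition",
#                model="CohereLabs/cohere-transcribe-2b", device=0)
# transcripts = transcribe_batch(asr, ["suspense_ep01.mp3", "suspense_ep02.mp3"])
```

On a Hugging Face Job, the same loop would simply read episode files from the mounted bucket and write transcripts back to it; the speedup comes from the GPU, not from anything clever in the loop.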

Very hyped about the new Cohere Transcribe model 🌍 It works surprisingly well on bad-quality audio when the mic doesn’t cooperate. 2B params, 14 supported languages, and it’s Apache 2.0. Try the official Hugging Face demo ⬇️
https://x.com/victormustar/status/2037572662659104976

LLM Knowledge Bases: Something I’m finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating
https://x.com/karpathy/status/2039805659525644595

Cohere has released Cohere Transcribe: an open-weights model achieving 4.7% on AA-WER, based on 3 datasets including our proprietary AA-AgentTalk dataset. The 2B-parameter model is based on a conformer encoder-decoder architecture and was trained from scratch on 14 languages.
https://x.com/ArtificialAnlys/status/2038678855213568031

// Graph Augmented Associative Memory for Agents // Long-term memory for agents is still an unsolved problem. Flat RAG loses structural relationships, and knowledge graphs miss conversational associations. New research proposes combining both through a hierarchical approach.
https://x.com/dair_ai/status/2039072251199549573
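The core idea, flat retrieval for topical relevance plus graph edges for structural relationships, can be shown in a toy sketch. The word-overlap scoring and the data below are made up for illustration; this is not the paper's method, just the combination pattern it describes.

```python
def flat_score(query, text):
    """Toy relevance: fraction of query words present in the memory text."""
    q = set(query.lower().split())
    return len(q & set(text.lower().split())) / max(len(q), 1)

def retrieve(query, memories, edges, k=2):
    """Flat top-k retrieval, then one-hop expansion along graph edges,
    so structurally linked memories come along with their neighbors."""
    ranked = sorted(memories, key=lambda m: flat_score(query, memories[m]),
                    reverse=True)
    hits = ranked[:k]
    expanded = set(hits)
    for m in hits:
        expanded |= edges.get(m, set())
    return expanded

memories = {
    "m1": "user prefers dark roast coffee",
    "m2": "coffee order placed tuesday",
    "m3": "user allergic to peanuts",
}
edges = {"m1": {"m2"}}  # explicit link: a preference and its related order
```

Flat RAG alone would rank "m2" low for a preference query; the edge pulls it in anyway, which is the associative behavior the hierarchical approach is after.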

Fine-grained authorization for RAG is one of the most underestimated problems in production AI. If your agent can retrieve documents, it needs to enforce who’s allowed to see them, not just at the role level. With @auth0 FGA and
https://x.com/thinkshiv/status/2039836920243486790
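The pattern itself is simple to sketch: retrieve as usual, then drop every chunk the user is not authorized to see before it reaches the model. The `can_view` function below is a toy stand-in for a real fine-grained authorization query (such as an FGA check call), not the Auth0 client API.

```python
def can_view(user, doc_id, acl):
    """Stand-in for an FGA check: is `user` a viewer of this document?"""
    return user in acl.get(doc_id, set())

def authorized_retrieve(user, query, search_fn, acl):
    """Retrieve as usual, then filter per document -- not per role."""
    return [c for c in search_fn(query)
            if can_view(user, c["doc_id"], acl)]

acl = {"hr-handbook": {"alice", "bob"}, "salary-report": {"alice"}}

def search_fn(query):  # toy retriever returning document chunks
    return [{"doc_id": "hr-handbook", "text": "PTO policy..."},
            {"doc_id": "salary-report", "text": "Q3 salaries..."}]
```

The key design point is that the filter runs on every retrieval, with the requesting user's identity, so a shared index never leaks documents to an agent acting for the wrong user.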

For a long while, “RAG” was up there too. To experts, RAG was just the name of one really nice paper, among dozens of 2019-2020 era approaches that conditioned pretrained LMs on retrieved text and trained them accordingly. It’s not synonymous with the whole problem space lol!
https://x.com/lateinteraction/status/2039382845689348271

Discover more from Ethan B. Holland
