Image created with gemini-3.1-flash-image-preview and claude-opus-4.7. Image prompt: Using the provided reference image, keep the pure white landscape field, the galaxy-punchout starfield treatment inside every letterform, the exact vertical type hierarchy, the bold condensed grotesque and light geometric font pairing, the parenthetical ‘(we could be)’ tagline, and the ‘FEATURING.’ label, but replace ‘HEROES’ with ‘RAG’ in the same condensed grotesque galaxy-punchout, replace ‘ALESSO’ with ‘SOURCE OF TRUTH’ in the same light geometric all-caps galaxy-punchout, and replace ‘TOVE LO’ with ‘VECTOR SEARCH’ in the same condensed grotesque galaxy-punchout. Maintain identical letter tracking, generous margins, and landscape aspect ratio with no illustrations or decoration.

It's noticeable how much of the whole practice of working with AI – the prompts, the skill files, the connectors, the retrieval work, the markdown files, etc. – is a substitute for the real problem of continual learning. If that ends up being solved, a lot of things will change fast.
https://x.com/emollick/status/2044792241000943867

Late-interaction retrieval models are widely used for their strong performance, but their representations are useful beyond retrieval itself. Our new paper demonstrates that these representations can effectively replace raw document text in RAG tasks.
https://x.com/Julian_a42f9a/status/2045200413402493064
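The "representations beyond retrieval" claim rests on how late-interaction models score documents: they keep one embedding per token rather than one per document, which preserves far more of the text's content. A minimal sketch of the ColBERT-style MaxSim scoring, with random unit vectors standing in for real model embeddings:

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """ColBERT-style late-interaction score.

    query_vecs: (num_query_tokens, dim) L2-normalized token embeddings
    doc_vecs:   (num_doc_tokens, dim)   L2-normalized token embeddings

    For each query token, take the maximum cosine similarity over all
    document tokens, then sum those maxima across the query tokens.
    """
    sim = query_vecs @ doc_vecs.T        # (q_tokens, d_tokens) similarity matrix
    return float(sim.max(axis=1).sum())  # MaxSim per query token, summed

def unit(x: np.ndarray) -> np.ndarray:
    """L2-normalize each row."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Toy example (a real system would embed query and document with the model).
rng = np.random.default_rng(0)
q = unit(rng.normal(size=(4, 8)))    # 4 query tokens, dim 8
d = unit(rng.normal(size=(12, 8)))   # 12 document tokens, dim 8
score = maxsim_score(q, d)
```

Because the per-token document vectors are kept around after indexing, they are exactly the kind of representation the paper proposes feeding downstream instead of raw text.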

The new generation of open state-of-the-art single and multi-vector retrieval models is here. It's time, DenseOn with the LateOn 🎶 @LightOnIO releases models that leap past existing ones, and everything you need to do the same!
https://x.com/antoine_chaffin/status/2046609241918579019

We’re releasing LateOn and DenseOn today. Two open retrieval models, 149M parameters each. LateOn (ColBERT, multi-vector): 57.22 NDCG@10 on BEIR. DenseOn (dense, single-vector): 56.20. Both beat models up to 4× larger. We’re open-sourcing the weights under Apache 2.0 🧵👇
https://x.com/raphaelsrty/status/2046609364929187845
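NDCG@10 is the metric behind those BEIR numbers: it rewards relevant documents more the higher they rank, and normalizes by the best possible ordering so a perfect ranking scores 1.0. A small sketch using linear gain (implementations vary; some use an exponential 2^rel − 1 gain instead):

```python
import math

def ndcg_at_k(ranked_relevances, k=10):
    """NDCG@k: DCG of the actual ranking divided by DCG of the ideal ranking.

    ranked_relevances: graded relevance of each retrieved doc, in rank order.
    """
    def dcg(rels):
        # Each document's gain is discounted by log2(rank + 1); rank is 1-based.
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal_dcg = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal_dcg if ideal_dcg > 0 else 0.0

perfect = ndcg_at_k([3, 2, 1, 0])  # ideal ordering
swapped = ndcg_at_k([0, 3, 2, 1])  # best doc pushed down one rank
```

Here `perfect` is 1.0 and `swapped` falls below it, which is why small NDCG@10 differences like 57.22 vs 56.20 reflect consistent ranking-quality gaps across many queries.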

Nice paper combining the strengths of Skills and RAG. Most RAG systems retrieve on every query, whether the model needs help or not. This is wasteful when the model already knows the answer, and often too late when it does not. New research introduces Skill-RAG, a…
https://x.com/omarsar0/status/2046249336162632155


Discover more from Ethan B. Holland
