“Learn to build a local agentic RAG application for report generation using open-source LLMs! 🚀 Our friends at @AIMakerspace are hosting a live event next week (November 27) to teach you: 🔧 How to set up an “on-prem” LLM app stack 📊 LlamaIndex Workflows 🤖 Llama-Deploy 🏢 and …”

elvis on X: “Bi-Mamba: Towards Accurate 1-Bit State Space Models. Presents Bi-Mamba, a scalable 1-bit Mamba architecture designed for more efficient LLMs, with multiple sizes across 780M, 1.3B, and 2.7B. Bi-Mamba achieves performance comparable to its full-precision counterparts (e.g., FP16) …” https://x.com/omarsar0/status/1858878654736199850

“Excited to release SmolTalk: the secret recipe behind the best-in-class performance of SmolLM2 …”

“Two weeks later, this is now the state-of-the-art in local text-to-video models, still on my computer, still completely off-line. Pretty rapid progress.”

Discover more from Ethan B. Holland
