Image created with gemini-2.5-flash-image, prompted by claude-sonnet-4-5. Image prompt: Minimalist data center with single server rack glowing blue LEDs in vast empty white room, polished concrete floors reflecting cold light, dramatic shadows, architectural brutalism, pristine and untouched, cinematic composition with bold white sans-serif text LOCAL overlaid prominently
Introducing Mistral 3 | Mistral AI https://mistral.ai/news/mistral-3
Introducing Mistral Code | Mistral AI https://mistral.ai/news/mistral-code
Introducing the Mistral 3 family of models: Frontier intelligence at all sizes. Apache 2.0. Details in 🧵 https://x.com/MistralAI/status/1995872766177018340
Magistral | Mistral AI https://mistral.ai/news/magistral
Mistral Small 3 | Mistral AI https://mistral.ai/news/mistral-small-3
Mistral Small 3.1 | Mistral AI https://mistral.ai/news/mistral-small-3-1
Voxtral | Mistral AI https://mistral.ai/news/voxtral
In 2025 small open-source models died (or rather, they now just run on personal consumer hardware). Small: <15B; Medium: 15–70B; Large: >70B https://x.com/scaling01/status/1996976642208440371
American open-source is making a comeback in 2026. Arcee just started cooking Trinity Large, which will be released in early 2026. It will have 420B@13B params (420B total, 13B active) and is being trained on 2048 B300 GPUs with 20T tokens https://x.com/scaling01/status/1995616210109825447
Introducing Trinity Mini from @arcee_ai, an open-weight 26B sparse MoE model that activates just 3B parameters per token while delivering frontier-class reasoning. AI natives can now use Trinity Mini on Together AI — and benefit from reliable inference for production-scale https://x.com/togethercompute/status/1995594629505573338
Introducing Trinity, the start of a new open-weight MoE family. Rolling out today: Trinity-Mini (26B-A3B) Trinity-Nano-Preview (6B-A1B) Download on HuggingFace. Free for a limited time on OpenRouter. https://x.com/arcee_ai/status/1995600354374025395
Today, we are introducing Trinity, the start of an open-weight MoE family that businesses and developers can own. Trinity-Mini (26B-A3B) Trinity-Nano-Preview (6B-A1B) Available today on Hugging Face. https://x.com/latkins/status/1995592664637665702
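For readers new to the "total-A-active" naming used above: a sparse MoE stores all expert weights but routes each token through only a few of them, so memory scales with total parameters while per-token compute scales with active parameters. A back-of-the-envelope sketch (illustrative assumptions, not published specs):

```python
# Rough math for the "total-A-active" MoE naming, e.g. Trinity-Mini's
# 26B-A3B: 26B total weights, ~3B activated per token.
def moe_footprint(total_params_b: float, active_params_b: float,
                  bytes_per_param: float = 2.0) -> dict:
    """Estimate weight memory and per-token FLOPs for a sparse MoE.

    bytes_per_param=2.0 assumes fp16/bf16 weights; use ~0.5 for 4-bit quant.
    Per-token forward FLOPs ~= 2 * active params (one multiply + one add).
    """
    return {
        "weight_memory_gb": total_params_b * bytes_per_param,  # all experts must fit
        "flops_per_token": 2 * active_params_b * 1e9,          # only active experts run
    }

for name, total, active in [("Trinity-Mini", 26, 3),
                            ("Trinity-Nano-Preview", 6, 1),
                            ("Mistral Large 3", 675, 41)]:
    est = moe_footprint(total, active)
    print(f"{name}: ~{est['weight_memory_gb']:.0f} GB bf16 weights, "
          f"~{est['flops_per_token']:.1e} FLOPs/token")
```

The asymmetry is the whole pitch: Trinity-Mini needs the RAM of a 26B model but generates tokens at roughly the cost of a 3B one.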
🎉 Congratulations to the Mistral team on launching the Mistral 3 family! We’re proud to share that @MistralAI, @NVIDIAAIDev, @RedHat_AI, and vLLM worked closely together to deliver full Day-0 support for the entire Mistral 3 lineup. This collaboration enabled: • NVFP4 https://x.com/vllm_project/status/1995890057224618154
Europe still has one frontier model maker that can generally keep pace with Chinese open weights models, though the lack of a reasoner for Mistral 3 so far means they are behind the curve on actual performance: DeepSeek r1 got 71.5% on GPQA Diamond (& 1-shot, not 5-shot) back in January. https://x.com/emollick/status/1996068920596594932
I want to especially thank @MistralAI for releasing the base models for Mistral 3. Fewer companies are sharing base models and this opens many use cases from custom instruct to non-instruct cases. https://x.com/QuixiAI/status/1996272948378804326
Meet the Ministral 3 models from @MistralAI! – 3B, 8B, and 14B models – Instruct, reasoning, and base variants – Supports tool use and vision input – Open-weights, Apache 2.0 licensed https://x.com/lmstudio/status/1995908228526604451
Mistral 3 is now available on Ollama v0.13.1 (currently in pre-release on GitHub). 14B: ollama run ministral-3:14b 8B: ollama run ministral-3:8b 3B: ollama run ministral-3:3b Please update to the latest Ollama. https://x.com/ollama/status/1995885696360566885
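If you'd rather script against a local model than chat in the terminal, here's a minimal sketch using the official ollama Python client (assumes pip install ollama and that one of the tags above has already been pulled):

```python
# Minimal sketch: query a locally running Ministral 3 via the ollama client.
# Requires the Ollama daemon running and `ollama pull ministral-3:8b` done.
import ollama

response = ollama.chat(
    model="ministral-3:8b",
    messages=[{"role": "user",
               "content": "Summarize the Mistral 3 family in one sentence."}],
)
print(response["message"]["content"])
```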
Mistral releases Ministral 3, their new reasoning and instruct models! 🔥 Ministral 3 comes in 3B, 8B, and 14B with vision support and best-in-class performance. Run the 14B models locally with 24GB RAM. Guide + Notebook: https://x.com/UnslothAI/status/1995874975631503479
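For those not following the Unsloth notebook, a rough sketch of the same idea with plain transformers + bitsandbytes: 4-bit weights cut the 14B's footprint to roughly a quarter of bf16, which is how it squeezes into 24GB. The repo id below is my guess at the naming, not confirmed:

```python
# Sketch: load the 14B in 4-bit so it fits in ~24 GB (text-only path; the
# vision input would need the image-text model class instead).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Ministral-3-14B-Instruct"  # hypothetical repo id
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # store 4-bit, compute in bf16
)
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Why is the sky blue?"}],
    tokenize=False, add_generation_prompt=True,
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```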
NEW: @MistralAI released a fantastic family of multimodal models, Ministral 3. You can fine-tune them for free on Colab using TRL ⚡️, supporting both SFT and GRPO https://x.com/SergioPaniego/status/1996257877871509896
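A minimal SFT sketch in the spirit of that Colab; the dataset and repo id are placeholders, and GRPO would swap in GRPOTrainer/GRPOConfig plus a reward function:

```python
# Sketch of supervised fine-tuning with TRL's SFTTrainer.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train[:1%]")  # tiny demo slice

trainer = SFTTrainer(
    model="mistralai/Ministral-3-3B-Instruct",  # hypothetical repo id
    train_dataset=dataset,
    args=SFTConfig(output_dir="ministral3-sft"),
)
trainer.train()
```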
NEW: @MistralAI releases Mistral 3, a family of multimodal models, including three state-of-the-art dense models (3B, 8B, and 14B) and Mistral Large 3 (675B, 41B active). All Apache 2.0! 🤗 Surprisingly, the 3B is small enough to run 100% locally in your browser on WebGPU! 🤯 https://x.com/xenovacom/status/1995879338583945635
Run Mistral Large 3 on Ollama’s cloud: ollama run mistral-large-3:675b-cloud https://x.com/ollama/status/1996682858933768691
Super nice to see Mistral Large 3 as the #1 OSS model for coding on lmarena 🥳😎🙌 And the spoiler alert! 👀👀 https://x.com/sophiamyang/status/1996587296666128398
Support for running Mistral Large 3 locally will be available in Ollama soon. https://x.com/ollama/status/1996683156817416667
The Bert-Nebulon Alpha Stealth model is live now as @MistralAI’s new Mistral Large 3! Try the full release now on OpenRouter: https://x.com/OpenRouterAI/status/1995904288560988617
The world’s best small models: Ministral 3 (14B, 8B, 3B), each released with base, instruct, and reasoning versions. https://x.com/MistralAI/status/1995872768601325836
Mistral Large 3 debuts as the #1 open source coding model on the @arena leaderboard. We’d love for you to try it! More on coding in a few days… 👀 https://x.com/MistralAI/status/1996580307336638951
Mistral AI raises 1.7B€ to accelerate technological progress with AI | Mistral AI https://mistral.ai/news/mistral-ai-raises-1-7-b-to-accelerate-technological-progress-with-ai
.@Stanford researchers showed what happens when you shrink a multimodal model. They look specifically at how reducing the size of the LLM inside a multimodal model affects the model’s overall abilities. ➡️ The part that suffers most is vision. And perception really collapses. https://x.com/TheTuringPost/status/1994548273387032753