Image created with OpenAI GPT-Image-1. Image prompt: 1966 Kodachrome photo-look, thin white frame, forest-green title band in upper left with stacked yellow/white serif text reading “MISTRAL”, faux ring-wear crease across centre scene featuring a rolled nautical chart labelled “Mistral” fluttering in the breeze; gentle film grain, overcast daylight

Mistral Compute | Mistral AI https://mistral.ai/news/mistral-compute

The @Gradio agents and MCP hackathon kick-off is today! Very cool collaborative event with @AnthropicAI, @modal_labs, @nebiusai, @MistralAI, @hyperbolic_labs, @SambaNovaAI, @llama_index, @OpenAI. Open standards are key for a healthy AI community and there is amazing potential in https://x.com/MoritzLaurer/status/1929851886854652104

Excited to introduce @MistralAI Mistral Code! 🤖 The most customizable AI-powered coding assistant for enterprises! 🛠️ Frontier coding models, fully customizable & tunable to your codebase 🌟 State-of-the-art coding assistance & agentic coding, in your control 🔧 One platform, https://x.com/sophiamyang/status/1930283932815372427

Magistral 4-bit DWQ is up on Hugging Face. Use it with mlx-lm or in @lmstudio: https://x.com/awnihannun/status/1932547785162961291

Magistral is an amazingly powerful model, and on Le Chat with 1000+ tok/s inference, it’s ui-breaking fast https://x.com/qtnx_/status/1932442022574723407

Announcing Magistral — @MistralAI’s first reasoning model — excelling in domain-specific, transparent, and multilingual reasoning. https://x.com/sophiamyang/status/1932451856447586312

Announcing Magistral, our first reasoning model designed to excel in domain-specific, transparent, and multilingual reasoning. https://x.com/MistralAI/status/1932441507262259564

Magistral | Mistral AI https://mistral.ai/news/magistral

Mistral just released their reasoning models: Magistral-Small and Magistral-Medium Magistral Small is open-source and based on the 24B Mistral-Small 3.1, and can run on a single RTX 4090. Unfortunately it gets crushed by Qwen3-32B and Qwen3-30B-A3B Download Link: https://x.com/scaling01/status/1932445360380612712

Mistral really cooked – 24B, Based on Mistral Small 3.1, Multilingual, 128K context (40k effective), Apache 2.0 licensed! 🔥 Works on MLX, llama.cpp, transformers, vllm and more ⚡ https://x.com/reach_vb/status/1932449015657836730

The Mistral team at it again with Magistral! GRPO with edits: 1. Removed KL Divergence 2. Normalize by total length (Dr. GRPO style) 3. Minibatch normalization for advantages 4. Relaxing trust region Paper: https://x.com/danielhanchen/status/1932451325398413518
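The four GRPO tweaks listed in that tweet can be sketched in a few lines. This is a minimal illustrative implementation, not Mistral's actual training code: the function name, the per-completion input format, and the clip values (`eps_low=0.2`, `eps_high=0.28`) are assumptions chosen for the example; see the linked paper for the real recipe.

```python
import math

def magistral_style_grpo_loss(logprobs_new, logprobs_old, rewards,
                              eps_low=0.2, eps_high=0.28):
    """Sketch of a GRPO objective with the four tweaks from the tweet:
    (1) no KL-divergence penalty, (2) loss normalized by the *total* token
    count across the minibatch (Dr. GRPO style), (3) advantages re-normalized
    over the minibatch, (4) an asymmetric ("relaxed") upper clip on the ratio.

    logprobs_new / logprobs_old: per-token log-probs for each completion
    in the group; rewards: one scalar reward per completion.
    """
    # Group-relative advantage: reward minus the group mean.
    mean_r = sum(rewards) / len(rewards)
    adv = [r - mean_r for r in rewards]

    # (3) Minibatch normalization of advantages (zero mean, unit std).
    mu = sum(adv) / len(adv)
    var = sum((a - mu) ** 2 for a in adv) / len(adv)
    std = math.sqrt(var) + 1e-8
    adv = [(a - mu) / std for a in adv]

    # Clipped policy-gradient term, summed over every token.
    total_obj, total_tokens = 0.0, 0
    for lp_new, lp_old, a in zip(logprobs_new, logprobs_old, adv):
        for ln, lo in zip(lp_new, lp_old):
            ratio = math.exp(ln - lo)
            # (4) Relaxed trust region: wider upper clip than lower.
            clipped = max(min(ratio, 1 + eps_high), 1 - eps_low)
            total_obj += min(ratio * a, clipped * a)
            total_tokens += 1

    # (2) Normalize by total length; note there is no KL term anywhere (1).
    return -total_obj / max(total_tokens, 1)
```

Compared with vanilla GRPO, dropping the per-sequence length normalization in favor of a single division by the batch's total token count removes the bias toward short completions, and the wider upper clip lets the policy move further on positively-advantaged tokens before clipping kicks in.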

We’re proud to announce Mistral Compute—an unprecedented AI infrastructure undertaking in Europe, and a strategic initiative that will ensure that all nation states, enterprises, and research labs globally remain at the forefront of AI innovation. Read more in the thread. https://x.com/MistralAI/status/1932798814840332307

Mistral got hit by export restrictions again! They couldn’t evaluate the latest DeepSeek and Qwen I am risking being detained by going around export restrictions and checking the numbers on Qwen TL;DR Qwen 4B is ~close to their model and the small 30B MoE is better, let’s not https://x.com/dylan522p/status/1932563462963507589

Mistral’s paper is the best practical paper on doing reasoning RL since the DeepSeek R1 paper fwiw, will do a writeup later to go through it 🤓 if i get the time https://x.com/Teknium1/status/1932580993132790232
