Image created with Flux Pro v1.1 Ultra. Image prompt: Giant “100” as pure white negative‑space cutout dominating the frame; minimalist poster style; sonar rings and a directed search beam cutting across the zeros; electric‑blue backdrop; high contrast, crisp edges, soft studio light, no other text, no logos
🤖 DeepSeek-V3.1 just landed on Together AI: a 671B-parameter hybrid model with: ⚡ Fast mode for routine tasks 🧠 Thinking mode for complex problems Our infrastructure is built for massive MoE models like this. 99.9% uptime means your reasoning workflows actually work in production. https://x.com/togethercompute/status/1960835568574578736
Introducing DeepSeek-V3.1: our first step toward the agent era! 🚀 🧠 Hybrid inference: Think & Non-Think — one model, two modes ⚡️ Faster thinking: DeepSeek-V3.1-Think reaches answers in less time vs. DeepSeek-R1-0528 🛠️ Stronger agent skills: Post-training boosts tool use… https://x.com/deepseek_ai/status/1958417062008918312
DeepSeek-V3.1 Is 2x Cheaper than GPT-5 https://analyticsindiamag.com/ai-news-updates/deepseek-v3-1-is-2x-cheaper-than-gpt-5/
DeepSeek V3.1 dropped last week and is showing a 9.9% diff edit failure rate in real-world usage. That’s higher than Qwen 3 Coder (6.1%), but still solid for an early open-source release. What do you think? How’s it been in Cline so far? https://x.com/cline/status/1960565950442578418
Ollama v0.11.7 is available with DeepSeek v3.1 support. You can run it locally with all its features, including hybrid thinking. This works across Ollama’s new app, CLI, API, and SDKs. Ollama’s Turbo mode, currently in preview, has also been updated to support the model! https://x.com/ollama/status/1960463433515852144




