“QwQ-32B evals on par with DeepSeek R1 (671B) but runs fast on a laptop. Delivery accepted. Here it is running nicely on an M4 Max with MLX. A snippet of its 8k-token-long thought process:” / X https://x.com/awnihannun/status/1897394318434034163
“@deepseek_ai Congrats on the release! I put some notes here:” / X https://x.com/reach_vb/status/1895429111470031063
DeepSeek on X: “🚀 Day 6 of #OpenSourceWeek: One More Thing – DeepSeek-V3/R1 Inference System Overview Optimized throughput and latency via: 🔧 Cross-node EP-powered batch scaling 🔄 Computation-communication overlap ⚖️ Load balancing Statistics of DeepSeek’s Online Service: ⚡ 73.7k/14.8k …” / X https://x.com/deepseek_ai/status/1895688300574462431
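The “computation-communication overlap” in that overview is a scheduling trick: while one micro-batch’s cross-node expert-parallel all-to-all is in flight, the next micro-batch’s compute runs, so neither the GPUs nor the interconnect sit idle. A minimal sketch in Python — the `compute` and `all_to_all` stand-ins are hypothetical sleeps, not DeepSeek’s actual kernels:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def compute(microbatch):
    # Stand-in for GPU compute on one micro-batch (hypothetical workload).
    time.sleep(0.05)
    return [x * 2 for x in microbatch]

def all_to_all(activations):
    # Stand-in for cross-node expert-parallel (EP) communication.
    time.sleep(0.05)
    return activations

def pipeline(microbatches):
    """Overlap micro-batch i's communication with micro-batch i+1's compute."""
    results = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        comm_fut = None
        for mb in microbatches:
            compute_fut = pool.submit(compute, mb)   # compute(i) starts now...
            if comm_fut is not None:
                results.append(comm_fut.result())    # ...while comm(i-1) finishes
            comm_fut = pool.submit(all_to_all, compute_fut.result())
        if comm_fut is not None:
            results.append(comm_fut.result())        # drain the last communication
    return results
```

With per-step costs of one compute unit and one communication unit, the naive sequence costs 2N units for N micro-batches, while the overlapped pipeline approaches N+1 — the same reasoning behind doing this on real GPU streams.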
DeepSeek’s open-source week and why it’s a big deal | PySpur – AI Agent Builder https://www.pyspur.dev/blog/deepseek_open_source_week
“This is what hybrid AI is all about: The power of the cloud when you need it, and the speed and efficiency of on-device AI when you don’t. With DeepSeek R1’s 7B and 14B distilled models now available on Snapdragon-powered Copilot+ PCs, we’re taking another leap forward. The NPU …” / X https://x.com/yusuf_i_mehdi/status/1896653214805811266
“DeepSeek R1 is joint #1 with GPT 4.5 on hard prompts with style control. Congrats to OpenAI team.” / X https://x.com/teortaxesTex/status/1896591303150104784
“Experimental. Try it out. I am personally still an R1 1776 guy. Some people like o3-mini. It would be good to understand what people think of Sonnet 3.7-powered reasoning searches.” / X https://x.com/AravSrinivas/status/1898070189465649597
“Wait wut? DeepSeek dropped its own Fire-Flyer File System (3FS) 🔥 Benchmarks: > 3FS achieves 6.6 TiB/s read throughput, significantly higher than rest > Unlike eventual consistency models common in distributed systems, 3FS uses CRAQ for strong consistency > KVCache …” / X https://x.com/reach_vb/status/1895427876985422322
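The CRAQ mention is worth unpacking: in Chain Replication with Apportioned Queries, writes propagate head → tail, and any replica may serve reads — but a replica holding a dirty (uncommitted) version must ask the tail for the last committed version, which is what preserves strong consistency without funneling every read through the tail. A toy sketch — the class names and the two-phase write/commit split are illustrative, not 3FS’s actual API:

```python
class CraqNode:
    """One replica in the chain: stores all versions plus the latest clean one."""
    def __init__(self):
        self.versions = {}  # key -> {version_number: value}
        self.clean = {}     # key -> highest committed version number

class CraqChain:
    """Toy CRAQ: writes flow head -> tail; any node can serve reads."""
    def __init__(self, length=3):
        self.nodes = [CraqNode() for _ in range(length)]

    def write(self, key, value):
        # Phase 1: propagate down the chain; each replica records a dirty version.
        head = self.nodes[0]
        version = max(head.versions.get(key, {0: None})) + 1
        for node in self.nodes:
            node.versions.setdefault(key, {})[version] = value
        return version

    def commit(self, key, version):
        # Phase 2: once the tail has the write, its ack travels back up the
        # chain, marking the version clean (committed) on every replica.
        for node in reversed(self.nodes):
            node.clean[key] = version

    def read(self, key, node_index):
        node = self.nodes[node_index]
        latest = max(node.versions[key])
        if latest == node.clean.get(key):
            return node.versions[key][latest]  # clean: serve locally
        # Dirty: ask the tail which version is committed, and serve that one.
        committed = self.nodes[-1].clean[key]
        return node.versions[key][committed]
```

The payoff is that reads of clean keys scale across the whole chain, while in-flight writes never expose uncommitted data — a reader hitting a dirty replica still sees only what the tail has committed.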