Image created with gemini-3.1-flash-image-preview and claude-opus-4.7. Image prompt: Using the provided reference image, keep the pure white landscape background, vertical type stack, and galaxy-punchout starfield letterform treatment exactly, but replace ‘HEROES’ with ‘ALIBABA’ in the same bold condensed grotesque all-caps, replace ‘ALESSO’ with ‘OPEN GIANT’ in the light geometric all-caps, and replace ‘TOVE LO’ with ‘QWEN’ in the condensed grotesque all-caps, keeping ‘(we could be)’ and ‘FEATURING.’ unchanged with identical tracking, weights, and Milky Way texture.
We’ve post-trained a model on top of Qwen that achieves Pareto optimality on accuracy-cost curves. Unlike our previous post-trained models, this model has been trained to be good at search and tool calls simultaneously, allowing us to unify the tool call router and
https://x.com/AravSrinivas/status/2047019688920756504
🚀 Meet Qwen3.6-27B, our latest dense, open-source model, packing flagship-level coding power! Yes, 27B, and Qwen3.6-27B punches way above its weight. 👇 What’s new: 🧠 Outstanding agentic coding — surpasses Qwen3.5-397B-A17B across all major coding benchmarks 💡 Strong
https://x.com/Alibaba_Qwen/status/2046939764428009914
Qwen 3.6-Max-Preview solves AIME-2026 #15 after like 30 minutes of thinking, but on the first try. Preview or not, it’s more baked than DeepSeek-Expert. Other tests validate this impression. It doesn’t screw up. Alibaba Qwen is, after all, a frontier lab.
https://x.com/teortaxesTex/status/2046166258853269990
Tencent Hy Research
https://hy.tencent.com/hy3-preview
Tencent, Alibaba to back DeepSeek at $20B+ valuation: report — TFN
Kimi K2.6 wrote an inference engine for Qwen3.5 0.5B in Zig and beat LM Studio’s tokens-per-second by 20%, running for 12 hours and making 4,000+ tool calls
https://x.com/nrehiew_/status/2046254256194474221
Qwen
https://qwen.ai/blog?id=qwen3.6-max-preview
Qwen3.6 Plus lands at #7 in Code Arena with a score of 1476 – up +16 points since the Preview. The new score also moves @AlibabaGroup to #3 lab in Code Arena. In the Text Arena, Qwen3.6 Plus lands at #36, a +13 point improvement since Preview. Congrats to the Qwen team on the
https://x.com/arena/status/2046268995163258958
🚀 Introducing Qwen3.6-Max-Preview, an early preview of our next flagship model Highlights: ⚡️ Improved agentic coding capability over Qwen3.6-Plus 📖 Stronger world knowledge and instruction following 🌍 Improved real-world agent and knowledge reliability performance Smarter,
https://x.com/Alibaba_Qwen/status/2046227759475921291
Guys, I am absolutely astounded. Qwen 3.6 27B is like a jump to Qwen 4 from Qwen 3.5 27B. I just did a full suite of front-end design tests and agentic benchmarks, made entirely by it. VERDICT: They’re so much better than I thought they’d be, like I’m completely astounded. I
https://x.com/KyleHessling1/status/2046986423736451327
llama-server -hf ggml-org/Qwen3.6-27B-GGUF --spec-default
https://x.com/ggerganov/status/2046988075302064209
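Once llama-server is up it exposes an OpenAI-compatible API (port 8080 by default), so any standard client can talk to it. A minimal sketch of a direct query, assuming the default host and port:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Summarize speculative decoding in one line."}]}'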
Qwen 3.6 27B model is available on Ollama! Use it with all the integrations in Ollama or chat with the model. Chat with the model: ollama run qwen3.6:27b OpenClaw: ollama launch openclaw --model qwen3.6:27b Claude Code: ollama launch claude --model qwen3.6:27b More
https://x.com/ollama/status/2047066252523507916
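Besides the CLI, Ollama also listens on a local REST API (port 11434 by default), which is how most integrations connect. A minimal sketch of the same chat via the standard /api/chat endpoint:

curl http://localhost:11434/api/chat -d '{
  "model": "qwen3.6:27b",
  "messages": [{"role": "user", "content": "Why is the sky blue?"}],
  "stream": false
}'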
We ran Qwen3.6-35B-A3B GGUF KLD benchmarks of all our dynamic quants and other providers. 1. Nearly all Unsloth quants for mean KLD, 90%, 99.9% KLD are on the Pareto Frontier for KLD vs Disk Space. 2. MXFP4_MOE is an outlier for all. 3. We’ll also make some smaller quants soon!
https://x.com/danielhanchen/status/2045169369723064449
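For anyone wanting to reproduce this kind of measurement, llama.cpp’s perplexity tool can compute KLD between a quant and a full-precision reference. A minimal sketch, assuming an F16 reference GGUF and a text corpus (file names here are placeholders):

# 1. Dump reference logits from the full-precision model
llama-perplexity -m qwen3.6-35b-a3b-f16.gguf -f corpus.txt --kl-divergence-base logits.kld
# 2. Score a quant against them; reports mean and percentile KLD
llama-perplexity -m qwen3.6-35b-a3b-q4_k_xl.gguf --kl-divergence-base logits.kld --kl-divergence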
Sharing my current setup to run Qwen3.6 locally in a good agentic setup (Pi + llama.cpp). Should give you a good overview of how good local agents are today: # Start llama.cpp server: llama-server \ -hf unsloth/Qwen3.6-35B-A3B-GGUF:Q4_K_XL \ --jinja \
https://x.com/victormustar/status/2045068986446958899
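Since llama-server speaks the OpenAI protocol, wiring an agent to it is usually just an environment variable. A sketch, assuming the agent honors the standard OPENAI_BASE_URL convention and llama-server’s default port:

export OPENAI_BASE_URL=http://localhost:8080/v1
export OPENAI_API_KEY=local   # llama-server ignores the key, but many clients insist on one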
[2604.15804] Qwen3.5-Omni Technical Report
https://arxiv.org/abs/2604.15804
🎉 Day-0 vLLM support for Qwen3.6-27B! Congrats to @Alibaba_Qwen on the new 27B dense model release. Looking forward to more of the Qwen3.6 series. 👀 📖 Recipe:
https://x.com/vllm_project/status/2046943674890871019
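Serving it with vLLM is a one-liner; a minimal sketch, assuming the Hugging Face repo id follows the usual Qwen naming (unverified):

vllm serve Qwen/Qwen3.6-27B --max-model-len 32768
# OpenAI-compatible API comes up on port 8000:
curl http://localhost:8000/v1/models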
LM Performance: With only 27B parameters, Qwen3.6-27B outperforms the Qwen3.5-397B-A17B (397B total / 17B active, ~15x larger!) on every major coding benchmark — including SWE-bench Verified (77.2 vs. 76.2), SWE-bench Pro (53.5 vs. 50.9), Terminal-Bench 2.0 (59.3 vs. 52.5), and
https://x.com/Alibaba_Qwen/status/2046939775924584577
Qwen3.5-Omni Technical Report | alphaXiv
https://www.alphaxiv.org/abs/2604.15804
Qwen3.6-27B: Flagship-Level Coding in a 27B Dense Model
https://simonwillison.net/2026/Apr/22/qwen36-27b/
Qwen3.6-35B-A3B just dropped. Red Hat AI has an NVFP4 quantized checkpoint ready. 35B params, 3B active, quantized with LLM Compressor. Preliminary GSM8K Platinum: 100.69% recovery (slightly above baseline). Early release. Let us know what you think!
https://x.com/RedHat_AI/status/2045153791402520952
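To sanity-check a recovery number like this yourself, lm-evaluation-harness can score a checkpoint over vLLM. A sketch with a placeholder repo id; note the tweet’s GSM8K Platinum variant may need a different task name than stock gsm8k:

lm_eval --model vllm \
  --model_args pretrained=RedHatAI/Qwen3.6-35B-A3B-NVFP4 \
  --tasks gsm8k --batch_size auto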
The new Qwen3.6-27B just gave me what is definitely the best pelican riding a bicycle I’ve had from a 16.8GB model file!
https://x.com/simonw/status/2046995047720378458
We then experiment with 4 different training methods for the Minimal Code Editing task using Qwen3 4B. We find that SFT only works when trained on the same set of evaluation corruptions. It collapses otherwise, indicating that it fails to learn the general minimal coding style
https://x.com/nrehiew_/status/2046963050427879488
VLM Performance: Qwen3.6-27B is natively multimodal, supporting both vision-language thinking and non-thinking modes in a single unified checkpoint — the same as Qwen3.6-35B-A3B. It handles images and video alongside text, enabling multimodal reasoning, document understanding,
https://x.com/Alibaba_Qwen/status/2046939788184547610
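In practice that means the standard OpenAI vision message format should work against any server hosting it. A sketch, reusing the earlier local vLLM endpoint and a placeholder image URL:

curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
  "messages": [{"role": "user", "content": [
    {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
    {"type": "text", "text": "Summarize this chart."}
  ]}]
}'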
We’ve published new research on how we post-train models for accurate search-augmented answers. Our SFT + RL pipeline improves search, citation quality, instruction following, and efficiency. With Qwen models, we match or beat GPT models on factuality at a lower cost.
https://x.com/perplexity_ai/status/2047016400292839808