Image created with GPT Image 1. Image prompt: split-screen collage of burnt-orange blaze and cobalt surf, Republic split-color palette, minimalist graphic design inspired by New Order’s ‘Republic’, metaphor for bilingual ribbon language models, flat color, subtle texture, 1980s Saville typography style
ngl i respect the qwen team so much for throwing thirty six TRILLION tokens on a 600M, equal part impressive and hilarious https://x.com/qtnx_/status/1922398353985241438
Qwen just dropped optimised GPTQ, GGUF & AWQ for Qwen3 🔥 https://x.com/reach_vb/status/1921956656226668964
We’re officially releasing the quantized models of Qwen3 today! Now you can deploy Qwen3 via Ollama, LM Studio, SGLang, and vLLM — choose from multiple formats including GGUF, AWQ, and GPTQ for easy local deployment. Find all models in the Qwen3 collection on Hugging Face and https://x.com/Alibaba_Qwen/status/1921907010855125019
Autonomous AI Agent framework uses Qwen 3 with MCP to build and deploy a documentation website from GitHub repository. All of this from a simple prompt in just 2 minutes. 100% Opensource. https://x.com/Saboo_Shubham_/status/1919800022566351345
NICE! @PrimeIntellect open sourced Intellect 2 – 32B reasoning model post-trained using GRPO via distributed asynchronous RL – beats QwQ 32B – Apache 2.0 licensed💥 Works with transformers, llama.cpp, vllm and more! ⚡ https://x.com/reach_vb/status/1921948704061202725
AM-Thinking-v1 looks like a strong 32B reasoning model. It outperforms DeepSeek-R1 and rivals Qwen3-235B-A22B. All built on top of open-source. The 32B scale is a great size for deployment and fine-tuning. Best part: the model is open-sourced! https://x.com/omarsar0/status/1922668488826741061
AM-Thinking-v1: Advancing the Frontier of Reasoning at 32B Scale Performs on par with Qwen3-235B-A22B and Seed1.5-Thinking while being built entirely from the open-source Qwen2.5-32B base model and publicly available queries https://x.com/arankomatsuzaki/status/1922483522549252200
Qwen3 model family overview: full benchmarks for all 8 Qwen3 models in both reasoning and non-reasoning modes Key results: ➤ Qwen3 235B-A22B (Reasoning): The largest Qwen3 model scores 62 on the Artificial Analysis Intelligence Index, becoming the most intelligent open weights https://x.com/ArtificialAnlys/status/1922317655643717887
New GRPO notebook for Qwen3 Base! It's much harder to RL base models since GRPO first needs to learn formatting like <think></think> By "priming" on some formatted samples, we bypass this issue & create good LoRA priors before GRPO vLLM 0.8.5 is also supported now with Unsloth! https://x.com/danielhanchen/status/1922345308916216087
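The "priming" trick above amounts to showing the base model a handful of completions that already carry the <think></think> layout before reward-based GRPO training begins, so the LoRA starts with a prior for the format. A minimal sketch of building such priming samples — the sample schema and helper name here are illustrative, not Unsloth's actual API:

```python
# Sketch: build a few format-"priming" samples so a base model sees the
# <think></think> layout before GRPO reinforcement begins.
# The {"prompt", "completion"} schema is an assumption for illustration,
# not Unsloth's actual data format.

def make_priming_sample(question: str, reasoning: str, answer: str) -> dict:
    """Pair a prompt with a completion demonstrating the expected format."""
    completion = f"<think>\n{reasoning}\n</think>\n{answer}"
    return {"prompt": question, "completion": completion}

# A tiny seed set; in practice a few dozen such samples can be enough
# to anchor the output format before the reward signal takes over.
priming_set = [
    make_priming_sample(
        "What is 12 * 7?",
        "12 * 7 = 84.",
        "84",
    ),
    make_priming_sample(
        "Is 91 prime?",
        "91 = 7 * 13, so it has divisors other than 1 and itself.",
        "No",
    ),
]

for s in priming_set:
    # Every completion opens with the reasoning block the RL stage expects.
    assert s["completion"].startswith("<think>")
```

After a brief supervised pass on samples like these, GRPO no longer has to discover the formatting from scratch and its reward signal can focus on the reasoning itself.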
Alibaba introduced Qwen3, a family of eight open large language models, including two mixture-of-experts (MoE) models and six dense models ranging from 32B to 0.6B parameters. All support an optional reasoning mode and multilingual capabilities across 119 languages. https://x.com/DeepLearningAI/status/1920614690813550930
Spin up Qwen3 @Alibaba_Qwen + SGLang @lmsysorg on H100 in one command: https://x.com/skypilot_org/status/1922341585250881967
@Alibaba_Qwen Great job guys!!! https://x.com/reach_vb/status/1922322833847300156
@Alibaba_Qwen Great work Qwen team! 💪 https://x.com/Yuchenj_UW/status/1922294726209724656
🚀 One line. A full webpage. No hassle. Introducing Web Dev – the ultimate tool for building stunning frontend webpages & apps using simple prompts in Qwen Chat. 🎨 Just say, "create a twitter website" — and boom! Instant code, ready to go. No coding required. Just your https://x.com/Alibaba_Qwen/status/1920848175457591406
Please check out our Qwen3 Technical Report. 👇🏻 https://x.com/Alibaba_Qwen/status/1922265772811825413