Image created with Flux Pro v1.1 Ultra. Image prompt: Riverwalk tech showcase pop-up with multilingual vendor screens reflecting on the water; a large title reading “Alibaba” set across the top in bold grotesque sans, high contrast; cloud dashboards compare commerce and AI workloads; cosmopolitan, precise UI, river reflections
American companies are losing market share to Chinese open-source companies! Anthropic’s coding market share on OpenRouter dropped from 46% in July to 32% in a month. The reason? Qwen3-Coder https://x.com/scaling01/status/1956858471682617553
bangs successfully removed with 8-step Qwen Image Edit [Fast] too 💨 using Qwen Image Lightning LoRA, now on Spaces👇 https://x.com/linoy_tsaban/status/1957762030393544847
🚀 Qwen Chat Desktop for Windows is here! 💻 All the power of Qwen Chat — now with MCP support for smarter, faster agents. ⚡ Run MCP servers, supercharge your productivity, and stay in control. 📥 Download now → https://x.com/Alibaba_Qwen/status/1956399490698735950
There’s been a lot of Discourse about Qwen’s rejection of the hybrid paradigm. “Did DeepSeek fall for the hybrid meme?” But hybrids make *so much sense* if you’re building a fast, economical SWE agent, which is exactly what 3.1 is for. It’s all been for Aider, Claude Code, MCPs. https://x.com/teortaxesTex/status/1958437173948023127
Excited to release: Jupyter Agent 2 The agent can load data, execute code, plot results inside Jupyter faster than you can scroll! 🤖 Powered by Qwen3-Coder ⚡️ Running on Cerebras ⚙️ Executed in E2B ↕️ Upload your files All videos are in *real time*! https://x.com/lvwerra/status/1957832240416580024
I tried @Alibaba_Qwen Qwen3-Coder today inside @cline . Very impressed. It helped me solve a tricky deployment: putting a Dockerized vibe-coded project onto https://x.com/chunhualiao/status/1956957519315956074
Ovis is one of the best, most creative, and most overlooked VLM series, from yet another Alibaba division. https://x.com/teortaxesTex/status/1956306172576690610
>V3.1-Base I guess this confirms they’ve moved on to hybrid models, Anthropic-style (and contra Qwen). I am not amused with how it works. But I was also disappointed with V2.5 (original), their merge of chat and code; ultimately, it worked. Another reason to expect V4, not R2. https://x.com/teortaxesTex/status/1957818879205351851
🎨✨ From simple sketches to stunning 3D interiors — powered by Qwen-Image-Edit! All designs are community contributions, showcasing how AI transforms architectural visions into realistic, stylish, and precise creations. Try it now: https://x.com/Alibaba_Qwen/status/1958744976772198825
📸 Just showed Qwen Chat Vision Understanding how to “see” and understand a meal — and it didn’t just identify the food, it analyzed what, where, weight and even how many calories! From a simple photo, we extracted detailed insights: ✅ Object detection ✅ Weight estimation ✅ https://x.com/Alibaba_Qwen/status/1956618027769971070
🖼️ 🚨 Image Edit Leaderboard Update: Qwen-Image-Edit is now the #1 open model for Image Edit in the Arena (Apache 2.0). The model by @alibaba_qwen debuts at #6 overall on the Image Edit leaderboard tied with Gemini 2.0 Flash Preview. https://x.com/lmarena_ai/status/1958206842657743270
🖼️ Image Edit Model Update Qwen-Image-Edit, developed by @Alibaba_Qwen, is now available in the Arena. This model brings image editing capabilities, and we encourage you to test it with your most complex prompts. https://x.com/lmarena_ai/status/1957878222986821711
🚀 Excited to introduce Qwen-Image-Edit! Built on 20B Qwen-Image, it brings precise bilingual text editing (Chinese & English) while preserving style, and supports both semantic and appearance-level editing. ✨ Key Features ✅ Accurate text editing with bilingual support ✅ https://x.com/Alibaba_Qwen/status/1957500569029079083
🚀 Small but mighty update to Vision Understanding in Qwen Chat — now with native 128K context and stronger performance across vision, video, and 3D tasks! 🔥 Key Upgrades: ✅ Significant boost in math & reasoning ✅ More accurate object recognition ✅ OCR support for 30+ https://x.com/Alibaba_Qwen/status/1956289523421470855
NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale “Autoregressive models—generating content step-by-step like reading a sentence—excel in language but struggle with images. Traditionally, they either depend on costly diffusion models or https://x.com/iScienceLuvr/status/1956321483183329436
Qwen Image Edit works too well with lightx2v LoRA to run with just 8 and 4 steps, wtf? in my experience, 8 steps keeps the quality of the edits at the same level as the original model, at a 12x speedup 💨 (ofc i built a demo for it) https://x.com/multimodalart/status/1958217824629092568
Qwen-Image Edit in ComfyUI https://x.com/Alibaba_Qwen/status/1957991583649001555
Qwen-Image-Edit is out in anycoder for image editing in your vibe coded apps Built on 20B Qwen-Image, it brings precise bilingual text editing (Chinese & English) while preserving style, and supports both semantic and appearance-level editing. https://x.com/_akhaliq/status/1957519569016238268
Qwen-Image-Edit is the new open weights leader in Image Editing, with quality comparable to GPT-4o and FLUX.1 Kontext [max] Qwen-Image-Edit is the image editing variant of the recent Qwen-Image release from Alibaba, also released under the Apache 2.0 license with weights https://x.com/ArtificialAnlys/status/1958712568731902241
Qwen-Image-Edit: Image Editing with Higher Quality and Efficiency | Qwen https://qwenlm.github.io/blog/qwen-image-edit/
Relighting images with Qwen Edit impressive directional control and color temperature manipulation w/o additional finetuning crazy how we needed a dedicated model for this not long ago https://x.com/linoy_tsaban/status/1958176756185325931
Thank you! Qwen-Image-Edit is now available in anycoder! https://x.com/Alibaba_Qwen/status/1957709912202682588
👀🚨 Vision Leaderboard update! Two new models have entered the Vision Top 20 this week: 🔸Qwen-vl-max-2025 by @alibaba_qwen lands at #10 (tied with gemini-1.5-pro & gpt-5-nano-high) 🔸Step 3 by @StepFun_ai ranks at #19 (tied with step-lo-turbo) Congrats to both 🎉 this is https://x.com/lmarena_ai/status/1958957107946168470
Wow — Qwen-Image-Edit just debuted at #2 in the Image Editing Arena 🏆 ELO 1098, with performance on par with GPT-4o — and all at open weights under Apache 2.0. Thanks to @ArtificialAnlys Try it now: https://x.com/Alibaba_Qwen/status/1958725835818770748
Nvidia is dropping a model that rivals Qwen 3 8B, with data, with base model, and a not-that-bad license (could be better, to be clear). A big win, love to see it. Hopefully it’s well integrated into open tools and “easy to finetune” etc, which is hard to measure https://x.com/natolambert/status/1957517030929887284
GPT-5 is behind Chinese models like Kimi-K2 and Qwen3-235B on coding https://x.com/scaling01/status/1956404452442681829
GPT-5-mini high shows no improvement over o4-mini and sits behind top Chinese models like Kimi-K2, GLM-4.5, Qwen3-235B, and DeepSeek-R1 https://x.com/scaling01/status/1956405559978029061
AI everywhere — love seeing Qwen3 powering cars & robots on-device with Qualcomm NPU! 🚀 Thanks to NEXA AI 🙌 https://x.com/Alibaba_Qwen/status/1958800193970954657
Qwen 3 Instruct is now on Baseten Model APIs. Our model performance team has worked quite a bit of magic to reach ~95 tps for Qwen 3 Instruct. This gives you blazing-fast responses for a state-of-the-art reasoning model. https://x.com/basetenco/status/1956475210582090030
The @Alibaba_Qwen team landed two fixes after we released, so we cut a patch release. Please update to the latest: 0.35.1. Notes: https://x.com/RisingSayak/status/1958057896731897940
Knobs that matter: α tunes performance vs. efficiency; accuracy rises fast until ~0.6, while cost stays low until ~0.4, then climbs. The implementation uses k-means with k=60, Qwen3-embedding-8B (4096-d), and top-p=4 nearest clusters at inference. https://x.com/omarsar0/status/1958897532890943884
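The retrieval recipe in the thread (cluster the corpus with k-means, then route each query to its p nearest cluster centroids) can be sketched roughly as below. This is a toy illustration with small random vectors standing in for the 4096-d Qwen3-embedding-8B embeddings, and `kmeans`/`top_p_clusters` are hypothetical helper names, not the paper's code.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Plain Lloyd's k-means, standing in for the thread's k=60
    # clustering over 4096-d embedding vectors.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        for c in range(k):
            members = X[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return centroids, labels

def top_p_clusters(query, centroids, p=4):
    # At inference, keep only the p nearest clusters (top-p=4 in
    # the thread's setup) and search inside them.
    dists = np.linalg.norm(centroids - query, axis=-1)
    return np.argsort(dists)[:p]

# Toy demo: 200 points in 8-d instead of a real embedding corpus.
X = np.random.default_rng(1).normal(size=(200, 8))
centroids, labels = kmeans(X, k=10)
nearest = top_p_clusters(X[0], centroids, p=4)
```

Restricting search to a handful of clusters is where the accuracy/cost trade-off the tweet describes comes from: a larger p recovers more candidates at higher inference cost.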
Quick hacks for tool calling and thinking-flag support for DeepSeek V3.1 in SGLang: https://t.co/EoUWKu4MEE Then run with: --tool-call-parser deepseekv31 --reasoning-parser qwen3 And in the request body: "chat_template_kwargs": {"thinking": true} This is up on @chutes_ai now, but https://x.com/jon_durbin/status/1958488353478758599
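Put together, the flags and request-body snippet from the tweet might look like this. The model path, port, and message content are placeholder assumptions; only the parser flags and `chat_template_kwargs` field come from the tweet.

```shell
# Launch SGLang with the parsers from the tweet
# (model path and port are placeholders).
python -m sglang.launch_server \
  --model-path deepseek-ai/DeepSeek-V3.1 \
  --tool-call-parser deepseekv31 \
  --reasoning-parser qwen3 \
  --port 30000

# Toggle thinking per-request via chat_template_kwargs:
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Hello"}],
        "chat_template_kwargs": {"thinking": true}
      }'
```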
🐞 We hit a bug in the inference code for Qwen-Image-Edit on Diffusers, which caused some odd cases. ✅ Fixed now and thanks to Diffusers for the quick merge — give it another try! 🔗 Try it now: https://x.com/Alibaba_Qwen/status/1957840853277290703
AI Toolkit now supports fine tuning Qwen Image Edit and supports caching the text embeddings with the control images. I already trained a 3 bit ARA for it, which will allow you to train a LoRA at 1024 on a 5090 when caching the text embeddings. More in 🧵 https://x.com/ostrisai/status/1958932936620900666
It’s out, friends! Really great to see the state of things in image edits and video fidelity being pushed further and further, thanks to the community! This release also features new fine-tuning scripts for Qwen-Image and Flux Kontext (with support for image inputs). So, get busy https://x.com/RisingSayak/status/1957668389935096115
nano-banana, qwen-image-edit, what else? Try @StepFun_ai NextStep-1-Large-Edit – 14B AR model – Apache 2 license – Demo available on @huggingface – Pretrain model also made available Link below https://x.com/Xianbao_QIAN/status/1957749693485838448
qwen image edit is back at #1 trending model at @huggingface 👑 https://x.com/multimodalart/status/1958229738398634171
Qwen-Image pruning experiment. Going from 60 to 30 blocks, 20B params to 10B params. Removed block idx 2, 3, 4, 5, 7, 8, 10, 11, 12, 13, 14, 15, 16, 21, 23, 24, 40, 41, 42, 43, 44, 45, 49, 50, 51, 52, 53, 54, 55, 56 https://x.com/ostrisai/status/1957748358451503166
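The depth-pruning step described above amounts to dropping a fixed set of transformer block indices from the 60-block stack and keeping the surviving 30 in order. A minimal sketch, with string placeholders standing in for real DiT blocks and `prune_blocks` as a hypothetical helper name:

```python
# Block indices removed in the tweet's experiment (30 of 60).
REMOVED = {2, 3, 4, 5, 7, 8, 10, 11, 12, 13, 14, 15, 16, 21, 23, 24,
           40, 41, 42, 43, 44, 45, 49, 50, 51, 52, 53, 54, 55, 56}

def prune_blocks(blocks, removed=REMOVED):
    """Return the surviving blocks in their original order."""
    return [b for i, b in enumerate(blocks) if i not in removed]

# Stand-in for the model's 60-entry block list (e.g. an nn.ModuleList).
blocks = [f"block_{i}" for i in range(60)]
kept = prune_blocks(blocks)
```

Halving the block count is what takes the model from roughly 20B to roughly 10B parameters, since the transformer blocks hold most of the weights.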