Image created with gemini-2.5-flash-image, with the prompt written by claude-sonnet-4-5. Image prompt: Cinematic wide shot of a grand Chinese marketplace styled as the Emerald City with floating lanterns and ornate stalls, every vendor and product outlined with glowing emerald segmentation lines, moody theatrical lighting with deep shadows, dramatic Wicked movie aesthetic
supervision 0.27.0 lets you parse and visualize Qwen3-VL object detection results. Prompt: "person between Albert and Marie". To answer this, Qwen3 needs prior visual knowledge of Marie Curie and Albert Einstein, and it needs to understand reference terms like "between". https://x.com/skalskip92/status/1990433442434031737
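Qwen-VL models typically return detections as a JSON array of `{"bbox_2d": [x1, y1, x2, y2], "label": ...}` objects, often wrapped in a markdown fence. As a stdlib-only sketch of the parsing step that supervision handles for you (the function name and fence-stripping logic here are illustrative, not supervision's API):

```python
import json

def parse_qwen_detections(raw: str):
    """Parse Qwen-VL style detection output into (boxes, labels).

    Assumes a JSON array of objects with "bbox_2d": [x1, y1, x2, y2]
    and "label": str, possibly wrapped in a ```json markdown fence.
    """
    text = raw.strip()
    if text.startswith("```"):
        # strip the markdown fence the model often wraps its JSON in
        text = text.split("```")[1]
        if text.startswith("json"):
            text = text[len("json"):]
    items = json.loads(text)
    boxes = [item["bbox_2d"] for item in items]
    labels = [item["label"] for item in items]
    return boxes, labels

raw = '```json\n[{"bbox_2d": [10, 20, 110, 220], "label": "person"}]\n```'
boxes, labels = parse_qwen_detections(raw)
```

In practice you would hand the parsed boxes and labels to supervision's detection and annotator objects for visualization, which is what the 0.27.0 release adds native support for.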
🚨 Leaderboard Update: new model provider in the Arena. @DeepCogito has released Cogito v2.1 (MIT licensed). 🔹 Top-10 open-source model for WebDev, rank #10. 🔹 Ties for rank #18 overall in WebDev. This puts Cogito v2.1 on par with community favorites like Qwen 3 Coder Plus and Kimi K2. https://x.com/arena/status/1991211903331496351
I love Qwen3-VL, but for some reason the 2B model blows up 80 GB of VRAM on simple SFT (NF4 QLoRA with flash installed). https://x.com/mervenoyann/status/1990172603437175147
10,000,000 users creating with Qwen Chat — and we’re just getting started. From here, let’s begin — https://x.com/Alibaba_Qwen/status/1990322403994657091
This weekend I evaluated the latest Qwen3-VL models for semantic object detection and built an HF Space to compare Qwen3/2.5/2 side by side. 👉 https://x.com/darius_morawiec/status/1990225022766719335
Today, we are officially open-sourcing a set of high-quality speculator models on the @huggingface Hub. Our first release includes Llamas, Qwens, and gpt-oss. In practice, you can expect 1.5-2.5× speedups on average, with some workloads seeing more than 4× improvements! https://x.com/_EldarKurtic/status/1991160711838359895




