Image created with gemini-3.1-flash-image-preview and claude-opus-4.7. Image prompt: High-end product photograph of a tall glass parfait with layered matcha green, red bean, and vanilla soft-serve topped with the classic curl and a small golden mooncake stamped with a ‘Q’, wrapped in a red paper sleeve with bold brushstroke ‘QWEN’ lettering and a subtle gold ’75 — Est. 1951 Milford, DE’ band, soft directional studio light, shallow depth of field, glossy macro detail on honey-caramel drips and crushed almond cookie, landscape composition on a creamy backdrop.

🚀 Introducing FlashQLA: high-performance linear attention kernels built on TileLang. ⚡ 2-3× forward speedup. 2× backward speedup. 💻 Purpose-built for agentic AI on your personal devices. 💡 Key insights: 1. Gate-driven automatic intra-card CP. 2. Hardware-friendly algebraic …
https://x.com/Alibaba_Qwen/status/2049462666734026923

$3/million output tokens. Qwen 3.5 Plus is basically a frontier model. Let that sink in.
https://x.com/MatthewBerman/status/2049562998575075526

Alibaba’s Qwen3.6 27B is the new open weights leader under 150B parameters, scoring 46 on the Artificial Analysis Intelligence Index, but uses ~3.7x the output tokens and costs ~21x more than Gemma 4 31B (39) to run the full Intelligence Index. @Alibaba_Qwen has released two open …
https://x.com/ArtificialAnlys/status/2049881951260283097

Pi + local models are definitely really cool! Short demo to clean up my Desktop: > terminal 1: llama-server -hf unsloth/Qwen3.5-9B-GGUF:UD-Q4_K_XL > terminal 2: simply type “pi” and start talking to it
https://x.com/NielsRogge/status/2049128153658839324
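The two-terminal demo quoted above can be sketched as follows. This is a minimal sketch, not an official walkthrough: it assumes a recent llama.cpp build (whose `llama-server` supports pulling GGUF weights from Hugging Face via `-hf`) and that the `pi` coding agent is installed and configured to talk to a local OpenAI-compatible endpoint; the model reference is taken from the tweet.

```shell
# Terminal 1: serve the quantized model locally with llama.cpp.
# -hf downloads the GGUF weights from Hugging Face on first run,
# then exposes an OpenAI-compatible API (default: http://localhost:8080).
llama-server -hf unsloth/Qwen3.5-9B-GGUF:UD-Q4_K_XL

# Terminal 2: start the pi agent and begin talking to the local model.
pi
```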

Qwen
https://qwen.ai/blog?id=qwen-scope

Qwen 3.6 Flash
https://x.com/scaling01/status/2048730112636473792

Today we’re releasing Qwen-Scope 🔭, an open suite of sparse autoencoders for the Qwen model family. It turns SAE features into practical tools: 🎯 Inference — Steer model outputs by directly manipulating internal features, no prompt engineering needed 📂 Data — Classify & …
https://x.com/Alibaba_Qwen/status/2049861145574690992

This is where we are right now. And I’m not gonna lie, it feels pretty magical 🧚‍♀️ Qwen3.6 27B running inside of Pi coding agent via Llama.cpp on the MacBook Pro. For non-trivial tasks on the @huggingface codebases, this feels very, very close to hitting the latest Opus in Claude …
https://x.com/julien_c/status/2047647522173104145

Discover more from Ethan B. Holland
