Image created with gemini-3.1-flash-image-preview (prompt written by claude-opus-4.7). Image prompt: High-end product photograph of a tall vanilla soft-serve sundae in a blue-and-white willow-patterned porcelain bowl, drizzled with golden caramel ribbons and lychee syrup, topped with red bean, mochi pearls, and a small dipped chocolate cone tilted like a pagoda, wrapped in a crisp paper sleeve with bold hero lettering reading ‘ALIBABA’ in a fusion of Dairy Queen script and brush calligraphy. Soft directional studio light, glossy macro detail on toppings, shallow depth of field, landscape composition, a tiny red spoon beside it stamped ‘Est. 1951 — Milford, DE 75’.
How do people seek guidance from Claude? We looked at 1M conversations to understand what questions people ask, how Claude responds, and where it slips into sycophancy. We used what we found to improve how we trained Opus 4.7 and Mythos Preview.
https://x.com/AnthropicAI/status/2049927618397614466
Tencent has released Hy3-preview, an open weights reasoning model scoring 42 on the Artificial Analysis Intelligence Index, trailing recent open weights peers. Hy3-preview is the latest model from @TencentHunyuan. It is a 295B total / 21B active parameter Mixture-of-Experts
https://x.com/ArtificialAnlys/status/2049852417316143393
Yesterday, we shared a chart showing 80% of Claude users live in $100k+ households, more than any other major AI service. But Claude’s user base is smaller than other AI services, so this isn’t the same as being the most popular service among high-income households.
https://x.com/EpochAIResearch/status/2047423836904460328
🚀 Introducing FlashQLA: high-performance linear attention kernels built on TileLang. ⚡ 2-3× forward speedup. 2× backward speedup. 💻 Purpose-built for agentic AI on your personal devices. 💡 Key insights: 1. Gate-driven automatic intra-card CP. 2. Hardware-friendly algebraic
https://x.com/Alibaba_Qwen/status/2049462666734026923
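The tweet above is truncated, but the core trick that linear attention kernels exploit is associativity: with a positive feature map φ, attention becomes φ(Q)(φ(K)ᵀV), which costs O(n·d²) instead of the O(n²·d) of softmax attention. A minimal numpy sketch of that identity (the elu+1 feature map and all names here are illustrative assumptions, not FlashQLA's actual implementation):

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized linear attention: out_i = phi(q_i) (K^T V) / (phi(q_i) . sum_j phi(k_j)).
    Uses elu(x)+1 as a commonly chosen positive feature map (an assumption here)."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, elementwise
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                 # (d, d_v): accumulated once, reused for every query
    Ksum = Kf.sum(axis=0)         # (d,): normalizer accumulator
    num = Qf @ KV                 # (n, d_v)
    den = Qf @ Ksum + eps         # (n,)
    return num / den[:, None]

rng = np.random.default_rng(0)
n, d, dv = 8, 4, 4
Q, K, V = rng.normal(size=(n, d)), rng.normal(size=(n, d)), rng.normal(size=(n, dv))
out = linear_attention(Q, K, V)   # same result as forming the full (n, n) score matrix
```

Because `Kf.T @ V` and `Kf.sum(axis=0)` are running accumulators, the same math supports streaming/causal variants, which is what makes it attractive for on-device agentic workloads.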
$3/million output tokens. Qwen 3.5 Plus is basically a frontier model. Let that sink in.
https://x.com/MatthewBerman/status/2049562998575075526
Alibaba’s Qwen3.6 27B is the new open weights leader under 150B parameters, scoring 46 on the Artificial Analysis Intelligence Index, but it uses ~3.7x the output tokens and costs ~21x more than Gemma 4 31B (39) to run the full Intelligence Index. @Alibaba_Qwen has released two open
https://x.com/ArtificialAnlys/status/2049881951260283097
Pi + local models are definitely really cool! Short demo to clean up my Desktop: > terminal 1: llama-server -hf unsloth/Qwen3.5-9B-GGUF:UD-Q4_K_XL > terminal 2: simply type “pi” and start talking to it
https://x.com/NielsRogge/status/2049128153658839324
Qwen
https://qwen.ai/blog?id=qwen-scope
Qwen 3.6 Flash
https://x.com/scaling01/status/2048730112636473792
Today we’re releasing Qwen-Scope 🔭, an open suite of sparse autoencoders for the Qwen model family. It turns SAE features into practical tools: 🎯 Inference — Steer model outputs by directly manipulating internal features, no prompt engineering needed 📂 Data — Classify &
https://x.com/Alibaba_Qwen/status/2049861145574690992
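Qwen-Scope's actual API isn't shown in the tweet, so as context for the "steer outputs by manipulating internal features" claim, here is a generic numpy sketch of what SAE feature steering means: encode a residual-stream activation into sparse features, boost one feature, and decode back. All weights and names below are toy stand-ins, not Qwen-Scope's interface:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 16, 64

# Toy SAE weights (random stand-ins for a trained sparse autoencoder).
W_enc = rng.normal(size=(d_model, d_sae)) / np.sqrt(d_model)
W_dec = rng.normal(size=(d_sae, d_model)) / np.sqrt(d_sae)
b_enc = np.zeros(d_sae)

def steer(activation, feature_idx, strength):
    """Add `strength` to one SAE feature, then decode: the steered activation
    is the reconstruction shifted along that feature's decoder direction."""
    feats = np.maximum(activation @ W_enc + b_enc, 0.0)  # ReLU encode
    feats[feature_idx] += strength                       # boost one feature
    return feats @ W_dec                                 # decode back to d_model

x = rng.normal(size=d_model)
steered = steer(x, feature_idx=3, strength=5.0)
```

Because the decoder is linear, steering by strength s shifts the reconstruction by exactly s times that feature's decoder row, which is why this works as a prompt-free control knob.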
This is where we are right now. And I’m not gonna lie, it feels pretty magical 🧚♀️ Qwen3.6 27B running inside of Pi coding agent via Llama.cpp on the MacBook Pro. For non-trivial tasks on the @huggingface codebases, this feels very, very close to hitting the latest Opus in Claude
https://x.com/julien_c/status/2047647522173104145




