
GLM-5.1 by @Zai_org is now #3 in Code Arena – surpassing Gemini 3.1 and GPT-5.4, and now on par with Claude Sonnet 4.6. The first frontier level open model to break into the top 3. It’s a major +90 point jump over GLM-5, and +100 over Kimi K2.5 Thinking. Huge congrats to
https://x.com/arena/status/2042611135434891592

GLM-5.1 is here! Try it on OpenClaw 🦞🦞🦞
ollama launch openclaw --model glm-5.1:cloud
Claude Code: ollama launch claude --model glm-5.1:cloud
Chat with the model: ollama run glm-5.1:cloud
https://x.com/ollama/status/2041556572334428576
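Beyond the CLI commands above, Ollama also exposes an OpenAI-compatible HTTP API on its default port 11434, so the same model can be queried from a script. A minimal sketch, assuming a local Ollama daemon with `glm-5.1:cloud` available (the endpoint path follows Ollama's standard `/v1/chat/completions` interface):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(prompt: str, model: str = "glm-5.1:cloud") -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for the local Ollama daemon."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Write a haiku about lobsters.")
# Sending the request requires a running daemon:
# resp = urllib.request.urlopen(req)
# print(json.load(resp)["choices"][0]["message"]["content"])
print(req.get_full_url())
```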

🎉 Congrats to @Zai_org on releasing GLM-5.1, SGLang is ready to support on day-0! GLM-5.1 is a next-gen flagship built for agentic engineering: 🏆 SWE-Bench Pro: #1 open source, #3 globally 🔨 Terminal-Bench 2.0: top-ranked on real-world terminal tasks ⏳ Long-Horizon: runs
https://x.com/lmsysorg/status/2041553264685334588

🎉 Day-0 support for GLM-5.1 in vLLM! Congrats to @Zai_org on this next-gen flagship model built for agentic engineering, with stronger coding and sustained long-horizon task performance. Get started 👇 📖 Recipe:
https://x.com/vllm_project/status/2041559268185526375

🚀 GLM-5.1 is now live on Novita AI @Zai_org’s next-gen flagship for agentic engineering, with day-0 support from Novita. ✨ Leads on SWE-Bench Pro, NL2Repo, and Terminal-Bench ✨ Stays effective over long horizons: hundreds of rounds, thousands of tool calls ✨ Function
https://x.com/novita_labs/status/2041558437843365932

GLM-5.1 can now be run locally!🔥 GLM-5.1 is a new open model for SOTA agentic coding & chat. We shrank the 744B model from 1.65TB to 220GB (-86%) via Dynamic 2-bit. Runs on a 256GB Mac or RAM/VRAM setups. Guide:
https://t.co/LgWFkhQ5rr GGUF:
https://x.com/UnslothAI/status/2041552121259249850
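The size reduction Unsloth quotes is easy to sanity-check. Treating 1.65 TB as 1650 GB (decimal units assumed), the shrink works out to roughly 86–87%, and the effective bit-width lands near 2.4 bits per weight, consistent with a dynamic ~2-bit scheme that keeps some layers at higher precision:

```python
def reduction_pct(original_gb: float, quantized_gb: float) -> float:
    """Percent size reduction from quantization."""
    return (1 - quantized_gb / original_gb) * 100

# BF16 weights (~1.65 TB) down to Dynamic 2-bit GGUF (~220 GB)
print(f"{reduction_pct(1650, 220):.1f}% smaller")  # ~86.7%, matching the quoted -86%

# Effective precision across the 744B parameters
bits_per_weight = 220e9 * 8 / 744e9
print(f"~{bits_per_weight:.2f} bits per weight")  # ~2.37
```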

GLM 5.1 is SOTA on SWE-Bench Pro. Not "SOTA among open models". SOTA.
https://x.com/nrehiew_/status/2041553534664200408

GLM 5.1 just became the #1 open-weight model on the Vals Index, unseating Kimi K2.5, and is #6 on the overall index.
https://x.com/ValsAI/status/2041570865721307623

GLM-5.1 by @Zai_org just launched in the Text Arena, and is now the #1 open model. It outperforms the next best open model, its predecessor, GLM-5, by +11 points and +15 over Kimi K2.5 Thinking. It shows strength in: – #1 open model in Longer Query (#4 overall) – #1 open model
https://x.com/arena/status/2041641149677629783

GLM-5.1 from @Zai_org is live on OpenRouter! GLM-5.1 shows a strong jump in long horizon task completion end to end. The model works independently to plan, execute, iterate, and improve upon its work throughout the task, delivering high quality results.
https://x.com/OpenRouter/status/2041551251708793154
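OpenRouter serves models through its OpenAI-compatible chat-completions endpoint, so trying GLM-5.1 there is a few lines of code. A sketch of the request body; note that the model slug `z-ai/glm-5.1` is an assumption based on OpenRouter's usual vendor/model naming, so check the model page for the exact id:

```python
import json
import os

# Assumed slug; verify against the OpenRouter model page.
MODEL = "z-ai/glm-5.1"
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def chat_payload(prompt: str, model: str = MODEL) -> dict:
    """Assemble an OpenRouter chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

headers = {
    "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
    "Content-Type": "application/json",
}
body = json.dumps(chat_payload("Plan a refactor of a legacy module."))
# POST `body` to API_URL with `headers` to run the request.
```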

GLM-5.1 is now available in Windsurf! Try it out and let us know what you think
https://x.com/windsurf/status/2042696652042178872

GLM-5.1 is the new open SOTA on SWE-Bench Pro Comes with an MIT license. Congrats @Zai_org!
https://x.com/NielsRogge/status/2041902317264322702

GLM-5.1: Towards Long-Horizon Tasks
https://z.ai/blog/glm-5.1

With GLM-5.1, https://t.co/nvW0zf0SAH maintains the #1 open model rank in Code Arena and is now within ~20 points of the top overall, while outperforming Claude Sonnet 4.6, Opus 4.5, GPT-5.4 High, and Gemini-3.1 Pro. Open models are now competitive at the frontier.
https://x.com/arena/status/2042643933768151485

Introducing GLM-5.1 from @Zai_org on Together AI. AI natives can now use GLM-5.1 on Together and benefit from reliable inference for production-scale agentic engineering and long-horizon coding workflows.
https://x.com/togethercompute/status/2042002522798235935

We added GLM 5.1 to our leaderboard! A small improvement over GLM 5, at the cost of slightly bigger reasoning chains. The improvement is enough though to push it above Step-3.5-Flash to become the new best open model on MathArena!
https://x.com/j_dekoninck/status/2041565875858239504

I often discuss my three-level vision for opening GLM to the community: first, we focus on accessibility by lowering the barrier to entry and removing unnecessary constraints so developers can truly explore the model; second, we provide a robust baseline that empowers everyone to
https://x.com/ZixuanLi_/status/2042495832755151068

Introducing GLM-5.1: The Next Level of Open Source – Top-Tier Performance: #1 in open source and #3 globally across SWE-Bench Pro, Terminal-Bench, and NL2Repo. – Built for Long-Horizon Tasks: Runs autonomously for 8 hours, refining strategies through thousands of iterations.
https://x.com/Zai_org/status/2041550153354519022

Strong release! GLM-5.1 is a DeepSeek-V3.2-like architecture (including MLA and DeepSeek Sparse Attention) but with more layers. And the benchmarks look better throughout! Looks like THE flagship open-weight model now.
https://x.com/rasbt/status/2041864806534086881

Wow, GLM-5.1 beat Opus 4.6, GPT-5.4, and Gemini 3.1 Pro on SWE-Bench Pro (58.4 vs 57.3 / 57.7 / 54.2) as an open-weight MIT-licensed model! The “open-source AI vs closed-source AI” gap is still ~6 months.
https://x.com/Yuchenj_UW/status/2041559747065999664
