Image created with gemini-3.1-flash-image-preview and claude-sonnet-4-5. Image prompt: Jia Zhangke style observational realism photograph of a concrete bus shelter in transitional Beijing neighborhood with weathered bilingual signage, a chestnut horse standing naturally inside the shelter, one person in work clothes reading a phone, overcast flat daylight, desaturated grays and faded teals, half-demolished buildings in background, large white Chinese text overlay reading 智谱AI, documentary stillness, muted postindustrial palette, patient composition.

"GLM-5 scores 48.2% on WeirdML, beating Claude Sonnet 4.5 and tying gpt-oss-120b (high) for the best open model. This is a clear advance but still far from Opus-4.6 at 78% and gpt-5.2 at 72%." https://x.com/htihle/status/2023734346943775179

[2602.15763] GLM-5: from Vibe Coding to Agentic Engineering https://arxiv.org/abs/2602.15763

"From Vibe Coding to Agentic Engineering. GLM-5 is a foundation model designed to transition from vibe coding to agentic engineering. The model introduces novel asynchronous agent RL algorithms that enable learning from complex, long-horizon interactions. It also adopts DSA" https://x.com/omarsar0/status/2024122246688878644

🚀 Zhipu AI GLM-5: A Real Step Into the Top Tier? Zhihu contributor toyama nao offers a concise verdict: "A hard road upward: the stairway to godhood." 🔮 From recovery to contention: over the past six months (4.5 → 5.0), Zhipu has climbed back into China's first tier and now" https://x.com/ZhihuFrontier/status/2022161058321047681

Funny to see truthy-dpo show up randomly in /v1/completions (hallucination) requests to GLM-5, guess that dataset is still semi-useful! https://x.com/jon_durbin/status/2022291772617945546

GLM-5 is "Bigger, faster, better, and cheaper." @louszbd from @Zai_org broke down GLM-5 on @thursdai_pod with @altryne. New RL framework, DeepSeek sparse attention, 744B params, fully open source under MIT. Catch the full interview in the link below! https://x.com/wandb/status/2022389206572765697

GLM-5 Tech Report https://x.com/scaling01/status/2024050011164520683

"Introducing GLM-5 from @Zai_org, the best-in-class open-source model for systems engineering and long-horizon agents. AI natives can now use GLM-5 on Together AI and benefit from reliable inference for production-scale reasoning, coding, and agent workflows." https://x.com/togethercompute/status/2022354579858289052

"Presenting the GLM-5 Technical Report! https://t.co/ZTYEe7oM0Y After the launch of GLM-5, we're pulling back the curtain on how it was built. Key innovations include: – DSA Adoption: Significantly reduces training and inference costs while preserving long-context fidelity –" https://x.com/Zai_org/status/2023951884826849777
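The tech report's headline claim is that sparse attention cuts cost while keeping long-context quality. As a rough intuition for how a top-k sparse attention scheme works, here is a minimal NumPy sketch in which each query attends only to its `top_k` highest-scoring keys. This is an illustrative toy, not GLM-5's actual DSA implementation, whose details live in the report.

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k):
    """Toy top-k sparse attention: each query attends only to its
    top_k highest-scoring keys instead of the full key set.
    Illustration only -- not GLM-5's actual DSA mechanism."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)              # (Tq, Tk) full score matrix
    # Indices of everything EXCEPT the top_k keys per query row
    drop = np.argsort(scores, axis=-1)[:, :-top_k]
    masked = scores.copy()
    np.put_along_axis(masked, drop, -np.inf, axis=-1)
    # Softmax over the surviving top_k entries per row
    w = np.exp(masked - masked.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
T, d = 8, 4
q, k, v = rng.normal(size=(3, T, d))
out = topk_sparse_attention(q, k, v, top_k=2)
print(out.shape)  # (8, 4)
```

With `top_k` equal to the sequence length this reduces exactly to dense softmax attention; the savings come from computing (or gathering) only the selected key/value rows in a real kernel.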

"Really nice tech report, huge props to @Zai_org for still releasing these as they are very valuable for the open-source community. Nice to see many similarities with our recipe for intellect-3, excited for the further work on the RL recipe, already have some stuff cooking up" https://x.com/Grad62304977/status/2024170939248714118

"Re-OCR'd the complete 1771 Encyclopaedia Britannica (2,724 pages) with a single command on @huggingface Jobs. – 0.9B model (GLM-OCR) ~$0.002/page ~$5 total on an L4 GPU Before (old Tesseract OCR) → After:" https://x.com/vanstriendaniel/status/2024445900102258846
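The quoted numbers are easy to sanity-check: 2,724 pages at roughly $0.002 per page should land near the "~$5 total" figure.

```python
# Sanity check on the quoted OCR job cost from the tweet above.
pages = 2724          # full 1771 Encyclopaedia Britannica
cost_per_page = 0.002 # quoted ~$0.002/page on an L4 GPU
total = pages * cost_per_page
print(f"${total:.2f}")  # $5.45, consistent with the quoted "~$5 total"
```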

Discover more from Ethan B. Holland
