Image created with Flux Pro v1.1 Ultra. Image prompt: Giant “100” as pure white negative‑space cutout dominating the frame; minimalist poster style; stylized llama silhouette built from tokens grazing across the cutout; rust‑cream backdrop; high contrast, crisp edges, soft studio light, no other text, no logos
A big milestone for Hermes. We did a lot of work to make a frontier-level open model that does not dictate what expression you can elicit from it. Super strong at math, coding, STEM, and creativity. Model weights: https://x.com/Teknium1/status/1960420619620901135
Hermes 4 – Nous Research https://hermes4.nousresearch.com/
Hermes 4 technical breakdown:
▫️ Open-source LLM
▫️ Fine-tune of Llama 3.1
▫️ 405B & 70B params
▫️ Hybrid reasoning
▫️ Trained on 3.5 million reasoning samples
▫️ Trained on 192 NVIDIA B200 GPUs
▫️ Uncensored
▫️ Steerable, aligned to the user
▫️ Creativity enhanced
https://x.com/vectro/status/1960734604601569560
Nous Research presents Hermes 4, our latest line of hybrid reasoning models. https://x.com/NousResearch/status/1960416954457710982
Fourth model launch of the day 🔥 – introducing Hermes 4 from @NousResearch. Hermes 4 is trained for steerability and lower refusal rates, topping RefusalBench and beating Grok 4. https://x.com/OpenRouterAI/status/1960436262923592065
Ollama v0.11.7 is available with DeepSeek v3.1 support. You can run it locally with all its features, including hybrid thinking, across Ollama’s new app, CLI, API, and SDKs. Ollama’s Turbo mode, currently in preview, has also been updated to support the model! https://x.com/ollama/status/1960463433515852144
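For context on the API mentioned above, here is a minimal sketch of calling a local Ollama server via its `/api/chat` endpoint, using only the standard library. The model tag `deepseek-v3.1` and the `think` flag for toggling hybrid thinking are assumptions — check `ollama list` and your Ollama version's API docs for the exact names.

```python
import json
import urllib.request

# Hypothetical model tag -- verify with `ollama list`.
MODEL = "deepseek-v3.1"

def build_chat_request(prompt: str, think: bool = True) -> dict:
    """Build a JSON body for Ollama's /api/chat endpoint.

    The `think` flag (assumed name) asks a hybrid-thinking model to
    return its reasoning separately from the final answer.
    """
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
        "think": think,
    }

def chat(prompt: str, host: str = "http://localhost:11434") -> dict:
    """POST the request to a locally running Ollama server."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Usage would be `chat("Why is the sky blue?")` with the Ollama server running locally; the same payload shape works through Ollama's SDKs.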
🚀 LLM Compressor v0.7.0 is here! This release brings powerful new features for quantizing large language models, including transform support (QuIP, SpinQuant), mixed precision compression, improved MoE handling with Llama4 support, and more. Full blog: https://x.com/vllm_project/status/1960432740672921934
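The post doesn't show LLM Compressor's API, so as background only, here is a minimal NumPy sketch of the idea underlying weight quantization: symmetric per-channel int8 rounding, where each output row gets its own scale so large channels don't crush the resolution of small ones. This illustrates the concept, not LLM Compressor's actual implementation.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-channel int8 quantization of a 2-D weight matrix.

    Returns the int8 codes and one float scale per row.
    """
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero on all-zero rows
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Reconstruct approximate float weights from codes and scales."""
    return q.astype(np.float32) * scale

# Round-trip a small random weight matrix.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 16)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
```

Mixed-precision compression, as mentioned in the release, extends this by keeping sensitive layers at higher precision while quantizing the rest more aggressively.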