Image created with Ideogram V2. Image prompt: A vibrant spring meadow with exaggerated blooming flowers in bright colors. Hidden comically in the middle is Meta’s Llama (the AI model mascot) wearing an unconvincing plant costume but its llama head and colorful design clearly visible. Code snippets float around in transparent bubbles. A few real llamas stand nearby looking confused at their AI counterpart. Woodland animals examine documentation papers that have blown around. The Meta logo peeks out from under a leaf pile. The whole scene is bathed in golden sunshine with lens flares. Vibrant colors and high detail. The word “LLAMA” integrated into the scene.

“Meta released two open-weight vision-language models, Llama 4 Scout and Llama 4 Maverick, and previewed a third, Llama 4 Behemoth. Built on a mixture-of-experts (MoE) architecture, these models offer greater efficiency by activating only a subset of parameters during inference.” https://x.com/DeepLearningAI/status/1911841914590015586

“Meta released the Llama 4 family of natively multimodal, open-source models—with context windows up to 10M tokens! Currently, the series has two MoE models: the 109B-param Scout and the 400B-param Maverick, with a third, the 2T-param Behemoth, currently in training.” https://x.com/adcock_brett/status/1911450182937346285
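The efficiency claim in the tweets above comes down to simple arithmetic: in an MoE model, only the router-selected experts run per token, so the active parameter count is a small fraction of the total. A quick sketch using the parameter counts quoted above (17B active parameters per token is the figure Meta reported for both models; treat these numbers as illustrative):

```python
def active_fraction(total_b: float, active_b: float) -> float:
    """Fraction of a model's weights actually used per forward pass."""
    return active_b / total_b

# Reported Llama 4 figures, in billions of parameters:
# Scout is 109B total / 17B active; Maverick is 400B total / 17B active.
llama4 = {
    "Scout": (109, 17),
    "Maverick": (400, 17),
}

for name, (total_b, active_b) in llama4.items():
    print(f"{name}: {active_b}B of {total_b}B params active "
          f"({active_fraction(total_b, active_b):.0%} per token)")
# Scout runs ~16% of its weights per token; Maverick only ~4%,
# which is why a 400B model can serve at roughly 17B-model cost.
```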

“GPT-4.1 benchmark results: GPT-4.1 scores worse than GPT-4, Opus, and Llama-3.1-70B (lol). The GPT-4.1 API version is WORSE than Optimus Alpha and Quasar Alpha (so the Quasar results were just fluff hype). GPT-4.1 mini scores worse than Qwen2.5 32B, Llama-4 Maverick, and Claude 3 Haiku.” https://x.com/scaling01/status/1911847193465471374

“Llama 4 quietly dropped from 1417 to 1273 Elo, on par with DeepSeek v2.5.” https://x.com/casper_hansen_/status/1911332387817931161
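For intuition about what a 144-point drop means, the standard Elo expected-score formula converts a rating gap into a head-to-head win expectation (LMArena fits a Bradley-Terry model, which is close in spirit; this is a back-of-the-envelope reading, not the leaderboard's exact methodology):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A 1417-rated model vs. a 1273-rated one (the two ratings cited above):
print(f"{expected_score(1417, 1273):.1%}")  # 69.6%
```

In other words, the pre-drop version of the model would be expected to beat the post-drop version in roughly 7 out of 10 head-to-head votes.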

“Start agents with giants like Llama 4, with only one line of code 🔥 @huggingface Inference Providers 🤝 smolagents” https://x.com/mervenoyann/status/1912527990015078777

“The relative weakness of Llama 4, given the stellar team & enormous resources (Meta spent more cash on H100 chips last year than was spent, inflation-adjusted, on the Manhattan Project), is surprising. Still very possible that future releases & reasoners close the gap, but for now…” https://x.com/emollick/status/1911898989978665131

“The release version of Llama 4 has been added to LMArena after it was found they had cheated, but you probably didn’t see it because you have to scroll down to 32nd place, which is where it ranks.” https://x.com/pigeon__s/status/1910705956486336586

“We’re partnering with @CerebrasSystems to bring the fastest Llama 4 experience right to you! 🔥 Join us tomorrow in our hackathon to build real-time systems, code agents/assistants, AND more! Bonus: we’re giving 20 USD in free inference credits and a one-month Pro subscription to all.” https://x.com/huggingface/status/1910801830126174632

“OpenAI have announced the availability of GPT-4.1 in the API, and of course we have day-0 support! To get it, just install the latest version of our OpenAI integration: pip install -U llama-index-llms-openai Learn more here:” https://x.com/llama_index/status/1911863053257445713

Discover more from Ethan B. Holland
