Image created with Ideogram v3. Image prompt: Late‑90s boy‑band cover “Beyond – ∞”: group inside glowing blue infinity loop; sleek black turtlenecks; virtual horizon grid; chrome Eurostile ‘Beyond’ title.

“MARK ZUCKERBERG: ‘WE BUILT A NEW THING FOR YOU’ There’s almost 1 billion people using Meta AI across our apps now, so we made a new standalone Meta AI app for you to check out. Meta AI is designed to be your personal AI. You open the app and you can talk to it about whatever… https://x.com/MarioNawfal/status/1917247516107772296

“The Leaderboard Illusion – Identifies systematic issues that have resulted in a distorted playing field of Chatbot Arena – Identifies 27 private LLM variants tested by Meta in the lead-up to the Llama-4 release https://x.com/arankomatsuzaki/status/1917400711882797144

[AINews] Llama 4’s Controversial Weekend Release • Buttondown https://buttondown.com/ainews/archive/ainews-llama-4s-controversial-weekend-release/

“Research reveals gaming of Chatbot Arena: companies test multiple private variants and cherry-pick results while hoarding 63% of community data. https://x.com/fdaudens/status/1917671335758594474

“Search in WhatsApp You can now send a WhatsApp message to 1-800-ChatGPT (+1-800-242-8478) to get up-to-date answers and live sports scores. Accessible everywhere ChatGPT is available. https://x.com/OpenAI/status/1916947244852646202

“💬 🤖 WhatsApp Agent Template Build AI agents for WhatsApp with this comprehensive template. LangGraph’s supervisor architecture enables text and image handling with enterprise-grade security and monitoring. Start building a WhatsApp AI agent today! Repo link in reply 👇 https://x.com/LangChainAI/status/1915797952418734096
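The supervisor architecture the template mentions boils down to one coordinating node that routes each incoming message to a specialist handler. A minimal, dependency-free sketch of that pattern (the handler names and routing rule here are illustrative assumptions, not LangGraph's actual API):

```python
# Supervisor-style routing for a WhatsApp-like bot: one coordinator
# dispatches each message to a text or image specialist.
# Illustrative sketch only; not LangGraph's real interface.

def handle_text(message: dict) -> str:
    # A real agent would call an LLM here.
    return f"text-agent: {message['body']}"

def handle_image(message: dict) -> str:
    # A real agent would run a vision model on the attached media.
    return f"image-agent: received {message['media_url']}"

def supervisor(message: dict) -> str:
    """Route each incoming message to the appropriate specialist."""
    if message.get("media_url"):
        return handle_image(message)
    return handle_text(message)

print(supervisor({"body": "hello"}))
print(supervisor({"body": "", "media_url": "https://example.com/pic.jpg"}))
```

In the real template the supervisor is itself an LLM node deciding which agent to invoke; the hard-coded `if` above just stands in for that decision step.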

“You can now forward any WhatsApp message to Perplexity: +1 (833) 436-3285 and get it fact checked instantly. This is super useful when WhatsApp groups are filled with a ton of forwarded messages which could be misleading. https://x.com/AravSrinivas/status/1917977286713758073

“That’s a wrap on the LlamaCon 2025 keynote! In just over two years, Llama has surpassed 1 billion downloads and established itself as the open ecosystem leader in AI. We’re continuing to support the growth and development of the Llama ecosystem with today’s announcements… https://x.com/AIatMeta/status/1917278290441822674

“We just announced a major leap forward in AI inference: Groq is partnering with Meta to accelerate the official Llama API, giving developers the fastest way to run the latest Llama models with no tradeoffs (starting with Llama 4). What developers get with Groq + Meta: 👉 Speeds… https://x.com/JonathanRoss321/status/1917621705503080554

“At LlamaCon 2025, Meta announced: —Standalone Meta AI app with a social ‘discover’ feed to take on ChatGPT —Llama API free preview —Llama Guard 4 (12B), LlamaFirewall, and Prompt Guard —Collaborations with Groq and Cerebras for faster inference https://x.com/rowancheung/status/1917473779069968505

How Meta understands data at scale – Engineering at Meta https://engineering.fb.com/2025/04/28/security/how-meta-understands-data-at-scale/

“Major updates from LlamaCon! We’re advancing AI security with new open-source Llama protection tools and new AI-powered solutions for the defender community. Developers can now access: — Llama Guard 4, a customizable safeguard that supports protections for text and image…” https://x.com/AIatMeta/status/1917271400118902860

“Meta released Llama Guard 4 and new Prompt Guard 2 models 🔥 Llama Guard 4 is a new model to filter model inputs/outputs both text-only and image 🛡️ use it before and after LLMs/VLMs! Prompt Guard 2 22M & 86M are smol models to prevent model jailbreaks and prompt injections ⚔ https://x.com/mervenoyann/status/1917503204826255730
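The "use it before and after LLMs/VLMs" advice above is a simple wrapping pattern: screen the prompt, screen the reply, and refuse if either fails. A sketch of that flow, where `is_safe` is a placeholder for a call to a classifier like Llama Guard 4 and `run_llm` stands in for the main model (neither is Meta's actual API):

```python
# Guard-before-and-after pattern: moderate both the user's input
# and the model's output. `is_safe` and `run_llm` are placeholders.

BLOCKED = "Sorry, I can't help with that."

def is_safe(text: str) -> bool:
    # Placeholder policy; a real deployment would call the guard model here.
    return "forbidden" not in text.lower()

def run_llm(prompt: str) -> str:
    # Placeholder for the actual LLM/VLM call.
    return f"answer to: {prompt}"

def guarded_chat(prompt: str) -> str:
    if not is_safe(prompt):      # screen the input
        return BLOCKED
    reply = run_llm(prompt)
    if not is_safe(reply):       # screen the output too
        return BLOCKED
    return reply

print(guarded_chat("What is the capital of France?"))
print(guarded_chat("a forbidden request"))
```

Checking the output as well as the input matters because a benign-looking prompt can still elicit an unsafe completion.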

“Don’t sleep on this! 🔥 @Meta dropped swiss army knives for vision with A2.0 license ❤️ > image/video encoders for vision language and spatial understanding (object detection etc) > VLM outperforms InternVL3 and Qwen2.5VL 🔥 > Gigantic video and image datasets 👏 https://x.com/mervenoyann/status/1915723394701467909

“Meet Solo Tech, one of the 10 international recipients of the second Llama Impact Grants. Solo Tech uses Llama to offer offline, multilingual AI support for underserved rural communities with limited internet access. This grant will help them equip 50 rural centers with AI… https://x.com/AIatMeta/status/1917727629601616030

“Today at LlamaCon, we announced the 10 international recipients of the second Llama Impact Grants! The Llama Impact Grants are aimed at fostering innovation and creating economic opportunities through open-source AI. This year’s recipients showcase a diverse range of solutions… https://x.com/AIatMeta/status/1917274585189568870

META: Unauthorized Experiment on CMV Involving AI-generated Comments : r/changemyview https://www.reddit.com/r/changemyview/comments/1k8b2hj/comment/mpk1u3c/

“📷 Hello Singapore! Meta is at #ICLR2025 EXPO 📷 Meta will be in Singapore this week for #ICLR25! Stop by our booth to chat with our team or learn more about our latest research. Things to know: 📷 Find us @ Booth #L03 (Rows 3-4, Columns L-M) in Hall 2. 📷 We’re sharing 50+… https://x.com/AIatMeta/status/1915437886209745338

Meta AI https://ai.meta.com/meta-ai/

“who’s up making things? built this instagram reel quotes automation. now time to turn it into a replit app. list of quotes -> short form video -> auto post to insta & tiktok. https://x.com/niconley/status/1913601082799927551

[2410.17564v1] DisenGCD: A Meta Multigraph-assisted Disentangled Graph Learning Framework for Cognitive Diagnosis https://arxiv.org/abs/2410.17564v1

“‘We built this place on open source…’ Meta Chief Product Officer Chris Cox took to the stage to kick off LlamaCon 2025, reflecting on our long legacy of open source contributions. 🧵 https://x.com/AIatMeta/status/1917353526088589409

Everything we announced at our first-ever LlamaCon https://ai.meta.com/blog/llamacon-llama-news/

“Qwen3-235B Base seems to be benefiting from its 94 layers compared to Llama-4 Maverick’s 48 layers or DeepSeek’s 61 layers, which are both much larger models https://x.com/scaling01/status/1916986267700506700

Meta previews an API for its Llama AI models | TechCrunch https://techcrunch.com/2025/04/29/meta-previews-an-api-for-its-llama-ai-models/

“meta.llama4-reasoning-17b-instruct-v1:0 https://x.com/btibor91/status/1917232574344384522

“It turns out that Meta had 27 different models on LM Arena prior to the launch of Llama 4, but they announced it as if they had one model that topped the leaderboard. An extreme example of benchmark hacking (which other labs also do to lesser degrees). https://x.com/emollick/status/1917435868702257538

“> Qwen3 drops > 235B total / 22B active > neck to neck with LLaMA-4-Maverick zucc in absolute shambles https://x.com/ns123abc/status/1916971024509280503

“Dynamic Qwen3 GGUFs are here! – Run them in llama.cpp, lmstudio and ollama nowww! 💥 https://x.com/reach_vb/status/1916982114462900726

“Qwen’s distillation teacher MoE has fewer total parameters (235B) than Meta’s Llama 4 Behemoth has active (288B). As a result, its still-much-smaller distills dunk on Scout viciously. Should have trained that behemoth ass to a fit condition first, huh https://x.com/teortaxesTex/status/1916971319800823932
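The jab in the last two quotes is just parameter-count arithmetic; using the figures quoted in the posts (parameter counts in billions):

```python
# Quick check of the comparison above, with the numbers as quoted.
qwen3_total, qwen3_active = 235, 22   # Qwen3 MoE: total / active params (B)
behemoth_active = 288                 # Llama 4 Behemoth: active params (B)

# Qwen3's *total* size is smaller than Behemoth's *active* size.
print(qwen3_total < behemoth_active)          # True

# Fraction of Qwen3's parameters active per token:
print(round(qwen3_active / qwen3_total, 3))   # 0.094
```

So roughly 9% of Qwen3's weights fire per token, while each Behemoth forward pass alone activates more parameters than Qwen3 contains in total.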

facebookresearch/MILS: Code release for “LLMs can see and hear without any training” https://github.com/facebookresearch/MILS

Discover more from Ethan B. Holland
