a llama wearing a beret. large text label reads “Open Source” --chaos 20 --ar 4:3 --style raw --personalize ytt1577 --v 6.1
“Switzerland now requires all government software to be open source
Open source AI is the path forward | Hacker News
Switzerland federal government requires releasing its software as open source | ZDNET
“A picture worth billions of parameters! How open-weight models are closing the gap with closed-source ones. h/t @maximelabonne
“📈EDA-GPT Your OpenSource Data Analysis Companion EDA GPT streamlines the data analysis process, allowing users to effortlessly explore, visualize, and gain insights from their data. Has a great looking UI and lots of configurability!
Apple
Apple shows off open AI prowess: new models outperform Mistral and Hugging Face offerings | VentureBeat
Cohere
AI startup Cohere sees valuation soar to $5.5B after new funding round | Seeking Alpha
“Today, we’re introducing Rerank 3 Nimble: the newest foundation model in our Cohere Rerank model series, built to enhance enterprise search and RAG systems, which is ~3x faster than Rerank 3 while maintaining a high level of accuracy. It’s available only on Amazon SageMaker.
Cohere raises $500M to beat back generative AI rivals | TechCrunch
Hugging Face
“The @huggingface Hub serves over 6 petabytes and nearly 1 billion requests daily! And AI is just getting started.🚀 Huge shoutout to our amazing infra and Hub team! 👏
Meta/Llama
Meta attacks OpenAI’s business model as the AI race shifts – The Verge – https://www.theverge.com/2024/7/26/24206274/the-ai-race-big-shift-models-to-products
Pretty much what Leopold said… “Mark Zuckerberg argues that it doesn’t matter that China has access to open weights, because they will just steal weights anyway if they’re closed. Pretty remarkable.
“A lot of models were at least partially trained on GPT-4 outputs (against their policy). It is why so many identify as GPT-4 when pushed and tell the same jokes as GPT-4. A subtle effect of this new policy is the next generations of models will have a more Llama-like personality” / X
Mark Zuckerberg – Open Source AI is the Path Forward In the early… | Facebook
“The 70b is really encroaching on the 405b’s territory. With that close performance the utility of big models would be just to distill from it.” / X
Llama: “This is a big one. A frontier-class model available for all. Expect very cheap intelligence (of a sort) applied to all sorts of key problems as fine-tuned models are developed. But also expect governments & scammers to quickly breach the guardrails and use it in sketchier ways.” / X
“OpenAI models have not been improving significantly, which means Meta’s open-weight models will catch up” / X
“Kind of wild to think meta is training and releasing open weights models faster than OpenAI can release closed models. Dude is already cooking llama 4” / X
“The power of GPT-4 in the palm of our hands. A truly historic moment. Congrats to the Meta team!!” / X
“Huge congrats to @AIatMeta on the Llama 3.1 release! Few notes: Today, with the 405B model release, is the first time that a frontier-capability LLM is available to everyone to work with and build on. The model appears to be GPT-4 / Claude 3.5 Sonnet grade and the weights are” / X
The Llama 3 Herd of Models | Research – AI at Meta
“Exclusive: Meta just released Llama 3.1 405B — the first-ever open-sourced frontier AI model, beating top closed models like GPT-4o across several benchmarks. I sat down with Mark Zuckerberg, diving into why this marks a major moment in AI history. Timestamps: 00:00 Intro
Meta releases Llama 3.1 open-source AI model to take on OpenAI – The Verge
Llama 3.1
Introducing Llama 3.1: Our most capable models to date
“Meta shakes up the AI space with Llama 3.1! 🦙💥 Key highlights: 1. 🏆 405B model claims to match or beat GPT4 & Claude 3.5 2. 🔓 New license allows using outputs to train other LLMs Tech specs: • 🌍 8 languages supported (English, French, German, Hindi, Italian, Portuguese,” / X
Download Llama
“Starting today, open source is leading the way. Introducing Llama 3.1: Our most capable models yet. Today we’re releasing a collection of new Llama 3.1 models including our long awaited 405B. These models deliver improved reasoning capabilities, a larger 128K token context
“We’ve also updated our license to allow developers to use the outputs from Llama models — including 405B — to improve other models for the first time. We’re excited about how this will enable new advancements in the field through synthetic data generation and model distillation” / X
“Potential benchmark leaks for a new series of Llama 3.1 models, including a 405 bn param version. Unconfirmed, however the 70B one matches GPT-4 levels. Especially notable given it is 6x smaller. Also to note that this is the base model not instruct. And many of these
“Things people are overlooking from Llama 3.1 release – More permissive license: allows training on model outputs – Prompt Guard: first BERT-based model that can classify prompt injection and jailbreaking – Multilingual: German, French, Hindi, Thai, …
“Tomorrow is a pivotal day in the world of AI, with the release of LLaMA 3.1 405b. Suddenly, the world will have access to an open-source model considered SOTA. It BEATS GPT4o on many benchmarks. What a time to be alive.
“Build your own mixture-of-agents using LlamaIndex! In this video @1littlecoder introduces “mixture of agents” – a novel approach using multiple local language models to potentially outperform single models, even surpassing GPT-4 on some benchmarks. It includes a step-by-step
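The “mixture of agents” pattern described above can be sketched in a few lines: several proposer models answer the same prompt independently, then an aggregator synthesizes their drafts. This is a stdlib-only illustration of the data flow, not the LlamaIndex implementation; the proposer and aggregator functions are hypothetical stubs standing in for real LLM calls.

```python
# Minimal sketch of the mixture-of-agents pattern.
# In practice, each proposer would be a local LLM and the
# aggregator another LLM prompted with all the drafts.

def proposer_a(prompt: str) -> str:
    return f"A's answer to: {prompt}"

def proposer_b(prompt: str) -> str:
    return f"B's answer to: {prompt}"

def aggregator(prompt: str, drafts: list[str]) -> str:
    # A real aggregator would synthesize the drafts with a model;
    # here we just join them to show how the data flows.
    joined = " | ".join(drafts)
    return f"Synthesized from [{joined}] for: {prompt}"

def mixture_of_agents(prompt: str) -> str:
    drafts = [p(prompt) for p in (proposer_a, proposer_b)]
    return aggregator(prompt, drafts)

print(mixture_of_agents("What is 2+2?"))
```

The key design point is that proposers run independently (and could run in parallel across local models), while only the aggregation step sees every draft.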
“Now we know which expert to consult. Macro economics problem? Ask Llama. Micro is more of a Claude or ChatGPT thing.
“Llama 3.1 is eating away at any advantage these closed models have… literally you have access to state of the art model through together api or fireworks for $5/1M. Then you have access to those weights and can fine tune the model and deploy the smaller version yourself… using” / X
“If this image is real comparing Llama-3.1 405/70/8b against gpt4o – we have SOTA Frontier Models available Open Source now:
“Llama-3 400b benchmarks got leaked on Reddit. It looks like it beat GPT-4o, which is fantastic. Not sure if they also beat Sonnet 3.5, but I think they come pretty close We will know more once we run the model on Livebench AI. It would be super cool if an open-source model
“Llama-3 405b is officially dropping tomorrow. Seeing rumors that it’s already out on the Internets. Size: 820 GB.” / X
“Our Llama 3.1 405B is now openly available! After a year of dedicated effort, from project planning to launch reviews, we are thrilled to open-source the Llama 3 herd of models and share our findings through the paper: 🔹Llama 3.1 405B, continuously trained with a 128K context
“The training of Llama 3.1 required more computation than some of the thresholds above which some regulators claim AI becomes dangerous. Quote from the piece: “thus far, those who argued for strict regulation of AI models based on compute thresholds, and who supported policies to” / X
“🤖💡Just tried out @AymericRoucher’s new LLaMA-3.1 70B agent for data analysis. Impressive stuff. 🚢📊 Fed it the Titanic passenger dataset with minimal instructions. The agent autonomously dug in, tested hypotheses, and reached some intriguing conclusions: “Lower class
“All the current models are getting into a good grad school, especially in the humanities. This includes the small, open Llama 3.1 70B model. Nice gains over the previous generations. (Yes, human tests are never a great way to judge models, but still interesting)
“Our big release today lets you get structured extraction capabilities in any LLM-powered ETL, RAG, and/or agent pipeline – with full support for async and streaming ✨ Simply define a Pydantic object, attach it to your LLM (`as_structured_llm(…)`). This allows you to do
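The structured-extraction flow described above boils down to: define a schema class, attach it to the LLM, and get validated objects back instead of raw text. A stdlib-only sketch of the final parsing step is below; the `Invoice` schema and the JSON payload are hypothetical, and the `as_structured_llm` comment reflects the tweet’s description rather than a tested call.

```python
from dataclasses import dataclass
import json

@dataclass
class Invoice:
    vendor: str
    total: float

# Per the tweet, LlamaIndex attaches a Pydantic schema to an LLM,
# roughly:  sllm = llm.as_structured_llm(output_cls=Invoice)
# so completions come back as validated objects. Here we emulate
# only the parsing step on a hypothetical model response:
raw = json.loads('{"vendor": "Acme Corp", "total": 123.45}')
invoice = Invoice(**raw)
print(invoice)
```

In a real pipeline a Pydantic `BaseModel` would also validate types and reject malformed responses, which is what makes the extraction safe to use in ETL or agent steps.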
“Zuck believes there could be more AI agents than people in the world. During our conversation, I asked Mark about his long-term vision of AI and AGI in the future. His response: Zuckerberg: “Our vision is that there should be a lot of different AI out there and AI services,
“Mark is an incredible CEO. What particularly stood out to me from our conversation was how deeply he thinks about open source and how it’ll benefit Meta (and the world) in the long term. A few of my favorite moments/quotes: On the future of AI agents: “I think we’re going to” / X
Mark Zuckerberg on Llama 3.1, Open Source, AI Agents, Safety, and more – YouTube
Exclusive interview with Mark Zuckerberg
Open Source AI Is the Path Forward | Meta
“What can you do with Llama quality and Groq speed? You can do Instant. That’s what. Try Llama 3.1 8B for instant intelligence on
Alex Cheema – e/acc on X: “2 MacBooks is all you need. Llama 3.1 405B running distributed across 2 MacBooks using @exolabs_ home AI cluster https://t.co/MLm47UR0B7” / X – https://twitter.com/ac_crypto/status/1815969489990869369
“Running GPT-4 class models locally off a thunderbolt 4 network of macbook pros / mac pros is pretty fucking dope. That level of intelligence – totally your own, with your own private data, all day inference with extremely low latency. It’s pretty wild how good we have it
“The mix of speed + smarts on display has me shook. Rare AI demo to have this effect 🤯 This is what iteration at the speed of thought looks like. Like, how can this tech not absolutely transform knowledge work?
“If these stats are real, arguably the top AI model is going to be open weights and available for free for all as of this week Every national government, organization & company worldwide will have access to the same set of AI capabilities as everyone else. It will be interesting
“Wow. After only one day of Llama 3.1 405b, French startup Mistral AI dropped LARGE 2. It’s ANOTHER open-source flagship AI model that scores close to Llama 3.1 405b and even surpasses it on coding benchmarks while being much smaller at 123b. Benchmarks vs. Llama 3.1 405b: –
“Mistral just dropped Mistral Large 2, their new flagship model 🔍 128k context window 🌎 11 languages: French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean 💻 80+ coding languages 🧠 123 billion parameters 📜 Mistral Research
“How to fine-tune and evaluate a @MistralAI language model to detect factual inconsistencies and hallucinations in text summaries with @weights_biases:
“Due to popular demand, I’ve updated this figure to include DeepSeek-V2 and Mistral Large 2. It’s also more zoomed for readability.
Mistral shocks with new open model Mistral Large 2, taking on Llama 3.1 | VentureBeat
Large Enough | Mistral AI | Frontier AI in your hands
“Today, we release Mistral Large 2, the new version of our largest model. Mistral Large 2 is a 123B-parameter model with a 128k context window. On many benchmarks (notably in code generation and math), it is superior or on par with Llama 3.1 405B. Like Mistral NeMo, it was trained” / X
“You can use Mistral Large 2 on Le Chat — it’s free!
“On HumanEval and on MultiPL-E, Mistral Large 2 outperforms Llama 3.1 405B instruct, and scores just below GPT-4o. On MATH (0-shot, without CoT) it only falls behind GPT-4o. (2/N)
“On Multilingual MMLU, the performance of Mistral Large 2 significantly outperforms Llama 3.1 70B base (+6.3% average over 9 languages) and is on par with Llama 3 405B (-0.4% below). (3/N)
Mistral
“By the end of this blog post, you will have learnt all the new goodies accompanying the latest macOS release AND successfully run a 7B parameter model using less than 4GB of memory on your Mac.” Game-changer for local AI? Can’t wait to try this! Brilliant work by
Qwen
“Synthetic data can beat its teacher! The AI-MO team released their winning dataset with an additional fine-tuned @Alibaba_Qwen 2 model that approaches or surpasses @OpenAI GPT-4o and @AnthropicAI Claude 3.5 in math competitions. 👀 There was a sentiment that fine-tuned models
Other Open Source News
“In my experience, the accuracy gap between open-source and proprietary is negligible now and open-source is cheaper, faster, more customizable & sustainable for companies! No excuse anymore not to become an AI builder based on open-source AI (vs outsourcing to APIs)! https://twitter.com/ClementDelangue/status/1816036185799619066

Heads up! You’ve scrolled to the end of this category. There may have been just one or two links (above), so go back up and double check to be sure you didn’t quickly scroll down past it.
Be Sure To Read This Week’s Main Post:
This week’s executive overview and top links are here:
AI News #43: Week Ending 07/26/2024 with Executive Summary and Top 97 Links
The post you just read is a deep-dive extension of my weekly newsletter, This Week In AI, an executive summary of the top things to know in AI. Each week, I create an accessible overview for laypeople to feel confident they are conversant with the week’s AI developments. I include a curated list of must-click links of the week, to offer everyone a hands-on opportunity to explore the most intriguing updates in artificial intelligence across various categories, including robotics, imagery, video, AR/VR, science, ethics, and more. Beyond the overview, I post these topic-based deeper dives (below). If you haven’t read this week’s overview, I recommend starting there.
- Agents/Copilots
- Amazon
- Apple
- Artificial General Intelligence (AGI)
- Augmented and Virtual Reality (AR/VR)
- Autonomous Vehicles
- AI Audio
- Business and Enterprise AI
- Chips and Hardware
- Consumer Products
- Education
- Ethics/Legal/Security
- Images/Photos
- International AI News
- Locally Run AI Models
- Mobile
- Meta
- Microsoft
- OpenAI
- Open Source
- Podcasts/YouTube
- Publishing and News
- Retrieval-Augmented Generation (RAG) News
- Robots and Embodiment
- Safe Intelligence, Inc.
- Science and Medicine
- Video
- Vision/Multimodality
- X/Twitter/Grok
- Tech and Development
Credits/Sources

Most of these weekly links come from just a few prolific oversharing sources. Please follow them, as they work hard to find the news each week and they make it a lot easier for me to compile.
- Robert Scoble: https://x.com/Scobleizer
- Ethan Mollick: https://www.linkedin.com/in/emollick/
- Alan Thompson: https://lifearchitect.ai/
- Theoretically Media: https://www.youtube.com/@TheoreticallyMedia
- The Rundown: https://www.therundown.ai/
- Bilawal Sidhu: https://twitter.com/bilawalsidhu/
- TLDR: https://tldr.tech/ai
- Jeremiah Owyang: https://twitter.com/jowyang
- Nick St. Pierre: https://twitter.com/nickfloats
- Dr. Jim Fan: https://twitter.com/DrJimFan
- All About AI: https://www.youtube.com/@AllAboutAI
- Marshall Kirkpatrick: https://aitimetoimpact.com/
- AI News (Smol Talk): https://buttondown.email/ainews/archive/
- Andrej Karpathy: https://x.com/karpathy
- Brett Adcock: https://x.com/adcock_brett
- Florent Daudens: https://x.com/fdaudens
- Ate-a-Pi: https://x.com/8teAPi
- Francesco Marconi: https://x.com/fpmarconi
For previous issues, please visit the archives!

Thanks for reading!
