“With Llama 3.2 we released our first-ever lightweight Llama models: 1B & 3B. These models empower developers to build personalized, on-device agentic applications with capabilities like summarization, tool use and RAG where data never leaves the device.
Meta will not immediately join EU’s AI Pact ahead of new law | Reuters
Llama 3.2 goes small and multimodal · Ollama Blog
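The Ollama post above covers running these small models locally. As a rough sketch of the on-device summarization flow described in the first quote, assuming Ollama's Python client and the llama3.2 model tag (both assumptions here, not part of Meta's announcement):

```python
# Minimal sketch: local summarization with a lightweight Llama 3.2 model via Ollama.
# Assumes `ollama pull llama3.2` has already been run; model tag and prompt are illustrative.
import ollama

document = "Meta Connect 2024 introduced Llama 3.2, Orion AR glasses, and new Ray-Ban features."

response = ollama.chat(
    model="llama3.2",  # small text model running locally; nothing leaves the machine
    messages=[
        {"role": "system", "content": "Summarize the user's text in one sentence."},
        {"role": "user", "content": document},
    ],
)
print(response["message"]["content"])
```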
“These lightweight Llama models were pretrained on up to 9 trillion tokens. One of the keys for Llama 1B & 3B however was using pruning & distillation to build smaller and more performant models informed by powerful teacher models. Pruning enabled us to reduce the size of extant
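The quote describes pruning and distillation only at a high level; the snippet below is a generic knowledge-distillation loss in PyTorch, not Meta's actual training recipe. The temperature, weighting, and function names are illustrative assumptions:

```python
# Generic knowledge-distillation loss sketch (illustrative, not Meta's recipe).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Blend soft-target KL against the teacher with hard-label cross-entropy."""
    # Soften both distributions with a temperature, then match them with KL divergence.
    soft_teacher = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(soft_student, soft_teacher, log_target=True, reduction="batchmean")
    kl = kl * temperature ** 2  # standard scaling so gradient magnitudes stay comparable

    # Ordinary next-token cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)), labels.view(-1))
    return alpha * kl + (1 - alpha) * ce
```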
“Zuck’s AI strategy in a nutshell: – Free frontier model for devs – Undercut closed-source rivals – Crowdsource best use cases for multimodal AI – Skip cloud wars; instead monetize biz/creator agents – Harvest fresh data/content for Meta ecosystem – Profit Zuck’s XR strategy in
“3. Meta is rolling out experimental AI features for Reels, including automatic video dubbing and lip-syncing, allowing anyone to create content across any language. Hat tip to MrBeast.
“8. Meta announced Orion: new AR glasses that ‘weave both AR and AI into everyday life’ Glasses just make sense for AI wearables” / X
“Meta announced Orion: “the world’s most advanced AR glasses ever made” > Meta AI and Orion are multimodal and can understand everything you see > Comes with eye, hand, and neural tracking > Video calls turn friends into lifelike avatars next to you 🤯
“7. Meta is also rolling out new AI improvements to the Ray-Ban Meta glasses! Highlights: -Can remember things you see and set reminders -Multimodal and can now scan QR codes -Can see what you do in real-time through video (!) -LIVE language translation
“Meta just announced a ton of new AI announcements across Meta AI, Llama, Ray-Bans, and more. Here’s everything important announced live from here @ Meta Connect: 1. Meta AI is getting its own voice mode!
“Got to hang out with Zuck and some of the world’s most talented creators this week. Thank you Meta for the incredible event and hospitality! A few from the notes: 1. Zuck is even more cool in person. When I saw him he dapped me up and we immediately started nerding about AI
“6. According to Meta, running the models locally can make prompts and responses “feel instantaneous” since the processing is done locally. But most importantly, since the processing is local, your data stays on your device and remains private.
“We’re seeing exciting performance from these new models, with results that outperform Gemma 2 2.6B and Phi 3.5-mini models on a range of tasks even at smaller sizes!
Worst idea in human history
“4. Meta is testing ‘Imagined for you’ AI-generated content that will show up on users’ Facebook and Instagram Feeds. You can “tap a post to take the content in a new direction” or “swipe to see more content imagined for you in real-time” AI-generated content, tailored to each
“Some early impressions of the ChatGPT Advanced Voice Mode: It’s very fast, there’s virtually no latency from when you stop speaking to when it responds. When you ask it to make noises it always has the voice “perform” the noises (with funny results). It can do accents, but when
“$META’s iPhone moment
“Here’s a sneak peek at Meta’s new small form glasses, called Orion. They’re fully standalone and feature eye, hand, and even neural tracking. Can’t wait to try these!
Meta Connect 2024: Quest 3S, Llama 3.2, & More | Meta Quest Blog | Meta Store
“Teleportation time! ICYMI photorealistic 3D scans have come to Quest VR headsets via a new Meta app called Hyperscape. Currently just a few sample scenes to stream to your headset, but soon there will be a capture app that supports 3D Gaussian splatting.
Llama 3.2
meta-llama (Meta Llama)
“The lightweight Llama 3.2 models shipping today include support for @Arm, @MediaTek & @Qualcomm to enable the developer community to start building impactful mobile applications from day one.
“I just pulled the numbers on vision-language benchmarks for Llama-3.2-11B (vision). Surprisingly, the open-source community at large isn’t behind in the lightweight model class! Pixtral, Qwen2-VL, Molmo, and InternVL2 all stand strong. OSS AI models have never been stronger. The
“At Connect, we announced Hyperscape technology, bringing photorealistic spaces into the metaverse with your mobile phone (scanning not yet available). We’ve launched a demo app so you can experience these high-fidelity spaces yourself!
“📣 Introducing Llama 3.2: Lightweight models for edge devices, vision models and more! What’s new? • Llama 3.2 1B & 3B models deliver state-of-the-art capabilities for their class for several on-device use cases — with support for @Arm, @MediaTek & @Qualcomm on day one. •
Llama 3.2: Revolutionizing edge AI and vision with open, customizable models
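For readers who want to try the 1B model themselves, a minimal sketch using Hugging Face transformers follows; the repo id meta-llama/Llama-3.2-1B-Instruct is a gated model, so access must be requested on Hugging Face first:

```python
# Quick-start sketch for the 1B instruct model via Hugging Face transformers.
# The repo is gated; request access and log in with `huggingface-cli login` beforehand.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Give me three on-device use cases for a 1B model."}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])  # last message is the model's reply
```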
meta-llama/llama-stack: Composable building blocks to build Llama Apps
“A few technical insights on our lightweight Llama 3.2 1B & 3B models. 🦙🧵” / X
“My analysis of Llama 3.2: 1. New 1B and 3B text only LLMs 9 trillion tokens 2. New 11B and 90B vision multimodal models 3. 128K context length 4. 1B and 3B used some distillation from 8B and 70B 5. VLM 6 billion img, text pairs 6. CLIP MLP GeLU + cross attention Long analysis:
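The architectural note in that analysis (a CLIP-style image encoder fused into the language model via cross attention) can be illustrated with a generic cross-attention fusion block. This is a toy sketch with made-up dimensions, not the actual Llama 3.2 vision adapter:

```python
# Generic cross-attention fusion sketch (illustrative only, not the Llama 3.2 implementation):
# text hidden states attend to image-encoder features, as the quoted analysis describes.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, text_dim=4096, image_dim=1280, num_heads=32):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, text_dim)  # project vision features into text space
        self.attn = nn.MultiheadAttention(text_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(text_dim)

    def forward(self, text_states, image_features):
        # Queries come from the language model; keys/values come from the image encoder.
        img = self.image_proj(image_features)
        attended, _ = self.attn(query=text_states, key=img, value=img)
        return self.norm(text_states + attended)  # residual connection around the fusion

fusion = CrossAttentionFusion()
text = torch.randn(1, 16, 4096)    # (batch, text tokens, hidden size)
image = torch.randn(1, 256, 1280)  # (batch, image patches, vision hidden size)
print(fusion(text, image).shape)   # torch.Size([1, 16, 4096])
```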
“🚀 Big news! We’re thrilled to announce the launch of Llama 3.2 Vision Models & Llama Stack on Together AI. 🎉 Free access to Llama 3.2 Vision Model for developers to build and innovate with open source AI.
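As a hosted alternative to running locally, here is a sketch of calling the 11B vision model through Together's Python SDK. The exact model id is an assumption about Together's naming and should be checked against their current model list:

```python
# Sketch of calling a hosted Llama 3.2 vision model through the Together Python SDK.
# The model id below is assumed, not confirmed; the client reads TOGETHER_API_KEY from the env.
from together import Together

client = Together()

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct-Turbo",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```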




