![Flux[dev]: Facebook corporate headquarters. A sunny day in Silicon Valley. A futuristic robot, sleek smooth humanoid design. Smooth, glossy black faceplate with no visible facial features, high-tech, minimalist appearance. The robot's body is matte black or dark gray, with articulated joints and mechanical parts that resemble those of a human, including fingers. Mark Zuckerberg poses with the robot. In the background, "Facebook" is written above the entrance to the building.](https://ethanbholland.com/wp-content/uploads/2024/09/meta-2.png)
Zuckerberg touts Meta’s latest video vision AI with Nvidia CEO Jensen Huang | TechCrunch
“Although there is probably too much AI hype these days, I am excited about my Ray-Ban smart glasses for many reasons (e.g., listening to music, live streaming, image capture, etc.). The “killer app” is that these glasses are now powered by Meta’s Llama AI model! 1/3”
Call for Applications: Llama 3.1 Impact Grants
“Our SAM 2 pod with @nikhilaravi is out! Fun SAM 1 quote from guest cohost @josephofiowa: “I recently pulled statistics from the usage of SAM in @RoboFlow over the course of the last year. And users have labeled about 49 million images using SAM on the hosted side of the RoboFlow …””
“SAM 2 from Meta FAIR is the first unified model for real-time, promptable object segmentation in images & videos. Using the model in our web-based demo you can segment, track and apply effects to objects in video in just a few clicks. Try SAM 2 ➡️”
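For a sense of what “promptable” means in practice, here is a minimal sketch of point-prompted image segmentation with the `sam2` package from Meta’s GitHub release. The checkpoint id, image path, and click coordinates below are illustrative, not from the demo itself:

```python
# Point-prompted segmentation with SAM 2's image predictor
# (pip install from github.com/facebookresearch/sam2)
import numpy as np
import torch
from PIL import Image
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

image = np.array(Image.open("photo.jpg").convert("RGB"))
with torch.inference_mode():
    predictor.set_image(image)
    # One foreground click (label 1) at pixel (x=500, y=375)
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),
        point_labels=np.array([1]),
    )

best = masks[scores.argmax()]  # boolean mask of the clicked object
```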
“Idefics3-Llama is out! 💥 It’s a multimodal model based on Llama 3.1 that accepts an arbitrary number of interleaved images with text within a huge context window (10k tokens!) 😍 Link to demo and model in the next one 😏”
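As a rough idea of how such a model is called through `transformers` (assuming a recent version with Idefics3 support; the checkpoint id, image path, and prompt are illustrative), a single-image chat turn looks like this:

```python
# Minimal Idefics3 inference sketch via transformers
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceM4/Idefics3-8B-Llama3"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Image placeholders and text are interleaved inside one user turn
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What is in this image?"},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[Image.open("photo.jpg")],
                   return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```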
“It’s curious how Llama 405b’s performance drops by 5 percentage points when using standard simple-evals prompts instead of its native Llama 3.1 prompts. Other models show much less sensitivity to this prompt change and fall nicely along the 45-degree line.”
“📣 Today we’re opening a call for applications for Llama 3.1 Impact Grants! Until Nov 22, teams can submit proposals for using Llama to address social challenges across their communities for a chance to be awarded a $500K grant. Details + application ➡️”
“New smol-vision tutorial dropped: QLoRA fine-tuning IDEFICS3-Llama 8B on VQAv2 🐶 Learn how to efficiently fine-tune the latest IDEFICS3-Llama on visual question answering in this notebook 📖 Link in the next one 🤗”
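The core of a QLoRA setup like the one in that notebook is small: load the base model in 4-bit, then attach low-rank adapters so only a tiny fraction of parameters train. A hedged sketch, assuming recent `transformers`, `peft`, and `bitsandbytes`; the rank and target modules are illustrative choices, not necessarily the tutorial’s exact settings:

```python
# QLoRA sketch: 4-bit base model + LoRA adapters on attention projections
import torch
from transformers import BitsAndBytesConfig, Idefics3ForConditionalGeneration
from peft import LoraConfig, get_peft_model

model_id = "HuggingFaceM4/Idefics3-8B-Llama3"

# The "Q" in QLoRA: base weights stored in 4-bit NF4, compute in bf16
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = Idefics3ForConditionalGeneration.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)

# The "LoRA": small trainable low-rank adapters; the 4-bit base stays frozen
lora = LoraConfig(
    r=8,
    lora_alpha=8,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# Training on VQAv2 pairs would proceed from here with a standard
# transformers Trainer; data collation is omitted in this sketch.
```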
“@huggingface This is the direct successor of Meta-Llama-3-120B-Instruct, a self-merge of Llama 3 70B that produced great results in tasks like creative writing.”
“The methods from this paper were able to reliably jailbreak the most difficult target models with prompts that appear similar to human-written prompts. Achieves attack success rates > 93% for Llama-2-7B, Llama-3-8B, and Vicuna-7B, while maintaining model-measured perplexity < …”
Meta is reportedly offering millions to use Hollywood voices in AI projects
Meta courts celebs like Awkwafina to voice AI assistants ahead of Meta Connect – The Verge
“📌 Fantastic paper from @AIatMeta – substantial FLOPs savings while maintaining or improving performance with modality-aware mixture-of-experts (MoE) architecture designed for pre-training mixed-modal, early-fusion language models. MoMa introduces modality-specific expert …”
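The core idea of modality-aware MoE is that text and image tokens are routed only within their own pool of experts, rather than competing for one shared pool. The sketch below illustrates that partitioning with simple top-1 token-choice routing; MoMa itself uses expert-choice routing, so treat this as an illustration of the expert grouping, not the paper’s actual algorithm:

```python
# Illustrative modality-partitioned MoE layer in PyTorch
import torch
import torch.nn as nn

class ModalityAwareMoE(nn.Module):
    """Each modality gets its own expert pool and router; a token is
    routed (top-1 here) only among experts of its own modality."""

    def __init__(self, d_model=512, d_ff=2048,
                 n_text_experts=4, n_image_experts=4):
        super().__init__()

        def make_experts(n):
            return nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                              nn.Linear(d_ff, d_model))
                for _ in range(n)
            )

        self.experts = nn.ModuleDict({
            "text": make_experts(n_text_experts),
            "image": make_experts(n_image_experts),
        })
        self.routers = nn.ModuleDict({
            "text": nn.Linear(d_model, n_text_experts),
            "image": nn.Linear(d_model, n_image_experts),
        })

    def forward(self, x, modality):
        # x: (n_tokens, d_model); modality: "text" or "image"
        logits = self.routers[modality](x)          # (n_tokens, n_experts)
        weights, idx = logits.softmax(-1).max(-1)   # top-1 gate per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts[modality]):
            mask = idx == e
            if mask.any():
                out[mask] = weights[mask, None] * expert(x[mask])
        return out

if __name__ == "__main__":
    layer = ModalityAwareMoE()
    text_tokens = torch.randn(10, 512)
    print(layer(text_tokens, "text").shape)  # torch.Size([10, 512])
```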