[Header image: a virtual llama in cyberspace, built from cascading code in the style of The Matrix, wearing VR glasses.]
“Meta is shutting down 3rd-party AR filters on Instagram and the Spark AR authoring tool behind them. Why? Especially when Snap and TikTok are crushing it on this front. Possible reasons: 1. Complete focus on VR (Quest) and lightweight AR (Ray-Bans), thus a reduced focus on mobile AR …”
“Meta’s Connect hardware plan: a cheaper Quest 3S, an Orion AR glasses prototype preview, and Ray-Ban smart glasses in new colors and styles, plus new AI features”
“As part of the release of Llama 3.1, we also released new trust & safety research, including CyberSecEval 3. We’ve published our research on this work to continue the conversation on empirically measuring LLM cybersecurity risks & capabilities. Paper ➡️”
Exclusive | Mark Zuckerberg Says White House Was ‘Wrong’ to Pressure Facebook on Covid – WSJ
“New research paper from Meta FAIR – Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model. @violet_zct, @liliyu_lili and team introduce this recipe for training a multi-modal model over discrete and continuous data. Transfusion combines next-token …”
“Lots of great insights in @cursor_ai’s latest blog on how they modified the diff format and used speculative edits with a fine-tuned Llama 70B to get a 4-5x speedup over GPT-4o! “fast-apply model surpasses GPT-4 and GPT-4o performance and pushes the Pareto frontier on the accuracy / …””
Announcing Higgs Llama V2
“💫 Check out Cerebras Inference! @cerebrassystems Inference has launched, and it is incredibly fast. How fast are we talking? 1,850 tokens/s for Llama 3.1 8B and 450 tokens/s for Llama 3.1 70B. With Cerebras you can make your RAG experiences incredibly fast! Try out this Chat with your …”
“Verified by @ArtificialAnlys, @CerebrasSystems Inference is capable of serving Llama 3.1 70B at 450 tokens/sec and Llama 3.1 8B at 1,850 tokens/sec!” / X
“Introducing Cerebras Inference
‣ Llama 3.1 70B at 450 tokens/s – 20x faster than GPUs
‣ 60¢ per 1M tokens – a fifth the price of hyperscalers
‣ Full 16-bit precision for full model accuracy
‣ Generous rate limits for devs
Try now:
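The quoted figures (450 tokens/s, 60¢ per million tokens for Llama 3.1 70B) make latency and cost easy to estimate. A minimal back-of-envelope sketch, where the response length and request volume are purely hypothetical examples, not numbers from the announcement:

```python
# Back-of-envelope latency/cost for Llama 3.1 70B on Cerebras Inference,
# using only the figures quoted above. All workload numbers are made up.

TOKENS_PER_SEC = 450          # quoted 70B throughput
PRICE_PER_M_TOKENS = 0.60     # quoted price, USD per 1M tokens

def generation_latency_s(output_tokens: int) -> float:
    """Seconds to stream a completion of the given length."""
    return output_tokens / TOKENS_PER_SEC

def token_cost_usd(requests: int, tokens_per_request: int) -> float:
    """Total token cost for a batch of requests at the quoted rate."""
    total_tokens = requests * tokens_per_request
    return total_tokens / 1_000_000 * PRICE_PER_M_TOKENS

# A hypothetical 900-token answer streams in about 2 seconds:
print(f"{generation_latency_s(900):.1f} s")            # 2.0 s
# A hypothetical 1M requests averaging 1,000 tokens each:
print(f"${token_cost_usd(1_000_000, 1_000):.2f}")      # $600.00
```

At these rates, a full chat response renders in a couple of seconds and a billion generated tokens costs on the order of a few hundred dollars, which is the basis for the "fifth the price of hyperscalers" claim in the tweet.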
Bringing Llama 3 to life – Engineering at Meta
Meta leads open-source AI boom, Llama downloads surge 10x year-over-year | VentureBeat
With 10x growth since 2023, Llama is the leading engine of AI innovation
“Open-source AI is the way forward, and today we’re sharing a snapshot of how that’s going with the adoption and use of Llama models. Read the full update here ➡️”