Image created with gemini-2.5-flash-image, using claude-sonnet-4-5. Image prompt: Photorealistic architectural photography of six Ionic limestone columns on a university quad with a classical entablature spanning the top, the frieze carved with a bas-relief depicting a miniature version of the same columned monument in recursive detail, the word META inscribed in Roman serif letters in the architrave above, golden hour lighting casting soft shadows across the carved stone texture, wide landscape composition with red brick buildings and green lawn in background, clear blue sky.
Agents Rule of Two: A Practical Approach to AI Agent Security https://ai.meta.com/blog/practical-ai-agent-security/
EdgeTAM, real-time segment tracker by Meta is now in @huggingface transformers with Apache-2.0 license 🔥 > 22x faster than SAM2, processes 16 FPS on iPhone 15 Pro Max with no quantization > supports single/multiple/refined point prompting, bounding box prompts https://x.com/mervenoyann/status/1986785795424788812
Meta, Google, Apple – they’re all building AI replicas that capture your face, expressions, movements, personality. This goes way beyond Face ID. They’re basically creating a version of you that knows you better than you know yourself. The fidelity is remarkable too. We went https://x.com/bilawalsidhu/status/1985398951407722901
Meta estimates that it earns 10% of its revenue from scams, report says | TechCrunch https://techcrunch.com/2025/11/06/meta-estimates-that-it-earns-10-of-its-revenue-from-scams-report-says/
A detailed look into the new WebUI of llama.cpp https://x.com/ggerganov/status/1985727389926555801
LlamaBarn v0.10.0 (beta) is out – feedback appreciated https://x.com/ggerganov/status/1986072781889347702
New Llama.cpp UI is a blessing for the local AI world 🌎 – Blazing fast, beautiful, and private (ofc) – Use 150,000+ GGUF models in a super slick UI – Drop in PDFs, images, or text documents – Branch and edit conversations anytime – Parallel chats and image processing – Math and https://x.com/victormustar/status/1985742628776706151
congrats to llama 3 large for winning the LLM trading contest by not participating https://x.com/yifever/status/1986064968262062088
How much RAM do you need to run tiny models? Jamba Reasoning 3B runs on just 2.25 GiB, the lightest among small models like Qwen (@Alibaba_Cloud), Llama (@Meta), Granite (@IBM), and Gemma (@GoogleDeepMind). 👉 Try Jamba Reasoning 3B yourself: https://x.com/AI21Labs/status/1986439953539076169
Leaving Meta and PyTorch https://soumith.ch/blog/2025-11-06-leaving-meta-and-pytorch.md.html
Leaving Meta and PyTorch I’m stepping down from PyTorch and leaving Meta on November 17th. tl;dr: Didn’t want to be doing PyTorch forever, seemed like the perfect time to transition right after I got back from a long leave and the project built itself around me. Eleven years https://x.com/soumithchintala/status/1986503070734557568
OpenAI Readies Itself for Its Facebook Era — The Information https://www.theinformation.com/articles/openai-readies-facebook-era
Qwen3-VL Accuracy Differences on Ollama vs MLX Video: https://x.com/andrejusb/status/1985612661447331981
I am very sympathetic to the delays in publishing papers, but I think we need to be careful with "AI can't do this" claims when our empirical evidence pre-dates even o1 class Reasoners. The strongest model here is GPT-4 (which does better) and the next best is Llama 2 70B(!!)… https://x.com/emollick/status/1985610450709434527