Image created with gemini-3.1-flash-image-preview, prompted via claude-sonnet-4-5. Image prompt: Wide-angle observational realism shot of a muted gray concrete courtyard with half-demolished industrial buildings, a large rectangular reflecting pool in foreground showing perfect mirror reflection of a chestnut fire horse standing at water’s edge, overcast flat daylight, desaturated teal and concrete gray palette, workers in background observing quietly, symmetrical composition, documentary stillness, large white Chinese cinema poster text overlay reading META in top third of frame, Jia Zhangke aesthetic with patient framing and one surreal rupture
Introducing Manus in Your Chat: Your Personal Agent, Everywhere You Are https://manus.im/blog/manus-agents-telegram
Meta Builds AI Infrastructure With NVIDIA | NVIDIA Newsroom https://nvidianews.nvidia.com/news/meta-builds-ai-infrastructure-with-nvidia
Meta expands Nvidia deal to use millions of AI data center chips https://www.cnbc.com/2026/02/17/meta-nvidia-deal-ai-data-center-chips.html
Manus AI launched 24/7 Agent via Telegram and got suspended https://www.testingcatalog.com/manus-ai-launched-24-7-agent-via-telegram-and-got-suspended/
“Taalas runs Llama 3 8B at 16k tokens per second per user. That’s almost an order of magnitude increase even compared to SRAM-based systems like Cerebras. Key idea: each chip is specialized to a given model. The chip is the model. The chat demo is pretty wild.” https://x.com/awnihannun/status/2024671348782711153
“Georgi’s llama.cpp really kicked off the whole local model movement, in my opinion. It made the original Llama usable on personal computers; I wrote about it back in March 2023.” https://x.com/simonw/status/2024855027517702345
Large language models are having their Stable Diffusion moment https://simonwillison.net/2023/Mar/11/llama/#llama-cpp
“Ollama 0.16.3 is out with Cline and Pi integrations out of the box. Try it with Cline: ollama launch cline, or Pi: ollama launch pi” https://x.com/ollama/status/2024978932127187375
“BREAKING: Llama.cpp joins Hugging Face 🤯” https://x.com/victormustar/status/2024842175532413016
GGML and llama.cpp join HF to ensure the long-term progress of Local AI https://huggingface.co/blog/ggml-joins-hf