“Open-source alternative to the Manus AI agent is blowing up on GitHub. OWL is an autonomous AI agent framework that can research, browse, and code with multi-agent collaboration. Works with Claude 3.7 Sonnet, DeepSeek, GPT-4o, and even local LLMs via Ollama. 100% open source.” https://x.com/Saboo_Shubham_/status/1899286098679132529
“BOOM! My early access to @ManusAI_HQ has blown my mind. I just made a Tesla stock dashboard and research page. I’ve never seen this kind of power in an agentic AI. Bests OpenAI’s $20,000-per-month platform for 99% less cost. And you get a Linux instance. Manus=Tesla is a BUY.” https://x.com/BrianRoemmele/status/1898562276950683866
“Got access and it’s true… Manus is the most impressive AI tool I’ve ever tried. – The agentic capabilities are mind-blowing, redefining what’s possible. – The UX is what so many others promised… but this time it just works. Prompt: ‘code a threejs game where you control a…’” https://x.com/victormustar/status/1898505307896131708
“We’re excited to announce a strategic partnership with LG CNS to co-develop secure agentic AI solutions for South Korean enterprises. This marks another step in our global expansion to unlock real value for businesses across markets!” https://x.com/cohere/status/1899083562495713516
“Manus, the new AI product that everyone’s talking about, is worth the hype. This is the AI agent we were promised. Deep Research + Operator + Computer Use + Lovable + memory. Asked it to ‘Do a professional analysis of Tesla stock’ and it did ~2 weeks of professional-level work in ~1 hour!” https://x.com/deedydas/status/1898444603071795378
“Manus is the craziest AI agent! It was asked to: • Plan a 2-month family trip • Australia → NZ → Argentina → Antarctica Watch it self-assign tasks, browse the web, research, and then create a stunning itinerary with stays, budget & a food guide! 🤯” https://x.com/LamarDealMaker/status/1898454061277458498
“Got access to Manus AI. Prompt: make a three.js endless runner game.” https://x.com/_akhaliq/status/1898862611535405242
“Tencent just announced a 30x acceleration in model generation speed across the entire Hunyuan3D 2.0 family, reducing the processing time from 30 seconds to just 1 second, available on Hugging Face.” https://x.com/_akhaliq/status/1902199977096499424
“16/ 4 weeks of content in 2 minutes with Manus!? 🤯 Creates separate docs with each 𝕏 post/thread saved as drafts + copy over to Typefully or any post scheduler and automate your social media growth.” https://x.com/AtomSilverman/status/1901708774177755619
“@MistralAI Amazing multilingual and long-context capabilities:” https://x.com/sophiamyang/status/1901676699361882439
“@MistralAI Available on @huggingface:” https://x.com/sophiamyang/status/1901677007278092508
“@MistralAI Outperforms comparable models – instruct benchmarks:” https://x.com/sophiamyang/status/1901676305025774020
“Announcing @MistralAI Small 3.1: multimodal, multilingual, Apache 2.0, the best model in its weight class. 💻 Lightweight: Runs on a single RTX 4090 or a Mac with 32GB RAM, perfect for on-device applications. 🗣️ Fast-Response Conversations: Ideal for virtual assistants and other…” https://x.com/sophiamyang/status/1901675671815901688
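The hardware claim above checks out with simple arithmetic: at 24B parameters, the weights alone need roughly 48 GB in fp16 but only about 12 GB at 4-bit quantization, which is what makes a 32GB Mac viable. A minimal sketch of the calculation (ignoring activation and KV-cache overhead, an assumption for illustration):

```python
# Rough memory estimate for running a 24B-parameter model locally.
# Only counts the weights; activations and KV cache add overhead.
PARAMS = 24e9  # parameter count of Mistral Small 3.1

def weight_memory_gb(bits_per_param: float) -> float:
    """Approximate memory needed just for the weights, in GB."""
    return PARAMS * bits_per_param / 8 / 1e9

print(f"fp16:  {weight_memory_gb(16):.0f} GB")  # fp16:  48 GB
print(f"4-bit: {weight_memory_gb(4):.0f} GB")   # 4-bit: 12 GB
```

At 4-bit the weights fit comfortably in 32 GB of unified memory, with headroom left for the 128k-token context.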
“I tested a bunch of models on instruction following on patents yesterday. Findings: – Mistral Small 3 is better than Gemini Flash 2.0 – Mistral models are pretrained on way more patents, as evidenced by their lower perplexity scores” https://x.com/casper_hansen_/status/1901540769040683214
“Softbank has signed an agreement with Perplexity to be an authorized reseller of Perplexity Enterprise Pro and deploy their 7,000-member sales team to scale Perplexity’s adoption in Japan. This comes after internally adopting Perplexity and evaluating it against other tools. 🇯🇵” https://x.com/AravSrinivas/status/1901763358019482076
“Mistral’s endpoint is priced among the cheapest models, at $0.1/$0.3 per million input/output tokens, matching Mistral Small 3’s pricing.” https://x.com/ArtificialAnlys/status/1902017029147865535
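At those rates, per-request cost is easy to estimate. A minimal sketch (the example token counts are hypothetical):

```python
# Back-of-the-envelope cost at the listed rates:
# $0.10 per million input tokens, $0.30 per million output tokens.
INPUT_RATE = 0.10 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.30 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.6f}")  # $0.000350
```

Even a full million tokens in and out comes to $0.40, which is what the “cheapest models” claim amounts to in practice.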
“@MistralAI @huggingface We also support enterprise deployments with private and optimized inference infrastructure. Check out our blog post for details:” https://x.com/sophiamyang/status/1901677325588078774
“Mistral have released Mistral Small 3.1, adding image input and a 128k token context window to Mistral Small 3. Key results and info: ➤ @MistralAI Small 3.1 scores an Artificial Analysis Intelligence Index of 35, in line with Mistral 3 and other models such as GPT-4o mini and…” https://x.com/ArtificialAnlys/status/1902017023917666351
Mistral Small 3.1 | Mistral AI https://mistral.ai/news/mistral-small-3-1
“Nice video on @MistralAI Small 3.1 from @1littlecoder 🫶” https://x.com/sophiamyang/status/1902038297620443612
“🔥 Mistral-Small-3.1 (24B) is already #3 on @huggingface after just 1 day! Brings SOTA vision + 128k context while fitting on a 32GB MacBook. Crushes benchmarks in reasoning, multilingual & visual tasks. Apache 2.0 licensed 🚀” https://x.com/fdaudens/status/1902111100503572865
“@MistralAI @huggingface Available on @MistralAI La Plateforme as `mistral-small-latest`:” https://x.com/sophiamyang/status/1901677125918134276
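For readers who want to try the `mistral-small-latest` alias, here is a minimal sketch of a chat-completions request to La Plateforme. The endpoint path and payload shape follow Mistral’s chat API; the prompt and API-key handling are illustrative assumptions:

```python
import json

# Chat-completions endpoint on Mistral's La Plateforme.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(prompt: str) -> dict:
    """Return the JSON payload for a single-turn chat completion
    against the `mistral-small-latest` alias."""
    return {
        "model": "mistral-small-latest",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize Mistral Small 3.1 in one sentence.")
print(json.dumps(payload, indent=2))
# Send with e.g.:
#   requests.post(API_URL, json=payload,
#                 headers={"Authorization": f"Bearer {api_key}"})
```

Using the `-latest` alias means requests automatically track point releases like the Small 3 → Small 3.1 upgrade, without changing client code.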
“one of the greatest Chinese advantages that I notice, and that gets little attention, is how much less afraid their boomers are of learning 2 tech. The state machine jumping on R1 adoption is illustrative. It implies they’re not so limited by population aging as often argued.” https://x.com/teortaxesTex/status/1902545539725463758
“We’ve just unveiled ERNIE 4.5 & X1! 🚀 As a deep-thinking reasoning model with multimodal capabilities, ERNIE X1 delivers performance on par with DeepSeek R1 at only half the price. Meanwhile, ERNIE 4.5 is our latest foundation model and new-generation native multimodal model.” https://x.com/Baidu_Inc/status/1901089355890036897
“@MistralAI Outperforms comparable models – multimodal instruct benchmarks:” https://x.com/sophiamyang/status/1901676575965282395
“Wow! Mistral just dropped a 24B SOTA multilingual, multimodal LLM with 128K context AND Apache 2.0 license 🔥” https://x.com/reach_vb/status/1901670885188071545
“Mistral AI released Small 3.1, a SOTA multilingual and multimodal LLM —24B (can run on a laptop) —128k token context window —Outperforms Gemma 3 and GPT-4o Mini on most benchmarks —Inference speed of 150 tokens/sec —Open-source under Apache 2.0 license” https://x.com/rowancheung/status/1901887637465809285
“Introducing Mistral Small 3.1. Multimodal, Apache 2.0, outperforms Gemma 3 and GPT-4o mini.” https://x.com/MistralAI/status/1901668499832918151




