Image created with gemini-2.5-flash-image; prompt written with claude-sonnet-4-5-20250929. Image prompt: A cinematic photograph of a modern smartphone on an antique English courtroom bench, its illuminated screen showing classical scales of justice in elegant line art, surrounded by leather-bound law books and a white barrister’s wig, lit by warm window light streaming through gothic stonework suggesting Lincoln’s Inn, composition emphasizing the bridge between mobile technology and timeless legal tradition.
Google Gemini is the top free iPhone app https://9to5google.com/2025/09/13/gemini-top-free-apple-app-store/
Made it to no. 1 in the App Store. Congrats to the @GeminiApp team for all their hard work, and this is just the start, so much more to come! https://x.com/demishassabis/status/1966931091346125026
Happy to land this data-efficient model! Our team is dedicated to building cutting-edge, efficient reasoning models. We are excited to release MobileLLM-R1, a series of sub-billion-parameter reasoning models. Collaborating with @zechunliu, Changsheng Zhao et al. https://x.com/erniecyc/status/1966511167053910509
We have released the small-scale reasoning models MobileLLM-R1 (0.14B, 0.35B, 0.95B), trained from scratch on just 4.2T pre-training tokens (10% of Qwen3), while their reasoning performance is on par with Qwen3-0.6B. Thanks to the three core contributors for their great work! https://x.com/tydsh/status/1967476530826854674
Meta MobileLLM-R1-140M can run 100% locally in your browser, no server inference required. Vibe-coded a chat app powered by transformers.js in anycoder. https://x.com/_akhaliq/status/1967460621802438731
Thanks @_akhaliq for sharing our work! MobileLLM-R1 marks a paradigm shift. Conventional wisdom suggests that reasoning only emerges after training on massive amounts of data, but we prove otherwise. With just 4.2T pre-training tokens and a small amount of post-training, … https://x.com/zechunliu/status/1966560134739751083
Meta just dropped MobileLLM-R1 on Hugging Face, an edge reasoning model with fewer than 1B parameters. 2×–5× performance boost over other fully open-source models: MobileLLM-R1 achieves ~5× higher MATH accuracy vs. Olmo-1.24B, and ~2× vs. SmolLM2-1.7B. Uses just 1/10 the … https://x.com/_akhaliq/status/1966498058822103330
So it looks like Claude got there first: an actually smart phone assistant that can take complex requests involving both common sense and complicated constraints. It still feels beta, though, and I found I needed to use the bigger Opus model, as Sonnet was not smart enough. https://x.com/emollick/status/1966170169367232556
Ever wondered how the text search on your phone's image gallery works? @AIatMeta released MetaCLIP2, and we’ve added it to @huggingface transformers 🔥 It’s a multilingual model that can understand image + text! Find the notebook for text-to-image search below ⤵️ https://x.com/mervenoyann/status/1966544046744011242
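Under the hood, CLIP-style gallery search reduces to embedding the text query and each image into a shared vector space, then ranking images by cosine similarity to the query. A minimal sketch with toy vectors standing in for MetaCLIP2 embeddings (the `rank_images` helper is illustrative, not a Meta or Hugging Face API):

```python
import numpy as np

def rank_images(text_emb, image_embs):
    """Return gallery indices sorted best-match-first by cosine similarity."""
    # Normalize so the dot product equals cosine similarity.
    text_emb = text_emb / np.linalg.norm(text_emb)
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    scores = image_embs @ text_emb
    return np.argsort(-scores), scores

# Toy 4-dim embeddings; a real pipeline would produce these with the
# MetaCLIP2 text and image encoders from transformers.
query = np.array([1.0, 0.0, 0.0, 0.0])
gallery = np.array([
    [0.9, 0.1, 0.0, 0.0],  # nearly aligned with the query
    [0.0, 1.0, 0.0, 0.0],  # orthogonal, worst match
    [0.5, 0.5, 0.0, 0.0],  # partial match
])
order, scores = rank_images(query, gallery)
print(order.tolist())  # → [0, 2, 1]
```

In a real app the gallery embeddings are computed once and cached, so each search is just one text-encoder forward pass plus this ranking step.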
The nano banana effect 🍌 https://x.com/bilawalsidhu/status/1966921687271961082