Image created with gemini-3.1-flash-image-preview and claude-sonnet-4-5. Image prompt: Animation cel style illustration of a muscular blue-skinned genie with friendly expression emerging from ornate golden oil lamp, magical teal wisps and sparkles flowing toward a large glowing Windows logo window frame, deep purple background, Disney-quality hand-drawn aesthetic with bold outlines and cinematic lighting, jewel tone color palette, clean composition with horizontal space for text overlay.
Claude in Excel | Claude https://claude.com/claude-in-excel
16M impressions in 24 hours. If you've ever tried Claude in Sheets or Claude in Excel, you will know how much more intelligent it is compared to Gemini in Sheets. I have two current measures of Google-GDM product integration right now: – how long does it take Google to put a non… https://x.com/swyx/status/2015207720237089146
Nvidia, Microsoft, Amazon in Talks to Invest Up to $60 Billion in OpenAI — The Information https://www.theinformation.com/articles/nvidia-microsoft-amazon-talks-invest-60-billion-openai
Source: Amazon could invest up to $50B in OpenAI in coming weeks https://www.cnbc.com/2026/01/29/amazon-openai-investment-jassy-altman.html
Microsoft Foundry https://ai.azure.com/?ocid=cmmptzv9sfq
Maia 200: The AI accelerator built for inference – The Official Microsoft Blog https://blogs.microsoft.com/blog/2026/01/26/maia-200-the-ai-accelerator-built-for-inference/
It's a big day. Our Superintelligence team will be the first to use Maia 200 as we develop our frontier AI models. https://x.com/mustafasuleyman/status/2015825111769841744
Our newest AI accelerator Maia 200 is now online in Azure. Designed for industry-leading inference efficiency, it delivers 30% better performance per dollar than current systems. And with 10+ PFLOPS FP4 throughput, ~5 PFLOPS FP8, and 216GB HBM3e with 7TB/s of memory bandwidth… https://x.com/satyanadella/status/2015817413200408959
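The quoted Maia 200 specs imply a very high roofline crossover point. A quick sketch of the arithmetic, using only the figures in the tweet (the "critical arithmetic intensity" calculation is standard roofline-model reasoning, not an official Microsoft number):

```python
# Back-of-envelope roofline numbers for the quoted Maia 200 specs.
# Inputs are taken from the tweet; the derived intensities are my arithmetic.

fp4_flops = 10e15      # ~10 PFLOPS at FP4 (quoted)
fp8_flops = 5e15       # ~5 PFLOPS at FP8 (quoted)
hbm_bw    = 7e12       # 7 TB/s HBM3e bandwidth (quoted)

# FLOPs per byte of HBM traffic needed to keep the compute units saturated;
# below this intensity a kernel is memory-bandwidth-bound.
ai_fp4 = fp4_flops / hbm_bw
ai_fp8 = fp8_flops / hbm_bw

print(f"FP4 critical intensity: {ai_fp4:.0f} FLOPs/byte")   # ~1429
print(f"FP8 critical intensity: {ai_fp8:.0f} FLOPs/byte")   # ~714
```

Numbers that high suggest the design leans on batching and KV-cache reuse to stay compute-bound, consistent with the "built for inference" framing.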
$281b From One Customer | Tomasz Tunguz https://tomtunguz.com/281b-from-one-customer/
The next evolution: VLA+ models. Just yesterday @MSFTResearch released Rho-alpha (ρα) – their first robotics model, built on the Phi family. While most Vision-Language-Action (VLA) models stop at vision and language, Rho-alpha adds: ▪️ Tactile sensing to feel objects during… https://x.com/TheTuringPost/status/2014284149872644351