Image created with gemini-3.1-flash-image-preview and claude-sonnet-4-5. Image prompt: Using the provided reference image hoodornament.jpg, preserve the deep midnight navy car hood, shallow depth-of-field sky background, chrome pedestal base, dramatic upward angle, and automotive ad lighting exactly as shown. Replace only the Mercedes star with a chrome triangular prism hood ornament of similar scale, mounted on the same pedestal, with rainbow light refracting through its facets. Add bold white sans-serif text reading MULTIMODALITY across the upper portion as a clean headline.

BREAKING 🚨: MiniMax released MiniMax M2.7, a new self-evolving model, achieving a score of 56.22% on SWE-Bench Pro. M2.7 was used for building complex agent harnesses during its own development. Users can now access MiniMax M2.7 via APIs and MiniMax Agent.
https://x.com/testingcatalog/status/2034250919345377604#m

During the iteration process, we also realized that the model’s ability to recursively evolve its harness is equally critical. Our internal harness autonomously collects feedback, builds evaluation sets for internal tasks, and based on this continuously iterates on its own
https://x.com/MiniMax_AI/status/2034315323109953605#m

Introducing MiniMax-M2.7, our first model which deeply participated in its own evolution, with an 88% win-rate vs M2.5 – Production-Ready SWE: With SOTA performance in SWE-Pro (56.22%) and Terminal Bench 2 (57.0%), M2.7 reduced intervention-to-recovery time for online incidents
https://x.com/MiniMax_AI/status/2034315320337522881#m

MiniMax Global Announces Full Year 2025 Financial Results – MiniMax News | MiniMax https://www.minimax.io/news/minimax-global-announces-full-year-2025-financial-results

Minimax M2.7 released! And it’s a big one. Highlights: Self-evolving – first model that helped build itself, running 100+ autonomous optimization loops during its own RL training (30% internal improvement). Strong coder – 56.2% on SWE-Pro (near Opus 4.6), 55.6% on VIBE-Pro, …
https://x.com/kimmonismus/status/2034269026353082422#m

MiniMax M2.7: Early Echoes of Self-Evolution – MiniMax News | MiniMax https://www.minimax.io/news/minimax-m27-en

MolmoPoint: Better pointing architecture for vision-language models | Ai2 https://allenai.org/blog/molmopoint

spent some time today playing with MolmoPoint. It’s pretty crazy that we can use VLMs for multi-object tracking now. Instead of spelling out coordinates as text, it points by directly selecting parts of its own visual features. Prompt: “Track blue players.”
https://x.com/skalskip92/status/2034606226902827228

NEW SOTA OCR MODEL DROPPED Congrats to @VikParuchuri and team for releasing Chandra OCR 2! – 85.9% on olmocr bench, making it first place 🏆 – 90+ language support – 4B model – Full layout information – Extracts + captions images and diagrams – Strong handwriting, math, form, table support
https://x.com/nathanhabib1011/status/2034565076963991910

Breaking: $1 trillion in revenue for NVIDIA through 2027. Jensen Huang: “One year after last GTC, right here where I stand… I see, going down so much, through 2027. At least… one trillion dollars, you know? Now, does it make any sense? I’m certain computer demand will be much …”
https://x.com/TheTuringPost/status/2033622628385362068

Jensen just said NVIDIA’s $1T projection for 2025-27 covers only Blackwell and Rubin to keep it consistent with the previous projection. He mentioned he could have included Groq in that number: “so if I would’ve included that, theoretically, not actually, but theoretically, …”
https://x.com/TheHumanoidHub/status/2033990614824665421

Nvidia targets data center revenue of $1+ trillion for 2025-2027. That’s already quite ridiculous, with the AI physical world only in its zeroth innings. $NVDA
https://x.com/TheHumanoidHub/status/2033627322331660784

A breakthrough in real-time video generation. As a research preview developed with @NVIDIA and shared at @NVIDIAGTC this week, we trained a new real-time video model running on Vera Rubin. HD videos generate instantly, with time-to-first-frame under 100ms. Unlocking an entirely …
https://x.com/runwayml/status/2034284298769985914#m

NVIDIA GTC 2026 Keynote: Everything That Happened in 12 Minutes – YouTube https://www.youtube.com/watch?v=X2i_8O75_Os

Every time you get a cancer biopsy, the lab makes a tissue slide that costs about $5. It shows the shape of your cells under a microscope, and every cancer patient already has one on file. There’s a much fancier version of that test called multiplex immunofluorescence (basically…
https://x.com/anishmoonka/status/2033344818475360562

We’re approaching the dawn of medical superintelligence – the moment when affordable, world-class medical knowledge and support is at your fingertips whenever you need it. I think people are still underestimating how profound this transformation is going to be. Today we’re…
https://x.com/mustafasuleyman/status/2032092644483141928

LlamaParse Agentic Plus mode now delivers precise visual grounding with bounding boxes for the most challenging document elements. Our latest update brings major improvements to how we handle complex visual content: 📐 Complex LaTeX formulas – accurately parse mathematical …
https://x.com/llama_index/status/2034300076441633276#m
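Visual grounding generally means each parsed element comes back with page coordinates attached. As a rough illustration of how downstream code can consume that kind of output (the record and field names here are invented for the sketch, not the actual LlamaParse response schema):

```python
# Hypothetical parsed-element records with bounding boxes. The "bbox"
# convention assumed here is [x0, y0, x1, y1] in page units.
elements = [
    {"type": "formula", "text": "E = mc^2", "bbox": [72, 140, 300, 170]},
    {"type": "table",   "text": "...",      "bbox": [60, 200, 540, 420]},
]

def bbox_area(b):
    """Area of an [x0, y0, x1, y1] box; clamps inverted boxes to zero."""
    x0, y0, x1, y1 = b
    return max(0, x1 - x0) * max(0, y1 - y0)

# Grounded output lets downstream code crop or highlight exactly the
# region a formula or table came from, rather than guessing from text.
formulas = [e for e in elements if e["type"] == "formula"]
print(bbox_area(formulas[0]["bbox"]))  # (300-72) * (170-140) = 6840
```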

“a large jump in agentic” – we agree 🙌 M2.7 is a big step forward in agentic workflows, from tool use to real-world, multi-step execution. Now live on @OpenRouter 🚀
https://x.com/MiniMax_AI/status/2034356786413867182#m

🔍Follow Zhihu contributor toyama nao, a top large model reviewer, to evaluate @MiniMax_AI MiniMax-M2.7’s capabilities in detail!✨ 📌 Basic Info: MiniMax iterates monthly in the Agent-driven model track. As a minor version upgrade, M2.7 carries its new understanding of the …
https://x.com/ZhihuFrontier/status/2034543142234628318

DEFAULT and FREE M2.7 on @zocomputer
https://x.com/MiniMax_AI/status/2034348503347171625#m

Early testers are saying that M2.7 has big improvements in emotional intelligence and character consistency 👀
https://x.com/MiniMax_AI/status/2034528945962696948

Great to see M2.7 live on @vercel_dev 🙌 We’re seeing a real shift from simple tool use → multi-step agentic workflows running in production. M2.7 is built for exactly that.
https://x.com/MiniMax_AI/status/2034357583797178841#m

Live Stream Alert with @OpenClaw Thursday 9PM ET We will share an in-depth look at MiniMax M2.7, including early developments in self-evolution and efficient solutions designed to support 100,000 OpenClaw running clusters. 🎁 MiniMax vouchers will also be distributed during the live stream.
https://x.com/MiniMax_AI/status/2034520321466978488

M2.7 is already up😎 Try it on @kilocode.
https://x.com/MiniMax_AI/status/2034339731660759097#m

M2.7 now live on @yupp_ai 🌸 Feels like a good time to build something new.
https://x.com/MiniMax_AI/status/2034328337527783857#m

M2.7 now on @opencode ⚙️ Give it a plan → it runs with it. Add the loop (check → fix → retry) and things start to feel very agentic.
https://x.com/MiniMax_AI/status/2034361282527461473#m
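The check → fix → retry loop the tweet describes is a generic agentic pattern rather than anything OpenCode-specific. A minimal sketch, assuming hypothetical `run_step`, `check`, and `fix` callables standing in for real model or tool calls:

```python
# Sketch of a check -> fix -> retry loop. The three callables are
# placeholders for agent calls: run_step produces a first attempt,
# check verifies it, fix revises it using the checker's feedback.

def run_with_retries(task, run_step, check, fix, max_attempts=3):
    """Run a task, verify the result, and feed failures back until it passes."""
    result = run_step(task)
    for _ in range(max_attempts):
        ok, feedback = check(result)
        if ok:
            return result
        # Feed the checker's feedback back into the agent and try again.
        result = fix(task, result, feedback)
    raise RuntimeError("task did not converge within the retry budget")

# Toy demo: "fixing" doubles a number until it clears a threshold.
if __name__ == "__main__":
    out = run_with_retries(
        task=5,
        run_step=lambda t: t,
        check=lambda r: (r >= 20, "too small"),
        fix=lambda t, r, fb: r * 2,
    )
    print(out)  # 20
```

The useful property is that the model never has to be right on the first pass; the loop converts a verifier signal into iteration, which is what makes multi-step workflows feel "agentic."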

Minimax 2.7 incoming!
https://x.com/kimmonismus/status/2033531736647463151

Minimax 2.7 is available in Hermes Agent through the Minimax Provider, try it today!
https://x.com/Teknium/status/2034658808870621274

MiniMax doubles in Hong Kong debut, marking yet another Chinese AI listing https://www.cnbc.com/2026/01/09/minimax-hong-kong-ipo-ai-tigers-zhipu.html

MiniMax has released MiniMax-M2.7, delivering GLM-5-level intelligence for less than one third of the cost MiniMax-M2.7 from @MiniMax_AI scores 50 on the Artificial Analysis Intelligence Index, an 8-point improvement over MiniMax-M2.5, which was released one month ago. This is…
https://x.com/ArtificialAnlys/status/2034313314420019462#m

MiniMax launches M2.7 model on MiniMax Agent and APIs https://www.testingcatalog.com/minimax-launches-m2-7-model-on-minimax-agent-and-apis/

MiniMax M2.7 now live on @Trae_ai Excited to see what you ship. 🙌
https://x.com/MiniMax_AI/status/2034327432124350924#m

MiniMax M2.7: Early Echoes of Self-Evolution
https://x.com/MiniMax_AI/status/2034335605145182659

MiniMax M2.7 🆚 MiniMax M2.5 – prompt: a website about recently released video games. The release of M2.7 should be close; MiniMax M2.5 was released two days after it appeared on the Arena.
https://x.com/AiBattle_/status/2033503838284447758

MiniMax-M2.7 is now available on Ollama’s cloud, made for coding and agentic tasks 🖥️ Try it inside Claude Code: ollama launch claude --model minimax-m2.7:cloud 🦞 Use it with OpenClaw: ollama launch openclaw --model minimax-m2.7:cloud If you already have OpenClaw…
https://x.com/ollama/status/2034351916097106424#m

Tracking unregistered dark ships is notoriously difficult and expensive. But a new automated system uses existing underwater internet cables to passively detect them. Here’s the breakdown:
https://x.com/yohaniddawela/status/2031705951552647195

You shouldn’t have to have a “meeting notes app.” You should have an “AI context & data app” that happens to have great meeting notes. Don’t overpay for things.
https://x.com/zachtratar/status/2034079952757547042#m

ByteDance also implemented attention over depth. They literally combined it with sequence attention.
https://x.com/rosinality/status/2033810580604158323

Gemini Embedding 2, our first fully multimodal embedding model, is now available in Public Preview via the Gemini API and Vertex AI. Developers can now map text, images, video, and audio in one centralized space, with one model, which simplifies complex tasks like semantic
https://x.com/Google/status/2033631279925891078
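The practical payoff of one shared embedding space is that vectors from different modalities become directly comparable, so cross-modal retrieval reduces to a similarity lookup. A minimal sketch of that idea with hand-made toy vectors, not actual Gemini API output:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy 4-d vectors standing in for model output. A real multimodal
# embedding model returns same-dimension vectors for text, images,
# and audio, which is what makes this one-call comparison possible.
text_vec  = [0.9, 0.1, 0.0, 0.1]   # caption: "a photo of a dog"
image_vec = [0.8, 0.2, 0.1, 0.1]   # dog photo
audio_vec = [0.1, 0.1, 0.9, 0.2]   # traffic-noise clip

# The caption should sit closer to the matching image than to
# an unrelated audio clip.
print(cosine(text_vec, image_vec) > cosine(text_vec, audio_vec))  # True
```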

I’m excited to open source Chandra OCR 2! – 85.9% (sota) on olmocr bench – 90+ language support w/benchmarks – 4B model (down from 9B) – Full layout information – Extracts + captions images and diagrams – Strong handwriting, math, form, table support
https://x.com/VikParuchuri/status/2034317066048512392#m

“DVD: Dynamic Video Depth” – TL;DR: Recovers temporally consistent depth from monocular videos using diffusion priors + geometric constraints, handling dynamic scenes and motion robustly.
https://x.com/Almorgand/status/2034349445601538057

🚀 Live from @NVIDIAGTC, we’re releasing Holotron-12B! Developed with @nvidia, it’s a high-throughput, open-source, multimodal model engineered specifically for the age of computer-use agents. Get started today! 🤗Hugging Face: https://t.co/SyAuqLIacS 📖Technical Deep Dive:
https://x.com/hcompany_ai/status/2033851052714320083

AI is already redesigning chip design itself! And the biggest bottleneck left is validation. Here is Bill Dally describing to @JeffDean how @nvidia uses AI to design chips: “We’re already using AI across multiple parts of the chip design process, and it’s delivering real …”
https://x.com/TheTuringPost/status/2034413469542588613

How NVIDIA Dynamo 1.0 Powers Multi-Node Inference at Production Scale | NVIDIA Technical Blog https://developer.nvidia.com/blog/nvidia-dynamo-1-production-ready/

With Nemotron 3 Nano 4B in the NVIDIA Nemotron 3 family, llama.cpp users get a compact model for action-taking conversational personas, available across NVIDIA GPU-enabled systems and @NVIDIA_AI_PC
https://x.com/ggerganov/status/2033947673825337477

The frontier has increasingly shifted to hybrid models – from Qwen to Kimi-Linear and now with NVIDIA’s Nemotron-3 Super – that rely on a strong linear sequence model. Today we release Mamba-3, the most powerful linear model to date.
https://x.com/tri_dao/status/2033948569502413245

NVIDIA thanks all its partners: the message? There is no way around NVIDIA. NVIDIA is the center of the revolution.
https://x.com/kimmonismus/status/2033615181415387610

Straight from NVIDIA GTC: Jensen Huang just unveiled a new vision for AI infrastructure For the first time, Rubin GPUs+Groq LPUs are paired: > 35× higher inference throughput > 10× more revenue from trillion-parameter models Architecture & why it’s needed
https://x.com/TheTuringPost/status/2033700480975520097

Thank you Jensen and NVIDIA! She’s a real beauty! I was told I’d be getting a secret gift, with a hint that it requires 20 amps. (So I knew it had to be good). She’ll make for a beautiful, spacious home for my Dobby the House Elf claw, among lots of other tinkering, thank you!!
https://x.com/karpathy/status/2034321875506196585

Discover more from Ethan B. Holland
