Image created with gemini-2.5-flash-image and claude-sonnet-4-5. Image prompt: Photorealistic 35mm cinema shot of a child aged 6-8 sitting on plush rug in warm-lit bedroom, side angle view, surrounded by panoramic arc of TV screens displaying international news feeds and world maps with different languages, vintage globe with handwritten notes on floor catching screen glow, scattered international newspapers with foreign scripts, warm peach and cream tones contrasted with cool blue-white LED light, shallow depth of field, soft focus, cozy yet subtly disquieting atmosphere, bold text ‘INTERNATIONAL’ at top of frame.
BBVA and OpenAI collaborate to transform global banking | OpenAI https://openai.com/index/bbva-collaboration-expansion/
Tinker is now generally available. We also added support for advanced vision input models, Kimi K2 Thinking, and a simpler way to sample from models. https://x.com/thinkymachines/status/1999543421631946888
Tinker: General Availability and Vision Input – Thinking Machines Lab https://thinkingmachines.ai/blog/tinker-general-availability/
Today we are releasing Tinker to everyone, and now with vision input! You can now finetune a frontier Qwen3-VL-235B on your own image+text data, bringing your own algorithm (SFT, RL, something else?). We’ll take care of the GPU infra. Full update: https://x.com/rown/status/1999544121984245872
🎨 Qwen-Image-Layered is LIVE — native image decomposition, fully open-sourced! ✨ Why it stands out ✅ Photoshop-grade layering Physically isolated RGBA layers with true native editability ✅ Prompt-controlled structure Explicitly specify 3-10 layers — from coarse layouts to … https://x.com/Alibaba_Qwen/status/2002034611229229388
🚨 Qwen Image Layered is live on fal! ✨ Photoshop-grade layering – Native Decomposition 👑 Physically isolated RGBA layers with true native editability 🎨 Explicitly specify layers, from coarse layouts to fine-grained details https://x.com/fal/status/2002055913390195137
Devstral 2 is now available in MLX on Apple Silicon. Run a local SOTA coding model on your MacBook. Please update to LM Studio 0.3.35 first! Have a nice weekend! 👾 https://x.com/lmstudio/status/1999648656958296119
Dolphin-v2 🐬 new document parsing model released by @ByteDanceOSS ✨ 3B – MIT license ✨ Works on any document: PDFs, scans, photos ✨ Understands 21 types of content: text, tables, code, formulas, figures & more ✨ Pixel-level precision via absolute coordinate prediction https://x.com/AdinaYakup/status/1999462500551786692
TIL the Xiaomi MiMo-V2-Flash lead was previously one of the key researchers on DeepSeek V2! https://x.com/eliebakouch/status/2001006476245262395
💡 LMArena Deep Dive: DeepSeek v3.2 (Text Arena) Leaderboard rank doesn’t always tell the full story. As previously reported, DeepSeek released v3.2 two weeks ago. Its results varied across categories, and overall it ranked lower than the earlier v3.1 and v3.2 Experimental versions. https://x.com/arena/status/2000637978662821942
Google Translate gets new Gemini AI translation models https://blog.google/products-and-platforms/products/search/gemini-capabilities-translation-upgrades/
Realtime speech to speech translation powered by Gemini, available in Google Translate now, coming to developers early next year : ) https://x.com/OfficialLoganK/status/1999994009452962073
This strange square 👇 is undoubtedly the most extraordinary work of literature in human history. Yet, unfortunately, barely anyone in the West has ever heard of it. There was this woman poet in 4th century China called Su Hui (蘇蕙), a child genius who had reportedly mastered … https://x.com/RnaudBertrand/status/1999315488598622360
This is largely being ignored, but it’s easily one of the biggest China stories of the year. What China is doing with Hainan – a huge island (50 times the size of Singapore!) – is pretty extraordinary: they’re basically making it into a completely different jurisdiction from the … https://x.com/RnaudBertrand/status/2002054459644674550
.@MistralAI’s Devstral 2 family of models are now available in Ollama. 24B: ollama run devstral-small-2 123B: ollama run devstral-2 Ollama’s cloud: ollama run devstral-2:123b-cloud https://x.com/ollama/status/1999590723373662612
Mistral AI Podcast: Arthur Mensch, Co-founder and CEO (Episode 1) – YouTube https://www.youtube.com/watch?v=xgaLsQTFUEw
Mistral OCR 3 sets new benchmarks in both accuracy and efficiency, outperforming enterprise document processing solutions as well as AI-native OCR. https://x.com/MistralAI/status/2001669583296712970
Very happy to announce the release of our latest Mistral OCR, which significantly outperforms existing solutions! A lot of effort was done to improve handwritten content, low quality scans, and complex tables & forms commonly found in enterprise documents. https://x.com/GuillaumeLample/status/2001719413649617404
Introducing Mistral OCR 3 | Mistral AI https://mistral.ai/news/mistral-ocr-3
Introducing Mistral OCR 3, a new frontier in document intelligence! 🧵👇 https://x.com/MistralAI/status/2001669581275033741
🚀 Qwen Code v0.5.0 is here! ✨ What’s new: • VSCode Integration: Bundled CLI into VSCode release package with improved cross-platform compatibility • Native TypeScript SDK: Seamlessly integrate with Node/TS • Smart Session Management: Auto-saves and continues conversations • … https://x.com/Alibaba_Qwen/status/2000556828690624685
Open models year in review What a year! We’re back with an updated open model builder tier list, our top models of the year, and our predictions for 2026. First, the winning models: 1. DeepSeek R1 (@deepseek_ai): Transformed the AI world 2. Qwen 3 Family (@AlibabaGroup): The new … https://x.com/natolambert/status/2000299636863734026
IT’S LIVE: photoshop-grade layering, physically isolated RGBA layers with native editability 🤯 https://x.com/linoy_tsaban/status/2002038877511377393
You can now fine-tune LLMs and deploy them directly on your phone! 🚀 We collabed with PyTorch so you can export and run your trained model 100% locally on your iOS or Android device. Deploy Qwen3 on Pixel 8 and iPhone 15 Pro at ~40 tokens/sec. Guide: https://x.com/UnslothAI/status/2001305185206091917
Low latency communication is crucial for tensor parallel inference which is now available on the latest mlx-lm (not on pypi yet). In the following video Devstral is generating a quicksort in C++ 1.7x faster on 2 M3 Ultras (right) vs on 1 (left). https://x.com/angeloskath/status/2001739468425040002