Image created with Flux Pro v1.1 Ultra. Image prompt: CU Boulder brand style — CU Gold & Black, Helvetica Neue, Flatirons, Tuscan-vernacular sandstone + red-tile roofs; CASE bulletin board, high-noon ambient light, straight-on view, buffalo silhouette watermark; integrate the category “OpenSource” via Flyer: tear-off sheet titled “OPEN SOURCE” with QR tabs; natural light, clean professional inspiring tone, crisp focus, subtle grain, editorial composition
Matrix-Game 2.0 — The FIRST open-source, real-time, long-sequence interactive world model Last week, DeepMind’s Genie 3 shook the AI world with real-time interactive world models. But… it wasn’t open-sourced. Today, Matrix-Game 2.0 changed the game. 🚀 25FPS. Minutes-long https://x.com/Skywork_ai/status/1955237399912648842
🏆NVIDIA AI-Q, an NVIDIA Blueprint for building AI agents with advanced reasoning skills, is now the leading open and portable #AIagent for high-fidelity research on the Deep Research Bench leaderboard. ➡️ https://x.com/NVIDIAAIDev/status/1952429440551547332
OpenAI gpt-oss has over 5M downloads, 400+ fine-tunes and *the* most liked release this year so far! 🔥 Great job @OpenAI 🤗 https://x.com/reach_vb/status/1954909541805801799
OpenAI hasn’t open-sourced a base model since GPT-2 in 2019. they recently released GPT-OSS, which is reasoning-only… or is it? turns out that underneath the surface, there is still a strong base model. so we extracted it. introducing gpt-oss-20b-base 🧵 https://x.com/jxmnop/status/1955436067353502083
GPT-OSS: – 5M downloads in <1 week on @huggingface 🚀 – 400 new models – already outpacing DeepSeek R1’s launch numbers, and that’s without counting inference calls – also the most-liked release of any major LLM this summer https://x.com/fdaudens/status/1954904546385273029
America needs to take open models more seriously. This summer, the US’s early lead in open-model adoption via Llama was overtaken by Chinese models. With The American Truly Open Models (ATOM) Project we’re looking to build support and express the urgency of this issue. https://x.com/natolambert/status/1952370970762871102
Now that the era of the scaling “law” is coming to a close, I guess every lab will have their Llama 4 moment. Grok had theirs. OpenAI just had theirs too. https://x.com/jeremyphoward/status/1954346846845129158
RT @jandotai: Introducing Jan-v1: 4B model for web search, an open-source alternative to Perplexity Pro. In our evals, Jan v1 delivers 91%… https://x.com/ggerganov/status/1955191376217297057
What can OpenAI’s new open models do with the news? I built a News Agent to find out. It can answer questions about the news in real time, and every answer comes with original source links so you can dive deeper. Runs with Hugging Face inference providers, letting you compare https://x.com/fdaudens/status/1955296761582358828
figured out how to “undo” the RL and turn gpt-oss back into a base model. will drop the weights tomorrow, gn https://x.com/jxmnop/status/1955099965828526160
@jxmnop @johnschulman2 @srush_nlp Super cool stuff!! How can we empirically check how far away this model is from the real base model? What benchmarks do we expect this base model to do better on, and which do we expect it to do worse on [when compared to the unmodified gpt-oss model]? https://x.com/OfirPress/status/1955463664556769426
I’m thrilled to be joining @cohere in the role of Chief AI Officer, helping advance cutting-edge research and product development. Cohere has an incredible team and mission. Exciting new chapter for me! https://x.com/jpineau1/status/1955995736895594838
Big news today: we’ve raised $500M to grow @cohere, and have added some incredible new leaders to our team! We’re fortunate to build Cohere to be the world’s best choice for enterprises alongside some of the best investors, partners, and customers in the business. https://x.com/aidangomez/status/1955993896590152114
Today is a big day for @cohere. – We raised $500M to keep building frontier AI for the enterprise – @jpineau1 is joining us as Chief AI Officer – Francois Chadwick is joining us as Chief Financial Officer. I am so proud of the team and so excited to keep making AI useful. https://x.com/nickfrosst/status/1956005330069983332
We’re excited to announce $500M in new funding to accelerate our global expansion and build the next generation of enterprise AI technology! We are also welcoming two additions to our leadership team: Joelle Pineau as Chief AI Officer and Francois Chadwick as Chief Financial Officer. https://x.com/cohere/status/1955993354745082336
Our secure agentic AI platform, North, is now widely available. https://x.com/cohere/status/1953078403860709547
Open-Sourcing Roblox Sentinel: Our Approach to Preemptive Risk Detection https://corp.roblox.com/newsroom/2025/08/open-sourcing-roblox-sentinel-preemptive-risk-detection
We’re excited to share that we have Day-0 support in Hugging Face Transformers for DINOv3 so people can easily leverage the full family of models. Find out more on @huggingface here: https://x.com/AIatMeta/status/1956027800500232525
SLAM just got a serious speed boost. Efficient LoFTR is now integrated into the @huggingface Transformers library. It’s 2.5× faster than the original LoFTR and can even outperform the SuperPoint + LightGlue pipeline. Image matching finds correspondences between two images https://x.com/IlirAliu_/status/1953874253062787073
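Image matching, as the tweet describes, means finding pairs of points whose feature descriptors agree in both directions. A minimal toy sketch of that idea using plain mutual-nearest-neighbor matching on made-up descriptors (illustrative only; this is not the Efficient LoFTR or Transformers API):

```python
import numpy as np

def mutual_nearest_matches(desc_a, desc_b):
    """Toy correspondence search: match descriptors from image A to image B
    by cosine similarity, keeping only mutual nearest neighbors."""
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T            # (Na, Nb) similarity matrix
    ab = sim.argmax(axis=1)  # best B index for each A descriptor
    ba = sim.argmax(axis=0)  # best A index for each B descriptor
    # keep pairs (i, j) only if i -> j and j -> i agree
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]

rng = np.random.default_rng(0)
desc_a = rng.normal(size=(5, 8))
desc_b = desc_a[[2, 0, 1]] + 0.01 * rng.normal(size=(3, 8))  # permuted, noisy copies
print(mutual_nearest_matches(desc_a, desc_b))  # -> [(0, 1), (1, 2), (2, 0)]
```

Real pipelines like LoFTR learn the descriptors and match densely, but the mutual check above is the same filtering idea SuperPoint + LightGlue style matchers build on.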
Generate an SVG of a pelican riding a bicycle with Qwen3-Coder and Qwen-Image. Which one do you prefer? https://x.com/Alibaba_Qwen/status/1954879387465294304
Two weeks ago, we released jina-embeddings-v4-GGUF with dynamic quantizations. During our experiments, we found interesting things while converting and running GGUF embeddings. Since most of the llama.cpp community focuses on LLMs, we thought it’d be valuable to share this from https://x.com/JinaAI_/status/1955647947359867068
Character AI pivots from proprietary to open-source models after realizing they couldn’t compete with Big Tech’s billions. Now using Llama, Qwen & DeepSeek instead of building their own. https://x.com/fdaudens/status/1955629648920088754
Introducing Mistral Medium 3.1. Overall performance boost, tone improvement, smarter web searches. Try it now in Le Chat (default model) or via our API (`mistral-medium-2508`). https://x.com/MistralAI/status/1955316715417382979
Mistral Medium 3.1 is now available in anycoder as mistral-medium-2508 https://x.com/_akhaliq/status/1955621767302808012
🚨 Big news! We decided that @huggingface’s post-training library, TRL, will natively support training Vision Language Models 🖼️ This builds on our recent VLM support in SFTTrainer — and we’re not stopping until TRL is the #1 VLM training library 🥇 More here 👉 https://x.com/QGallouedec/status/1956066332488950020
NSF and NVIDIA award Ai2 a combined $152M to support building a national-level, fully open AI ecosystem | Ai2 https://allenai.org/blog/nsf-nvidia
With fresh support of $75M from @NSF and $77M from @NVIDIA, we’re set to scale our open model ecosystem, bolster the infrastructure behind it, and fast‑track reproducible AI research to unlock the next wave of scientific discovery. 💡 https://x.com/allen_ai/status/1955966785175388288
initial gpt-oss download stats looking exciting! https://x.com/gdb/status/1954992508964155587
i thought the transformers gpt-oss MoE finetuning was broken, how did you get it working? https://x.com/jxmnop/status/1955347764130254863
my gpt-oss MFUmaxxer PR is here! ✅ cat/splice sink -> flexattn ✅ sin/cos pos embs -> complex freqs_cis ✅ moe for-loop -> grouped gemm ✅ checkpoint conversion ✅ matches huggingface fwd pass currently adding parallelism and ensuring training steps healthily ⬇️ https://x.com/khoomeik/status/1955433361402724679
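The “moe for-loop -> grouped gemm” item above is about replacing a Python loop over experts with one batched matrix multiply. A toy NumPy sketch of the equivalence being exploited (illustrative only, not the PR’s code; shapes and routing here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d_model, n_experts = 12, 4, 3
x = rng.normal(size=(n_tokens, d_model))            # token activations
W = rng.normal(size=(n_experts, d_model, d_model))  # one weight matrix per expert
assign = rng.integers(0, n_experts, size=n_tokens)  # top-1 routing decisions

# Naive MoE dispatch: one small matmul per expert inside a Python loop
out_loop = np.empty_like(x)
for e in range(n_experts):
    idx = np.where(assign == e)[0]
    out_loop[idx] = x[idx] @ W[e]

# Fused stand-in for a grouped GEMM: gather each token's expert weights
# and run everything as a single batched contraction
out_vec = np.einsum("td,tdo->to", x, W[assign])

print(np.allclose(out_loop, out_vec))  # -> True
```

A real grouped GEMM sorts tokens by expert and launches one kernel over the contiguous groups; the point of the toy is just that the loop and the fused form compute identical outputs.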
tldr: Fireworks, Deepinfra, and TogetherAI are the accurate inference providers for hosting gpt-oss-120b. https://x.com/jeremyphoward/status/1955438370274087369
You can run gpt-oss-20B on Google Colab thanks to @pcuenq @reach_vb 🤯 https://x.com/fdaudens/status/1953420511137931342
RT @ggerganov: whisper.cpp is coming to ffmpeg https://x.com/ggerganov/status/1955161982023131645
Pretty cool. I think 2025-2026 will see a stronger focus on these in open-source tooling, i.e. having LLMs delegate knowledge-based queries to search, which in turn frees up model capacity to improve reasoning capabilities and tool use. https://x.com/rasbt/status/1955271338970546682
Proprietary models have much more token-efficient reasoning than open-source models https://x.com/scaling01/status/1956098555090714668
Jan: Open source ChatGPT-alternative that runs 100% offline – Jan https://jan.ai/
🚀 Qwen3-30B-A3B-2507 and Qwen3-235B-A22B-2507 now support ultra-long context—up to 1 million tokens! 🔧 Powered by: • Dual Chunk Attention (DCA) – A length extrapolation method that splits long sequences into manageable chunks while preserving global coherence. • https://x.com/Alibaba_Qwen/status/1953760230141309354
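The chunking idea behind Dual Chunk Attention can be caricatured as clipping relative positions so the model never sees a distance longer than its training window. A loose toy sketch of that intuition (not Qwen’s actual implementation; the real method distinguishes intra-chunk, inter-chunk, and successive-chunk attention):

```python
CHUNK = 8  # pretend the model was trained on sequences of at most 8 tokens

def dca_rel_pos(q, k, chunk_size=CHUNK):
    """Toy sketch of the chunked-position intuition (causal: q >= k).
    Same-chunk pairs keep their true relative distance; cross-chunk
    pairs are clipped so no distance exceeds the trained window."""
    if q // chunk_size == k // chunk_size:
        return q - k               # intra-chunk: exact distance
    return min(q - k, chunk_size)  # inter-chunk: clipped distance

print(dca_rel_pos(5, 3))    # -> 2 (same chunk, true distance)
print(dca_rel_pos(100, 3))  # -> 8 (far apart, clipped to the window)
```

The payoff is length extrapolation without retraining: every relative position the attention layers see at 1M tokens is one they already saw during training.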
introducing qqWen: our fully open-sourced project (code+weights+data+detailed technical report) for full-stack finetuning (pretrain+SFT+RL) of a series of models (1.5b, 3b, 7b, 14b & 32b) for Q, a niche financial programming language. All details below! https://x.com/brendanh0gan/status/1955641113693561071
Amazing! Jan-v1 — a powerful, fully local 4B model achieving 91% SimpleQA accuracy. Huge thanks to my friends for building it on Qwen3-4B-Thinking-2507. https://x.com/Alibaba_Qwen/status/1955263159280738738
🥇 Qwen3-Coder, try it now in Qwen-Code https://x.com/Alibaba_Qwen/status/1955436295603490864
Qwen Image Edit is still cooking, but I couldn’t resist trying it — now I’ve got a Qwen Capybara rocking unlimited stickers! https://x.com/Alibaba_Qwen/status/1955656822532329626
Qwen Image is now quicker than ever on Qwen Chat. Try it now: https://x.com/Alibaba_Qwen/status/1955656265499316406
RT @angrypenguinPNG: Qwen-Image has been distilled to run in 8 steps. This means you get nearly the same image quality, with >50% less com… https://x.com/Alibaba_Qwen/status/1954337152298582288
Trained a sidechain LoRA to compensate for the quantization precision loss when quantizing Qwen Image to 3 bit. It works well. This can be active during training and should allow us to fine tune Qwen Image on <24GB of VRAM. This can be done to all models. https://x.com/ostrisai/status/1954373246997913853
Wow, that’s a brilliant use of AI! Qwen Chat Deep Research now supports image and file inputs. Try it now: https://x.com/Alibaba_Qwen/status/1955642787619381325
🚨 Open Model Leaderboard Update New open models entered the Text Arena, and the rankings by provider have reshuffled for August. – Qwen-3-235b-a22b-instruct from @Alibaba_Qwen takes the crown 🏆 – GLM-4.5 from @Zai_org and gpt-oss-120b by @openAI debut in the top 10! All the https://x.com/lmarena_ai/status/1955669431742587275
🖥️🤖 LangGraph CLI Connect to LangGraph Platform directly from the terminal! Featuring comprehensive management of assistants, threads, and runs with real-time streaming capabilities. Explore the CLI on GitHub 🚀 https://x.com/LangChainAI/status/1954226169412493544




