Image created with OpenAI GPT-Image-1. Image prompt: 1966 Kodachrome photo-look, thin white frame, forest-green title band in upper left with stacked yellow/white serif text reading “INTERNATIONAL”, twilight blue cast, title band shifts to navy scene featuring a row of miniature country flags along the pen; gentle film grain, overcast daylight
🚨 Alibaba and Tencent have disabled AI features, such as image recognition, in their chatbots (e.g., Qwen, Yuanbao, Kimi) during China’s gaokao exam. Purpose: to prevent cheating via AI. → The move comes as China sees 13.4 million students sitting the exam this year. https://x.com/rohanpaul_ai/status/1932023557250515237
Chinese tech firms freeze AI tools in crackdown on exam cheats | China | The Guardian https://www.theguardian.com/world/2025/jun/09/chinese-tech-firms-freeze-ai-tools-exam-cheats-universities-gaokao
Apple doesn’t report benchmarks for its AI models, instead offering an ill-documented head-to-head evaluation. But even by their own numbers, Apple’s latest on-device models are mostly worse than the open Gemma 3-4B from Google or Qwen 3-4B, and their server LLM is comparable to Llama 4 Scout. https://x.com/emollick/status/1932420903515590997
o3 was considerably less verbose in its responses on our Artificial Analysis Intelligence Index eval set than Gemini 2.5 Pro and DeepSeek R1, but more verbose than Claude 4 Opus. https://x.com/ArtificialAnlys/status/1932489580592435301
ByteDance-Seed strikes again and destroys Veo 3. And you thought American labs had any chance at competing with Chinese ones? Paper: https://x.com/scaling01/status/1933048431775527006
Mistral Compute | Mistral AI https://mistral.ai/news/mistral-compute
Nvidia, HPE to build new supercomputer in Germany | Reuters https://www.reuters.com/sustainability/climate-energy/nvidia-hpe-build-new-supercomputer-germany-2025-06-10/
The @Gradio agents and MCP hackathon kick-off is today! Very cool collaborative event with @AnthropicAI, @modal_labs, @nebiusai, @MistralAI, @hyperbolic_labs, @SambaNovaAI, @llama_index, @OpenAI. Open standards are key for a healthy AI community and there is amazing potential in https://x.com/MoritzLaurer/status/1929851886854652104
Japanese Financial Benchmark “EDINET-Bench” Released. Utilizing securities reports from EDINET, the Financial Services Agency’s electronic disclosure system, we have developed a Japanese financial benchmark to measure how well AI can handle advanced financial tasks. https://x.com/SakanaAILabs/status/1931887596323717406
We are pleased to announce that we have signed a Memorandum of Understanding (MOU) with Hokuriku Financial Holdings, Inc. regarding a strategic partnership aimed at promoting regional finance and AI. https://x.com/SakanaAILabs/status/1932359607122628809
.@togethercompute API has the fastest DeepSeek v3 endpoint (2x faster than the next-best API endpoint, and almost 5x faster than the DeepSeek API). See how to use it directly with @cline to make all your Cline workflows snappier! https://x.com/vipulved/status/1932601075754020876
Extract – a system built by the UK government, using our Gemini foundational model – will help council planners make faster decisions. 🚀 Using multimodal reasoning, it turns complex planning documents – even handwritten notes and blurry maps – into digital data in just 40s. https://x.com/GoogleDeepMind/status/1932032485254217799
UK government harnesses Gemini to support faster planning decisions https://blog.google/around-the-globe/google-europe/united-kingdom/uk-government-harnesses-gemini-to-support-faster-planning-decisions/
Excited to introduce @MistralAI Mistral Code! 🤖 The most customizable AI-powered coding assistant for enterprises! 🛠️ Frontier coding models, fully customizable & tunable to your codebase 🌟 State-of-the-art coding assistance & agentic coding, in your control 🔧 One platform, https://x.com/sophiamyang/status/1930283932815372427
How to Use Banned US Models in China https://www.chinatalk.media/p/the-grey-market-for-american-llms
Proud to announce Sakana AI’s partnership with Hokkoku Bank, based in Ishikawa Prefecture, Japan. Sakana AI will deliver bank-specific AI-powered tools to Hokkoku Bank. We aim for this partnership to serve as a model case for other regional banks in Japan. https://x.com/hardmaru/status/1932370496483697105
I think India could become an AI superpower, and we’re starting to see early signs! https://x.com/ClementDelangue/status/1931846782184497224
Magistral 4-bit DWQ is up on Hugging Face. Use it with mlx-lm or in @lmstudio: https://x.com/awnihannun/status/1932547785162961291
magistral is an amazingly powerful model, and on le chat with 1000+ tok/s inference, it’s ui-breaking fast https://x.com/qtnx_/status/1932442022574723407
Taiwan May exports hit record on AI demand and ahead of US tariffs | Reuters https://www.reuters.com/world/china/taiwan-may-exports-hit-record-ai-demand-ahead-us-tariffs-2025-06-09/
Announcing Magistral, @MistralAI’s first reasoning model, excelling in domain-specific, transparent, and multilingual reasoning. https://x.com/sophiamyang/status/1932451856447586312
Announcing Magistral, our first reasoning model designed to excel in domain-specific, transparent, and multilingual reasoning. https://x.com/MistralAI/status/1932441507262259564
Magistral | Mistral AI https://mistral.ai/news/magistral
Mistral just released their reasoning models: Magistral-Small and Magistral-Medium. Magistral Small is open-source, based on the 24B Mistral-Small 3.1, and can run on a single RTX 4090. Unfortunately it gets crushed by Qwen3-32B and Qwen3-30B-A3B. Download link: https://x.com/scaling01/status/1932445360380612712
Mistral really cooked – 24B, Based on Mistral Small 3.1, Multilingual, 128K context (40k effective), Apache 2.0 licensed! 🔥 Works on MLX, llama.cpp, transformers, vllm and more ⚡ https://x.com/reach_vb/status/1932449015657836730
The Mistral team at it again with Magistral! GRPO with edits: 1. Removed KL divergence; 2. Normalized by total length (Dr. GRPO style); 3. Minibatch normalization for advantages; 4. Relaxed trust region. Paper: https://x.com/danielhanchen/status/1932451325398413518
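The four GRPO edits listed above can be sketched roughly as follows. This is an illustrative reconstruction from the tweet’s summary, not Mistral’s actual training code: advantages are group-centered but normalized over the whole minibatch, the loss is normalized by total generated length rather than per sequence, the upper clipping bound is relaxed (eps_high > eps_low), and there is no KL penalty term.

```python
import numpy as np

def grpo_advantages(rewards, group_ids):
    """Group-relative advantages: subtract each group's mean reward,
    then normalize by the std over the whole minibatch (edit 3),
    not per group."""
    rewards = np.asarray(rewards, dtype=float)
    group_ids = np.asarray(group_ids)
    adv = np.empty_like(rewards)
    for g in set(group_ids.tolist()):
        mask = group_ids == g
        adv[mask] = rewards[mask] - rewards[mask].mean()
    return adv / (adv.std() + 1e-8)

def grpo_loss(ratios, advantages, lengths, eps_low=0.2, eps_high=0.28):
    """Clipped policy-gradient loss with a relaxed upper trust region
    (edit 4), normalized by total token count across the batch
    (edit 2, Dr. GRPO style). Note: no KL-divergence term (edit 1)."""
    r = np.asarray(ratios, dtype=float)          # pi_new / pi_old per sequence
    a = np.asarray(advantages, dtype=float)
    lengths = np.asarray(lengths, dtype=float)
    unclipped = r * a
    clipped = np.clip(r, 1.0 - eps_low, 1.0 + eps_high) * a
    per_seq = -np.minimum(unclipped, clipped)
    # weight each sequence by its token count, divide by total tokens
    return float(np.sum(per_seq * lengths) / np.sum(lengths))
```

The minibatch-level std (instead of a per-group std) avoids amplifying noise in groups where all completions got nearly identical rewards; dropping KL frees the policy to move further from the reference model.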
We’re proud to announce Mistral Compute—an unprecedented AI infrastructure undertaking in Europe, and a strategic initiative that will ensure that all nation states, enterprises, and research labs globally remain at the forefront of AI innovation. Read more in the thread. https://x.com/MistralAI/status/1932798814840332307
Mistral got hit by export restrictions again! They couldn’t evaluate the latest DeepSeek and Qwen. I am risking being detained by going around export restrictions and checking the numbers on Qwen. TL;DR: Qwen 4B is ~close to their model and the small 30B MoE is better, let’s not https://x.com/dylan522p/status/1932563462963507589
Mistral’s paper is the best practical paper on doing reasoning RL since the DeepSeek R1 paper, fwiw. Will do a writeup later to go through it 🤓 if I get the time. https://x.com/Teknium1/status/1932580993132790232
Wow, an end-to-end omni model from Ant Group: Ming Lite Omni can hear, speak, and generate images, competitive with GPT-4o 🔥 Some notes on the paper and the release: > GUI tasks: +9% accuracy over Qwen2.5-VL-7B on AITZ (EM) > Audio understanding: 6/13 SOTA results on public https://x.com/reach_vb/status/1933458455794229317
Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models | Qwen https://qwenlm.github.io/blog/qwen3-embedding/
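Embedding and reranking models like these typically slot into a two-stage retrieve-then-rerank pipeline. A minimal sketch of that pattern, with toy vectors and a toy scoring function standing in for the actual Qwen3 models: the embedding stage narrows candidates cheaply by cosine similarity, and the reranker rescores only the short list.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve_then_rerank(query_vec, doc_vecs, rerank_fn, top_k=2):
    """Stage 1: rank all docs by embedding cosine similarity (cheap).
    Stage 2: rescore only the top_k candidates with the more expensive
    reranker, which in practice is a cross-encoder scoring (query, doc)
    pairs jointly. Returns doc indices, best first."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    shortlist = ranked[:top_k]
    return sorted(shortlist, key=rerank_fn, reverse=True)
```

The split matters for cost: embeddings are precomputed once per document, while the reranker runs per query but only over the shortlist.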
So many models released this past week in open AI, here are the weekly picks 🥹 let’s gooo 📚 LLMs/Retrieval > @Alibaba_Qwen released Qwen3-Reranker-4B, a text reranker that supports 100+ languages, ranking #1 on MTEB > ..and Qwen3-Embedding models, new embeddings that come in 600M, https://x.com/mervenoyann/status/1933101803274477600