Image created with gemini-2.5-flash-image, prompted by claude-sonnet-4-5. Image prompt: A pristine 1961 Ferrari 250 GT California Spyder in Rosso Corsa red driving on a polished elevated highway at golden hour, with traditional Chinese pagodas and red lanterns on the left side and modern glass skyscrapers with digital screens on the right, warm cinematic lighting, elegant reflections on wet pavement showing both architectural styles merged, studio-grade automotive photography, subtle depth of field, no people.
🤯 400 tokens/s on a MacBook? Yes, you read that right! Shaohong Chen just fine-tuned the Qwen3-0.6B LLM in under 2 minutes using Apple’s MLX framework. This is how you turn your MacBook into a serious LLM development rig. A step-by-step guide and performance metrics inside! 🧵 https://x.com/ModelScope2022/status/1977706364563865805
Just shipped Privacy AI 1.3.2. This update adds full MLX model support — you can now run text and vision models locally using Apple’s MLX engine. Models can be downloaded directly from Hugging Face, and the new download manager supports resume-on-failure, background … https://x.com/best_privacy_ai/status/1977736637086920765
Qwen3-VL 30B-A3B at 4-bit precision, running on Apple silicon at 80 tok/s with MLX! @awnihannun @Prince_Canuma @ostensiblyneil @lmstudio https://x.com/vincentaamato/status/1977776546736713741
Qwen3-VL 235B is now live on Ollama’s cloud — free to try! https://x.com/Alibaba_Qwen/status/1978288558587674672
Qwen3-VL 235B is available on Ollama’s cloud! It’s free to try. ollama run qwen3-vl:235b-cloud The smaller models, and the ability to run fully on-device will be coming very soon! See examples and how to use the model on Ollama! 👇👇👇 https://x.com/ollama/status/1978225292784062817
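Once the model is pulled with `ollama run`, it can also be called programmatically through Ollama’s `/api/chat` endpoint, which accepts base64-encoded images alongside the text prompt. A minimal sketch of the request body, assuming a local Ollama install; the helper name and placeholder image bytes are illustrative, not from the Ollama docs:

```python
import base64
import json

def build_chat_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint.
    Vision models receive images as base64 strings in the message."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": prompt,
                "images": [base64.b64encode(image_bytes).decode("ascii")],
            }
        ],
        "stream": False,
    }

# Placeholder bytes stand in for a real image file read from disk.
payload = build_chat_payload(
    "qwen3-vl:235b-cloud",
    "Describe this chart in one sentence.",
    b"\x89PNG-placeholder",
)
print(json.dumps(payload)[:60])
```

The same payload shape works for the local dense models once they ship, with only the `model` string changed.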
> to celebrate this, we built some notebooks to fine-tune Qwen3-VL-4B with SFT/GRPO in a free Colab notebook 🥹 https://x.com/mervenoyann/status/1978153606462550220
Qwen3-VL is very good for JSON structured output and is insanely fast 💨 Thanks @Alibaba_Qwen team! https://x.com/andrejusb/status/1978076341158244835
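Even with models that are strong at structured output, replies sometimes arrive wrapped in a markdown code fence. A small post-processing sketch; the helper and the sample reply are illustrative, not taken from Qwen3-VL documentation:

```python
import json

FENCE = "`" * 3  # markdown code-fence marker

def parse_model_json(reply: str) -> dict:
    """Strip an optional markdown fence from a model reply,
    then parse the remaining text as JSON."""
    text = reply.strip()
    if text.startswith(FENCE):
        # Drop the opening fence line and the trailing fence.
        text = text.split("\n", 1)[1]
        text = text.rsplit(FENCE, 1)[0]
    return json.loads(text)

# Fabricated sample reply, for illustration only.
sample = FENCE + 'json\n{"invoice_no": "A-123", "total": 42.5}\n' + FENCE
print(parse_model_json(sample)["invoice_no"])  # A-123
```

Plain JSON replies pass through unchanged, so the same helper covers both cases.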
The new dense Qwen3-VL models from @Alibaba_Qwen have day-0 support on MLX-VLM! Get started today: > pip install -U mlx-vlm Model weights: https://x.com/Prince_Canuma/status/1978164715848134699
Kaggle link: https://x.com/Alibaba_Qwen/status/1978290751436943415
We’re open-sourcing several core components from the Qwen3Guard Technical Report, now available for research and community use: 🔹 Qwen3-4B-SafeRL: A safety-aligned model fine-tuned via reinforcement learning using feedback from Qwen3Guard-Gen-4B. → Achieves significant safety … https://x.com/Alibaba_Qwen/status/1978732145297576081
The next generation of Qwen-VL models is here! > Qwen3-VL 4B (dense, ~3GB) > Qwen3-VL 8B (dense, ~6GB) > Qwen3-VL 30B (MoE, ~18GB) These models come with comprehensive upgrades to visual perception, spatial reasoning, and image understanding. Supported with 🍎MLX on Mac. https://x.com/lmstudio/status/1978205419802616188
🚀Qwen3-VL-235B-A22B-Instruct is now #1 on OpenRouter for image processing — 48% market share! 🎉Huge thanks to our amazing community. https://x.com/Alibaba_Qwen/status/1977566109198151692
These are the Qwen3-VL models I’ve been looking forward to most – in 4B and 8B sizes, I expect these will work well even on just CPUs https://x.com/simonw/status/1978151711987372227
Qwen3-VL has already become one of the most popular multimodal models supported by vLLM – Try it out! https://x.com/rogerw0108/status/1978158856611024913
Excited to announce the launch of Qwen3-VL-Flash on Alibaba Cloud Model Studio! 🚀 A powerful new vision-language model that combines reasoning and non-reasoning modes, outperforming open-source Qwen3-VL-30B-A3B and Qwen2.5-72B with faster responses, stronger capabilities, and … https://x.com/Alibaba_Qwen/status/1978841775411503304