Image created with Flux Pro v1.1 Ultra. Image prompt: CU Boulder brand style — CU Gold & Black, Helvetica Neue, Flatirons, Tuscan-vernacular sandstone + red-tile roofs; Varsity Lake footbridge, spring bloom, wide establishing view, subtle Flatirons contour lines; integrate the category “International” via Overlay: world map meridians and dot network with headline “INTERNATIONAL”; natural light, clean professional inspiring tone, crisp focus, subtle grain, editorial composition

China Shows Off Armed Attack Robots https://futurism.com/china-armed-attack-wolves

U.S. Government to Take Cut of Nvidia and AMD A.I. Chip Sales to China – The New York Times https://www.nytimes.com/2025/08/10/technology/us-government-nvidia-amd-chips-china.html

America needs to take open models more seriously. This summer, the US's early lead in open-model adoption via Llama was overtaken by Chinese models. With The American Truly Open Models (ATOM) Project, we're looking to build support and convey the urgency of this issue. https://x.com/natolambert/status/1952370970762871102

Citi Oversaw $1 Billion for Trusts Tied to Sanctioned Oligarch – Bloomberg https://www.bloomberg.com/news/articles/2025-08-11/citi-oversaw-1-billion-for-trust-us-tied-to-sanctioned-oligarch?srnd=phx-technology&embedded-checkout=true

Generate an SVG of a pelican riding a bicycle with Qwen3-Coder and Qwen-Image. Which one do you prefer? https://x.com/Alibaba_Qwen/status/1954879387465294304

Tencent Hunyuan (腾讯混元) https://vision.hunyuan.tencent.com/zh?tabIndex=0

Grok connects the world. https://x.com/elonmusk/status/1955457039620247861

Character AI pivots from proprietary to open-source models after realizing they couldn’t compete with Big Tech’s billions. Now using Llama, Qwen & DeepSeek instead of building their own. https://x.com/fdaudens/status/1955629648920088754

Introducing Mistral Medium 3.1. Overall performance boost, tone improvement, smarter web searches. Try it now in Le Chat (default model) or via our API (`mistral-medium-2508`). https://x.com/MistralAI/status/1955316715417382979

Mistral Medium 3.1 is now available in anycoder as mistral-medium-2508 https://x.com/_akhaliq/status/1955621767302808012

Indian stocks are now covered on Perplexity Finance! Enjoy! 🇮🇳 📈 https://x.com/AravSrinivas/status/1955489224511328514

Perplexity Finance has expanded to India. Across desktop, mobile web, and mobile apps, all Perplexity users now have access to: – Synthesis of Indian markets & latest news – Live stock prices for BSE & NSE equities – Bull case & bear case across key issues – Explanations of… https://x.com/jeffgrimes9/status/1955487020647850437

🚀 Qwen3-30B-A3B-2507 and Qwen3-235B-A22B-2507 now support ultra-long context—up to 1 million tokens! 🔧 Powered by: • Dual Chunk Attention (DCA) – a length-extrapolation method that splits long sequences into manageable chunks while preserving global coherence. • … https://x.com/Alibaba_Qwen/status/1953760230141309354
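The core chunking idea can be illustrated in a few lines. This is a simplified sketch of the position-remapping step only (not the full DCA algorithm, and not Qwen's actual code): by reusing intra-chunk positions, attention never sees a relative distance larger than the chunk size the model was trained on.

```python
def dca_position_ids(seq_len: int, chunk_size: int) -> list[int]:
    """Map absolute positions to intra-chunk positions (a DCA-style remap).

    Every position beyond the chunk boundary wraps back to 0, so relative
    distances within a chunk stay inside the trained context window.
    """
    return [pos % chunk_size for pos in range(seq_len)]


# A 10-token sequence split into chunks of 4: positions cycle within each chunk.
print(dca_position_ids(seq_len=10, chunk_size=4))
# [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
```

The full method also handles cross-chunk attention so that global coherence is preserved; this sketch only shows why chunking keeps positional indices in-range.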

Introducing qqWen: our fully open-sourced project (code + weights + data + detailed technical report) for full-stack finetuning (pretrain + SFT + RL) of a series of models (1.5B, 3B, 7B, 14B & 32B) for Q, a niche financial programming language. All details below! https://x.com/brendanh0gan/status/1955641113693561071

Amazing! Jan-v1 — a powerful, fully local 4B model achieving 91% SimpleQA accuracy. Huge thanks to my friends for building it on Qwen3-4B-Thinking-2507. https://x.com/Alibaba_Qwen/status/1955263159280738738

🥇 Qwen3-Coder, try it now in Qwen-Code https://x.com/Alibaba_Qwen/status/1955436295603490864

Qwen Image Edit is still cooking, but I couldn’t resist trying it — now I’ve got a Qwen Capybara rocking unlimited stickers! https://x.com/Alibaba_Qwen/status/1955656822532329626

Qwen Image is now quicker than ever on Qwen Chat. Try it now: https://x.com/Alibaba_Qwen/status/1955656265499316406

RT @angrypenguinPNG: Qwen-Image has been distilled to run in 8 steps. This means you get nearly the same image quality with >50% less com… https://x.com/Alibaba_Qwen/status/1954337152298582288

Trained a sidechain LoRA to compensate for the precision loss when quantizing Qwen Image to 3-bit. It works well. The LoRA can stay active during training, which should allow fine-tuning Qwen Image on <24GB of VRAM. The same approach can be applied to any model. https://x.com/ostrisai/status/1954373246997913853
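The idea behind a compensating sidechain LoRA can be sketched with plain linear algebra. This is a hypothetical illustration, not the author's actual training code: quantize a weight matrix coarsely, then fit a low-rank term A @ B to the quantization error so that W_q + A @ B approximates the original W. Here the low-rank fit uses truncated SVD for simplicity; in practice the LoRA would be learned by gradient descent alongside the frozen quantized weights.

```python
import numpy as np

def quantize(w: np.ndarray, bits: int = 3) -> np.ndarray:
    """Uniform symmetric quantization to the given bit width."""
    levels = 2 ** (bits - 1) - 1          # e.g. 3 levels per sign at 3-bit
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

def fit_sidechain_lora(w: np.ndarray, w_q: np.ndarray, rank: int = 8):
    """Best rank-r approximation of the quantization error via truncated SVD."""
    u, s, vt = np.linalg.svd(w - w_q, full_matrices=False)
    a = u[:, :rank] * s[:rank]            # (out_features, rank)
    b = vt[:rank, :]                      # (rank, in_features)
    return a, b

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))         # stand-in for a model weight matrix
w_q = quantize(w, bits=3)
a, b = fit_sidechain_lora(w, w_q, rank=8)

# The low-rank correction shrinks the reconstruction error of the 3-bit weights.
err_before = np.linalg.norm(w - w_q)
err_after = np.linalg.norm(w - (w_q + a @ b))
assert err_after < err_before
```

Because the correction is low-rank, its memory and compute overhead is small relative to the savings from storing the base weights at 3-bit.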

Wow, that’s a brilliant use of AI! Qwen Chat Deep Research now supports image and file inputs. Try it now: https://x.com/Alibaba_Qwen/status/1955642787619381325

🚨 Open Model Leaderboard Update. New open models entered the Text Arena, and the rankings by provider have reshuffled for August. – Qwen-3-235b-a22b-instruct from @Alibaba_Qwen takes the crown 🏆 – GLM-4.5 from @Zai_org and gpt-oss-120b by @openAI debut in the top 10! All the… https://x.com/lmarena_ai/status/1955669431742587275

#CodingWithGLM 🤝 @Kilo_Code It's happening! We're kicking off the #CodingWithGLM series, bringing GLM-4.5 to the dev tools you use every day. First up, the incredible @Kilo_Code 🚀 Unlock lightning-fast code generation, smart refactoring, and instant explanations right inside… https://x.com/Zai_org/status/1955627932543840510

GLM-4.5V is now available on Anycoder. Thanks AK! @_akhaliq https://x.com/Zai_org/status/1955092307843154093

The GLM-4.5 tech report is worth reading https://x.com/bigeagle_xd/status/1954763239738519618

Discover more from Ethan B. Holland
