Image created with gemini-3.1-flash-image-preview; prompt written with claude-sonnet-4-5. Image prompt: Using the provided reference image, preserve the deep midnight navy car hood, chrome pedestal base, shallow depth-of-field sky background, dramatic upward camera angle, and automotive advertisement lighting exactly as shown. Replace only the Mercedes star with a single chrome hood ornament in the shape of a miniature house with pitched roof and chimney, rendered in polished metal at realistic ornament scale, mounted on the same pedestal. Add bold white sans-serif text reading LOCAL across the upper portion of the image.

Built a small local anime server tool powered by Hermes Agent (@NousResearch). You can: fully sync your anime list, download torrents from multiple sources, add tracking and scheduled downloads, auto-manage disk usage, serve to any device on your local wifi, and more!
https://x.com/rodmarkun/status/2033307437088850102

GPT-5.4 mini approaches the performance of the larger GPT-5.4 model on several evaluations, including SWE-Bench Pro and OSWorld-Verified.
https://x.com/OpenAIDevs/status/2033953828387885470

GPT-5.4 mini is available today in the API, Codex, and ChatGPT. In the API, it has a 400k context window. In Codex, it uses only 30% of the GPT-5.4 quota, letting you handle simpler coding tasks for about one-third of the cost. GPT-5.4 nano is only available in the API.
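The Codex quota claim above can be sketched as a quick bit of arithmetic. The 30%-of-quota figure comes from the post; the function name and the quota-budget number below are purely illustrative, not a real Codex API.

```python
# Sketch of the Codex quota math described above. The 0.30 weight is from
# the post; the budget value and helper name are illustrative only.

def tasks_per_quota(quota_units: float, units_per_task: float) -> float:
    """How many tasks fit in a given quota budget."""
    return quota_units / units_per_task

FULL_MODEL_UNITS = 1.0   # GPT-5.4 quota consumed per task (baseline)
MINI_UNITS = 0.30        # GPT-5.4 mini uses only 30% of the GPT-5.4 quota

budget = 100.0           # arbitrary quota budget for illustration
full_tasks = tasks_per_quota(budget, FULL_MODEL_UNITS)
mini_tasks = tasks_per_quota(budget, MINI_UNITS)

# Each mini task consumes about one-third of a full-model task's quota:
cost_ratio = MINI_UNITS / FULL_MODEL_UNITS
print(f"{mini_tasks:.0f} mini tasks vs {full_tasks:.0f} full tasks; "
      f"per-task cost ratio = {cost_ratio:.2f}")
```

Under these numbers, 0.30 of the quota per task is where the "about one-third of the cost" framing comes from.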
https://x.com/OpenAIDevs/status/2033953840312291603

GPT-5.4-mini is 2.25 times more expensive than GPT-5-mini: $0.75 input, $4.50 output, 400k context.
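The 2.25x figure can be sanity-checked with a quick ratio. The GPT-5.4-mini prices are from the post above; the GPT-5-mini baseline prices below are an assumption on my part (USD, conventionally quoted per 1M tokens) and are not stated in the post.

```python
# Checking the "2.25x more expensive" figure. GPT-5.4-mini prices are from
# the post; the GPT-5-mini baseline prices are assumed, not from the post.

GPT54_MINI = {"input": 0.75, "output": 4.50}
GPT5_MINI  = {"input": 0.25, "output": 2.00}  # assumed baseline prices

ratios = {k: GPT54_MINI[k] / GPT5_MINI[k] for k in GPT54_MINI}
print(ratios)  # {'input': 3.0, 'output': 2.25}
```

Under these assumed baseline prices, the quoted 2.25x corresponds to the output price ratio; the input price ratio would actually be 3x.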
https://x.com/scaling01/status/2033955279079907511

Introducing GPT-5.4 mini and nano | OpenAI https://openai.com/index/introducing-gpt-5-4-mini-and-nano/

We’re introducing GPT-5.4 mini and nano, our most capable small models yet. GPT-5.4 mini is more than 2x faster than GPT-5 mini. Optimized for coding, computer use, multimodal understanding, and subagents. For lighter-weight tasks, GPT-5.4 nano is our smallest and cheapest
https://x.com/OpenAIDevs/status/2033953815834333608

Good news: I got Qwen3.5-397B-FP8 running on my 8x MI210 server. Bad news: at 6 tokens per second.
https://x.com/QuixiAI/status/2033342155414982952
