Image created with gemini-3.1-flash-image-preview and claude-sonnet-4-5. Image prompt: Using the provided reference image, preserve the exact square faceted perfume bottle composition with warm amber-gold liquid, crystal stopper, pure white background, soft shadow, and clean white label, but replace the label text with ‘NVIDIA’ in the same black serif font style. Add a delicate sterling silver chain draped naturally around the bottle neck holding one small dainty pendant: a miniature sterling silver GPU chip with fine etched grid lines, sized like a high-fashion charm bracelet piece. Maintain the high-key studio lighting and glass refractions exactly as shown in the reference.

It was such a pleasure to test NVIDIA’s Level 2 driving system in a Mercedes-Benz on real streets in San Francisco while talking to Ali Kani, who knows literally everything about NVIDIA’s AV efforts. The driving was so smooth. So smooth that sometimes it felt like we were in a
https://x.com/TheTuringPost/status/2039104195161473169

– Nemotron Super / Ultra
– Arcee Trinity Large (soon)
– Gemma 4 (eventually)
– Reflection’s first models (maybe)
– GPT OSS 2? (maybe)
– Thinky? Other neolabs?
Things looking up for open models built in the US in 2026. We had 0 for a bit there.
https://x.com/natolambert/status/2039499358325129530

Long context windows are now available for select models on Tinker! – 128k tokens for Kimi K2.5 and GPT-OSS-120B – 256k for Nemotron 3 Super 120B and Qwen3.5 397B. For more details and pricing, see our full model lineup:
https://x.com/tinkerapi/status/2039424320393621649

Delivered performance, not peak chip specifications, drives AI factory productivity. Rigorous benchmarks are the only way to see past the noise. In MLPerf Inference v6.0, NVIDIA’s extreme co-design delivered the highest token output across the broadest range of models and
https://x.com/nvidia/status/2039419585254875191

Dissecting Nvidia Blackwell – Tensor Cores, PTX Instructions, SASS, Floorsweep, Yield Microbenchmarking, tcgen05, 2SM MMA, UMMA, TMA, LDGSTS, UBLKCP, Speed of Light, Distributed Shared Memory, GPC Floorsweeps, SM Yield
https://x.com/SemiAnalysis_/status/2039102080959566038

Huge thanks to @NVIDIAAI for supporting full-time engineering work on OpenClaw hardening. A lot of careful security and reliability improvements landed over the last few releases, and that investment is paying off.
https://x.com/openclaw/status/2039100191324979580

As usual, we open-source everything, MIT license: https://t.co/qZiy0FgTg8
Code: https://t.co/4VpzxgWQGp
Paper: https://t.co/8E15zjokkM
CaP-X is brought to you by NVIDIA, Berkeley, Stanford, and CMU. I’d like to thank the legend @Ken_Goldberg who co-advised the work, and the team
https://x.com/DrJimFan/status/2039360925606760690

Jim Fan proposes a three-part recipe for embodied AI, drawing parallels from the evolution of digital intelligence: – World Models (pre-training video-based physics) – Action Fine-Tuning (motor control) – Physical RL (infinite environments in neural simulation)
https://x.com/TheHumanoidHub/status/2039060404161425833

Manufacturers can’t trust their simulations. They design in a virtual world, then spend weeks debugging on the real production line. That gap is costing time, money, and scale. At @NVIDIAGTC, I sat down with Craig McDonnell from @ABBRobotics to break down: → how they’re
https://x.com/IlirAliu_/status/2037099281694257298
