Image created with gemini-2.5-flash-image, prompted via claude-sonnet-4-5. Image prompt: A luminous jade-green chess knight piece stands tall at the center of an elegant wooden chess board, glowing with inner light and casting dramatic shadows over smaller grayscale pieces arranged around it, studio lighting with rim highlights emphasizing the green piece’s dominance, photorealistic render with shallow depth of field
NVIDIA shows off its first Blackwell wafer manufactured in the US https://www.engadget.com/big-tech/nvidia-shows-off-its-first-blackwell-wafer-manufactured-in-the-us-192836249.html
🤖 NVIDIA’s Gr00t N1.5 is now available in LeRobot! This is the result of a great collaboration between the @huggingface LeRobot team and @NVIDIARobotics ! Gr00t N1.5 highlights: 🦾 Cross-embodiment foundation model for robots 🧠 Multimodal inputs: vision, language, and https://x.com/LeRobotHF/status/1981334159801929947
Jensen, in his Computex keynote earlier this year: “Humanoid robot is likely the only robot that is likely to work, because technology needs scale.” NVIDIA GTC DC is one week away. https://x.com/TheHumanoidHub/status/1980722742124245324
vLLM 🤝 @nvidia = open, scalable, agentic AI you can run anywhere. 🧵 Strengthening our partnership with @nvidia: vLLM serves the NVIDIA Nemotron family. This new blog https://x.com/vllm_project/status/1981553870599049286
Sourcebot (@sourcebot_dev) helps developers and AI agents understand massive codebases. It’s used daily by engineers at some of the largest companies in the world, including NVIDIA, Red Hat, and Arista Networks. Congrats on the launch, @msukkarieh1 & @bshizzle28! https://x.com/ycombinator/status/1978883886093602872
Alibaba Cloud claims to slash Nvidia GPU use by 82% with new pooling system | South China Morning Post https://www.scmp.com/business/article/3329450/alibaba-cloud-claims-slash-nvidia-gpu-use-82-new-pooling-system
We just launched Mojo🔥 GPU Puzzles Edition 1, a hands-on guide that teaches GPU programming through 34 progressive challenges, not lectures. Learn by doing, from your first GPU threads to tensor cores. Works on NVIDIA, AMD, and Apple GPUs. https://x.com/Modular/status/1981455872137318556
This week, Baseten’s model performance team unlocked the fastest TPS and TTFT for gpt-oss 120b on @nvidia hardware. When gpt-oss launched we sprinted to offer it at 450 TPS… now we’ve exceeded 650 TPS and 0.11 sec TTFT… and we’ll keep working to keep raising the bar. We are https://x.com/basetenco/status/1981757270053494806
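The two metrics in that item, TPS (tokens per second) and TTFT (time to first token), are standard throughput and latency measures for streaming LLM inference. As a rough illustration of how they are typically computed from a token stream (a generic sketch, not Baseten’s actual benchmark code; `token_stream` is a hypothetical iterable that yields tokens as they arrive):

```python
import time

def measure_ttft_and_tps(token_stream):
    """Measure time-to-first-token (TTFT, seconds) and tokens-per-second
    (TPS) for an iterable that yields tokens as they are generated."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in token_stream:
        now = time.perf_counter()
        if ttft is None:
            # Latency until the very first token arrives.
            ttft = now - start
        count += 1
    elapsed = time.perf_counter() - start
    tps = count / elapsed if elapsed > 0 else 0.0
    return ttft, tps

# Example with a trivial in-memory stream (real use would wrap an
# inference server's streaming response):
ttft, tps = measure_ttft_and_tps(iter(["Hello", ",", " world"]))
```

Real benchmarks like the one quoted would also control for batch size, prompt length, and output length, since all three strongly affect both numbers.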
We ran performance tests on release day firmware and an updated Ollama version to see how Ollama performs! @NVIDIAAIDev Let’s go spark! https://x.com/ollama/status/1981486870963114121
.@romeovdean and I wrote a blog post to teach ourselves about the AI buildout. We were surprised by some of the things we learned: 1. There’s a huge fab CapEx overhang – with a single year of earnings in 2025, Nvidia could cover the last 3 years of TSMC’s ENTIRE CapEx. In 2025, https://x.com/dwarkesh_sp/status/1981074799758921843