Image created with gemini-2.5-flash-image and claude-sonnet-4-5. Image prompt: A pristine 1961 Ferrari 250 GT California Spyder with deep emerald-green metallic paint featuring subtle circuit board pattern etchings across the body panels, gold circuit traces along the fenders, and small GPU chip details in the chrome grille, photographed in a dramatic studio with spotlights creating reflections on the glossy surface, cinematic automotive photography with soft shadows and premium lighting.
How Starcloud Is Bringing Data Centers to Outer Space | NVIDIA Blog https://blogs.nvidia.com/blog/starcloud/
Nvidia, Microsoft, BlackRock part of $40B Aligned Data Centers deal https://www.cnbc.com/2025/10/15/nvidia-microsoft-blackrock-aligned-data-centers.html
Announcing partnership with @Broadcom to build an OpenAI chip. This deal is on top of the @nvidia and @AMD ones we’ve announced over the past few weeks, and will allow us to customize performance for specific workloads. The world needs more compute. https://x.com/gdb/status/1977739645040378267
Google officially starts selling TPUs to external customers and competes directly with Nvidia now https://x.com/zephyr_z9/status/1978835094216343820
AMD debuts Helios rack-scale AI hardware platform at OCP Global Summit 2025 — promises easier serviceability and 50% more memory than Nvidia’s Vera Rubin | Tom’s Hardware https://www.tomshardware.com/tech-industry/amd-debuts-helios-rack-scale-ai-hardware-platform-at-ocp-global-summit-2025-promises-easier-serviceability-and-50-percent-more-memory-than-nvidias-vera-rubin
Nvidia unveils its vision for gigawatt ‘AI factories’ based on its Vera Rubin architecture – SiliconANGLE https://siliconangle.com/2025/10/13/nvidia-unveils-vision-gigawatt-ai-factories-based-vera-rubin-architecture/
🚀 vLLM just hit 60K GitHub stars! 🎉 From a small research idea to powering LLM inference everywhere — across NVIDIA, AMD, Intel, Apple, TPUs, and more — vLLM now supports almost all major text-generation models and native RL pipelines like TRL, Unsloth, Verl, and OpenRLHF. https://x.com/vllm_project/status/1977724334157463748
MacStudio you ask? Apple Engineering’s **actual** time spent on PyTorch support hasn’t given me confidence that the PyTorch Mac experience would get anywhere close to NVIDIA’s any time soon, if ever. The Meta engineers continue to do a huge amount of heavy-lifting for improving the… https://x.com/soumithchintala/status/1978848796953161754
Thanks to @NVIDIAAIDev, @xingyaow_ and I got to try an early NVIDIA DGX Spark system, which is this little GPU box that sits on your desktop. It’s sleek-looking, fast, and quiet, and can easily run strong LMs like @Alibaba_Qwen 3 Coder. We wrote a little bit about using it: https://x.com/gneubig/status/1978067258506187238
since the day it was announced, i’ve been dying to get my hands on DGX Spark; a small but powerful machine i can put on my desk to run latest open models of almost any size. thanks to @nvidia, the dream came true a few weeks ago. look at this cutie sitting on my desk at NYU https://x.com/kchonyc/status/1978156587320803734
Thanks Jensen for the hand delivery of DGX Spark. Best delivery service ever. Amazing to see so much compute (1 petaflop!) in such a tiny form factor. https://x.com/gdb/status/1978273142695977391
Powering inference for the fastest growing AI companies like OpenEvidence, Writer, and Clay means being the first to use bleeding-edge model performance tooling in production. That’s why we were early adopters of NVIDIA Dynamo, giving us 50% lower latency and 60%+ higher… https://x.com/basetenco/status/1978883986924634551
NVIDIA GTC Washington, DC, is just around the corner, happening October 27–29! I’ll be there in person, excited to join the Physical AI and Robotics sessions and, of course, Jensen’s keynote. Can’t wait to connect with many of you! https://x.com/TheHumanoidHub/status/1977777355197444308