Image created with gemini-2.5-flash-image (prompt drafted with claude-sonnet-4-5). Image prompt: A 1961 Ferrari 250 GT California Spyder in Rosso Corsa red positioned inside a semiconductor clean room with yellow-green lighting, silicon wafer replacing the hood ornament, microscopic circuit patterns subtly etched into the glossy paint surface, soft bokeh of wafer fabrication equipment in background, cinematic automotive photography with clean room aesthetic, polished chrome details catching specialized lighting, elegant composition with minimal background, landscape orientation.
Breaking Ground on Our New AI-Optimized Data Center in El Paso https://about.fb.com/news/2025/10/metas-new-ai-optimized-data-center-el-paso/
How Starcloud Is Bringing Data Centers to Outer Space | NVIDIA Blog https://blogs.nvidia.com/blog/starcloud/
Nvidia, Microsoft, BlackRock part of $40B Aligned Data Centers deal https://www.cnbc.com/2025/10/15/nvidia-microsoft-blackrock-aligned-data-centers.html
Announcing partnership with @Broadcom to build an OpenAI chip. This deal is on top of the @nvidia and @AMD ones we’ve announced over the past few weeks, and will allow us to customize performance for specific workloads. The world needs more compute. https://x.com/gdb/status/1977739645040378267
Exclusive: OpenAI set to finalize first custom chip design this year | Reuters https://www.reuters.com/technology/openai-set-finalize-first-custom-chip-design-this-year-2025-02-10/
We’re partnering with Broadcom to deploy 10GW of chips designed by OpenAI. Building our own hardware, in addition to our other partnerships, will help all of us meet the world’s growing demand for AI. https://x.com/OpenAINewsroom/status/1977724753705132314
We’re designing our own chips — taking what we’ve learned from building frontier models and bringing it directly into the hardware. Building our own hardware, in addition to our other partnerships, will help all of us meet the world’s growing demand for AI. In Episode 8 of the https://x.com/OpenAI/status/1977794196955374000
Really happy to be announcing the chips we’ve been cooking the past 18 months! OpenAI kicked off the reasoning wave with o1, but months before that we’d already started designing a chip tuned precisely for reasoning inference of OpenAI models. In January 2024, I joined OpenAI as https://x.com/itsclivetime/status/1977772728850817263
Today, we’re announcing a multi-year, multi-generation strategic partnership with @OpenAI that puts AMD compute at the center of the global AI infrastructure buildout. ✅ 6GW of AI infrastructure ✅ Initial 1GW deployment of AMD Instinct MI450 series GPU capacity beginning 2H https://x.com/amd/status/1975155370860384576
OpenAI and Broadcom announce strategic collaboration to deploy 10 gigawatts of OpenAI-designed AI accelerators | OpenAI https://openai.com/index/openai-and-broadcom-announce-strategic-collaboration/
Apple unleashes M5, the next big leap in AI performance for Apple silicon – Apple https://www.apple.com/newsroom/2025/10/apple-unleashes-m5-the-next-big-leap-in-ai-performance-for-apple-silicon/
Saw that DGX Spark vs Mac Mini M4 Pro benchmark plot making the rounds (looks like it came from @lmsysorg). Thought I’d share a few notes as someone who actually uses a Mac Mini M4 Pro and has been tempted by the DGX Spark. First of all, I really like the Mac Mini. It’s https://x.com/rasbt/status/1978608882156269755
Intel Announces “Crescent Island” Inference-Optimized Xe3P Graphics Card With 160GB vRAM – Phoronix https://www.phoronix.com/review/intel-crescent-island
The quality of AMD software now is totally different from when we started using it deeply in summer 2024. In 2024, we were running into many ROCm-specific bugs. Today, the frequency of running into ROCm bugs is orders of magnitude lower. AMD hardware is pretty good & the software is https://x.com/SemiAnalysis_/status/1977571931504153076
we’ve gotten some amazing lift out of applying our models to chip design: https://x.com/gdb/status/1977881545055830200
Seems like one of the key infra updates that frontier labs make for RL; it helps mitigate the long-tail problem of GPUs working on just one completion https://x.com/natolambert/status/1977737413305790565
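The long-tail fix the post hints at is usually some variant of partial (interruptible) rollouts. A minimal sketch of that idea — my assumption about the technique, not any lab's actual scheduler: cap the tokens generated per completion per scheduling round and requeue unfinished sequences, so one very long completion never pins a GPU on its own while the rest of the batch sits idle.

```python
# Sketch of "partial rollouts" scheduling for RL inference (illustrative,
# not a real scheduler): each round generates at most `budget` tokens per
# completion; unfinished completions are requeued and resumed next round,
# batched alongside fresh work, instead of running stragglers to the end.
from collections import deque

def schedule_rollouts(lengths, budget):
    """lengths: total tokens each completion needs; budget: max tokens
    generated per completion per round. Returns the number of rounds
    needed when stragglers are chunked rather than run to completion."""
    queue = deque(lengths)
    rounds = 0
    while queue:
        rounds += 1
        for _ in range(len(queue)):
            remaining = queue.popleft() - budget
            if remaining > 0:
                queue.append(remaining)  # partial rollout: resume later
    return rounds

# A 4096-token straggler among short completions is split across rounds
# of 512 tokens rather than occupying a worker for its full length.
print(schedule_rollouts([128, 256, 256, 4096], budget=512))  # → 8
```

In a real system the requeued stubs carry their KV cache (or a checkpoint of it) so resumption is cheap, and the policy-version skew introduced by resuming old rollouts is handled by the off-policy correction in the RL objective.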
Together AI is expanding its business into buying GPUs to put into its own data centers, as its revenue more than doubled to $300M ARR over the summer. That growth has garnered multiple investment offers at $5B-$6B. w/@waynema @MilesKruppa @Katie_Roof https://x.com/steph_palazzolo/status/1978099327634473072
Of all the naming mistakes by AI labs, calling what they are building “datacenters” is a big one. “We are building the world’s largest supercomputer in your town” would have been better (along with real stuff like better explaining local water & power impacts, of course) https://x.com/emollick/status/1978132856879636618
Announcing the completely reimagined vLLM TPU! In collaboration with @Google, we’ve launched a new high-performance TPU backend unifying @PyTorch and JAX under a single lowering path for amazing performance and flexibility. 🚀 What’s New? – JAX + Pytorch: Run PyTorch models on https://x.com/vllm_project/status/1978855648176853100
Massive new update: Google TPU inference for open models, like Gemma! `tpu-inference` is a new @vllm_project backend that delivers up to 5x performance gains over previous prototypes while supporting both frameworks via a single lowering path. – 🤝 Unifies PyTorch and JAX under https://x.com/_philschmid/status/1978889178067743210
Google officially starts selling TPUs to external customers and competes directly with Nvidia now https://x.com/zephyr_z9/status/1978835094216343820
10X Backbone: How Meta Is Scaling Backbone Connectivity for AI – Engineering at Meta https://engineering.fb.com/2025/10/16/data-center-engineering/10x-backbone-how-meta-is-scaling-backbone-connectivity-for-ai/
Meta partners up with Arm to scale AI efforts | TechCrunch https://techcrunch.com/2025/10/15/arm-partners-with-meta-to-scale-ai-efforts/
Banger paper from Meta and collaborators. This paper is one of the best deep dives yet on how reinforcement learning (RL) actually scales for LLMs. The team ran over 400,000 GPU-hours of experiments to find a predictable scaling pattern and a stable recipe (ScaleRL) that https://x.com/omarsar0/status/1978865039529689257
Another first for our AI fleet… a supercomputing cluster of NVIDIA GB300s with 4600+ GPUs and featuring next-gen InfiniBand. First of many as we scale to hundreds of thousands of GB300s across our DCs, and rethink every layer of the stack across silicon, systems, and software https://x.com/satyanadella/status/1976322455288545343
AMD debuts Helios rack-scale AI hardware platform at OCP Global Summit 2025 — promises easier serviceability and 50% more memory than Nvidia’s Vera Rubin | Tom’s Hardware https://www.tomshardware.com/tech-industry/amd-debuts-helios-rack-scale-ai-hardware-platform-at-ocp-global-summit-2025-promises-easier-serviceability-and-50-percent-more-memory-than-nvidias-vera-rubin
Nvidia unveils its vision for gigawatt ‘AI factories’ based on its Vera Rubin architecture – SiliconANGLE https://siliconangle.com/2025/10/13/nvidia-unveils-vision-gigawatt-ai-factories-based-vera-rubin-architecture/
🚀 vLLM just hit 60K GitHub stars! 🎉 From a small research idea to powering LLM inference everywhere — across NVIDIA, AMD, Intel, Apple, TPUs, and more — vLLM now supports almost all major text-generation models and native RL pipelines like TRL, Unsloth, Verl, and OpenRLHF. https://x.com/vllm_project/status/1977724334157463748
Mac Studio, you ask? Apple Engineering’s **actual** time spent on PyTorch support hasn’t given me confidence that the PyTorch Mac experience would get anywhere close to NVIDIA’s any time soon, if ever. The Meta engineers continue to do a huge amount of heavy lifting for improving the https://x.com/soumithchintala/status/1978848796953161754
Thanks to @NVIDIAAIDev, @xingyaow_ and I got to try an early NVIDIA DGX Spark system, which is this little GPU box that sits on your desktop. It’s sleek looking, fast, and quiet, and can easily run strong LMs like @Alibaba_Qwen 3 Coder; we wrote a little bit about using it: https://x.com/gneubig/status/1978067258506187238
since the day it was announced, i’ve been dying to get my hands on DGX Spark; a small but powerful machine i can put on my desk to run latest open models of almost any size. thanks to @nvidia, the dream came true a few weeks ago. look at this cutie sitting on my desk at NYU https://x.com/kchonyc/status/1978156587320803734
Thanks Jensen for the hand delivery of DGX Spark. Best delivery service ever. Amazing to see so much compute (1 petaflop!) in such a tiny form factor. https://x.com/gdb/status/1978273142695977391
Powering inference for the fastest growing AI companies like OpenEvidence, Writer, and Clay means being the first to use bleeding-edge model performance tooling in production. That’s why we were early adopters of NVIDIA Dynamo, giving us 50% lower latency and 60%+ higher https://x.com/basetenco/status/1978883986924634551
NVIDIA GTC Washington, DC, is just around the corner, happening October 27–29! I’ll be there in person, excited to join the Physical AI and Robotics sessions and, of course, Jensen’s keynote. Can’t wait to connect with many of you! https://x.com/TheHumanoidHub/status/1977777355197444308
Stop scrolling! @karpathy just turned 100 dollars of GPU time into your own ChatGPT: Train a small chat model end to end in about 4 hours on a single 8×H100 box, then chat with it in a web UI. Clean code. One script. Full pipeline. ✅ One command speedrun from empty box to https://x.com/IlirAliu_/status/1977995281212780598
Most of OpenAI’s 2024 compute went to experiments | Epoch AI https://epoch.ai/data-insights/openai-compute-spend
Things have come a long way since the delivery of the DGX-1 9 years ago; amazing to see… https://x.com/sama/status/1978300655069450611
ARM CEO Rene Haas says the embedded chip market for physical AI will be bigger than data centers. Today, robots largely use repurposed automotive chips, but these are not designed specifically for robots. There will be tens to hundreds of chips per bot – the market will be gigantic https://x.com/TheHumanoidHub/status/1978189401831174167
Elon’s recipe to win the tech race: – 1 billion AI chips – 1 terawatt of power – 100 million robots – innovate fastest – add salt to taste https://x.com/TheHumanoidHub/status/1976344931372544005




