Image created with gemini-2.5-flash-image, with the prompt written by claude-sonnet-4-5. Image prompt: Photorealistic wide landscape of six Ionic limestone columns on Mizzou quad with completed classical entablature spanning the top, golden hour light, the word CHIPS carved in centered Roman serif capitals with detailed silicon wafer cross-section diagrams and circuit layer schematics chiseled into the stone frieze on either side like ancient technical blueprints, red brick buildings and green lawn background, sharp focus on carved inscription and technical engravings.

NVIDIA World Simulation with Video Foundation Models for Physical AI https://huggingface.co/papers/2511.00062

Gigawatt-scale Stargate data center is the largest single investment in Michigan history: https://x.com/gdb/status/1984394938453528684

Our 7th gen TPU Ironwood is coming to GA! It’s our most powerful TPU yet: 10X peak performance improvement vs. TPU v5p, and more than 4X better performance per chip for both training + inference workloads vs. TPU v6e (Trillium). We use TPUs to train + serve our own frontier https://x.com/sundarpichai/status/1986463934543765973

Exploring a space-based, scalable AI infrastructure system design https://research.google/blog/exploring-a-space-based-scalable-ai-infrastructure-system-design/

Google: AGI is an energy problem. We’re sending TPUs closer to the sun. https://x.com/Yuchenj_UW/status/1985760405147566166

Our TPUs are headed to space! Inspired by our history of moonshots, from quantum computing to autonomous driving, Project Suncatcher is exploring how we could one day build scalable ML compute systems in space, harnessing more of the sun’s power (which emits more power than 100 https://x.com/sundarpichai/status/1985754323813605423

Planet to Build and Operate Advanced Space Platform for Google’s Project Suncatcher Moonshot https://www.planet.com/pulse/planet-to-build-and-operate-advanced-space-platform-for-google-s-project-suncatcher-moonshot/

The ISS generates the highest amount of power of any object we’ve put into space – 240 kilowatts. That’s enough for roughly 240 GPUs. But it can only dissipate 100 kW, so that football-field-sized structure could field only about 100 GPUs. Space-based data centers – good luck. https://x.com/draecomino/status/1986162034464203007
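The tweet’s argument can be sanity-checked with a back-of-envelope sketch. The ~1 kW-per-GPU figure below is our own assumption (an all-in estimate for a modern datacenter GPU plus overhead), not a number from the thread:

```python
# Back-of-envelope check of the ISS power-budget argument above.
# KW_PER_GPU is an assumed all-in figure, not a measurement.
GENERATION_KW = 240   # peak power the ISS solar arrays generate
DISSIPATION_KW = 100  # heat the ISS radiators can reject
KW_PER_GPU = 1.0      # assumed power draw per datacenter GPU

gpus_by_power = int(GENERATION_KW / KW_PER_GPU)
gpus_by_cooling = int(DISSIPATION_KW / KW_PER_GPU)

# In orbit, cooling (radiators), not generation (solar arrays),
# is the binding constraint:
print(gpus_by_power, gpus_by_cooling)  # 240 100
```

The point generalizes: every watt a space-based data center draws must eventually be radiated away, so radiator area, not panel area, sets the ceiling.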

Telekom and NVIDIA building a $1.1B datacenter in Munich with 10k GPUs including DGX B200 and RTX PRO Servers https://x.com/scaling01/status/1985741851991621712

NVIDIA GTC Washington, D.C. Keynote with CEO Jensen Huang – YouTube https://www.youtube.com/watch?v=lQHK61IDFH4&t=2s

Nvidia’s Jensen Huang: ‘China is going to win the AI race,’ FT reports | Reuters https://www.reuters.com/world/asia-pacific/nvidias-jensen-huang-says-china-will-win-ai-race-with-us-ft-reports-2025-11-05/

$38B compute deal: OpenAI is accessing AWS compute comprising hundreds of thousands of Nvidia GB200 and GB300 chips https://x.com/scaling01/status/1985352400631202187

Announcing strategic partnership with AWS, to help scale the compute required for AI that benefits everyone. https://x.com/gdb/status/1985378899648544947

AWS and OpenAI announce multi-year strategic partnership | OpenAI https://openai.com/index/aws-and-openai-partnership/

AWS announces new partnership to power OpenAI’s AI workloads https://www.aboutamazon.com/news/aws/aws-open-ai-workloads-compute-infrastructure

OpenAI strikes $38 billion AI training deal with Amazon | The Verge https://www.theverge.com/news/812443/openai-amazon-38-billion-cloud-computing-ai

Very pleased to be working with Amazon to bring a lot more NVIDIA chips online for OpenAI to keep scaling! https://x.com/sama/status/1985431030430646365

OpenAI CFO Would Support Federal Backstop for Chip Investments https://www.wsj.com/video/openai-cfo-would-support-federal-backstop-for-chip-investments/4F6C864C-7332-448B-A9B4-66C321E60FE7

Spencer Huang of NVIDIA shares the first use cases he would love to see for humanoid robots https://x.com/TheHumanoidHub/status/1985767748358849019

We will be hearing more and more about Spencer Huang from now on. He recently joined NVIDIA, the family business led by his father Jensen Huang, and is already shaping the company’s vision for robotics and physical AI. His mix of curiosity, technical depth, and quiet confidence https://x.com/TheTuringPost/status/1985427046013813093

New episode: Spencer Huang is a Product Lead for @NVIDIARobotics, focusing on open-source simulation frameworks, synthetic data generation, and robot autonomy. Spencer breaks down RL breakthroughs unlocked by hardware, open-source simulation, synthetic data flywheels, the https://x.com/TheHumanoidHub/status/1984641886230102217

Today, we launched Helios, a technological marvel redefining the possible. Helios is the most accurate quantum computer in the world, with 98 of the highest fidelity physical qubits ever released, and 48 error-corrected logical qubits. Learn more: https://x.com/QuantinuumQC/status/1986172816241402189

Announcing our Frontier Data Centers Hub! The world is about to see multiple 1 GW+ AI data centers. We mapped their construction using satellite imagery, permits & public sources — releasing everything for free, including commissioned satellite images. Highlights in thread! https://x.com/EpochAIResearch/status/1985788184245293153

If you’d like to win your own Dell Pro Max with GB300, we’re launching a new kernel competition with @NVIDIAAI @sestercegroup @Dell to optimize NVFP4 kernels on B200. 2025 has seen a tremendous rise of pythonic kernel DSLs, and we got on-prem hardware to have reliable ncu benchmarking. https://x.com/GPU_MODE/status/1985436876384453128

Big news: Scale is growing 🌍 We’re expanding our global footprint with new offices in New York City, London, Washington, D.C., and St. Louis. This growth reflects our investment in our people and our mission to build reliable AI systems for the world’s most important https://x.com/scale_AI/status/1985749172923088944

bet hard on vertically integrated companies. they literally make their own chips dawg https://x.com/willdepue/status/1985235791069716930

Initial M5 Neural Accelerators support in llama.cpp Enjoy faster TTFT in all ggml-based software (requires macOS Tahoe 26) https://x.com/ggerganov/status/1986473652137947505

What an amazing #RaySummit! 100s of conversations revealed the same theme: Managing multiple Slurm/KubeRay/Kueue deployments → more ops complexity, lower GPU utilization. SkyPilot simplifies it → one system to manage every cluster and GPU cloud. 🔗 Give it a try: https://x.com/skypilot_org/status/1986867886330421615

When serving LLMs, check your network limits before your GPU counts 🥲 That’s a bitter lesson https://x.com/crystalsssup/status/1986755489800265886
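One concrete way the network bites before the GPUs do is simply moving model weights onto a node. A minimal sketch, with illustrative numbers of our own choosing (a 70B-parameter model in fp16 over a 10 Gbit/s NIC):

```python
# Cold-start sketch: time to transfer model weights over the network.
# All numbers are illustrative assumptions, not benchmarks.
PARAMS = 70e9        # 70B-parameter model
BYTES_PER_PARAM = 2  # fp16 weights
NIC_GBPS = 10        # 10 Gbit/s network link

weight_bytes = PARAMS * BYTES_PER_PARAM           # 140 GB of weights
transfer_s = weight_bytes * 8 / (NIC_GBPS * 1e9)  # bits over bits/s
print(round(transfer_s))  # 112 seconds per node before any GPU does work
```

Scale that across restarts, autoscaling events, and multi-node tensor-parallel traffic, and the NIC spec can matter as much as the GPU count.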

The big article on data centers in the New Yorker is pretty good, which I wasn’t expecting given the reaction on X. Lots of good and bad: and covering both bubble & non-bubble arguments. It also featured the best version of “I spoke to a local farmer about a data center” https://x.com/emollick/status/1985195665132040621

NOW AVAILABLE! GLM 4.6 at 1,000 tokens/s through the @cerebras provider on Cline. The best combination of speed and accuracy directly in your IDE or CLI. https://x.com/cline/status/1986933223436525914

🧵 Ironwood, our most powerful and energy-efficient Tensor Processing Unit (TPU) yet, will become generally available in the coming weeks. https://x.com/Google/status/1986537172770390389

Google deploys new Axion CPUs and seventh-gen Ironwood TPU — training and inferencing pods beat Nvidia GB300 and shape ‘AI Hypercomputer’ model | Tom’s Hardware https://www.tomshardware.com/tech-industry/artificial-intelligence/google-deploys-new-axion-cpus-and-seventh-gen-ironwood-tpu-training-and-inferencing-pods-beat-nvidia-gb300-and-shape-ai-hypercomputer-model

When you run AI on your device, it is more efficient, less Big Brother, and free! So it’s very cool to see the new llama.cpp UI, a ChatGPT-like app that fully runs on your laptop without needing wifi or sending any data to any external API. It supports 150,000+ GGUF models https://x.com/ClementDelangue/status/1985748187634717026

Wow excited to see PewDiePie using vLLM to serve language models locally 😃 vLLM brings easy, fast, and cheap LLM serving for everyone 🥰 https://x.com/vllm_project/status/1985241134663405956

🚀 Learn how to deploy vLLM on NVIDIA DGX Spark — the right way! NVIDIA just published a detailed best practices guide for running high-throughput inference with vLLM, including multi-node setups and optimized Docker builds. @NVIDIAAIDev 👉 Dive in: https://x.com/vllm_project/status/1986049283339243821

It’s that time of the year again and we’re coming with another @GPU_MODE competition! This time in collaboration with @nvidia, focused on NVFP4 and B200 GPUs (thanks to @sestercegroup), we’ll release 4 problems over the following 3 months: 1. NVFP4 Batched GEMV https://x.com/m_sirovatka/status/1985438384337404078

The wait is over! We’re so excited to announce the @GPU_MODE x @NVIDIA kernel optimization competition for NVFP4 kernels on Blackwell B200s! We will be awarding NVIDIA DGX Spark’s & RTX 50XX series GPUs for individual rankings on each problem, as well as a Dell Pro Max with https://x.com/a1zhang/status/1985434030473437213

Nvidia’s China plans hit roadblock as US moves to block sale of scaled-down AI chips to Beijing: Report | Today News https://www.livemint.com/news/us-news/nvidias-china-plans-hit-roadblock-as-us-moves-to-block-its-sale-of-scaled-down-ai-chips-to-beijing-report-11762483101906.html

The Department of Commerce has allowed Microsoft to ship NVIDIA GPUs to the UAE for the first time. Brad Smith announced this today in Abu Dhabi. He said Microsoft received the license in September, and will spend $7.9 billion on datacenters in the UAE over the next four years. https://x.com/AndrewCurran_/status/1985325278823125483

Sam’s clarification is good and important. Furthermore – I don’t think it can be overstated how critical compute will become as a national strategic asset. It is so important to build. It is vitally important to the interests of the US and democracy broadly to build tons of it https://x.com/jachiam0/status/1986583797492818244

Hybrid models like Qwen3-Next, Nemotron Nano 2 and Granite 4.0 are now fully supported in vLLM! Check out our latest blog from the vLLM team at IBM to learn how the vLLM community has elevated hybrid models from experimental hacks in V0 to first-class citizens in V1. 🔗 https://x.com/PyTorch/status/1986192579835150436

ABB’s top industrial chip and robotics expert Pang Zhibo leaves Sweden for China | South China Morning Post https://www.scmp.com/news/china/science/article/3331021/abbs-top-industrial-chip-and-robotics-expert-pang-zhibo-leaves-sweden-china

Most robotics teams can make a robot move… very few can make it work every day in the real world! @roboforce_inc was just featured in Jensen Huang’s GTC keynote, not for a demo but for a robot they claim is already working in the field. Their system, TITAN, is built for https://x.com/IlirAliu_/status/1983942883162951879

Elon Musk says Tesla needs “gigantic chip fab” for its AI and robotics https://www.cnbc.com/2025/11/07/elon-musk-says-tesla-needs-gigantic-chip-fab-terra-for-ai-and-robotics-tsmc-intel.html

MotionStream Real-Time Video Generation with Interactive Motion Controls model runs in real time on a single NVIDIA H100 GPU (29 FPS, 0.4s Latency) https://x.com/_akhaliq/status/1986054085766750630
