Image created with OpenAI GPT-Image-1. Image prompt: mid‑1990s web‑browser screenshot, CRT glow, 256‑color dithering — 3‑D beveled “Submit” buttons built from table slices — early NVIDIA eye logo button flashing “AI GPU” — crisp pixel edges, screen‑door scan‑lines, phosphor glow
Congrats to @NVIDIA, the first public $4T company! Today, compute is 100000x cheaper, and $NVDA 4000x more valuable than in the 1990s when we worked on unleashing the true potential of neural networks. Thanks to Jensen Huang (see image) for generously funding our research 🚀 https://x.com/SchmidhuberAI/status/1943671639620645140
Nvidia CEO Jensen Huang to visit China again as firm plans China-only AI chip launch in September - The Korea Post https://www.koreapost.com/news/articleView.html?idxno=45220

Nvidia challenger Groq expands with first European data center https://www.cnbc.com/2025/07/07/ai-chip-startup-groq-expands-with-first-european-data-center.html

"Custom silicon vendors were pitching NVLink Fusion in January and February. The UALink 1.0 spec had no surprises for anyone because consortiums discuss everything for six months before anything is released. Nvidia is far more worried about Broadcom SUE, not UALink. Expert calls are just" https://x.com/dylan522p/status/1942453912885186788

OK, I ran the fp8 mamf-finder on B200, and indeed, as others suggested, fp8 efficiency is improving across NVIDIA generations: H100: 70.9%, H200: 73.4%, B200: 76.3%. H100 and H200 should have the same compute; the higher CUDA version used for the H200 run is probably the reason for the difference. https://x.com/StasBekman/status/1942972268851888606




