Image created with Flux Pro v1.1 Ultra. Image prompt: Render bay with GPU racks and tensor-core diagrams; the word “NVIDIA” stamped on a server rail label in DIN-style sans; editor proofs a graphics-heavy cover on a calibrated monitor; neon-green highlights, dark anodized metal, precise reflections
Nvidia announced Cosmos Reason 7B, an open-source VLM that enables robots to see, reason, and act in the physical world, solving multistep tasks. The company also made Isaac Sim 5.0 and Isaac Lab 2.2 generally available. https://x.com/adcock_brett/status/1957111085481242892
when you engage “hovercraft mode” on your new whip (made w/ nvidia cosmos) https://x.com/bilawalsidhu/status/1956160140404777142
NVIDIA ON A ROLL! Canary 1B and Parakeet TDT (0.6B) SoTA ASR models – multilingual, open source 🔥 – 1B and 600M parameters – 25 languages – automatic language detection and translation – word and sentence timestamps – transcribe up to 3 hours of audio in one go – trained on 1… https://x.com/reach_vb/status/1957148807562723809
The Economic Daily (Taiwan): Foxconn will unveil a humanoid robot with an “LLM-powered brain” at Foxconn Tech Day in November. NVIDIA will support the development of the robotic brain and provide AI expertise. The robots will be deployed in Q1 2026 in Foxconn’s new Houston… https://x.com/TheHumanoidHub/status/1957890628152693081
We dove into the H100’s performance improvement from software over 2 years. Covered power usage + $ cost in a very detailed way for training runs on thousands of GPUs, equating this to US household power consumption. @JeffDean + GB200 reliability challenges. https://x.com/dylan522p/status/1958034446789095613
FlashAttention v4 is coming to Blackwell GPUs https://x.com/scaling01/status/1957397971479200083
We started the distributed summer at @GPU_MODE pretty strong, with Jeff Hammond from @nvidia talking about NCCL/NVSHMEM. One of the best talks I’ve had the pleasure of seeing so far. https://x.com/m_sirovatka/status/1956824361819652175
No names necessary. You know the man. You know the jacket. You know the company. But how did a napkin sketch turn into one of the most valuable companies on earth… Now powering the AI & Robotics infrastructure? 🧵👇 https://x.com/IlirAliu_/status/1957081211970396482
HUGE RELEASE! Nvidia just dropped: > Granary: the largest open-source speech dataset for European languages 🗣️🇪🇺 > Canary-1b-v2: 25 languages, ASR + En↔X translation > Parakeet-tdt-0.6b-v3: SOTA multilingual ASR You can now train your ASR model to understand European… https://x.com/Tu7uruu/status/1956350036343701583
Nvidia Parakeet v3 is out! Enjoy Day 0 support with Argmax SDK – What changed from v2? – How do I use it? – Should I upgrade to this model right away? Answers in comments https://x.com/argmaxinc/status/1956385793892917288
Jensen visited the Figure HQ. https://x.com/TheHumanoidHub/status/1957690025057087933
NVIDIA Nemotron Nano v2 – a 9B hybrid SSM that is 6X faster than similarly sized models, while also being more accurate. 💚💚💚 9B: https://x.com/ClementDelangue/status/1957519608992407848
nvidia parakeet-tdt-0.6b-v3 600M model here: https://x.com/reach_vb/status/1957149090913128598
🚨 New: We built @a16z’s personal GPU AI Workstation Founders Edition – 4x NVIDIA RTX 6000 PRO Blackwell Max-Q (384GB total VRAM) – 8TB of NVMe PCIe 5.0 storage – AMD Threadripper PRO 7975WX (32 cores, 64 threads) – 256GB ECC DDR5 RAM – 1,650W at peak (runs on a standard… https://x.com/Mascobot/status/1958925710988582998
NVIDIA Nemotron Nano 2 An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model https://x.com/_akhaliq/status/1958545622618788174
Today we’re releasing NVIDIA Nemotron Nano v2 – a 9B hybrid SSM that is 6X faster than similarly sized models, while also being more accurate. Along with this model, we are also releasing most of the data we used to create it, including the pretraining corpus. Links to the… https://x.com/ctnzr/status/1957504768156561413
Nvidia dropping a model that rivals Qwen 3 8B, with data, with base model, and a not-that-bad license (could be better, to be clear) – a big win, love to see it. Hopefully it’s well integrated into open tools and “easy to finetune” etc, which is hard to measure. https://x.com/natolambert/status/1957517030929887284
[episode 120 of frontier lab gossip: OH in the mission] > **** folks recently bragged about a 100k H100 training run > wrote a post, got all the likes and shares internally > then some of them ran the exact same job on 20k H100s instead of 100k, and ended up with the exact same… https://x.com/suchenzang/status/1956851798221996178
China reportedly discouraged purchase of NVIDIA AI chips due to ‘insulting’ Lutnick statements https://www.engadget.com/ai/china-reportedly-discouraged-purchase-of-nvidia-ai-chips-due-to-insulting-lutnick-statements-123055120.html
NVIDIA-Nemotron-Nano-2-Technical-Report.pdf https://research.nvidia.com/labs/adlr/files/NVIDIA-Nemotron-Nano-2-Technical-Report.pdf