Image created with Ideogram 3.0. Image prompt: Lower-East-Side street-corner photograph reminiscent of a late-80s album cover: weathered red-brick tenement with exterior fire-escapes, canvas awning shading racks of vintage clothes; above the awning, a hand-painted board reads ‘NVIDIA SPORTSWEAR’; a hanging blade sign in cursive script reads ‘NVIDIA Boutique’; a neon-green NVIDIA GPU box is spotlighted in the front window; warm golden-hour light, subtle 35mm film grain, muted yet punchy color palette, gritty NYC vibe.

Tool-using LLMs can learn to reason—without reasoning traces. 🔥 We present Nemotron-Research-Tool-N1, a family of tool-using reasoning LLMs trained entirely via rule-based reinforcement learning—no reasoning supervision, no distillation. 📄 Paper: https://x.com/ShaokunZhang1/status/1922105694167433501

NVIDIA offers two blueprints for synthetic data generation: ⦿ Isaac GR00T-Mimic: Uses a physics engine to amplify human motion data in simulation. ⦿ GR00T-Dreams (announced yesterday): Fine-tunes a video generation AI model to create new motion videos from a single image. https://x.com/TheHumanoidHub/status/1924538121687073167

Jensen just announced NVIDIA’s Isaac GR00T N1.5 and GR00T-Dreams blueprint at COMPUTEX 2025: ⦿ Isaac GR00T N1.5 is the first update to NVIDIA’s open, generalized, fully customizable foundation model for humanoid reasoning and skills. ⦿ “Human demonstrations aren’t scalable — https://x.com/TheHumanoidHub/status/1924332201862414495

JUST IN🚨: NVIDIA open-sourced Physical AI reasoning models that understand physical common sense and generate appropriate embodied decisions 👀 https://x.com/reach_vb/status/1924525937443365193

NVIDIA released a new vision reasoning model for robotics: Cosmos-Reason1-7B 🤖 > first reasoning model for robotics 😱 > based on Qwen 2.5-VL-7B, use with @huggingface transformers or vLLM 🤗 > comes with SFT & alignment datasets and a new benchmark 👏 https://x.com/mervenoyann/status/1924817927561183498

NVIDIA has published a paper on DREAMGEN – a powerful 4-step pipeline for generating synthetic data for humanoids that enables task and environment generalization. – Step 1: Fine-tune a video generation model using a small number of human teleoperation videos – Step 2: Prompt https://x.com/TheHumanoidHub/status/1925255036965408887

Really cool how DeepSeek is now the benchmark for Nvidia. https://x.com/teortaxesTex/status/1924588309688267139

An Interview with Nvidia CEO Jensen Huang About Chip Controls, AI Factories, and Enterprise Pragmatism – Stratechery by Ben Thompson https://stratechery.com/2025/an-interview-with-nvidia-ceo-jensen-huang-about-chip-controls-ai-factories-and-enterprise-pragmatism/

Build Semi-Custom AI Infrastructure | NVIDIA NVLink Fusion https://www.nvidia.com/en-us/data-center/nvlink-fusion/

Designing models and hardware together: is this the new shift toward the most cost-efficient models? This idea was used in DeepSeek-V3, which was trained on just 2,048 powerful NVIDIA H800 GPUs. New research from @deepseek_ai clarifies how DeepSeek-V3 works through its key innovations: https://x.com/TheTuringPost/status/1924631209050833205

NVIDIA Unveils NVLink Fusion for Industry to Build Semi-Custom AI Infrastructure With NVIDIA Partner Ecosystem | NVIDIA Newsroom https://nvidianews.nvidia.com/news/nvidia-nvlink-fusion-semi-custom-ai-infrastructure-partner-ecosystem

Jensen: The humanoid robot is likely the only robot that will work – because technology needs scale, and most robots we’ve had so far are too low volume to drive the flywheel of technology improvements. The humanoid robot is likely to be the next multi-trillion-dollar industry. https://x.com/TheHumanoidHub/status/1924341417662672972

Discover more from Ethan B. Holland
