Image created with gemini-3.1-flash-image-preview and claude-sonnet-4-5. Image prompt: A massive pristine silicon wafer disc emerging vertically from cracked ice sheets in a frozen winter bay at dusk, half-embedded in the ice like a natural geological formation, its microscopic circuit patterns visible and catching gradient sunset light from deep blues to warm golds, surrounded by ice fragments and dark water, photorealistic nature documentary style, 4K resolution, golden hour cinematography, the bold text CHIPS appears across the top in clean sans-serif font.
How Nvidia became the first $5 trillion company, in 4 charts | CNN Business https://edition.cnn.com/2026/02/07/business/nvidia-trillion-valuation-ai-chips-vis
"Launching mini-SWE-agent 2.0, the simplest coding agent. Near SoTA performance, with the agent/model/environment only ~100 lines each. Powering benchmarks and RL training at NVIDIA, Anyscale, Stanford and many more!" https://x.com/KLieret/status/2021606142699356215
Anthropic’s Data Center Ambition–and the Ex-Google Execs Who Could Make It Happen — The Information https://www.theinformation.com/newsletters/ai-infrastructure/anthropics-data-center-ambition-ex-google-execs-make-happen
Covering electricity price increases from our data centers \ Anthropic https://www.anthropic.com/news/covering-electricity-price-increases
"Coding at 1000 tokens/sec is a mind-expanding experience. You have to try this." https://x.com/kevinweil/status/2022014266711347605
"So maybe the argument that the world is compute constrained and there aren't anywhere enough cheap tokens to go around wasn't so fanciful after all." https://x.com/emollick/status/2021655497859035540
"Unsloth now has 12x faster, >35% less VRAM MoE training vs transformers v4 and 2x faster than v5 with our MoE Triton kernels & LoRA + torch._grouped_mm support! Qwen3, DeepSeek v3, GLM 4.7 Flash, gpt-oss are all faster in Unsloth and are optimized heavily for LoRA. Details:" https://x.com/danielhanchen/status/2021250166850977872
Meta’s New Data Center in Lebanon, Indiana Marks a Milestone AI Investment https://about.fb.com/news/2026/02/metas-new-data-center-lebanon-indiana-marks-milestone-ai-investment/
"3 years ago, we emailed Jensen with requests for Blackwell. Today, we released GPT-5.3-Codex, a SOTA model designed for GB200-NVL72. Nitpicking ISA, simming rack designs, and tailoring our arch to the system has been a fun experience! I'm grateful to our collaborators at NVIDIA." https://x.com/trevorycai/status/2019482450855096440
"At @nvidia, we use a lot of AI coding tools. Codex with GPT-5.3-codex is particularly impressive. The engineers I know here are big codex power users. The capabilities of these coding agents are advancing quickly, it's quite exciting. With 5.3, I'm particularly impressed with…" https://x.com/benklieger/status/2021707684211569033
"VS Code gives you extremely powerful building blocks with custom agents, parallel subagents, and slash commands to compose your own workflows. Here is /review command that uses Opus 4.6 fast mode, GPT-5.3-Codex, and Gemini 3 Pro to independently review changes and grade each…" https://x.com/pierceboggan/status/2021094988205969465
"Not the flashiest demos, but what's under the hood represents a foundational shift for general-purpose robotics. World models are the next-gen foundation of Physical AI, not the VLM backbones found in typical VLAs. DreamZero is a 14B-parameter World Action Model (WAM) by NVIDIA…" https://x.com/TheHumanoidHub/status/2019460701811851593
"Robots usually fail for one simple reason: they don't understand what will happen next. [📍Paper, code & task gallery at the end] > 14B 'World Action Model' from @nvidia: DreamZero… Instead of copying motions or replaying demonstrations, this model predicts how the world…" https://x.com/IlirAliu_/status/2019418751976800520
Leading Inference Providers Cut AI Costs by up to 10x With Open Source Models on NVIDIA Blackwell | NVIDIA Blog https://blogs.nvidia.com/blog/inference-open-source-models-blackwell-reduce-cost-per-token/
"The first question I asked @elonmusk: What's the point of sending GPUs into space? The whole idea behind orbital data centers is that if the launch costs continue to drop, it will become cheaper to put GPUs in orbit than to build power plants on Earth. The problem with this…" https://x.com/dwarkesh_sp/status/2019499174384005458
Z.ai said they are GPU starved, openly. : r/LocalLLaMA https://www.reddit.com/r/LocalLLaMA/comments/1r26zsg/zai_said_they_are_gpu_starved_openly/