Image created with gemini-3.1-flash-image-preview via claude-sonnet-4-5. Image prompt: Using the provided reference image, preserve the exact square faceted perfume bottle with amber-gold liquid, crystal stopper, pure white background, soft shadow, and clean label typography, but replace the label text with 'Chips' in matching black serif font. Add a delicate sterling silver chain draped naturally around the bottle neck holding one small dainty pendant shaped like a microchip die: a tiny square with fine circuit trace details etched in silver, high-fashion jewelry aesthetic, refined and precious like a Tiffany charm.
Oracle cutting thousands in latest layoff round as AI spending booms
https://www.cnbc.com/2026/03/31/oracle-layoffs-ai-spending.html
The first steel beams went up this week at our Michigan Stargate site with Oracle and Related Digital
https://x.com/sama/status/2037610000122839116
Google is close to striking a deal to fund Anthropic's data center, according to the FT.
https://x.com/FirstSquawk/status/2037586926375743904
It was such a pleasure to test NVIDIA's Level 2 driving system in a Mercedes-Benz on real streets in San Francisco while talking to Ali Kani, who knows literally everything about NVIDIA's AV efforts. The driving was so smooth. So smooth that sometimes it felt like we were in a…
https://x.com/TheTuringPost/status/2039104195161473169
@charles_irl vLLM is always prepared. First ever Day 0 support on GPU, TPU, and XPU simultaneously. https://t.co/DFqTiDCg86
https://x.com/mgoin_/status/2039860597517394279
Cognichip wants AI to design the chips that power AI, and just raised $60M to try | TechCrunch
Last year everyone spoke about the overbuilding of AI data centers; this year will likely start to demonstrate that there is not nearly enough compute to meet demand. I think the degree to which AI is currently subsidized depends on the model, but agree with everything else here.
https://x.com/emollick/status/2037541006237733217
Pipelining AI kernels is required to get full perf/utilization out of modern chips. However, no one has been able to crack "full control over the hardware" without "having to micromanage it". Let's crack this open: kernel authors deserve a powerful scheduler they can control. 💪
https://x.com/clattner_llvm/status/2039027422310596881
Starcloud raises $170 million Series A to build data centers in space | TechCrunch
The total memory bandwidth of AI chips shipped since 2022 has reached 70 million terabytes per second, growing 4.1x per year. That’s around 300,000x more data per second than global internet traffic.
https://x.com/EpochAIResearch/status/2037628978283024589
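A quick back-of-envelope check of the figures quoted above. This only verifies internal consistency of the claim (70 million TB/s, ~300,000x internet traffic, 4.1x/year growth); the underlying estimates are Epoch AI's, and the implied internet-traffic figure is our inference, not theirs.

```python
# Sanity-check the quoted aggregate-bandwidth numbers.
total_bw_tb_s = 70e6          # 70 million TB/s of shipped AI-chip bandwidth
ratio_vs_internet = 300_000   # claimed ~300,000x global internet traffic

# What global internet traffic would have to be for both numbers to hold:
implied_internet_tb_s = total_bw_tb_s / ratio_vs_internet
print(f"implied internet traffic: {implied_internet_tb_s:.0f} TB/s")
# ~233 TB/s, i.e. roughly 1.9 Pbit/s -- a plausible order of magnitude.

# At 4.1x/year growth, cumulative shipped bandwidth n years out:
growth = 4.1
for n in (1, 2, 3):
    print(f"+{n}y: {total_bw_tb_s * growth**n:.2e} TB/s")
```

The point of the comparison is scale, not precision: even if the internet-traffic baseline is off by 2x, the ratio stays in the hundreds of thousands.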
One way to see the advancement of AI is to see how much further you can get with new models on the same hardware. Here is "an otter using a laptop on an airplane" generated on my home computer using the open weights Wan 2.1, first try. We have come pretty far in 18 months.
https://x.com/emollick/status/2037616578787713194
Nemotron Super / Ultra; Arcee Trinity Large (soon); Gemma 4 (eventually); Reflection's first models (maybe); GPT OSS 2? (maybe); Thinky? Other neolabs? Things looking up for open models built in the US in 2026. We had 0 for a bit there.
https://x.com/natolambert/status/2039499358325129530
Narrative violation from Dylan on Dwarkesh: H100s are worth *more* today than they were 3 years ago. There's a sentiment that data center buildouts are priced into the risk of rapidly depreciating GPUs. But the models want to learn. Token prices are falling so fast that you can…
https://x.com/sarahdingwang/status/2032516017528910120
Great way to wrap up the week! We’re partnering with Crusoe on a 900MW AI factory in Abilene, Texas. Super excited to add more capacity to our AI fleet.
https://x.com/mustafasuleyman/status/2037547756320141587
Long context windows are now available for select models on Tinker: 128k tokens for Kimi K2.5 and GPT-OSS-120B; 256k for Nemotron 3 Super 120B and Qwen3.5 397B. For more details and pricing, see our full model lineup:
https://x.com/tinkerapi/status/2039424320393621649
Delivered performance, not peak chip specifications, drives AI factory productivity. Rigorous benchmarks are the only way to see past the noise. In MLPerf Inference v6.0, NVIDIA extreme co-design delivered the highest token output across the broadest range of models and…
https://x.com/nvidia/status/2039419585254875191
Dissecting Nvidia Blackwell – Tensor Cores, PTX Instructions, SASS, Floorsweep, Yield Microbenchmarking, tcgen05, 2SM MMA, UMMA, TMA, LDGSTS, UBLKCP, Speed of Light, Distributed Shared Memory, GPC Floorsweeps, SM Yield
https://x.com/SemiAnalysis_/status/2039102080959566038
Huge thanks to @NVIDIAAI for supporting full-time engineering work on OpenClaw hardening. A lot of careful security and reliability improvements landed over the last few releases, and that investment is paying off.
https://x.com/openclaw/status/2039100191324979580
As usual, we open-source everything, MIT license:
https://t.co/qZiy0FgTg8 Code:
https://t.co/4VpzxgWQGp Paper:
https://t.co/8E15zjokkM CaP-X is brought to you by NVIDIA, Berkeley, Stanford, and CMU. I’d like to thank the legend @Ken_Goldberg who co-advised the work, and the team
https://x.com/DrJimFan/status/2039360925606760690
Deep transformers used to accumulate layer history. Now they are starting to retrieve from it. → @Kimi_Moonshot proposed Attention Residuals (AttnRes), driving this shift. They turn the residual stream into an attention problem. Why do we need it? Depth in Transformers mostly…
https://x.com/TheTuringPost/status/2037107923109953788
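To make the "residual stream as an attention problem" framing concrete, here is a minimal NumPy sketch of the general idea: instead of only summing previous layers' outputs, the current hidden state attends over the stack of earlier layer outputs and adds back the retrieved mix. All shapes, names, and projection matrices here are illustrative assumptions, not Moonshot's actual AttnRes architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend_over_layer_history(h, history, w_q, w_k, w_v):
    """h: (d,) current hidden state; history: (L, d) outputs of earlier layers.

    Returns h plus an attention-weighted retrieval from the layer history,
    playing the role of a learned, content-dependent residual connection.
    """
    q = h @ w_q.T                      # (d,) query from the current state
    k = history @ w_k.T                # (L, d) one key per past layer
    v = history @ w_v.T                # (L, d) one value per past layer
    scores = k @ q / np.sqrt(len(q))   # (L,) similarity to each past layer
    weights = softmax(scores)          # attention over depth, not tokens
    return h + weights @ v             # retrieved mix added as a residual

rng = np.random.default_rng(0)
d, L = 16, 6
h = rng.normal(size=d)
history = rng.normal(size=(L, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out = attend_over_layer_history(h, history, w_q, w_k, w_v)
print(out.shape)  # (16,)
```

Note the contrast with a plain residual stream, which is equivalent to fixed uniform weights over the history; here the weights are computed per input.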
Jim Fan proposes a three-part recipe for embodied AI, drawing parallels from the evolution of digital intelligence: – World Models (pre-training video-based physics) – Action Fine-Tuning (motor control) – Physical RL (infinite environments in neural simulation)
https://x.com/TheHumanoidHub/status/2039060404161425833
Manufacturers can't trust their simulations. They design in a virtual world, then spend weeks debugging on the real production line. That gap is costing time, money, and scale. At @NVIDIAGTC, I sat down with Craig McDonnell from @ABBRobotics to break down how they're…
https://x.com/IlirAliu_/status/2037099281694257298




