“Nvidia also revealed ‘Project Digits’ at CES 2025. It’s a $3,000 personal computer powered by the GB10 superchip that’s 1,000x more powerful than the average laptop. Anyone will be able to run their own local AI model in their own home.”
https://x.com/adcock_brett/status/1878488033042841926

“Awesome new app for local LLMs that runs on iPhone, iPad, Mac, etc., built with MLX Swift. The code is also open source and MIT licensed!”
https://x.com/awnihannun/status/1878843809460875593

“You have a smol 82M param frontier Text to Speech model ready to use directly in browser! – what’s your excuse? 💥”
https://x.com/reach_vb/status/1879916301013135435

“Here’s a closer look at the new NVIDIA Project DIGITS personal AI supercomputer. Live from the #CES2025 show floor — attendees got the first look. Learn more now. ➡️”
https://x.com/NVIDIADC/status/1877861425726574602

“Phi-4 (4-bit) in @lmstudio on an M4 Max is quite fast and quite good.”
https://x.com/awnihannun/status/1878564132125085794

“It’s a baffling fact about deep learning that model distillation works. Method 1: train small model M1 on dataset D. Method 2 (distillation): train large model L on D, then train small model M2 to mimic the output of L. M2 will outperform M1. No theory explains this; it’s magic.”
https://x.com/jxmnop/status/1877761437931581798
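The “mimic the output” step above usually means training the small model against the teacher’s softened output distribution rather than the hard labels alone. A minimal sketch of that objective, assuming the standard temperature-scaled KL soft-label loss (à la Hinton et al.); all names here are illustrative:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over the last axis.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions:
    # the large model L's soft targets supervise the small model M2.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

# Toy check: the loss is zero when M2's logits match L's exactly,
# and positive otherwise, so minimizing it pulls M2 toward L's full
# output distribution instead of just the labels in D.
teacher = np.array([[2.0, 0.5, -1.0]])
matched = distillation_loss(teacher, teacher)
mismatched = distillation_loss(np.array([[0.0, 0.0, 0.0]]), teacher)
print(matched, mismatched > 0)
```

The soft targets carry extra signal (relative probabilities of the wrong classes) that the hard labels in D discard, which is the usual informal explanation for why M2 can beat M1 — though, as the tweet notes, no complete theory exists.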

“👀 Look what’s new: A small embedding model just claimed the #1 spot on the MTEB leaderboard! Major breakthrough for better RAG systems.”
https://x.com/fdaudens/status/1879257432981172283

“NEW: kokoro.js – run Kokoro directly in your browser, 100% locally, with minimal dependencies! 🔥 npm i kokoro-js is all you need!”
https://x.com/reach_vb/status/1879913142873944282

“Microsoft released Phi-4, fully open-source on Hugging Face. It’s 14B params and trained primarily on synthetically generated high-quality data instead of web content. Weights are freely available under an MIT license that allows for commercial use.”
https://x.com/adcock_brett/status/1878488010339066240

“Nice little Transformers.js example: WebGPU-accelerated reasoning LLMs running 100% locally in-browser with Transformers.js.”
https://x.com/rohanpaul_ai/status/1878008968423055803

“Learnings from Scaling Visual Tokenizers for Reconstruction and Generation. New paper from Meta studies how scaling the autoencoder bottleneck affects reconstruction and generation. * Scaling the encoder doesn’t necessarily improve reconstruction or generation performance. Small…”
https://x.com/iScienceLuvr/status/1880164031987589413

Discover more from Ethan B. Holland