“Matter 1.4 specification improves multi-admin and energy management, adds new devices like solar panels, batteries, and water heaters: We covered the Matter 1.3 specification in May 2024, but the Connectivity Standards Alliance is wasting no time…”
https://x.com/linuxdevices/status/1863640406161805401

“Excited to release what we’ve been working on at Amaranth Foundation, our latest whitepaper, NeuroAI for AI safety! A detailed, ambitious roadmap for how neuroscience research can help build safer AI systems while accelerating both virtual neuroscience and neurotech. 1/N”
https://x.com/patrickmineault/status/1863618405518983668

“Reverse Thinking Makes LLMs Stronger Reasoners abs: https://t.co/pGevBHjEeb “Humans can reason not only from a problem to a solution but also in reverse, i.e., start from the solution and reason towards the problem. This often enhances overall reasoning performance, as it…”
https://x.com/iScienceLuvr/status/1863527576687268241
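
The core trick is data-side: besides the usual question-to-answer pairs, the student model is also trained on backward questions and backward reasoning derived from each pair (only forward reasoning is used at test time). A minimal sketch of that augmentation step, with `generate` standing in for whatever LLM call you use — a hypothetical placeholder, not the paper's code:

```python
# Sketch of reverse-thinking data augmentation: from each (question,
# answer) pair, derive a backward question that starts from the answer,
# plus backward reasoning toward the original unknowns. These become
# extra multi-task training targets for the student model.

def generate(prompt: str) -> str:
    """Hypothetical placeholder for an LLM completion call."""
    raise NotImplementedError

def augment_with_reverse(question: str, answer: str) -> dict:
    backward_q = generate(
        f"Given the answer '{answer}', write the question it answers, "
        f"reversing this problem: {question}"
    )
    backward_reasoning = generate(
        f"Reason step by step from '{answer}' back to the unknowns in: {backward_q}"
    )
    # Training targets: forward reasoning, backward question generation,
    # and backward reasoning; inference uses the forward direction only.
    return {
        "forward": (question, answer),
        "backward_question": backward_q,
        "backward_reasoning": backward_reasoning,
    }
```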

[2409.16045v1] LTNtorch: PyTorch Implementation of Logic Tensor Networks
https://arxiv.org/abs/2409.16045v1
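
For readers new to Logic Tensor Networks: predicates are neural networks with fuzzy truth values in [0, 1], and logical formulas become differentiable satisfaction scores you can maximize by gradient descent. A toy grounding in plain PyTorch — illustrating the idea, not the LTNtorch API itself:

```python
# Toy Logic Tensor Network idea in plain PyTorch: a predicate is a
# network with outputs in [0, 1]; a universally quantified formula is
# a smooth aggregation whose satisfaction we maximize as a loss.
import torch
import torch.nn as nn

class Predicate(nn.Module):
    """P(x): truth degree in [0, 1] for each input point."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(),
                                 nn.Linear(16, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x).squeeze(-1)

def forall(truths: torch.Tensor, p: float = 2.0) -> torch.Tensor:
    """p-mean-error aggregator: a smooth stand-in for universal quantification."""
    return 1.0 - ((1.0 - truths).pow(p).mean()).pow(1.0 / p)

P = Predicate(dim=2)
x_pos = torch.randn(32, 2) + 2.0   # points where P should hold
x_neg = torch.randn(32, 2) - 2.0   # points where not-P should hold

opt = torch.optim.Adam(P.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    # Satisfaction of (forall x in pos: P(x)) AND (forall x in neg: not P(x)),
    # with AND grounded as min.
    sat = torch.minimum(forall(P(x_pos)), forall(1.0 - P(x_neg)))
    (1.0 - sat).backward()          # maximize satisfaction
    opt.step()
```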

Reward Hacking in Reinforcement Learning | Lil’Log
https://lilianweng.github.io/posts/2024-11-28-reward-hacking/
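
The post's central failure mode is easy to reproduce in miniature: optimize a measurable proxy and watch it diverge from the true objective. A toy sketch of that gap — mine, not from the post:

```python
# Toy reward hacking: a "policy" selected against a proxy reward
# (output length as a stand-in for answer quality) drifts to padding
# instead of improving the true objective.

def true_reward(answer: str) -> float:
    return 1.0 if "42" in answer else 0.0      # what we actually want

def proxy_reward(answer: str) -> float:
    return len(answer) / 100.0                  # what we measure

candidates = ["42", "the answer is 42", "word " * 50]
# Greedy improvement against the proxy picks the padded answer:
best = max(candidates, key=proxy_reward)
print(repr(best[:30]), proxy_reward(best), true_reward(best))
# The proxy-optimal answer scores 2.5 on the proxy but 0.0 on the truth.
```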

ProX: Lifting Pre-training Data Quality Like Experts
https://gair-nlp.github.io/ProX/homepage.html
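
ProX's pitch is to have small language models emit tiny, executable cleanup programs per document rather than filter with hand-written rules. A rough sketch of that execution loop, with hypothetical op names (see the project page for the actual operation set):

```python
# Sketch of ProX-style "programming" of data quality: a small LM emits
# a per-document cleanup program (keep/drop the doc, strip noisy lines),
# which is then executed to refine the corpus. Op names are illustrative.

def lm_generate_program(doc: str) -> list[str]:
    """Placeholder: a small LM would emit ops conditioned on the doc."""
    return ["remove_lines(0,1)"] if doc.startswith("ADVERTISEMENT") else ["keep_doc()"]

def execute(doc: str, program: list[str]) -> str | None:
    lines = doc.splitlines()
    for op in program:
        if op == "drop_doc()":
            return None                       # document filtered out
        if op == "keep_doc()":
            continue
        if op.startswith("remove_lines("):
            start, end = map(int, op[len("remove_lines("):-1].split(","))
            del lines[start:end]
    return "\n".join(lines)

corpus = ["ADVERTISEMENT\nBuy now!\nActual content follows.", "A clean article."]
refined = [execute(d, lm_generate_program(d)) for d in corpus]
refined = [d for d in refined if d]           # drop filtered-out docs
```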

[2411.19870] DeMo: Decoupled Momentum Optimization
https://arxiv.org/abs/2411.19870
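
As a rough mental model of the paper's idea: momentum stays local to each accelerator, and only its fastest-moving components are extracted for cross-worker synchronization, cutting communication. A simplified single-tensor sketch, substituting plain top-k for the paper's DCT-based extraction:

```python
# Simplified DeMo-style step: keep momentum local, extract only its
# largest components for synchronization, and subtract what was shared
# from the local state. Top-k stands in for the paper's DCT extraction.
import torch

def demo_step(param, grad, momentum, lr=1e-3, beta=0.9, k=8):
    momentum.mul_(beta).add_(grad)
    # Extract the k fastest-moving momentum components...
    flat = momentum.flatten()
    idx = flat.abs().topk(k).indices
    shared = torch.zeros_like(flat)
    shared[idx] = flat[idx]
    # ...remove them from local momentum (they are now "in flight")...
    flat[idx] = 0.0
    # ...in the real algorithm `shared` is all-reduced across workers;
    # here we just apply it locally as the update.
    param.add_(shared.view_as(param), alpha=-lr)

p = torch.randn(4, 4)
m = torch.zeros_like(p)
demo_step(p, torch.randn(4, 4), m)
```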

“today we are announcing reinforcement finetuning, which makes it really easy to create expert models in specific domains with very little training data. livestream going now:”
https://x.com/sama/status/1865096566467686909
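
In spirit, reinforcement finetuning trades a large supervised set for a small set of graded examples. A hedged sketch of what preparing such a dataset might look like; the field names and grader interface here are illustrative assumptions, not OpenAI's documented schema:

```python
# Sketch of a small expert dataset for reinforcement finetuning: a few
# dozen prompt/reference pairs plus a programmatic grader that scores
# model outputs against the reference. Schema is assumed, not official.
import json

examples = [
    {"prompt": "Which gene is most associated with cystic fibrosis?",
     "reference": "CFTR"},
    # ...a handful more domain examples; RFT is pitched at small datasets
]

with open("rft_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

def grade(model_output: str, reference: str) -> float:
    """Toy grader: 1.0 for an exact match, else 0.0. Real graders can
    award partial credit, which is what drives the RL signal."""
    return 1.0 if model_output.strip().lower() == reference.lower() else 0.0
```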

[2411.18296v1] HUPE: Heuristic Underwater Perceptual Enhancement with Semantic Collaborative Learning
https://arxiv.org/abs/2411.18296v1

“We are excited to release Nemotron-CC, our high-quality, Common Crawl-based 6.3-trillion-token dataset for LLM pretraining (4.4T globally deduplicated original tokens and 1.9T synthetically generated tokens). Compared to the leading open DCLM dataset, Nemotron-CC enables…”
https://x.com/MarkusKliegl/status/1864398488160923885
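
The “globally deduplicated” phrase is doing real work here: duplicates must be removed across the whole corpus, not just within each shard. In its simplest exact-match form (production pipelines like this one add fuzzy MinHash/LSH deduplication on top):

```python
# Minimal corpus-wide exact deduplication by content hash: keep only
# the first occurrence of each document across the entire corpus.
import hashlib

def dedup(docs):
    seen, out = set(), []
    for doc in docs:
        h = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if h not in seen:          # first global occurrence wins
            seen.add(h)
            out.append(doc)
    return out

print(len(dedup(["a", "b", "a"])))   # -> 2
```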

“New paper shows AI art models don’t need art training data to make recognizable artistic work. Train on regular photos, let an artist add 10-15 examples of their own art (or some other artistic inspiration), and get results similar to models trained on millions of people’s artworks…”
https://x.com/emollick/status/1864267120969724237

“The highest-scored paper at ICLR 2025 with full scores, 10, 10, 10, 10! The first time in ICLR history? IC-Light is designed to control image lighting. They managed to collect >10 million images for training illumination editing models, with amazing results on SDXL and Flux…”
https://x.com/Yuchenj_UW/status/1862541099136651536

“We are excited to share RoCoDA, a data augmentation framework unifying the concepts of invariance, equivariance, and causality to enhance data augmentation for imitation learning.”
https://x.com/jerthesquare_/status/1864452150933508368
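
Of the three concepts, equivariance is the easiest to see in code: transform the observed state and the demonstrated action with the same group element and you get a new, physically consistent training pair. A minimal planar sketch (illustrative only; the framework also covers invariance and causality, not shown here):

```python
# Equivariance-based augmentation for imitation learning: rotating an
# object pose and the demonstrated end-effector action by the same
# planar rotation yields a new, consistent (state, action) pair.
import numpy as np

def rot2d(theta: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def augment(obj_xy: np.ndarray, action_xy: np.ndarray, theta: float):
    R = rot2d(theta)
    # Equivariance: state and action transform with the same group element.
    return R @ obj_xy, R @ action_xy

state = np.array([0.3, 0.1])       # object position in the workspace
action = np.array([0.05, 0.00])    # demonstrated end-effector displacement
new_state, new_action = augment(state, action, theta=np.pi / 4)
```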

“For anyone interested in fine-tuning or aligning LLMs, I’m running this free and open course called smol course. It’s not big like Li Yin’s or Maxime Labonne’s courses; it’s just smol. – It focuses on practical use cases, so if you’re working on something, bring it along. – It’s peer…”
https://x.com/ben_burtenshaw/status/1863876321882833335

“IC-Light V2-Vary: alternative IC-Light V2 model(s) for people who want stronger illumination variations and modifications. Demo:”
https://x.com/_akhaliq/status/1863644176677519610

“The (true) story of the development and inspiration behind the “attention” operator, the one in “Attention Is All You Need” that introduced the Transformer. From personal email correspondence with the author @DBahdanau ~2 years ago, published here and now (with permission) following…”
https://x.com/karpathy/status/1864023344435380613
