Image created with gemini-2.5-flash-image, prompted via claude-sonnet-4-5. Image prompt: A pristine silicon microchip partially embedded in crumbling white marble stone, dramatic chiaroscuro lighting with golden rays illuminating the metallic circuit traces, weathered stone texture contrasting with reflective silicon surface, architectural composition with deep shadows, neoclassical monument aesthetic, minimalist and monumental, high detail on erosion patterns

Anthropic Economic Index report: Economic primitives \ Anthropic https://www.anthropic.com/research/anthropic-economic-index-january-2026-report

Global AI Adoption in 2025 – A Widening Digital Divide https://www.microsoft.com/en-us/research/wp-content/uploads/2026/01/Microsoft-AI-Diffusion-Report-2025-H2.pdf

Higgsfield is officially the fastest-scaling GenAI company in history, doubling from $100M to $200M in just 2 months. $200M annual run rate in under 9 months. We have raised $130M in Series A at a $1.3B valuation, backed by Accel, AI Capital Partners (Alpha Intelligence… https://x.com/higgsfield_ai/status/2011866396784017848?s=20

Demystifying evals for AI agents \ Anthropic https://www.anthropic.com/engineering/demystifying-evals-for-ai-agents

State of AI Data Connectivity Report: 2026 Outlook – CData Software https://www.cdata.com/lp/ai-data-connectivity-report-2026/

DeepSeek’s Manifold-Constrained Hyper-Connections (mHC) shook the AI community with a real mathematical wake-up call: we are running into architectural limits. For a decade, the “just add more layers” strategy has leaned on the residual connection: by forcing every layer to preserve… https://x.com/TheTuringPost/status/2009388382737322059

Engram: How DeepSeek Added a Second Brain to Their LLM | rewire.it | rewire.it Blog https://rewire.it/blog/engram-how-deepseek-added-second-brain-to-llm/index.html

If you found Manifold-Constrained Hyper-Connections (mHC) from DeepSeek a breakthrough, you should also know the full story behind residual connections ↓ 1. For 30 years, deep learning has been shaped by one fear – losing the learning signal. Sepp Hochreiter formalized the… https://x.com/TheTuringPost/status/2010488378396201018
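The residual connection that both threads above revolve around fits in a few lines. This is a generic NumPy sketch of the idea (a toy layer, not DeepSeek's actual mHC architecture): the identity path `x + f(x)` is what preserves the learning signal through deep stacks.

```python
import numpy as np

def layer(x, W):
    """A toy nonlinear layer f(x)."""
    return np.tanh(x @ W)

def residual_block(x, W):
    # The identity path x + f(x) lets gradients flow through
    # deep stacks without vanishing -- the property mHC
    # constrains rather than abandons.
    return x + layer(x, W)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
W = rng.standard_normal((4, 4))
out = residual_block(x, W)
```

Stacking many such blocks keeps an unbroken identity path from input to output, which is why "just add more layers" worked for as long as it did.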

Advanced Robotics: UC Berkeley. This course is from Pieter Abbeel and covers a review of reinforcement learning, then continues into applications in robotics. If you work on robotics… this one is worth bookmarking‼️ MDPs: Exact Methods, Discretization of Continuous State Space… https://x.com/IlirAliu_/status/2010427245400121446

[2508.15260] Deep Think with Confidence https://arxiv.org/abs/2508.15260

[2601.00901] Timelike conformal fields on closed $3$-manifolds https://arxiv.org/abs/2601.00901

11 new Policy Optimization techniques ▪️ GDPO (Group reward-Decoupled Normalization) ▪️ AT²PO (Agentic Turn-based PO via Tree Search) ▪️ BuPO (Bottom-up) ▪️ VA-π (Variational Policy Alignment) ▪️ PC-GRPO (Puzzle Curriculum GRPO) ▪️ Turn-PPO ▪️ M-GRPO (Momentum-Anchored GRPO) … https://x.com/TheTuringPost/status/2010355091224842578

A must-read survey: LLM-empowered knowledge graph construction. Connects traditional KG methods with modern LLM-driven techniques, covering: – KG foundations: ontology, extraction, fusion – LLM-enhanced ontology: top-down & bottom-up – LLM-driven extraction: schema-based & … https://x.com/TheTuringPost/status/2009777630451773603

AI Code Review That Knows How You Work https://getunblocked.com/code-review/

AI is everywhere, but nowhere in recent productivity data • The Register https://www.theregister.com/2026/01/15/forrester_ai_jobs_impact/

AI Training Checklist https://about.you.com/ai-training-checklist

AI’s Way Cooler Trillion-Dollar Opportunity: Vibe Graphs https://joereis.substack.com/p/ais-way-cooler-trillion-dollar-opportunity

And it’s up! This was a really fun format – you get the back-and-forth energy of a conversation, but you can actually think through an idea in writing. Thanks to @substack for hosting, and to @patio11, @michaeljburry, and @jackclarkSF for a great discussion! https://x.com/dwarkesh_sp/status/2009701316013281766

Axiom https://axiommath.ai/territory/from-seeing-why-to-checking-everything

EDEN Manuscript https://basecamp-research.com/wp-content/uploads/2026/01/BCR_Designing-programmable-therapeutics-with-the-EDEN-family-of-foundation-models.pdf

Must-read AI research of the week: ▪️ WebGym: Scaling Training Environments for Visual Web Agents with Realistic Tasks ▪️ Over-Searching in Search-Augmented LLMs ▪️ GDPO: Group reward-Decoupled Normalization Policy Optimization ▪️ Atlas: Orchestrating Heterogeneous Models and… https://x.com/TheTuringPost/status/2011267115135926441

Notable models of the week that are worth your attention: ▪️ Web World Models (by Princeton) ▪️ Youtu-LLM ▪️ Dynamic Large Concept Models. More info about these models in my newsletter: https://x.com/TheTuringPost/status/2009365057923371241

Notable models of the week: ▪️ Liquid: LFM2.5 – The Next Generation of On-Device AI ▪️ MiMo-V2-Flash ▪️ K-EXAONE ▪️ LTX-2: Efficient Joint Audio-Visual Foundation Model Read more about each of them: https://x.com/TheTuringPost/status/2011569739177607660

On neural scaling and the quanta hypothesis https://ericjmichaud.com/quanta/

Recursive Language Models (RLMs) – a novel inference-time architecture from @MIT_CSAIL enabling LLMs to process arbitrarily long prompts. They scale beyond 10 million tokens, over 100× typical context windows. RLMs offload the prompt into a Python REPL as a variable (context)… https://x.com/TheTuringPost/status/2011272650132504889
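The core RLM trick described above – holding the long prompt as a REPL variable and letting the model inspect it with code instead of attending over it – can be mocked up in a few lines. This is a toy sketch, not the MIT implementation: the model-issued `snippet` stands in for what a real LLM would generate.

```python
# Toy sketch of the RLM idea: the long "prompt" lives as a Python
# variable, and the model issues small programs against it instead
# of attending over the full text.

context = "\n".join(f"doc {i}: value={i * i}" for i in range(10_000))

def run_tool(code: str, env: dict) -> str:
    """Execute a model-issued snippet in a REPL-like environment
    and return whatever it bound to `answer`."""
    local = dict(env)
    exec(code, {}, local)
    return str(local.get("answer"))

# The model never sees all of `context`; it writes code to slice it.
snippet = (
    "lines = context.splitlines()\n"
    "answer = lines[123]\n"
)
print(run_tool(snippet, {"context": context}))  # prints: doc 123: value=15129
```

The point of the design is that the context window only ever holds the small program and its small result, so the effective prompt length is bounded by the REPL's memory, not the model's attention span.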

Standard reinforcement learning breaks down fast when rewards are sparse and action spaces are huge. A new paper proposes a different idea called Internal RL. Instead of acting on raw actions, the agent learns to act on the model’s own internal representations. In practice… https://x.com/IlirAliu_/status/2009340088572915811

The Great Filter (Or Why High Performance Still Eludes Most Dev Teams, Even With AI) – Codemanship’s Blog https://codemanship.wordpress.com/2026/01/12/the-great-filter-or-why-high-performance-still-eludes-most-dev-teams-even-with-ai/

Thinking with Map: Reinforced Parallel Map-Augmented Agent for Geolocalization https://amap-ml.github.io/Thinking-with-Map/

Unlock the secret to AI success | Forrester study https://miro.com/events/secret-to-ai-success-forrester-study/?src=-newsletter_glb%2F2%2F0100019bb7cf5f6b-3934a48d-0968-4807-93b2-6f14e16011a2-000000%2FiRSX3uaHDUL33SCxRLDMQwY6OevHMMFPkyLieXqt9Og%3D440

Use multiple models – by Nathan Lambert – Interconnects AI https://www.interconnects.ai/p/use-multiple-models

What I’ve been reading recently – Jan 10, 2026: Nonlinear Dynamics and Chaos, Machines of Loving Grace, Max Hodak’s theory of consciousness, Neural network training makes beautiful fractals. https://x.com/dwarkesh_sp/status/2010087342942659053

SpaceTimePilot: Generative Rendering of Dynamic Scenes Across Space and Time. TL;DR: “Given a single input video of a dynamic scene, SpaceTimePilot freely steers both camera viewpoint and temporal motion within the scene, enabling free exploration in 4D (space-time).” https://x.com/Almorgand/status/2009286910015885648

Software Too Cheap to Meter – by Steve Newman https://secondthoughts.ai/p/software-too-cheap-to-meter
