Image created with OpenAI GPT-Image-1. Image prompt: vintage Sly & the Family Stone album-cover style, distressed U.S. flag in muted reds & blacks, gritty vinyl texture featuring infinite light-bulb icon morphing into brain circuit; grainy retro print texture, vibrant 60s funk color palette, high-resolution
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity – Apple Machine Learning Research https://machinelearning.apple.com/research/illusion-of-thinking
Wow, we actually achieved AGI. I’ve been using Manus AI for the last 24 hours straight and its capabilities are mind-blowing. It’s literally your own AI employee. If a human did this it would cost me $200k; Manus does it for free. Here is how I had it design and build an entire SaaS: https://x.com/AlexFinnX/status/1901356733165121952
New Paper! Darwin Godel Machine: Open-Ended Evolution of Self-Improving Agents A longstanding goal of AI research has been the creation of AI that can learn indefinitely. One path toward that goal is an AI that improves itself by rewriting its own code, including any code https://x.com/hardmaru/status/1928284568756629756
Our interpretability team recently released research that traced the thoughts of a large language model. Now we’re open-sourcing the method. Researchers can generate “attribution graphs” like those in our study, and explore them interactively. https://x.com/AnthropicAI/status/1928119229384970244
Why do people disagree about when powerful AI will arrive? | BlueDot Impact https://bluedot.org/blog/agi-timelines
This is a strawman. We don’t use the phrase “AGI” in the MAIM paper (Superintelligence Strategy). In fact, we discuss in the appendix how the concept of AGI is too vague to be useful. We make it clear that the first thing we want to deter is an intelligence recursion: thousands… https://x.com/i/web/status/1929713070265516459
Why I don’t think AGI is right around the corner https://www.dwarkesh.com/p/timelines-june-2025
AGI Is Not Multimodal https://thegradient.pub/agi-is-not-multimodal/
The methods we used to trace the thoughts of Claude are now open to the public! Today, we are releasing a library that lets anyone generate graphs showing the internal reasoning steps a model used to arrive at an answer. https://x.com/mlpowered/status/1928123130725421201
Large language models are proficient in solving and creating emotional intelligence tests | Communications Psychology https://www.nature.com/articles/s44271-025-00258-x