Image created with OpenAI GPT-Image-1. Image prompt: 1966 Kodachrome photo-look, thin white frame, forest-green title band in upper left with stacked yellow/white serif text reading “AGI”, low-angle, goats towering above viewer scene featuring a glowing brain hologram hovering above the goats; gentle film grain, overcast daylight
Geoffrey Hinton on our Brain vs AI Models — from the ‘Curt Jaimungal’ YouTube channel. https://x.com/rohanpaul_ai/status/1931328195803959774
Ilya Sutskever, in his speech at UToronto 2 days ago: “The day will come when AI will do all the things we can do.” “The reason is the brain is a biological computer, so why can’t the digital computer do the same things?” It’s funny that we are debating if AI can “truly think” https://x.com/Yuchenj_UW/status/1931883302623084719
Meta is reportedly making a $15 billion bet on AGI | The Verge https://www.theverge.com/news/684322/meta-scale-ai-15-billion-investment-zuckerberg
NEW: More details on Meta’s new “superintelligence” team. Meta has hired top Google DeepMind researcher @jack_w_rae and Johan Schalkwyk, ML lead of the popular voice assistant app Sesame. Plans to hire up to 50 people, including a chief scientist, per sources. w/ @KurtWagner8 https://x.com/shiringhaffary/status/1932852606851789278
Jensen hammers Amodei in a recent article
Never trust anyone who says: We’re the special people and only we should be allowed to do this very important thing, because we’re the only ones who can be trusted and everyone else is too evil/stupid to be trusted with it.
https://x.com/jeremyphoward/status/1933597258047762657
Jensen Huang dismisses Anthropic CEO’s claim that AI will eliminate jobs: ‘He thinks AI is so scary, but only they should do it’
https://www.yahoo.com/news/jensen-huang-dismisses-anthropic-ceos-144719582.html
Introducing the V-JEPA 2 world model and new benchmarks for physical reasoning https://ai.meta.com/blog/v-jepa-2-world-model-benchmarks/
It’s true. The Meta offers for the “superintelligence” team are actually insane. If you work at the big AI labs, Zuck is personally negotiating $10M+/yr in cold hard liquid money. I’ve never seen anything like it. https://x.com/deedydas/status/1932828204575961477
Meta’s Mark Zuckerberg Creating New Superintelligence AI Team – Bloomberg https://www.bloomberg.com/news/articles/2025-06-10/zuckerberg-recruits-new-superintelligence-ai-group-at-meta?embedded-checkout=true
That Altman essay… One thing you can definitely say about him and Dario is that they are making very bold, very testable predictions. We will know whether they are right or wrong in a remarkably short time https://x.com/emollick/status/1932564109477794146
some thoughts on human-ai relationships and how we’re approaching them at openai it’s a long blog post — tl;dr we build models to serve people first. as more people feel increasingly connected to ai, we’re prioritizing research into how this impacts their emotional well-being. https://x.com/joannejang/status/1930702341742944589
The Gentle Singularity – Sam Altman https://blog.samaltman.com/the-gentle-singularity
“Intelligence too cheap to meter is well within grasp” – Sam Altman https://x.com/scaling01/status/1932551669134377357
Sam Altman (CEO of OpenAI): “We do not know how far beyond human-level intelligence we can go, but we are about to find out” https://x.com/scaling01/status/1932550566036804087
We keep talking about this binary future where we’re all watching the same Netflix show or we’re all lost in our own AI-generated fever dreams. But that’s not how culture actually works. The interesting stuff happens in the middle. Think Westworld — the narrative division https://x.com/bilawalsidhu/status/1932598550514586039
At WWDC 2025, Apple showed off only a handful of AI upgrades, including: —New Live translation for FaceTime, Messages, and calls —Visual intelligence via screenshots —AI-powered intelligent actions in Shortcuts —AI “Workout Buddy” on Apple Watch https://x.com/rowancheung/status/1932341247810678845
Sycophantic AI is one of the worst possible outcomes because it simply amplifies every existing belief. We need based AI. Your feelings don’t matter, and you are not the smartest, wisest and most beautiful person on the planet. Most young people are already delusional enough. https://x.com/scaling01/status/1931373162479997268
“reasoning LLMs are bad at puzzles if they are too hard” is a less catchy title than “The Illusion of Reasoning”, huh https://x.com/andersonbcdefg/status/1931821352463577482
I think it is entirely possible that RL + GPT-style LLMs lead to AGI. https://x.com/finbarrtimbers/status/1932134065584714232
The Darwin Gödel Machine: AI that improves itself by rewriting its own code https://sakana.ai/dgm/
The rate at which you learn is to a great extent a function of your metacognitive sensitivity — your propensity to introspect and critique your own mental models and learning processes. https://x.com/fchollet/status/1932332984935625197
He is a doomer, but he keeps working on “AGI”. This means one of two things: 1. He is intellectually dishonest and/or morally corrupt. 2. He has a huge superiority complex, thinking only he is enlightened enough to have access to AI, but the unwashed masses are too stupid or immoral to use such a powerful tool. In reality, he is deluded about the dangers and power of current AI systems. https://www.threads.com/@yannlecun/post/DKmejx6tFXd?xmt=AQF0ptpMYQkwdpX4dNPfZwCmGt2zqt12hnD9LN3wrXzH-g
“my brain is special and conscious because it is made of meat” https://x.com/vikhyatk/status/1932316124596895923
The Utility of Interpretability — Emmanuel Amiesen – YouTube https://www.youtube.com/watch?v=9YQW2mH9FyA
“When people talk about AGI and superintelligence and all of that, our conviction is that it will come from the community and the whole field collaborating on this topic.” This convo between @operationdanish and @ClementDelangue is packed with sharp insights https://x.com/fdaudens/status/1932136783443001432
This is a fantastic application of applied interpretability! When using LLMs to review resumes, prior debiasing techniques break in more realistic settings. But simply finding and removing gender or race directions remains effective, beating existing baselines! https://x.com/NeelNanda5/status/1933645976889422110
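The “finding and removing directions” idea above is the standard interpretability technique of direction ablation: estimate a vector in activation space associated with a concept, then project it out. A minimal toy sketch with NumPy, assuming synthetic activations and a difference-of-means estimate of the direction (the tweet’s actual setup and models are not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy activation dimension

# Two hypothetical groups of activation vectors, shifted apart along axis 0
# to simulate a concept direction being linearly encoded.
group_a = rng.normal(0, 1, (50, d)) + 2.0 * np.eye(d)[0]
group_b = rng.normal(0, 1, (50, d)) - 2.0 * np.eye(d)[0]

# Difference of means is a common way to estimate such a direction.
direction = group_a.mean(axis=0) - group_b.mean(axis=0)
direction /= np.linalg.norm(direction)

def ablate(x, v):
    """Project out unit direction v from activation vector(s) x."""
    return x - (x @ v)[..., None] * v

ablated_a = ablate(group_a, direction)
ablated_b = ablate(group_b, direction)

# After ablation, neither group retains any component along the direction.
print(float(np.abs(ablated_a @ direction).max()))  # ~0 up to float error
```

The appeal of the approach is that it intervenes on the representation rather than the input text, which is why it can survive the more realistic settings where prompt-level debiasing breaks.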
The Dream of a Gentle Singularity – by Zvi Mowshowitz https://thezvi.substack.com/p/the-dream-of-a-gentle-singularity
Ilya Sutskever, U of T honorary degree recipient, June 6, 2025 https://x.com/NandoDF/status/1932347615829508407
The more I see Sutskever’s talk at UofT, the scarier it looks to me. The people who trained and deeply interacted with LLMs know deep down what’s coming. My forecasting was 2028; along with others, I too may have discounted the acceleration that comes with code-LLMs. Mass… https://x.com/sbmaruf/status/1932327556684120513
We are excited to announce Trinity, an autoformalization system for verified superintelligence that we have developed at @morph_labs. We have used it to automatically formalize in Lean a classical result of de Bruijn that the abc conjecture is true almost always. https://x.com/morph_labs/status/1933181394588483868?s=46
Every call I have had this week has had someone ask a question about the Apple paper. I think it’s worth reflecting on why any time an “AI must fail” paper comes out (also: model collapse), it gets a lot of buzz & why the many “AI does this well” papers don’t. Discomfort with AI? https://x.com/emollick/status/1932469064363814947
Good analysis (rebuttal) of Apple’s “illusion of thinking” Hanoi towers experiment. https://x.com/giffmana/status/1931801836052189191
I don’t even agree with the Apple paper but this is an extremely midwit take https://x.com/iScienceLuvr/status/1931877956257005904
I think the Apple paper on the limits of reasoning models in particular tests is useful & important, but the “LLMs are hitting a wall” narrative on X around it feels premature at best. Reminds me of the buzz over model collapse – limitations that were overcome quickly in practice. https://x.com/emollick/status/1931449878653403569
The “reasoning doesn’t exist” Apple paper drives me crazy. Take a logic puzzle like Tower of Hanoi w/ 10s to 1,000,000s of moves to solve correctly. Check the first step where an LLM makes a mistake. Long problems aren’t solved. Fewer thought tokens/early mistakes on longer problems. 1/11 https://x.com/Afinetheorem/status/1931853801293484358
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity – Apple Machine Learning Research https://machinelearning.apple.com/research/illusion-of-thinking
the-illusion-of-thinking.pdf https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
wrote a new post, the gentle singularity. realized it may be the last one like this i write with no AI help at all. (proud to have written “From a relativistic perspective, the singularity happens bit by bit, and the merge happens slowly” the old-fashioned way) https://x.com/sama/status/1932547247243505924
OpenAI to continue working with Scale AI after Meta deal | Reuters https://www.reuters.com/technology/openai-continue-working-with-scale-ai-after-meta-deal-2025-06-13/
Richard Sutton argues that AI must move beyond human-generated static data into the “Era of Experience,” where agents learn through continuous interaction with the world. This will require building upon RL with better algorithms capable of continual learning and meta-learning. https://x.com/TheHumanoidHub/status/1931969449688719439
BREAKING: Apple just proved AI “reasoning” models like Claude, DeepSeek-R1, and o3-mini don’t actually reason at all. They just memorize patterns really well. Here’s what Apple discovered: (hint: we’re not as close to AGI as the hype suggests) https://x.com/RubenHssd/status/1931389580105925115
Dwarkesh Patel on Continual Learning – by Zvi Mowshowitz https://thezvi.substack.com/p/dwarkesh-patel-on-continual-learning