“$500B committed towards AGI, still no articulated vision of what a world with AGI looks like for most people. Even the huge essay by the CEO of Anthropic doesn’t paint a vivid picture. For those convinced they are making AGI soon – what does daily life look like 5-10 years later?” / X
https://x.com/emollick/status/1881841026975023208

“Both humans and AI fail the Turing Test. Neither people nor LLMs can accurately detect well-prompted AI writing.” / X
https://x.com/emollick/status/1880717680740913564

“Our lack of good deep measures of human creativity, reasoning, empathy, etc. is really a problem in AI right now. A lot of tests that were “good enough” for human research (RAT for creativity, Seeing the Mind in The Eyes for empathy) are not robust enough for benchmarks for AI.” / X
https://x.com/emollick/status/1880597493513355678

“Operational definition of singularity: we are not truly done until transformers start to research the next transformer. A less fancy term is AutoML, a decades-old CS topic. Singularity is AutoML at the extreme. AutoML is trading capital for higher intelligence without human” / X
https://x.com/DrJimFan/status/1881081662106411138
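Jim Fan’s framing can be made concrete: AutoML, in its simplest decades-old form, is just a search loop over model configurations. A minimal sketch, where `train_and_score` is a hypothetical stand-in for an actual training run (here a toy objective with a known optimum, so the search has something to find):

```python
import random

def train_and_score(params):
    # Stand-in for training and evaluating a model (hypothetical).
    # Toy objective: best at lr=0.1, depth=4.
    return -(params["lr"] - 0.1) ** 2 - (params["depth"] - 4) ** 2

def automl_random_search(n_trials=200, seed=0):
    """Minimal AutoML loop: sample random configurations, keep the best."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {"lr": rng.uniform(0.001, 1.0), "depth": rng.randint(1, 10)}
        score = train_and_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Fan’s point is that the singularity is this loop at the extreme: replace the toy objective with real training runs, and the only input being traded for intelligence is compute.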

“I want to emphasize the point Kevin is making. This prediction (AGI within next couple years) is a common timeline for insiders. There are reasons to not believe them, but I think people are not taking the possibility seriously enough that they may be directionally correct.” / X
https://x.com/emollick/status/1881779923289072060

“Attempt at a serious back-of-the-envelope calculation by @krishnanrohit for how many workers a theoretical AGI would be equivalent to, given what we know about power and chip constraints – about 131M by 2030 in the base case, but there are lots of unknowns” / X
https://x.com/emollick/status/1882050049200537863
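The kind of estimate described above is, at its core, a ratio: usable inference throughput divided by the throughput needed to match one human worker. A minimal sketch; all four parameters are hypothetical placeholders chosen only to land near the quoted ~131M base case, not the thread’s actual assumptions:

```python
def agi_worker_equivalents(
    chips=5.0e6,               # accelerators deployed by 2030 (hypothetical)
    tokens_per_chip_s=450.0,   # inference throughput per chip (hypothetical)
    utilization=0.6,           # fraction of capacity doing useful work (hypothetical)
    tokens_per_worker_s=10.0,  # output rate matching one human worker (hypothetical)
):
    """Back-of-the-envelope: total useful token throughput / per-worker rate."""
    total_throughput = chips * tokens_per_chip_s * utilization
    return total_throughput / tokens_per_worker_s
```

The structure, not the numbers, is the point: every factor is a large unknown, which is why the thread hedges its base case so heavily.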

“twitter hype is out of control again. we are not gonna deploy AGI next month, nor have we built it. we have some very cool stuff for you but pls chill and cut your expectations 100x!” / X
https://x.com/sama/status/1881258443669172470

“An interview in which I largely express confusion over the state of affairs in AI as we apparently ramp towards the goal of AGI, without a common understanding of what that means.” / X
https://x.com/emollick/status/1882183556782535144

“Could program synthesis unlock AGI? In 2019, @fchollet suggested program synthesis, that writes small, task-specific programs, as an ideal way to true reasoning. By adapting dynamically to challenges, it can overcome deep learning limits. Now, in 2025, @fchollet and @mikeknoop…”
https://x.com/TheTuringPost/status/1882231849524809768
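Program synthesis in the sense Chollet describes can be illustrated in its simplest possible form: enumerate small programs over a tiny domain-specific language until one fits a set of input-output examples. A toy sketch (the DSL and `synthesize` are hypothetical, purely for illustration):

```python
from itertools import product

# Tiny DSL of integer transformations (hypothetical, for illustration).
PRIMS = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "double": lambda x: x * 2,
    "negate": lambda x: -x,
}

def run(program, x):
    """Apply a sequence of DSL primitives to an input."""
    for name in program:
        x = PRIMS[name](x)
    return x

def synthesize(examples, max_len=3):
    """Enumerate programs shortest-first; return one consistent with all examples."""
    for length in range(1, max_len + 1):
        for program in product(PRIMS, repeat=length):
            if all(run(program, i) == o for i, o in examples):
                return program
    return None
```

For example, `synthesize([(1, 4), (2, 6)])` finds a two-step program computing 2(x + 1). Real systems (such as those targeting ARC) search vastly larger program spaces with learned guidance, but the task-specific-program idea is the same.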

“It’s legitimately bizarre that we spend billions of params on storing MMLU/GPQA knowledge that 99% of us don’t really need and can simply look up/learn on demand. We wanted intelligence; we got memorized trivia. I expressed to @moinnadeem yday that AGI will probably look like…”
https://x.com/swyx/status/1877818998060175508

“leaked benchmark: o3 pro solved problems we thought were 5 years away. sam’s team is trying to figure out how it did it. something unprecedented is happening.” / X
https://x.com/iruletheworldmo/status/1880760849259999363

Anthropic CEO Says AI Could Surpass Human Intelligence by 2027
https://www.wsj.com/livecoverage/stock-market-today-dow-sp500-nasdaq-live-01-21-2025/card/anthropic-ceo-says-ai-could-surpass-human-intelligence-by-2027-9tka9tjLKLalkXX8IgKA

Humanity’s Last Exam

Publication Ready Humanity’s Last Exam (PDF)

Discover more from Ethan B. Holland
