A monstrous evil AI robot emerges from behind a giant strawberry. Its shadow looms over the strawberry. A dramatic horror-movie font across the image reads “AGI”.

“One of my “AGI-ish” tests is to ask an AI agent to create an isochronic map, with a starting point 40 miles due west of Pittsburgh. It involves complex reasoning, research, and tool use. No AI is close, yet.” / X
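The task in the quote starts with a bit of geodesy an agent must get right before any mapping happens: finding the point 40 miles due west of Pittsburgh. As a hypothetical illustration of that first step (not the tweet author's method), here is a minimal spherical-Earth sketch; the coordinates for Pittsburgh and the Earth-radius constant are standard reference values, and the function name is my own.

```python
import math

EARTH_RADIUS_MI = 3958.8          # mean Earth radius, in miles
PITTSBURGH = (40.4406, -79.9959)  # (lat, lon) of Pittsburgh, PA, in degrees

def destination(lat_deg, lon_deg, bearing_deg, distance_mi):
    """Great-circle destination point on a spherical Earth."""
    lat1 = math.radians(lat_deg)
    lon1 = math.radians(lon_deg)
    brng = math.radians(bearing_deg)
    d = distance_mi / EARTH_RADIUS_MI  # angular distance in radians
    lat2 = math.asin(math.sin(lat1) * math.cos(d)
                     + math.cos(lat1) * math.sin(d) * math.cos(brng))
    lon2 = lon1 + math.atan2(math.sin(brng) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# Bearing 270° = due west; 40 miles out from downtown Pittsburgh.
lat, lon = destination(*PITTSBURGH, bearing_deg=270, distance_mi=40)
print(f"Start point: {lat:.4f}, {lon:.4f}")
```

That pins down the isochrone's origin; the hard part of the test — routing, travel-time research, and rendering the map — is everything after this.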

“If the AI takeover is actually plausible, we’ll need to take costly preventative measures *before* AI comes anywhere close to causing catastrophic harm (probably while it’s super lucrative and beneficial). If we need to do that, how would we tell (and build the will)?” / X

AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work — AI Alignment Forum

“In a new report via The Information, OpenAI researchers are finally preparing to launch a new AI model, code-named Strawberry (previously Q*), this fall. If it lives up to leaks, it could potentially advance OpenAI towards Stage 2 of its five-level roadmap to AGI.”

“I think most ML research makes the same mistake as behaviorism did in the 1950s. In the short term, you can study behavior much more rigorously than cognition. But in the long term only studying cognition can allow you to understand generalization (as Chomsky argued vs Skinner!)” / X
