A fashion photoshoot of a runway look inspired by thieves. A large screen displays the word “Ethics” --ar 4:3 --style raw

FTC and DOJ reportedly opening antitrust investigations into Microsoft, OpenAI, and Nvidia – The Verge

U.S. Clears Way for Antitrust Inquiries of Nvidia, Microsoft and OpenAI – The New York Times

The Opaque Investment Empire Making OpenAI’s Sam Altman Rich – WSJ

“No one should act like state actors can’t build their own GPT4/5/6/7/87 – and they will remove any boundaries in their models preventing them from having their model develop new weapons, psychological manipulators, whatever you’re afraid of – and they will attempt to do this. The US included.”
https://twitter.com/Teknium1/status/1797979400526581833  

“🚨New paper: with many technologies, younger employees typically ‘get it,’ and give advice to others on using it. This doesn’t work in AI. We interviewed junior consultants about handling the risks associated with AI in their job… and found that their advice was mostly off.”

Securing Research Infrastructure for Advanced AI | OpenAI

We’re sharing some high-level details on the security architecture of our research supercomputers.

“Regulators should regulate applications, not technology. – Regulating basic technology will put an end to innovation. – Making technology developers liable for bad uses of products built from their technology will simply stop technology development. – It will certainly stop the…”

“The Big AI Debate explained by Forbes. What is more beneficial or more dangerous: open-source AI, or proprietary AI controlled by 3 or 4 big players? The people who worry most about AI safety also tend to be the ones who overestimate the power of AI.”

“Sam Altman responded to Helen Toner’s revelations on the TED AI Show last week. In case you missed it, the ex-OpenAI board member claimed Sam Altman was ‘in some cases outright lying to the board,’ and that they found out about the launch of ChatGPT through Twitter.”

“1/15: In April, I resigned from OpenAI after losing confidence that the company would behave responsibly in its attempt to build artificial general intelligence — ‘AI systems that are generally smarter than humans.’”

“Former OpenAI researcher Leopold Aschenbrenner released a new essay series detailing his view on AGI. The researcher says that ‘nobody is pricing in’ what is coming in AI, and that we should expect another GPT-2 to GPT-4 level jump by 2027 (which would take us to AGI).”

“Current and former employees from OpenAI, Anthropic, and DeepMind published an open letter called ‘A Right to Warn.’ It calls for companies to expand whistleblower protections so workers can raise the alarm about potential AI dangers without fear of retaliation.”

“Ashton Kutcher has access to a beta version of OpenAI’s Sora and says it will lead to personalized movies and a higher standard of content through increased competition.”

“Alignment, of a sort: this paper conducts what they call a ‘moral Turing Test,’ asking people to compare GPT-4o to humans on ethical questions. ‘Here we find that LLMs appear to have a strong aptitude for moral reasoning on par with expert ethicists.’”

Stanford University team apologises over claims they copied Chinese project for AI model | South China Morning Post

A Right to Warn about Advanced Artificial Intelligence

We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity.

China’s Nvidia Loophole: How ByteDance Got the Best AI Chips Despite U.S. Restrictions — The Information

“The US is going to lose its leadership in AI if it doesn’t support more open research and open-source AI!”

“Google presents Open-Endedness is Essential for Artificial Superhuman Intelligence – Argues that the ingredients are now in place to achieve open-endedness in AI systems – Claims that such open-endedness is an essential property of any ASI.”

“Google presents To Believe or Not to Believe Your LLM.”

“Our reporting on Eric Schmidt’s stealth drone project was posted this AM by @perplexity_ai. It rips off most of our reporting. It cites us, and a few that reblogged us, as sources in the most easily ignored way possible. Note the views. #zeroclick”

“Future You: A Conversation with an AI-Generated Future Self Reduces Anxiety, Negative Emotions, and Increases Future Self-Continuity”

“Feeding a legal document to Claude today found an issue that neither side’s lawyers had identified, but which everyone agreed was important after Claude pointed it out. Cheap second opinions that are mostly right are incredibly valuable.”

The problem with using AI for OCR: “This is not a theoretical concern: here’s Claude 3 Opus refusing to extract JSON from a campaign finance report document because ‘… that would involve extracting and structuring private details about the individual.’”
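For context on the workflow being criticized, here is a minimal sketch of that kind of extraction call, assuming the official Anthropic Python SDK; the document text, prompt wording, and target fields are hypothetical, and, as the quote notes, the model may return a refusal instead of JSON.

```python
# Minimal sketch: asking Claude to turn an OCR'd public document into JSON.
# Assumes the official Anthropic Python SDK with ANTHROPIC_API_KEY set in the
# environment; the document text and target fields below are hypothetical.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

document_text = "..."  # OCR'd text of a public campaign finance filing

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "The following is the text of a public campaign finance filing. "
            "Extract the filer name, filing date, and itemized amounts as a "
            "JSON object, and return only the JSON.\n\n" + document_text
        ),
    }],
)

# May print valid JSON -- or, as in the quoted example, a refusal.
print(response.content[0].text)
```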

“Today, we’re publishing details about the processes we use to test and mitigate elections-related risks. We’re also sharing samples of the evaluations we use to test our models…”

Leopold Aschenbrenner – 2027 AGI, China/US Super-Intelligence Race, & The Return of History – YouTube

“AGI by 2027 is strikingly plausible. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.”

“Virtually nobody is pricing in what’s coming in AI. I wrote an essay series on the AGI strategic picture: from the trendlines in deep learning and counting the OOMs, to the international situation and The Project. SITUATIONAL AWARENESS: The Decade Ahead”

“Unpopular opinion: We will not achieve AGI any time soon and @leopoldasch’s prediction is way off. Here’s why: 1 – The Straight Line Fallacy: One of the most common mistakes in predicting technological advancements is assuming that progress will continue in a straight line.”
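For readers outside the debate, the “straight lines on a graph” both sides are arguing over are log-linear trends: plot effective compute (or some capability proxy) on a log scale, fit a line, and extend it. A toy sketch of that extrapolation, using invented placeholder numbers rather than any figures from the essay:

```python
# Toy illustration of "counting the OOMs": fit a straight line to growth in
# log10(effective compute) and extrapolate it forward. The yearly values are
# invented placeholders, not estimates from Aschenbrenner's essay.
import numpy as np

years = np.array([2019, 2020, 2021, 2022, 2023, 2024])
log10_compute = np.array([0.0, 0.7, 1.4, 2.1, 2.8, 3.5])  # hypothetical OOMs

slope, intercept = np.polyfit(years, log10_compute, 1)  # line in log space

for year in (2025, 2026, 2027):
    projected = slope * year + intercept
    print(f"{year}: ~{projected - log10_compute[-1]:.1f} OOMs beyond 2024")
```

The rebuttal’s “Straight Line Fallacy” point is precisely that such a fit says nothing about whether the inputs driving the trend (compute, data, algorithmic gains) can keep growing.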
