Ethics/Legal/Security

OpenAI announced an early warning system and released research on whether AI models can aid in creating bioweapons. The research finds that they are currently, at most, mildly useful for the task…

In the largest-of-its-kind evaluation, we found that GPT-4 provides, at most, a mild uplift in biological threat creation accuracy (see dark blue below). While not a large enough uplift to be conclusive, this finding is a starting point for continued research and deliberation.

Evaluations for LLM-assisted biological threat creation. Current models are not very capable at this task, but we want to be ahead of the curve in assessing this and other potential future risk areas.

FCC moves to outlaw AI-generated robocalls

This guy trained a bot to swipe on Tinder profiles based on his preferences, then used ChatGPT to message his matches and set up dates. He communicated with 5,200+ women, and one year later he's engaged to one of them (after ChatGPT suggested he propose).

Mastercard jumps into generative AI race with model it says can boost fraud detection by up to 300%

AI poisoning tool Nightshade received 250,000 downloads in 5 days: ‘beyond anything we imagined’

China approves over 40 AI models for public use in past six months

White House science chief signals US-China co-operation on AI safety

Former Rep. Will Hurd (R-Texas) said in an op-ed Tuesday that he was “freaked out” by a briefing while serving on the board of ChatGPT-maker OpenAI and called for guardrails on the development of “artificial general intelligence (AGI).”

Should 4 People Be Able to Control the Equivalent of a Nuke?

As artificial intelligence becomes more science fact than science fiction, its governance can’t be left to the whims of a few people.

AI Hubs Are Few and Far Between

U.S. AI Hubs Are Concentrated in a Few Cities

AI companies will need to start reporting their safety tests to the US government

X/Twitter Restores Searches for Taylor Swift After Temporary Block in Response to Flood of Explicit AI Fakes

George Carlin’s Estate Sues Podcasters Over A.I. Episode

The lawsuit claims that an hourlong comedy special on YouTube violated Carlin’s copyright.

In support of efforts to create safe and trustworthy artificial intelligence (AI), NIST is establishing the U.S. Artificial Intelligence Safety Institute (USAISI). To support this Institute, NIST has created the U.S. AI Safety Institute Consortium. The Consortium brings together more than 200 organizations to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world. This will help ready the U.S. to address the capabilities of the next generation of AI models or systems, from frontier models to new applications and approaches, with appropriate risk management strategies.

FTC investigating Microsoft, Amazon, and Google investments into OpenAI and Anthropic

How hard is it to cheat in technical interviews with ChatGPT? We ran an experiment.

AI and crypto mining are driving up data centers’ energy use

It seems one issue with copyrighted characters & image generation is that some images are just ubiquitous.

Even if you only train on licensed data, Mario appears everywhere, including in completely legal screenshots, billboards, film frames, t-shirts, etc. How do you scrub that?
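For illustration only, here is a minimal sketch of one conceivable scrubbing approach, assuming Hugging Face's transformers library and OpenAI's CLIP model: score each training image against a text description of the character and drop likely matches. The labels and threshold are assumptions picked for the example, not a validated filter.

```python
# Minimal sketch: flag training images that likely depict a copyrighted
# character via CLIP zero-shot classification. Labels and threshold are
# illustrative assumptions, not tuned values.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

LABELS = [
    "an image containing Mario from Nintendo",  # the character to scrub
    "an image with no video game characters",   # everything else
]

def likely_contains_character(image_path: str, threshold: float = 0.9) -> bool:
    """Return True if CLIP assigns high probability to the character label."""
    image = Image.open(image_path)
    inputs = processor(text=LABELS, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=1)  # shape (1, 2)
    return probs[0, 0].item() > threshold
```

Even then, a filter like this only catches what you thought to describe in advance, which is exactly the author's point about ubiquity.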

Executives from Meta, X, TikTok, Snap and Discord are testifying before the Senate Judiciary Committee about safeguarding children on their respective platforms. Considering these are the companies driving a lot of AI, it's important to keep an eye on their (lack of) ethics.

Universal Music Group, the label representing artists such as Taylor Swift, Billie Eilish and Ariana Grande, says it will pull its music from TikTok after failing to reach a royalty deal with ByteDance. As with the Senate committee story this week, how publishers view tech companies will shape what content, like music, AI systems are allowed to draw on.

“Malware” that instantly floods your hard drive with thousands of pictures of cats summoned from latent space.
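The “malware” here is really just an image-generation loop pointed at the local disk. A minimal sketch of the mechanism, assuming the diffusers library and a public Stable Diffusion checkpoint (this is not the actual tool's code, and the loop is capped rather than run until the drive fills):

```python
# Sketch of the prank's core loop: generate cat images from latent space
# and write them to disk. Model ID, prompt, and count are illustrative;
# run only on a machine you own.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

for i in range(10):  # the prank loops until the disk fills; we stop at 10
    image = pipe("a photo of a cat").images[0]
    image.save(f"cat_{i:05d}.png")
```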

The security risks of products using LLMs are vast. PromptArmor (YC W24) protects LLM applications from data exfiltration, phishing, and system manipulation through anomaly detection, heuristics, and models.
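PromptArmor's internals aren't public, so as a rough illustration only, here is a minimal sketch of the heuristic layer such a product might include: pattern checks for common injection phrasing, plus a crude exfiltration signal that flags URLs in model output the user never supplied. All patterns and function names are assumptions for demonstration.

```python
import re

# Hypothetical patterns; real products layer heuristics with trained models.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now\b",
    r"reveal (your )?(system|hidden) prompt",
]
URL_RE = re.compile(r"https?://\S+")

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs that match known prompt-injection phrasing."""
    return any(re.search(p, user_input, re.IGNORECASE) for p in INJECTION_PATTERNS)

def leaks_new_urls(user_input: str, model_output: str) -> bool:
    """Flag URLs in the output that the user never sent -- a crude
    exfiltration signal (e.g., a markdown image URL smuggling data out)."""
    seen = set(URL_RE.findall(user_input))
    return any(url not in seen for url in URL_RE.findall(model_output))
```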

I have written about “secret cyborgs” – people who use AI to do work, but don’t tell anyone else.

This paper shows one reason for the secrecy: people preferred the creative summaries written by AI until they were told an AI wrote them, at which point they judged the same summaries as worse.

Many AI Safety Orgs Have Tried to Criminalize Currently-Existing Open-Source AI

We’re Building a Solution with IBM Consulting to Improve Transparency and Auditability for Generative AI Systems

ChatGPT is violating Europe’s privacy laws, Italian DPA tells OpenAI

Italy puts ChatGPT on notice for alleged privacy missteps (yes, again)

From West to the Rest: Growing Geographic Dispersion of AI Jobs in America 

The NAIRR Pilot aims to connect U.S. researchers and educators to computational, data, and training resources needed to advance AI research and research that employs AI. Federal agencies are collaborating with government-supported and non-governmental partners to implement the Pilot as a preparatory step toward an eventual full NAIRR implementation. https://nairrpilot.org/

Be Sure To Read “This Week In AI”

This week’s executive overview and top links are here:

AI News #18: Week Ending 02/02/2024 with Executive Summary and Top 12 Stories

The post you just read is an extension of my weekly newsletter, This Week In AI, an executive summary of the top things to know in AI. Each week, I write an accessible overview so laypeople can feel confident they're conversant with the week's AI developments. I also include a curated list of must-click links, offering everyone a hands-on way to explore the most intriguing updates in artificial intelligence across categories including robotics, imagery, video, AR/VR, science, ethics, and more. Beyond the overview, I post these topic-based deeper dives (below). If you haven't read this week's overview, I recommend starting there.

Credits/Sources

Most of these links come from just a few incredible sources. Please follow them.
