Justice Department is being urged to protect researchers testing AI platforms

A MARTÍNEZ, HOST:

Cybersecurity experts are urging the Department of Justice to protect researchers who test artificial intelligence platforms. They argue the government shouldn't prosecute good-faith hacking to find vulnerabilities. Here's NPR's cybersecurity correspondent Jenna McLaughlin.

JENNA MCLAUGHLIN, BYLINE: One of the very first cybersecurity laws, the Computer Fraud and Abuse Act, was passed in 1986. The goal was to prosecute computer-related crimes. In making the case for the legislation, members of Congress cited the 1983 movie "WarGames," in which a teenager hacks into a military supercomputer.

(SOUNDBITE OF FILM, "WARGAMES")

MATTHEW BRODERICK: (As David Lightman) This box just interprets signals from the computer and turns them into sound.

JOHN WOOD: (As WOPR) Shall we play a game?

BRODERICK: (As David Lightman) Oh.

MCLAUGHLIN: The aging law has been criticized over the years for being overly broad. Experts say it puts cybersecurity researchers at risk of prosecution for breaking into systems to identify flaws so they can be fixed. But recently, the government has given explicit protections to good-guy hackers. It has even started inviting those hackers to break into systems to help secure them.

ILONA COHEN: We have come a very long way since the early days of good-faith security research.

MCLAUGHLIN: That's Ilona Cohen, the chief legal and policy officer at cybersecurity company HackerOne.

COHEN: You know, when the government first launched the Hack the Pentagon program in 2016, the notion that good-faith security researchers would, you know, merit protection under the law was very far afield from anything that anyone could have conceived. And over the last eight years, you know, there has been more and more of a recognition that good-faith security research is a backbone of the sort of cybersecurity protections that are necessary in this day and age.

MCLAUGHLIN: Now, Cohen and her team want the Justice Department to take it one step further.

COHEN: Security research does not necessarily cover AI research for bias, discrimination, et cetera, so we really do need to make sure that the trustworthiness aspect of this is similarly protected.

MCLAUGHLIN: When researchers try to prompt AI chatbots into saying things they shouldn't, hunting for biases, dangerous content or inaccurate information, all to make those platforms safer, they should be defended against any potential legal or copyright challenges, Cohen says.

COHEN: We're looking for safety issues. We're trying to focus on preventing AI systems from generating harmful content.

MCLAUGHLIN: That will be especially important during a big election year around the world.

Jenna McLaughlin, NPR News.

(SOUNDBITE OF MUSIC)

Transcript provided by NPR, Copyright NPR.