Are The Robots Coming For Us? Misconceptions About AI And Machine Learning

Hanson Robotics' flagship robot Sophia, a lifelike robot powered by artificial intelligence, speaks with visitors at the Mobile World Congress wireless show in Barcelona, Spain, on Tuesday, Feb. 26, 2019. (Emilio Morenatti/AP)

Machine learning is everywhere, but is it actual intelligence? A computer scientist wrestles with the ethical questions raised by the rise of AI.

Guest

Melanie Mitchell, professor of computer science at Portland State University. Author of “Artificial Intelligence: A Guide for Thinking Humans.” (@MelMitchell1)

From The Reading List

Excerpt from “Artificial Intelligence: A Guide for Thinking Humans” by Melanie Mitchell

Excerpted from ARTIFICIAL INTELLIGENCE: A Guide for Thinking Humans by Melanie Mitchell. Published by Farrar, Straus and Giroux on October 15, 2019. Copyright © 2019 by Melanie Mitchell. All rights reserved.


Wall Street Journal: “‘Human Compatible’ and ‘Artificial Intelligence’ Review: Learn Like a Machine” — “Journalists like to punctuate stories about the risks of artificial intelligence—particularly long-term, humanity-threatening risks—with images of the Terminator. The idea is that unchecked robots will rise up and kill us all. But such martial bodings overlook a perhaps more threatening model: Aladdin. Superhuman AI needn’t have it in for us to wreak havoc. Even with our best interests at metallic heart, an AI might misunderstand our intentions and, say, grant our wish of no more cancer by leaving no more people. Here the risk isn’t evil slaughterbots but an overly literal genie.

“At least that’s the general concern raised in ‘Human Compatible’ by Stuart Russell, a computer scientist at the University of California, Berkeley, who argues that what would happen if we achieved superhuman AI is ‘possibly the most important question facing humanity.’ To those who deem the question premature, Mr. Russell counters, ‘If we were to detect a large asteroid on course to collide with the Earth in 2069, would we say it’s too soon to worry?’

“Mr. Russell’s first few chapters outline the past, present and near future of AI. Broadly, the field has moved from hand-coded rules and symbols to software that collects data and finds patterns—so-called machine learning. Current systems can recognize images and spoken words after training on labeled examples without being given detailed instructions. An area of particular interest to Mr. Russell is reinforcement learning, in which software ‘agents’ are set loose in the world (or a virtual world, such as a videogame) and learn by being rewarded for desirable behavior.”
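
The reinforcement-learning setup the review describes is easy to make concrete. Below is a minimal, purely illustrative sketch of tabular Q-learning, one classic reinforcement-learning algorithm; the five-state corridor, the reward of +1 at the goal, and all parameter values are invented for this example and are not taken from Russell's book. The agent is "rewarded for desirable behavior" (reaching the right end of the corridor) and gradually learns a policy that walks toward it.

```python
import random

random.seed(0)

# Toy corridor: states 0..4. The agent starts at state 0 and receives a
# reward of +1 only on reaching state 4; every other step pays 0.
N_STATES = 5
ACTIONS = (-1, +1)          # step left, step right
GOAL = N_STATES - 1

def step(state, action):
    """Apply an action, clip to the corridor, return (next_state, reward, done)."""
    nxt = max(0, min(GOAL, state + action))
    done = (nxt == GOAL)
    return nxt, (1.0 if done else 0.0), done

# Q-table: estimated future reward for each (state, action) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def greedy(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[state])
    return random.choice([i for i, q in enumerate(Q[state]) if q == best])

for _ in range(200):                     # 200 training episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit what is known, occasionally explore.
        a = random.randrange(2) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, ACTIONS[a])
        # Q-learning update: nudge the estimate toward
        # observed reward + discounted value of the best next action.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

# Once values converge, the greedy action in every non-goal state is
# "right" (index 1), i.e. the learned policy walks toward the reward.
print([greedy(s) for s in range(GOAL)])   # expected: [1, 1, 1, 1]
```

Nothing in this loop was told where the goal is; the policy emerges entirely from the reward signal, which is the point of the "agents set loose in the world" framing above.
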

The Christian Science Monitor: “Fears about robot overlords are (perhaps) premature” — “In ‘Artificial Intelligence: A Guide for Thinking Humans,’ Melanie Mitchell, a computer science professor at Portland State University, tells the story, one of many, of a graduate student who had seemingly trained a computer network to classify photographs according to whether they did or did not contain an animal. When the student looked more closely, however, he realized that the network was not recognizing animals but was instead putting images with blurry backgrounds in the ‘contains an animal’ category. Why? The nature photos that the network had been trained on typically featured both an animal in focus in the foreground and a blurred background. The machine had discovered a correlation between animal photos and blurry backgrounds.

“Mitchell notes that these types of misjudgments are not unusual in the field of AI. ‘The machine learns what it observes in the data rather than what you (the human) might observe,’ she explains. ‘If there are statistical associations in the training data, even if irrelevant to the task at hand, the machine will happily learn those instead of what you wanted it to learn.’

“Mitchell’s lucid, clear-eyed account of the state of AI – spanning its history, current status, and future prospects – returns again and again to the idea that computers simply aren’t like you and me. She opens the book by recounting a 2014 meeting on AI that she attended at Google’s world headquarters in Mountain View, California. She was accompanying her mentor, Douglas Hofstadter, a pioneer in the field who spoke passionately that day about his profound fear that Google’s great ambitions, from self-driving cars to speech recognition to computer-generated art, would turn human beings into ‘relics.’ The author’s own, more measured view is that AI is not yet poised to be successful precisely because machines lack certain human qualities. Her belief is that without a good deal of decidedly human common sense, much of which is subconscious and intuitive, machines will fail to achieve human levels of performance.”
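
Mitchell's point about irrelevant statistical associations can be reproduced in miniature. The sketch below is hypothetical throughout (the blur_score feature, the 90 percent blur rate for animal photos, and the 0.5 threshold are all invented for illustration): a "classifier" that keys only on background blur looks about 90 percent accurate on data that shares the training set's correlation, then fails almost completely on animal photos with sharp backgrounds, just like the network in the anecdote.

```python
import random

random.seed(0)

def make_photo(p_blur_if_animal=0.9):
    """A synthetic 'photo': a label plus a single feature standing in for
    how blurry the background is. All of these numbers are invented."""
    is_animal = random.random() < 0.5
    p_blur = p_blur_if_animal if is_animal else 1.0 - p_blur_if_animal
    blurry = random.random() < p_blur
    blur_score = random.gauss(0.8 if blurry else 0.2, 0.1)
    return blur_score, is_animal

# Training set where 'animal' and 'blurry background' are strongly
# correlated, mirroring the anecdote Mitchell recounts.
train = [make_photo() for _ in range(1000)]

# A one-feature shortcut "classifier": call it an animal if the background is blurry.
predict = lambda blur_score: blur_score > 0.5

train_acc = sum(predict(b) == y for b, y in train) / len(train)
print(f"accuracy on correlated data: {train_acc:.0%}")        # roughly 90%

# The shortcut collapses on animal photos whose backgrounds are sharp.
animals_sharp = [(b, y) for b, y in
                 (make_photo(p_blur_if_animal=0.0) for _ in range(2000)) if y]
sharp_acc = sum(predict(b) == y for b, y in animals_sharp) / len(animals_sharp)
print(f"accuracy on sharp-background animals: {sharp_acc:.0%}")  # near 0%
```

The shortcut rule genuinely minimizes training error here, which is why, as Mitchell says, the machine "will happily learn" it instead of anything resembling the concept of an animal.
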

This article was originally published on WBUR.org.

Copyright 2021 NPR. To see more, visit https://www.npr.org.
