AI and Ethics: Overcoming the Risks

Panelists

Bart Selman
Professor, Cornell University

Profile

Computer scientist who applies AI to social issues.
Thinks AI could be a long-term risk if not managed carefully.

Selmer Bringsjord
Director, Rensselaer AI & Reasoning Lab

Profile

Specialist in philosophical foundations of AI.
Works to engineer ethics into robots and AI systems.

Francois Chollet
AI researcher, Google

Profile

Member of Google Brain, an AI research & development team.
Author of Keras, a popular open-source AI software framework.

Ryota Kanai
Entrepreneur
CEO, ARAYA

Profile

Japanese neuroscientist. His firm combines information theory and neuroscience to create next-generation AI.

Moderator

Tomoko Kimura
Science View presenter, NHK World

Original Broadcast Date: November 18, 2017 (UTC)

Artificial intelligence -- How will it affect our future?
The world is beginning to understand the full potential of artificial intelligence. The technology is getting smarter every day and quickly making inroads into society, touching our lives in ever more significant ways. But some people are concerned that AI’s effects could be negative.
Prominent scientists and IT entrepreneurs including Bill Gates, Stephen Hawking, and Elon Musk are among those sounding the alarm. What are the risks? And how should we address them?
In search of answers to these and other questions, GLOBAL AGENDA went to California, home to many leading AI developers.
Our panel of experts discussed the risks associated with AI and the role ethics might play in shaping the future of society.

Is AI a threat?

The panelists were divided over whether AI could present an existential threat to humans.
Bart Selman, who has been working in the field for decades, said AI could become much smarter than humans in the long term. He stressed the need to explore that possibility and consider the potential consequences. Selmer Bringsjord believes that AI is an existential threat. He said that if a machine or an artificial agent is powerful, autonomous and intelligent, it could be dangerous or even destroy us.
Francois Chollet criticized the idea of "deadly AI" that’s promoted in media and science fiction.
He said AI is just a tool and therefore not an existential threat.
But he agreed that AI poses various risks as the technology is introduced into the real world. Ryota Kanai said that AI is not a real threat in either the long or short term.
He believes that people need to understand the nature of the problems it presents. For example, some people worry that AI will take away jobs, but that is a social problem, not one posed by AI.

The role of ethics in AI

Recently, "AI and Ethics" has been a hot topic of discussion around the world. And our GLOBAL AGENDA experts energetically debated the role of ethics in AI development and implementation. Selman said humans make small ethical decisions hundreds of times a day. He believes that people working on AI and autonomous systems must understand issues related to ethical decisions, and design human moral and ethical principles into AI systems. Kanai said that would be difficult. He pointed out that ethical decision making varies among individuals depending on their values. Despite that fact, humans have to explicitly tell AI what the priorities are. Chollet said that engineers should program AI with values, rather than coding ethical rules. Selmer explained that ethics is no longer about the ethics of the engineers: it’s becoming more important to think about what kinds of ethics to build into machines. The panelists used the example of autonomous cars to discuss the issue further.

Building AI without bias

Bias exists in every human mind, so it also appears in many types of data that AI relies on. Chollet expressed concern over "machine learning," a technology that supports the rapid development of AI. He said many developers and companies take "shortcuts" to control costs, at the expense of system transparency and traceability. The result is "black box" systems that make built-in bias harder to recognize and fight.
Bringsjord said the point Chollet raised is a real problem, and that one solution is to avoid using methods like machine learning altogether.
Kanai said the realistic approach is to target and deal with only those biases that are problematic and socially unacceptable. Selman said he is also seeing positive change: awareness of the issue among software designers is much higher than it was five or ten years ago.

Manipulating the masses with AI

Another risk that attracted our panelists’ attention was AI's ability to exert mass control over the public. Chollet expressed concern that big companies and governments can accumulate personal data, and AI algorithms will allow them to use it to manipulate the masses by controlling what we are exposed to. Selman agreed that was a risk. The 2016 US presidential election was offered as an example. Chollet said that Russian intelligence leveraged Facebook’s AI targeting algorithm to attack the US political system by building very detailed psychological profiles of individuals and engaging in personalized targeting.
The panelists then launched into a lively discussion of how to tackle such AI-related issues, and who should take the lead.

Facing a future with AI

The world continues to be confronted with many challenges, including poverty, inequality, the refugee crisis, and environmental degradation. Could AI help resolve these issues?
Selman is involved in a project using AI to tackle environmental problems. He said there is a movement to use AI technology to promote the common good.
Bringsjord was more pessimistic. He said that AI is creating "a canyon" between technologized and non-technologized economies. He said it could drive down wages and have a serious impact on regions where the workforce is less developed and less educated.
Kanai expressed concern that AI will make big companies and organizations even stronger.
Chollet said that his company, Google, is involved in democratizing AI. He believes that if the technology becomes as accessible as possible, people will be able to solve their own problems without relying on Silicon Valley.
