Can AI understand morality and ethics?


Artificial intelligence knows whether your smile is real or fake. It can predict your online behavior and sell that data to hungry marketers. One day, artificial intelligence could even be smarter than you – and this is just the tip of the iceberg. The AI market is set to reach $190 billion by 2025, and this technology is being applied almost everywhere. From Facebook’s News Feed to Tesla’s vehicles, from courts to hospital rooms – people rely on AI to make important decisions. And as machines start to control our lives, it has become apparent that we did not really think this through.

What if one day humans are not the smartest species on Earth? What if Skynet becomes a reality, and no Terminator can save us? What if Elon Musk’s prediction that AI is “an immortal dictator” comes true? Superintelligent machines could, for example, take control of a nuclear launch system and rain missiles on cities. Or consider private, proprietary algorithms that accidentally reflect existing social biases and deny people jobs, education, and justice. One way to address these problems is to make machines more like humans and teach them ethics.

As Rosalind Picard, the director of the Affective Computing Group at MIT, says, “The greater the freedom of a machine, the more it will need moral standards.” Although this seems like a great idea, it raises more questions than it answers. Ethics aren’t a simple line of code, but a complex system of values that even humans can’t fully agree on. So why should we teach AI this imperfect system, and what would the morality of artificial intelligence even look like?

Moral dilemmas

Experts agree that immoral AI is a far bigger threat to people than AI guided by human morality. An ethics-driven machine will act within the boundaries we set, but AI without any value system could make disastrous decisions. “If the car’s designers fail to specify a set of ethical values that could act as decision guides, the AI system may come up with a solution that causes more harm,” write Jane Zavalishina and Dr Vyacheslav Polonski about self-driving cars.

This is true for AI in other sectors as well, but the question remains: how do we convey complex values in lines of code? And even if that’s figured out, we need to decide whose values we’ll install in these machines. People are guided by multiple, often competing moral systems, and they fundamentally disagree with one another about them. Which one should guide AI, and on which issues?

Another concern is that AI could be unintentionally corrupted by algorithms that amplify racial and gender biases. Scandals like the unintentional racism of Apple’s face recognition technology and Twitter’s bots illustrate this issue. And despite all these challenges, people aren’t allowed to explore and understand the inner workings of such algorithms. The proprietary rights of private companies seem to matter more than the protection of citizens. Yet far from these complicated questions, some scientists see the issue in much simpler terms.

Asked how a driverless car will react if it has to choose between hitting two kids or an approaching motorbike, Jaguar’s Amy Rimmer says: “I don’t have to answer that question to pass a driving test… So why would we dictate that the car has to have an answer to these unlikely scenarios…?” But not all scientists share her opinion; many are instead working on ways to teach ethics to AI.

Teach AI like we teach kids

There are several schools of thought on how to teach morality to machines. One approach is advocated by Marek Rosa, founder and CEO of the Prague-based company GoodAI. His idea is to educate AI by slowly exposing it to increasingly complex problems, with the ultimate goal of creating a machine capable of behaving morally in new and unexpected situations. Another approach is to read stories to AI. This technique is used by Mark Riedl at Georgia Tech, who feeds the AI sentences that its algorithms analyse to form conclusions about social norms. But what if AI reads about villains? “I could cherry-pick stories of antiheroes or ones in which bad guys win all the time. But if the agent is forced to read all stories, it becomes very, very hard for any one individual to corrupt the AI,” says Riedl.
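To make the story-reading idea concrete, here is a deliberately simplified sketch in Python. It is not the actual GoodAI or Georgia Tech system; the mini-corpus, the action labels, and the scoring rule are all invented for illustration. It merely scores actions by how often the stories pair them with good or bad outcomes, the crudest possible stand-in for “forming conclusions about social norms”.

    # Toy sketch only, not the actual GoodAI or Georgia Tech systems.
    # Scores actions by how often an invented mini-corpus of story
    # sentences pairs them with good or bad outcomes.
    from collections import defaultdict

    # Hypothetical corpus: (action described in a story, how the story frames the outcome)
    STORIES = [
        ("wait in line", "good"),
        ("return the lost wallet", "good"),
        ("steal the medicine", "bad"),
        ("cut in line", "bad"),
        ("wait in line", "good"),
        ("steal the medicine", "bad"),
    ]

    def norm_scores(stories):
        """Score each action from -1 (always punished) to +1 (always rewarded)."""
        counts = defaultdict(lambda: [0, 0])  # action -> [good count, bad count]
        for action, outcome in stories:
            counts[action][0 if outcome == "good" else 1] += 1
        return {action: (g - b) / (g + b) for action, (g, b) in counts.items()}

    # Print actions from most to least "normative" according to the corpus.
    for action, score in sorted(norm_scores(STORIES).items(), key=lambda kv: -kv[1]):
        print(f"{score:+.2f}  {action}")

Crude as it is, the toy mirrors Riedl’s argument: a single malicious story barely moves the scores, but the balance of the whole corpus does.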

And as scientists work to find solutions and teach morality to AI, many wonder where the Silicon Valley giants stand on this topic. One of them was forced to make its position very clear.

The position of tech giants

Google had to cancel its contract with the Pentagon on cutting-edge AI tech after thousands of engineers rebelled, refusing to build AI killing machines. To calm these employees, its CEO, Sundar Pichai, wrote a blog post detailing the principles that the company will follow in AI projects. He promised that Google will only work on solutions that are socially beneficial, accountable to people, built for safety, and that uphold “high standards of scientific excellence”. But he also noted that AI is a force for good: “Farmers are using it to monitor the health of their herds. Doctors are starting to use AI to help diagnose cancer and prevent blindness.”

Mark Zuckerberg couldn’t agree more. He plans to use AI to fix Facebook’s problems, such as hate speech, terrorist content, Russian bots, and much more. Although many consider these urgent problems, Zuckerberg asked Congress for five to ten years to figure it out.

Which stories will we read?

The discussions on AI are increasingly passionate because people are afraid. Will this tech turn into Skynet or a cancer prevention tool, a danger on the road or a saviour of human lives? Nobody can tell for sure, but AI keeps developing and we will keep using it. And in light of this reality, even imperfect moral standards are better than immoral and unconstrained AI. If machines are anything like humans, they’ll have a good and a bad side. Which one will prevail? Maybe it depends on which stories we read to them.


This article was originally published in 2018 on The Internet of All Things.