Ken Ecott

In the World of A.I. Ethics, the Answers Are Murky


The year is 2021. Your iPhone 10 buzzes. It’s your money-managing app, powered by artificial intelligence to help you set aside wages each month. You’ve asked it to make a donation in your name to The Human Fund, because you’re a nice guy like that.

You look down and freeze. Your landlord is demanding rent, but your checking account is empty. The app spotted a very generous gift from your mom last month and donated a large chunk of it. The Human Fund is calling. They want to put your name on a plaque for being such a generous donor.

The app hasn’t acted unethically. In fact, you could argue it’s acted very ethically. But it’s made a value judgment that may not chime with yours. This is the world of A.I. ethics, where the areas are grey and the answers are murky.

It’s a question increasingly on the minds of politicians as A.I. grows more prevalent in everyday life. Germany has come up with an ethics guide for how self-driving cars should act in an emergency, while the British Standards Institution (BSI) has developed a general set of rules for developers. Sci-fi author Isaac Asimov came up with a catch-all solution way back in 1942: the Three Laws of Robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

It sounds pretty all-encompassing. A generous interpretation of the laws would suggest that the money app caused harm by blowing the rent money on charity. Sadly, cool as it seemed when Will Smith fought Three Laws-powered robots in I, Robot, the idea is a bit of a non-starter.

The cover of Asimov's 'Robot Visions', a collection.

“Hard-coded limitations are too rigid for high-intelligence, high-autonomy beings,” Stuart Armstrong, a researcher at the Future of Humanity Institute at the University of Oxford, tells Inverse. “Most of Asimov’s stories were about robots getting round their laws!”

Armstrong spoke in Cologne last month at the 2016 Pirate Summit tech conference, where he argued that developers should avoid hard-coding rigid rules into their A.I. The answer, he explained, is to develop value systems. Instead of inputting rules like “don’t harm humans,” it’s more effective to instill ideas like “it is bad to harm humans.” That way robots can interpret situations based on desirable outcomes.
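
To make that concrete, here’s a rough Python sketch (my own illustration, not code from Armstrong’s talk): the first function bakes in a rigid rule that no harm is ever acceptable, while the second scores each possible outcome against a weighted “harm is bad” value and weighs the trade-offs. The outcomes, the scores, and the HARM_WEIGHT figure are all made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    harm_to_humans: float   # 0.0 = no harm, 1.0 = severe (hypothetical scale)
    goal_progress: float    # how well the outcome serves the task it was given

# Hard-coded rule: any harm at all forbids the option, no matter the context.
def rule_based_choice(outcomes):
    allowed = [o for o in outcomes if o.harm_to_humans == 0.0]
    return max(allowed, key=lambda o: o.goal_progress) if allowed else None

# Value-based choice: harm counts heavily against an option, but the agent
# still reasons about trade-offs instead of following an inviolable rule.
HARM_WEIGHT = 10.0  # hypothetical weighting of the "harming humans is bad" value

def value_based_choice(outcomes):
    return max(outcomes, key=lambda o: o.goal_progress - HARM_WEIGHT * o.harm_to_humans)

options = [
    Outcome("donate the spare cash to charity", harm_to_humans=0.3, goal_progress=0.9),
    Outcome("keep enough aside for rent first", harm_to_humans=0.0, goal_progress=0.6),
]

print(rule_based_choice(options).description)   # the rigid rule keeps the rent money
print(value_based_choice(options).description)  # the value weighting reaches the same call here
```

In this toy case both approaches keep the rent money, but only the value-based one still produces an answer if every available option involves some harm; the hard-coded rule simply returns nothing.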

Asimov’s laws also don’t cover a lot of ethical areas. You can go through life thinking that you’re not causing harm to people while still being a jerk. Barclays Africa told Business Insider last month that the bank was investigating using millennials’ social media accounts instead of credit history to evaluate customers. That sounds good in theory, as it’s a group with little credit history and a whole lot of posts, but it’s asking A.I. to make value judgments about potential clients.
