Ant Tyler · Fri, Nov 16, 2018

Jade Roundtable Series: Does AI have ethics?

There’s a growing question around the ethics of artificial intelligence, and it’s prompting many to ask whether it’s time to introduce a national AI strategy in Australia and New Zealand – and possibly even legislation.

Canada leads the world here: its federal government recently tasked the Canadian Institute for Advanced Research (CIFAR) with spearheading a C$125-million AI strategy. Another 16 countries have since followed with their own strategies to promote the use and development of AI.

Closer to home, the not-for-profit AI Forum in New Zealand brings together the country’s largest community of AI technology innovators, users, regulators, researchers, educators and investors to advance its AI ecosystem. And, while Australia doesn’t yet have an artificial intelligence strategy, the government earmarked nearly AU$30 million in its latest budget to support the responsible development of AI in the country.

So, why the growing hullabaloo about AI ethics? Tech visionary Elon Musk is openly suspicious of AI and has loudly flagged his concerns several times. As reported in our first roundtable article, he is quoted as saying: “Artificial intelligence doesn’t have to be evil to destroy humanity. If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it.”

This view is contradicted by his tech peer, Facebook founder Mark Zuckerberg, who says Musk’s doomsday predictions are “pretty irresponsible”. So, who’s right and what should we do about it?

How far should we go to ensure ethical AI?

Speaking at a recent series of roundtable discussions on the future of AI, business and tech leaders from Australia and New Zealand debated this question vigorously. They largely agreed on the need for a national strategy to ensure ethical AI in their countries; where they differed was on how far such measures should go.

There was agreement that, like most technology, AI offers great benefits – huge amounts of processing and decision-making power, for a start. It never gets bored or tired, and it doesn’t call in sick after a big weekend, so it can do in minutes work that would take a human many hours.

If AI serves human interests in these kinds of scenarios, its use is accepted as positive. If AI helps you bypass a long wait with a call centre, or queues up your music playlist, then it’s fine. However, when it does creepier things – identifying faces in a photo you upload to social media, or prompting you to pay bills that arrived in your email – you start questioning how far its influence should go.

A common ethical thread throughout the Jade roundtables was the question of bias. One comment was: “AI is biased based on the profile and demographic of who’s doing its programming.” Another was: “The human fingerprint is all over AI’s moral compass.”

Does the driverless car kill the old lady or the baby?

A much-discussed example is the classic ‘trolley problem’. It goes like this: you see a runaway trolley speeding down the tracks, about to hit and kill five people. You have access to a lever that could switch the trolley to a different track, where it would kill one person instead. Should you pull the lever and end one life to spare five?

Researchers at America’s MIT took this classic scenario and applied it to self-driving cars, in an experiment called the Moral Machine. It put to the public nine scenarios known to polarise people: should a driverless car prioritise humans over pets, passengers over pedestrians, more lives over fewer, women over men, young over old, fit over sickly, higher social status over lower, law-abiders over law-benders? And finally, should the car swerve or stay on course?

While the details of this research weren’t discussed in depth at the roundtables, it prompted questions about the role of the person programming the AI behind the self-driving car.
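To make that point concrete, here’s a deliberately toy sketch in Python. The weights, labels and logic are all invented for illustration – no real self-driving system works this way – but it shows how a developer’s hand-picked priorities could end up deciding exactly these dilemmas:

```python
# A hypothetical illustration only – invented weights and labels,
# not how any real self-driving system is built.

# A developer picks these numbers. This is where the human
# fingerprint lands on the machine's "moral compass".
PRIORITY_WEIGHTS = {
    "human": 10.0,
    "pet": 1.0,
    "young": 2.0,
    "old": 1.0,
}


def outcome_cost(group):
    """Total 'cost' of harming everyone in a group, per the chosen weights."""
    return sum(PRIORITY_WEIGHTS.get(tag, 1.0) for tag in group)


def choose_course(stay, swerve):
    """Pick whichever action the weights score as less costly."""
    return "stay" if outcome_cost(stay) <= outcome_cost(swerve) else "swerve"


# One Moral Machine-style dilemma: an elderly pedestrian straight ahead,
# a child on the alternate path.
print(choose_course(stay=["human", "old"], swerve=["human", "young"]))
# -> "stay": the elderly pedestrian is hit, purely because someone
#    decided to weight "young" above "old".
```

Change one number in PRIORITY_WEIGHTS and the ‘ethical’ outcome flips – which is exactly why the roundtable discussion kept circling back to who chooses those numbers.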

Where does your company sit on this hot topic? Do you need some advice on how to introduce AI to your business and tick the boxes on ethics and other key considerations? Get in touch – we’d be happy to help.

Watch out for our follow-up article on this topic: ‘How can humans remain in control of AI?’

