Ant Tyler | Fri, Nov 30, 2018 | 10 min read

Jade Roundtable Series: How can humans control AI?

Following on from our last article, we continue to explore the discussion that business and tech leaders had on this thorny question at our recent Roundtable series in Australia and New Zealand.

Subjective topics like ethics and morality are bound to stir up conflicting opinions, and it’s no different when it comes to the role that tech - and specifically AI - will play in our future. Although AI itself isn’t a new field, how to engage with and control it is becoming an increasingly hot topic.

More people are now asking whether the time is right to consider introducing a national strategy around AI in Australia and New Zealand, and possibly even legislation.

Roundtable participants agreed that when it comes to designing software, systems and AI, business leaders and their teams have a responsibility to make sure human safeguards are built in.

Physicist and cosmologist Stephen Hawking wrote in his book Brief Answers to the Big Questions, published after his death earlier this year: “The advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice but competence.”

He makes a case for government, the technology industry and the general public to seriously consider the ethical repercussions of AI. “When we invented fire, we messed up repeatedly then invented a fire extinguisher. With more powerful technology such as nuclear weapons, synthetic biology and strong artificial intelligence, we should instead plan ahead and aim to get things right the first time, because it may be the only chance we will get,” he wrote.

Brad Smith, President and Chief Legal Officer of Microsoft, raises the question of ethical AI in the book The Future Computed: Artificial Intelligence and its role in society, where he outlines six principles to guide the development of AI:

  • Fairness
  • Reliability and safety
  • Privacy and security
  • Inclusiveness
  • Transparency
  • Accountability

In it, he said that consensus alone on these six principles is not enough. “If we have a consensus on ethics and that is the only thing we do, what we are going to find is that only ethical people will design AI systems ethically. That will not be good enough … We need to take these principles and put them into law.”

“Only by creating a future where AI law is as important a decade from now as, say, privacy law is today, will we ensure that we live in a world where people can have confidence that computers are making decisions in ethical ways.”

AI proponent and TEDx speaker Arjun Pratap says, “As AI algorithms are created and trained by humans, there is a very high possibility of human bias being built into these algorithms. AI systems are superior to humans in speed and capability which, when used in malicious ways, can cause damage that is much higher in magnitude.”

So, how do we ensure that an AI-driven platform minimises the risk of poor ethical outcomes and gains the trust of its users? The Jade roundtable participants suggested some ways:

AI must leave humans in control

There was consensus over the value AI brings in doing the heavy lifting for humans. However, it is essential that a person remains in the driving seat at all times. In a call centre scenario, for example, AI can apply sentiment analysis to determine whether a caller is stressed or distressed and, if so, immediately hand over to a person for individual attention.
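To make that hand-off concrete, here is a minimal sketch in Python of how a call-centre workflow might gate on a sentiment score. The score_sentiment stub, the threshold and the routing labels are illustrative assumptions, not a reference to any particular product:

```python
# Minimal sketch of a human hand-off rule for a call-centre assistant.
# score_sentiment stands in for a real sentiment-analysis model; the
# threshold and routing labels are illustrative assumptions.

DISTRESS_THRESHOLD = -0.5  # scores range from -1.0 (distressed) to 1.0 (calm)

def score_sentiment(transcript: str) -> float:
    """Placeholder: a real system would call a trained sentiment model here."""
    negative_cues = ("frustrated", "angry", "upset", "complaint")
    hits = sum(cue in transcript.lower() for cue in negative_cues)
    return max(-1.0, -0.4 * hits)  # crude keyword heuristic for the sketch

def route_call(transcript: str) -> str:
    """Let the AI do the heavy lifting, but hand distressed callers to a person."""
    if score_sentiment(transcript) <= DISTRESS_THRESHOLD:
        return "human_agent"   # a person takes over immediately
    return "ai_assistant"      # the AI keeps handling the call

print(route_call("I'd like to check my account balance"))            # ai_assistant
print(route_call("I'm frustrated and angry about this complaint"))   # human_agent
```

The design point is that the AI never owns the escalation decision outright: the threshold is set by people and any call that crosses it goes straight to a person.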

Tony Stewart, chief product, platform and data officer at Xero, wrote in CIO magazine: “In order to be effective, the AI has to be at least as accurate as manual entry … and give the chance for the person to correct anything.”

We need transparency and explainability

The ‘black box’ nature of AI was also strongly debated at the roundtables. Several of the examples the tech and business leaders raised related to AI-driven auto-decisioning in finance. One illustration: if a loan is declined based on an AI-driven decision, how does the business respond when the client comes back and challenges that outcome?

If the AI algorithm has used machine learning to determine the best outcome, then it is essential that we can ‘reverse engineer’ the decision and justify to the client why it was made. And, if an error was made, humans need to be able to see why and correct it for the future.
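One way to keep such a decision explainable is to favour models whose outputs can be decomposed feature by feature. The sketch below, in Python, assumes a simple linear scoring model; the feature names, weights and approval cut-off are all hypothetical, not drawn from any real lender:

```python
# Minimal sketch of an explainable loan decision, assuming a simple
# linear scoring model. Feature names, weights and the approval cut-off
# are hypothetical; a real lender would use its own trained model.

WEIGHTS = {                        # points contributed per unit of each feature
    "credit_score":        0.05,   # per point of credit score
    "annual_income_k":     0.20,   # per $1,000 of annual income
    "debt_to_income_pct": -0.60,   # per percentage point of debt-to-income
}
APPROVAL_CUTOFF = 30.0

def score_and_explain(applicant: dict) -> tuple:
    """Return the decision plus each feature's contribution to the score."""
    contributions = [(name, WEIGHTS[name] * applicant[name]) for name in WEIGHTS]
    total = sum(points for _, points in contributions)
    # Sort so the biggest drag on the score is listed first for the client.
    contributions.sort(key=lambda item: item[1])
    return total >= APPROVAL_CUTOFF, contributions

applicant = {"credit_score": 620, "annual_income_k": 55, "debt_to_income_pct": 45}
approved, contributions = score_and_explain(applicant)
print("approved" if approved else "declined")        # declined (score 15.0 < 30.0)
for name, points in contributions:
    print(f"  {name}: {points:+.1f} points")         # debt_to_income_pct listed first
```

Because every contribution is visible, the business can tell the client exactly which factor drove the decline, and a human reviewer can spot and correct a weight or input that is plainly wrong.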

Some of the big multinational tech companies are taking this question very seriously. Google, for instance, has pledged it won’t allow its AI technology to be used for weapons or combat.

Google CEO Sundar Pichai writes: “AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognise that distinguishing fair from unfair biases is not always simple and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief”.

Google is not alone in this endeavour. To make sure they front-foot the issue, many leading tech firms are making similar strides to eliminate bias and ensure ethical AI.

How can you get started?

Where does your company sit on this topic? Do you need some expert advice on how to introduce AI to your business? Let us help with the tech so you and your team are free to make sure the boxes on ethics and other key considerations are ticked. Get in touch – we’d be happy to help.
