Businesses turning to innovative digital solutions are realising the growing benefits that algorithms, artificial intelligence, and machine learning offer, such as increased business efficiencies, deeper insights, and a greater ability to adjust to changing environments. However, these benefits will be difficult to achieve unless businesses first address the low levels of confidence and the heightening concerns that the public has concerning the ethics and transparency of AI. For this very reason, the New Zealand Government launched the Algorithm Charter for Aotearoa New Zealand.
The Charter seeks to increase public confidence in the use of algorithms across the Government agencies that have signed up to it.
While designed for the public sector, it is widely expected that private businesses will also voluntarily sign up to the Charter.
Before diving in, it is worth considering how the Charter defines algorithms (or rather, how it does not). Algorithms have varying degrees of complexity - from less advanced techniques such as regression models and decision trees to more complex methods such as neural networks. The Charter does not focus on complexity but rather on algorithms with a “high risk of unintended consequences and/or have a significant impact if things do go wrong, particularly for vulnerable communities”.
By joining the Charter, signatories must adhere to the following set of broad principles:
- Transparency about how decisions are informed by algorithms.
- Embedding a Te Ao Māori perspective in the development and use of algorithms.
- Making sure data is fit for purpose, including by identifying and managing bias.
- Ensuring privacy, ethics, and human rights are safeguarded.
- Retaining human oversight.
The Institute for Ethical Artificial Intelligence & Machine Learning, a research centre based in the UK, developed eight principles for the responsible development of AI, which Jade has been using in our own AI development. As you would expect, there is a clear overlap between the two sets of guidelines. Drawing on both sets of guidelines, Jade views algorithms (AI or other) as a tool for assisting people with specific tasks or jobs, rather than as a human replacement. Where algorithms mainly come into their own is by carrying out mundane, repetitive tasks, letting people focus on higher-value activities.
This article will provide commentary on three of the Charter’s principles, as well as explore how we have applied two of the principles in some of the work we have done.
Retaining human oversight
We are beginning with retaining human oversight because one of Jade’s values is putting people first – from those who work at Jade to our clients, our clients’ customers, and the people in our local communities. We took this stance because it informs everything we do: the technology we build, the way we build it, the way we work with our clients, and so forth. It also provides a good foundation for a discussion on algorithms.
Retaining human oversight is very similar to the UK’s first principle of human augmentation (I commit to assessing the impact of incorrect predictions and, when reasonable, design systems with human-in-the-loop review processes). The necessary oversight is achieved by:
- Nominating a point of contact for public inquiries about algorithms
- Providing a channel for challenging or appealing decisions informed by algorithms
- Clearly explaining the role of humans in decisions informed by algorithms.
An example of Jade algorithms retaining human oversight
With Jade ThirdEye, our automated anti-money laundering software, we use rule-based algorithms to detect and raise alerts for suspicious transactions. Human oversight is currently carried out by compliance officers using a manual prioritisation scoring system. This means human intervention is required for each suspicious transaction alert, no matter how suspicious it is. Compliance officers then have information at their disposal to decide how rigorous the process should be in treating each alert, whether to conduct further investigations, how much time to spend investigating, or to close the alert as unsuspicious.
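The interplay described above can be sketched in code. The following is a minimal, illustrative example of rule-based alert scoring with a human-in-the-loop review queue; the rules, thresholds, and field names are assumptions for the sake of the sketch, not Jade ThirdEye's actual logic.

```python
from dataclasses import dataclass

# Illustrative threshold only -- real AML thresholds vary by jurisdiction.
LARGE_CASH_THRESHOLD = 10_000

# Placeholder jurisdiction codes, purely for illustration.
HIGH_RISK_COUNTRIES = {"XX", "YY"}

@dataclass
class Transaction:
    amount: float
    is_cash: bool
    country: str
    customer_risk: str  # "low", "medium", or "high"

def score_transaction(tx: Transaction) -> int:
    """Return a priority score; higher means review sooner.

    Note the human-oversight property: the score only orders the
    compliance officer's review queue. It never closes or auto-approves
    an alert -- every alert is still seen by a person.
    """
    score = 0
    if tx.is_cash and tx.amount >= LARGE_CASH_THRESHOLD:
        score += 50  # large cash movement
    if tx.country in HIGH_RISK_COUNTRIES:
        score += 30  # high-risk jurisdiction
    if tx.customer_risk == "high":
        score += 20  # customer already flagged as high risk
    return score

alerts = [
    Transaction(12_000, True, "XX", "high"),
    Transaction(500, False, "NZ", "low"),
]

# Sort the review queue so the riskiest alerts surface first;
# a compliance officer then works through the queue top-down.
queue = sorted(alerts, key=score_transaction, reverse=True)
```

A machine learning model could later replace `score_transaction` while the queue-and-review structure, and therefore the human oversight, stays exactly the same.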
We are currently researching the potential of adding machine learning processes to Jade ThirdEye, to further augment prioritisation and alert processing. This enhancement will make compliance officers more efficient when reviewing suspicious transactions, helping them focus on higher-risk transactions. The algorithm will save time and manual effort but, more importantly, will still require supervision. The Charter's directive aligns well with what we are hearing from users and industry experts in our research forums.
Transparency about how algorithms arrive at their decisions
Another critical aspect of the Algorithm Charter for Aotearoa New Zealand is the need to clearly explain how decisions are informed by algorithms. This may include:
- Plain English documentation of the algorithm.
- Making information about the data and processes available (unless a lawful restriction prevents this).
- Publishing information about how data is collected, secured, and stored.
Providing this level of transparency could be seen as exposing intellectual property, so it is easy to see why businesses might hesitate to invest in algorithms. But delaying the use of algorithms in a business may be just as costly.
It is widely accepted that since algorithms typically improve with use, you can’t make up for lost time.
This raises the dilemma of how long a business can hold out on investing in AI before their competitors establish an unassailable advantage.
Safeguarding privacy, ethics, and human rights
Another principle highlighted in the New Zealand Charter raises concerns around privacy, ethics, and human rights. There is a paradox of sorts here: information on how an algorithm reaches a decision needs to be freely accessible (transparent) while individuals’ right to privacy is fully maintained. Thankfully, there are ways to mitigate privacy concerns, including anonymising data sets. Furthermore, the Charter explicitly mandates that algorithms undergo regular peer reviews to assess any unintended consequences, which can then be immediately remedied.
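One common anonymisation technique hinted at above is pseudonymisation: replacing direct identifiers with salted hashes so records can still be linked within a data set without revealing who they belong to. The sketch below is a minimal illustration under that assumption; the field names are invented for the example.

```python
import hashlib
import secrets

# A per-dataset secret salt, kept separate from any published data so the
# original identifiers cannot be recovered by simply hashing guesses.
SALT = secrets.token_bytes(16)

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a salted SHA-256 hash.

    The same input always maps to the same output within this data set,
    so records stay linkable for analysis, but the hash on its own does
    not identify the person.
    """
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

record = {"name": "Jane Doe", "account": "012-345-678", "amount": 1200}

anonymised = {
    "customer_id": pseudonymise(record["account"]),  # linkable, not identifying
    "amount": record["amount"],
    # Direct identifiers such as "name" are dropped entirely.
}
```

Pseudonymisation is only one layer: on its own it does not guarantee anonymity if other columns can re-identify someone, which is why the Charter's peer-review requirement is a useful complement.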
While an increase in transparency will undoubtedly lead to some people challenging the decisions of algorithms, the purpose of having human oversight is that employees can use their experience, empathy, and intuition to judge the most appropriate course of action.
An example of Jade algorithms safeguarding privacy
In a recent computer vision project that we embarked on with an organisation, we were tasked with detecting and tracking the movements of people in and around certain areas. The topic of facial recognition came up during the project, which caused some interesting debate around its use, project scope and additional cost, and privacy concerns.
Because it is almost unfeasible for people to opt in to facial recognition technology in public spaces, adherence to the best practices outlined in the GDPR would be nigh impossible.
This meant the algorithm would lead to both intentional and inadvertent breaches of privacy rights. We chose to put people first and not include facial recognition. As the Charter stipulates, whenever ethics are concerned and businesses are worried their algorithms may breach compliance standards, it is better to take the high ground.
A world first, but just the beginning
Yes, there are plenty of guidelines like these around the world, but this is a country-specific, Government-produced Charter. In fact, New Zealand is the first country to come up with such an initiative, which is fantastic!
One day soon, people will ask "Why do we have a person making the decision? Why can't we just use an algorithm instead?".
The Charter will bring us closer to this future by ensuring algorithmers (created a new word right there!) demonstrate to everyone else that they have more than just algorithms in mind when they're working. As a result, algorithmers who care for the public good – the privacy and ethics of using data and algorithms, as well as potential bias in the data and its effects – will be in hot demand. Lucky us!