When AI is built into common, high-stakes use cases such as chatbot interactions, hiring, or credit decisioning, it is key for enterprises to combat the bias inherent in algorithms, which can skew insights and lead to unintended outcomes for both organisations and individuals.
For instance, an algorithm in a leading facial recognition system identified male faces correctly 99% of the time, but produced far higher error rates when identifying dark-skinned female faces. Similarly, machine learning-based software deployed in a criminal justice programme to assess the probability of a defendant re-offending was found to be strongly biased against minorities, while also mis-predicting the likelihood of reoffending for defendants from the majority group.
Beyond gender and racial stereotypes, the way data is collected can itself introduce bias. For instance, a mortgage lending model that reduces lending to elderly citizens on the grounds that they have a higher likelihood of defaulting could amount to illegal age discrimination.
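One concrete way to surface such bias is to compare a model's error rates across demographic groups. The sketch below is a minimal illustration in Python: the DataFrame, its labels, and the `group` column are hypothetical placeholders, not data from any of the systems mentioned above.

```python
import pandas as pd

# Hypothetical data: true labels, model predictions, and a protected attribute.
df = pd.DataFrame({
    "y_true": [1, 0, 1, 0, 1, 0, 1, 0],
    "y_pred": [1, 0, 0, 1, 1, 0, 1, 1],
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Compare false positive and false negative rates per group; large gaps
# between groups are a first signal of disparate treatment.
for group, g in df.groupby("group"):
    negatives = g[g.y_true == 0]
    positives = g[g.y_true == 1]
    fpr = (negatives.y_pred == 1).mean() if len(negatives) else float("nan")
    fnr = (positives.y_pred == 0).mean() if len(positives) else float("nan")
    print(f"group {group}: FPR={fpr:.2f}  FNR={fnr:.2f}")
```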
Everyone values transparency and systems they can understand. The technical logic of algorithms, however, is rarely simple, and how or why a given recommendation was made is often unclear.
Those responsible for developing and releasing algorithmic systems should be accountable for transparently explaining high-risk decisions that affect people's well-being.
Black-box Machine Learning (ML) models are increasingly deployed to make critical predictions in high-stakes contexts, and the demand for transparency and trust from the users and sponsors of AI is growing accordingly. The danger lies in making and acting on decisions that are not explainable, or that simply do not allow detailed clarification of how they were reached. Justifications for a model's output are essential in, for example, precision medicine, where specialists need far more from the model than a binary prediction to support their diagnosis. Other examples arise in autonomous vehicles, finance, and security, among others.
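In practice, such justifications are often produced with post-hoc explanation techniques. Below is a minimal sketch using scikit-learn's permutation importance to show which inputs an otherwise opaque model relies on; the dataset and model are placeholders chosen purely for illustration, not part of any system described here.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model on a standard dataset (placeholder for a real use case).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda item: item[1], reverse=True)[:5]:
    print(f"{name}: {score:.3f}")
```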
Rather than restricting the effectiveness of present-generation AI systems, Explainable AI (XAI) proposes a suite of ML techniques that:
- Develop more explainable models while upholding a high level of learning performance (e.g., prediction accuracy), and
- Enable humans to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
Present-day AI applications draw on a host of Machine Learning algorithms, some highly complex and others much less so. In practice, AI/ML practitioners often adopt an existing algorithm simply because they have been guided or advised to do so, while end-users remain unaware of the facts behind, and reasons for, the algorithm selection.
There is an urgent need for an established protocol through which an end-user can understand, in simple terms, why a particular algorithm was selected.
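Such a protocol can start as simply as recording a like-for-like comparison between a transparent baseline and the more complex candidate, so the selection can be explained in plain terms. A minimal sketch, assuming a scikit-learn style workflow and a placeholder dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Candidate models: a transparent baseline and a more complex alternative.
candidates = {
    "logistic regression (transparent baseline)":
        make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random forest (more complex)":
        RandomForestClassifier(n_estimators=200, random_state=0),
}

# Record cross-validated accuracy so the choice can be justified to end-users:
# if the complex model adds little accuracy, the simpler one is easier to explain.
for name, model in candidates.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```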
How could companies build in Algorithmic Explainability?
Because algorithms run in the background and often function as ‘black boxes’, it is difficult to decipher how and why they reached a given conclusion, which in turn makes it hard to ascertain their fairness. This, along with the unpredictability and constant evolution of algorithms, makes them a challenge to govern.
Algorithmic Explainability is the ability to comprehend, interpret, and explain the logic that an AI system has developed over the course of its application.
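One common way to make that logic inspectable is a global surrogate: a simple, readable model trained to mimic the black-box model's predictions. The sketch below uses the same placeholder dataset and models as above; it is an illustration of the idea, not a prescribed implementation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Opaque model whose behaviour we want to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Surrogate: a shallow decision tree fitted to the black-box predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully does the surrogate mimic the black box, and what rules does it use?
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(X.columns)))
```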
In semi-automated and fully automated environments, algorithms operate at speed. This exposes the enterprise to a broad range of risks and implications, from reputational and technology risks to operational and strategic ones.
Therefore, it is imperative that enterprises build Algorithmic Explainability into their traditional risk management framework. This will enable them to build efficiency, effectiveness and fairness into the delivery of their products and services, while enhancing the overall value proposition.
To assess a selected algorithm’s explainability, one can ask whether the following factors have been considered:
- Whether stakeholders can understand and trust the recommendations made by the proposed AI/ML model
- Whether a model that is easy to understand, for example a decision tree or logistic regression, would suffice
- Modelling counterfactual instances, so that a practitioner or user can see what change to the input would be needed to obtain a different result (see the sketch after this list)
- Whether to opt for Ensemble Learning, the practice of combining several existing Machine Learning algorithms, as a basis for further experimentation
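As a concrete illustration of the counterfactual point above, the sketch below performs a simple greedy search for the smallest single-feature change that flips a model's prediction. The dataset, model, and chosen instance are hypothetical placeholders, and dedicated counterfactual libraries exist; this only shows the underlying idea under those assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# Pick one instance, then search each feature for the smallest move
# (measured in standard deviations) that flips the model's prediction.
x = X.iloc[[0]].copy()
original = int(model.predict(x)[0])
best = None  # (feature, change in std units, new feature value)

for feature in X.columns:
    std = X[feature].std()
    for step in sorted(np.linspace(-3, 3, 25), key=abs):
        candidate = x.copy()
        candidate[feature] = x[feature].iloc[0] + step * std
        if int(model.predict(candidate)[0]) != original:
            if best is None or abs(step) < abs(best[1]):
                best = (feature, step, candidate[feature].iloc[0])
            break  # smallest |step| for this feature found; try the next feature

if best is None:
    print("no single-feature counterfactual found within +/- 3 standard deviations")
else:
    feature, step, value = best
    print(f"prediction flips if '{feature}' moves {step:+.2f} std to {value:.2f}")
```

An explanation of this form ("the decision would change if this input moved by that much") is often easier for end-users to act on than a list of feature importances.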