Dr Evans Sagomba
Everything AI
WELCOME to this week’s article as we unpack bias in Artificial Intelligence (AI).
As the days and weeks pass, so does AI advance. AI systems are evolving at a rate never before witnessed in the history of technological advancement.
Artificial Intelligence (AI) has become integral to our everyday lives, influencing every part of human existence.
However, as AI systems become more prevalent, so do concerns about the biases within them.
As long as we use AI-driven applications, we cannot escape the bias inherent in these systems.
Bias in AI often leads to unfair treatment and discrimination. Whether you are aware of it or not, as we speak, you are being affected directly or indirectly by the biases inherent in AI-driven systems.
Am I being sensational or alarmist? Certainly not. Let us walk through this topic together step by step.
Understanding AI bias
When does bias occur in AI systems? Artificial intelligence (AI) bias occurs when an AI system produces results that are systematically prejudiced due to erroneous assumptions in the machine learning (ML) process.
These biases usually stem from the data used to train the AI, from the algorithms themselves, or from the way the AI system is implemented. Do you know that AI systems rely on the accuracy of the data they are fed? When biased data is fed into an AI system, its output is likely to be biased as well.
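To see how biased inputs propagate to biased outputs, consider a toy sketch in plain Python. The data, group names and "model" here are entirely hypothetical: a naive system that simply learns the majority outcome per group from skewed historical records will faithfully reproduce that skew in its recommendations.

```python
# Toy illustration (hypothetical data): a naive model trained on
# skewed historical hiring records reproduces the skew.
from collections import defaultdict

# Biased training data: (group, hired?) pairs.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 20 + [("B", False)] * 80

def train(records):
    """Learn each group's majority historical outcome."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, rejected]
    for group, hired in records:
        counts[group][0 if hired else 1] += 1
    return {g: c[0] >= c[1] for g, c in counts.items()}

model = train(history)

# The "model" simply mirrors the historical bias in its predictions:
print(model["A"])  # True  - group A applicants recommended
print(model["B"])  # False - group B applicants rejected
```

Real machine-learning models are far more sophisticated than this majority vote, but the underlying risk is the same: patterns of past discrimination in the training data become patterns of future discrimination in the system's decisions.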
Human bias and AI
Recent studies and analyses of AI systems have shown that human biases can be inadvertently transferred to AI systems. If the person who develops an AI system is biased, those biases can be carried into the system itself. Cognitive biases influence how people make decisions, and these biases can be embedded in AI models during their creation.
As technology advances, it is crucial to understand how human biases can be accidentally or intentionally incorporated into AI design, leading to the perpetuation of issues such as racism and discrimination.
Types of biases in AI
How do we classify biases? AI biases can be classified into two types: observable and unobservable. Observable biases are those that can be detected and measured, as in dynamic pricing applications such as Uber's services, where higher prices are recommended for certain customers.
Unobservable biases, on the other hand, are more difficult to detect and require additional research skills to identify.
Examples of AI bias
Several real-world examples highlight the issue of AI bias. Consider one of the world's largest online shopping platforms, Amazon (observable bias): its automated tool for rating job applications was found to discriminate against female applicants.
Now consider one of the most used social media platforms, Facebook (FB) (observable bias): the platform allowed advertisers to target marketing and job adverts by gender, race and religion, disadvantaging people from minority backgrounds.
Adobe (unobservable bias): the company's software reportedly blocked customers of specific demographics from purchasing it.
Nikon and HP (observable bias): data-driven bias arose in Nikon's camera software, which misread Asian faces, while HP MediaSmart computers had skin-tone problems in their face recognition.
Microsoft (observable bias): when users began feeding the Tay chatbot racist comments, the AI-driven assistant started to parrot those phrases back rather than address users' queries.
Addressing AI bias
To mitigate AI bias, several measures have been implemented by global tech companies.
Google, for example, has introduced the Testing with Concept Activation Vectors (TCAV) programme to test decision-making algorithms and reduce bias. Accenture’s ‘Teach and Test’ AI testing services help businesses minimise biases and discriminatory content. IBM’s AI Fairness 360 toolkit offers a comprehensive approach with 70 fairness metrics to reduce biases in AI systems.
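Toolkits such as IBM's AI Fairness 360 quantify bias with metrics like disparate impact: the ratio of favourable-outcome rates between an unprivileged and a privileged group. The following is a minimal hand-rolled sketch of that metric with hypothetical outcome data, not AIF360's actual API:

```python
# Hypothetical decision outcomes: 1 = favourable (e.g. loan approved).
privileged   = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% favourable
unprivileged = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # 40% favourable

def disparate_impact(unpriv, priv):
    """Ratio of favourable-outcome rates; values far below 1.0 signal bias."""
    rate = lambda outcomes: sum(outcomes) / len(outcomes)
    return rate(unpriv) / rate(priv)

di = disparate_impact(unprivileged, privileged)
print(round(di, 2))  # 0.5

# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
print(di < 0.8)  # True - this hypothetical system would warrant review
```

Disparate impact is only one of the many fairness metrics such toolkits provide; in practice, several metrics are examined together, because no single number captures every form of unfairness.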
The role of training and education
Education and training are crucial in addressing the issue of bias in AI systems. It is essential to educate policymakers (parliamentarians, senators), marketers, consumers and the Zimbabwean population at large about AI bias.
Fundamentally, all of us should understand the potential pitfalls of AI and the importance of fair, unbiased data in reducing the impact of bias. Together we can work towards more equitable AI systems.
The clarion call
This week, I make a clarion call to the Second Republic — Government of Zimbabwe (GoZ) through the Ministry of Information Communication Technology, Postal and Courier Services (MICTPCS) to craft the Zimbabwe Artificial Intelligence Governance and Regulatory framework (ZAIGRF) and appoint a Zimbabwe Artificial Intelligence (AI) Regulator Authority (ZAIRA).
Join us every week as we delve together into the world of Artificial Intelligence (AI). If there are specific areas you would like addressed, please contact the editors or email the author directly, and the issue will be covered in the following week's column.
About the Author:
Dr Evans Sagomba (Chartered Marketer/CMktr, FCIM, MPhil, PhD) is an AI, Ethics and Policy Researcher; AI Governance and Policy Consultant; Ethics of War and Peace Research Consultant; and Political Philosophy scholar. Email: [email protected]. Social media handles: LinkedIn: @ Dr. Evans Sagomba (MSc Marketing)(FCIM)(MPhil)(PhD); X: @esagomba