Dr Evans Sagomba
Everything AI
ARTIFICIAL Intelligence (AI) is heralded as a transformative force across various industries, yet it is not devoid of ethical dilemmas.
From biased algorithms to privacy infringements, the swift evolution of AI technology raises significant questions about fairness and accountability.
These technologies often mirror the prejudices of their creators, leading to unintended and sometimes harmful consequences.
AI systems frequently exhibit bias and discrimination, primarily because they learn from historical data imbued with societal biases.
For instance, hiring algorithms might favour male candidates over female ones because of existing gender imbalances in the training data (male-female discrimination). Similarly, facial recognition systems trained predominantly on lighter-skinned images may perform poorly on darker skin tones (racial discrimination between Black and White people).
These biased outcomes perpetuate existing inequalities, making it difficult for marginalised groups, especially people of colour, to receive fair treatment in crucial areas such as employment.
Addressing these biases necessitates rigorous auditing of AI models and the inclusion of diverse data sets in training processes (inclusivity in the data fed to AI systems).
Without these corrective measures, AI systems can continue to unfairly disadvantage certain groups, particularly Black people, perpetuating systemic discrimination.
Another major ethical concern in AI is the lack of transparency and explainability.
AI models, especially deep learning ones, often function as "black boxes", making decisions without clear, understandable reasoning. This opacity prevents users from comprehending how decisions are made, leading to mistrust and potential misuse.
For example, if a lending AI system used by a bank denies a loan application, the applicant should be informed of the reasons behind this decision.
However, black-box models often provide no such explanations, leaving applicants without recourse.
Enhancing AI systems' transparency involves developing methods to interpret and explain AI decisions clearly. Tools like model-agnostic explainers and transparency frameworks can help make AI systems more accountable and user-friendly.
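As a concrete illustration, here is a minimal sketch in Python of how one model-agnostic explainer, the open-source SHAP library, can attribute a single loan decision to individual input features. The model and the three-feature data set are hypothetical placeholders invented for this column, not any real bank's system.

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: [annual income, debt ratio, years employed]
X = np.array([
    [50000, 0.2, 5],
    [20000, 0.8, 1],
    [80000, 0.1, 10],
    [30000, 0.6, 2],
])
y = np.array([1, 0, 1, 0])  # 1 = loan approved, 0 = loan denied

model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer reports how much each feature pushed this applicant's
# prediction towards approval or denial.
explainer = shap.TreeExplainer(model)
applicant = np.array([[25000, 0.7, 1]])
print(explainer.shap_values(applicant))

An explanation like this, translated into plain language ("your debt ratio lowered the score the most"), is precisely the recourse the denied loan applicant described above currently lacks.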
Legal and privacy concerns with AI
Moreover, AI technologies have significantly enhanced surveillance capabilities, raising substantial legal and privacy concerns.
Algorithms can analyse vast amounts of data from various sources, such as public cameras and social media, to identify individuals and track their activities, raising serious questions about data privacy. (Do we even know whether all our social media activity is being monitored by AI surveillance systems?) Unauthorised data collection and the lack of user consent further exacerbate these issues.
Regulators worldwide struggle to keep pace with rapid AI advancements, leaving gaps in legal protections for privacy.
In addition, the issue of accountability becomes problematic in AI applications due to their opaque decision-making processes.
When AI systems make errors (eg wrongful arrests and biased hiring decisions), it is often unclear who should be held responsible.
This lack of clarity can result in a lack of recourse for individuals adversely affected.
Ensuring accountability means implementing robust auditing mechanisms, clear guidelines on governance, and involving human oversight. Legislation and industry standards must evolve to specify liability in AI decision-making to safeguard public interest effectively.
The impact of AI on employment and society
AI has transformative potential, but its implications raise significant ethical questions, particularly for employment and society.
Job displacement fears
AI's integration into workplaces often leads to job displacement, as machines and algorithms take over tasks previously done by humans. Examples include customer service chatbots, automated manufacturing lines, and sophisticated data analysis tools.
A widely cited Oxford University study estimates that nearly 47 percent of jobs in the United States could be automated within the next two decades, highlighting this concern.
Workers in repetitive, routine jobs face higher risks of being replaced by AI technologies. As businesses seek efficiency and cost-saving measures, the shift towards automation accelerates.
Deepening social inequality
AI exacerbates social inequality through unequal access and application. Higher-income individuals and businesses can afford advanced AI technologies, gaining competitive advantages.
For instance, large corporations utilising AI can streamline operations, reduce costs, and enhance customer experiences, further widening the gap between themselves and smaller enterprises.
Disparities also manifest in education and healthcare, where wealthy institutions adopt AI for advanced diagnostics and personalised learning, leaving less affluent entities behind.
These dynamics contribute to a growing divide, intensifying existing socio-economic disparities (rich-poor divide).
Possible solutions to enhance AI ethics
(What the Ministry of Information Communication Technology, Postal and Courier Services (ICTPCS) should take note of)
Addressing the ethical concerns surrounding AI is crucial for ensuring its beneficial and fair use. I suggest several measures to mitigate risks and enhance AI ethics.
First, governance frameworks can guide AI development towards ethical standards (the Ministry of ICTPCS must urgently draft and implement a robust AI Governance Framework for Zimbabwe).
Policies must outline clear responsibilities for AI creators and users, covering data management, decision processes, and accountability.
By establishing AI guidelines, stakeholders can standardise practices and ensure compliance.
In addition, organisations should adopt ethical guidelines similar to Google's AI principles, which prohibit harmful applications like surveillance without consent.
Ensuring audits by third-party entities helps maintain fairness and transparency in AI applications. (In next week's issue I will talk about the guidelines for Zimbabwe's AI Governance policy.)
Secondly, implementing practices that promote fairness and transparency is essential for trustworthy AI. Developers should prioritise diverse data sets to minimise biases. For example, IBM's AI Fairness 360 tool helps detect and mitigate biases in AI models, as the sketch below shows.
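Here is a minimal sketch in Python of a bias audit using IBM's open-source AI Fairness 360 (aif360) toolkit mentioned above. The tiny hiring data set is hypothetical, invented purely for illustration; a real audit would run on an organisation's actual records.

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical hiring records: sex (1 = male, 0 = female), hired (1 = yes, 0 = no)
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Wrap the data so the toolkit knows which column is the outcome and
# which attribute is protected.
dataset = BinaryLabelDataset(
    df=df, label_names=["hired"], protected_attribute_names=["sex"]
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: the women's hiring rate divided by the men's.
# Here it is 0.25 / 0.75 = 0.33; values well below 1.0 flag the kind of
# gender bias discussed earlier in this column.
print("Disparate impact:", metric.disparate_impact())

A ratio of 0.8 or lower is a common rule-of-thumb threshold for investigating possible discrimination, and the toolkit also ships mitigation algorithms (such as reweighing) to rebalance data before training.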
Transparency also improves when AI decision-making processes are explainable.
Integrating explainable AI techniques ensures that users understand how decisions are made, fostering trust. Open-source platforms, like TensorFlow, encourage collaboration and verification, contributing to more reliable and fair AI development.
The big question for this week: Do we (Zimbabweans) understand AI and its ethical implications? Is Zimbabwe ready to adopt AI?
Join us every week as we delve together into the world of Artificial Intelligence (AI). If you have specific areas you would like addressed, please contact the editors or email the author directly ([email protected]), and the issue will be addressed in the following week's column.
Dr Evans Sagomba (Chartered Marketer/CMktr, FCIM, MPhil, PhD) is an AI, ethics and policy researcher; AI governance and policy consultant; ethics of war and peace research consultant; and political philosophy researcher. Email: [email protected]. Social media: LinkedIn: Dr Evans Sagomba (MSc Marketing) (FCIM) (MPhil) (PhD); X: @esagomba