Imagine for a few seconds a just and equitable world where all people, regardless of gender, age, ethnic or socio-economic background, can meet their basic human needs. Are systems powered by artificial intelligence (AI) capable of achieving the world you just imagined? Or will the bias that drives real-world outcomes eventually overtake the virtual world, too?
When we talk about the rise of AI, it's important to mention Tay, an infamous Twitter chatbot launched by Microsoft in 2016. Tay was designed to learn by reading tweets and interacting with other users on Twitter. Tay's bio read: "The more you talk, the smarter Tay gets!" It took only a few hours before Tay started tweeting offensive, sexist, and racist posts. Microsoft disconnected Tay within 24 hours of its launch.
Tay could be dismissed as a mistake of AI programming. But there are plenty of other examples like Tay that show bias, often in more subtle ways, and these other forms of AI can have far more serious consequences for our lives than conversations with a chatbot on a social media platform. AI systems are learning to carry out tasks we could hardly imagine two decades ago, and they will increasingly do so. Some already make important decisions that affect everyone's lives in education, justice, policymaking, and healthcare, to name a few areas, which makes it crucial to understand the discrimination that can be programmed into these systems.
We aren't only talking about AI in advanced economies. According to a study published in Nature, AI could help achieve 79% of the targets across the Sustainable Development Goals (SDGs), becoming a transformative intervention in developing economies.
Though AI is still finding its footing in emerging markets, certain applications are already widely used. In healthcare, where many developing economies are short of doctors, an AI system can assist healthcare workers in making better decisions and can help train medical personnel. In education, an AI system can be designed to support teachers in delivering content better. In finance, an AI system can support people who traditionally lack access to credit. And in agriculture, farmers are using AI to inform their decisions on when to sow, drawing on data like weather patterns, production, and sowing areas.
AI is contributing to efficiency and systemic transformation, but it also risks becoming a source of unemployment and undermining fundamental rights and freedoms. Our privacy is at risk from increased surveillance, and algorithmic bias embeds human prejudice, perpetuating discriminatory systems. We must ensure that AI does not nullify decades of struggle for human rights, equality, and dignity, and renew our focus on how to make this technology more inclusive.
Biases in AI
The lack of fairness that emerges from the output of an AI system is concerning, whether it takes the form of racial bias, age discrimination, or gender bias. Designers create prototypes, data engineers gather data, and data scientists build and run the models. All of them are human, and some of them, unintentionally, unconsciously, or otherwise, carry stereotypes or biases. It therefore makes sense to say that AI isn't born biased; it's taught to be so, just as Tay was tricked by social media users into posting offensive tweets.
Bias in AI systems isn't entirely new. Back in 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination. The computer program it used to decide which applicants would be invited for interview was found to be biased against women and against applicants with non-European names. Yet the program had been developed to match human admissions decisions, and it did so with more than 90% accuracy. Algorithms don't cure biased human decision making, but returning to human decision-makers doesn't solve the problem either.
Biases are incorporated, unintentionally or otherwise, into the algorithms and code that power machine learning and AI systems. Well-known examples span facial recognition, criminal justice, credit scoring, and hiring. A criminal justice algorithm used in the US state of Florida mislabeled African-American defendants as "high risk" far more often than white defendants. The Apple Card algorithm was found to offer women smaller lines of credit than men. And Amazon stopped using a hiring algorithm after finding that it favored male applicants over female ones.
More than three decades after that medical school case, algorithms have grown far more complex, but we continue to face the same problem. An AI system can help identify and reduce the impact of human biases. However, it can also make the problem worse.
The technological revolution and the rise of AI will go much further. AI is already being used to diagnose diseases and predict heart attacks, with an error rate of only 0.5% compared to 3.5% for human doctors. Better diagnostics could help save millions of lives around the world, contributing to SDG 3: ensuring healthy lives and promoting wellbeing for all. However, researchers have found that an algorithm used to identify which patients need additional medical care undervalued the medical needs of people of color.
Another reason why AI discriminates is quite obvious: the lack of diversity in the sector. The designers and engineers behind AI systems are predominantly white men. AI systems only learn from the images and information they are given; in other words, they learn in the image of those who conceive them. In Europe, only 11.2% of leadership positions in STEM fields are held by women. In North America, it's 18.1%.
The lack of diversity in the sector influences the design and naming of AI systems. Most humanoid machines are white-skinned and highly gendered, with gendered voices, appearances, and names. One can usually guess the role of a machine from its gender: male robots tend to serve in the military, while female ones serve in healthcare or function as personal assistants.
But where does AI gender bias originate? It comes from data. Humans generate, collect, and label the data that goes into datasets. Humans determine what datasets the algorithms learn from to make predictions.
Data are snapshots of the real world. The large gender data gaps we see are partly due to the gender digital divide. Some 300 million fewer women than men have access to the internet on a mobile phone, and in low- and middle-income countries, women are 20% less likely than men to own a smartphone. These mobile phones, including smartphones, generate data about their users. The fact that fewer women have access to them inherently skews the resulting datasets.
Making AI more inclusive
More diversity in the technology sector and in STEM careers would help make AI systems less biased, more inclusive, and less discriminatory. Women, people of color, and other underrepresented groups should be encouraged to pursue careers in STEM fields. Additionally, computer science departments within educational institutions should complement their curricula with courses on human rights.
A repeat of the Tay chatbot could be avoided if greater attention were paid to making AI systems more inclusive and if the shortfalls of algorithmic patterns were properly addressed. But the responsibility lies with human engineers and designers. AI systems learn to make decisions from training data, and those data can encode biased human decisions or reflect social and historical inequalities, even when sensitive variables like race and gender are removed. Another source of bias is flawed data sampling, in which a minority group, for example, is underrepresented in the training data.
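To make that point concrete, here is a minimal, hypothetical sketch in Python; the data, feature names, and numbers are invented for illustration and are not drawn from any of the cases above. It shows how, even after a sensitive attribute is dropped, a correlated proxy feature (say, a postcode that tracks ethnicity) can let a model trained on historically biased labels reproduce the same disparity.

```python
# Hypothetical illustration: dropping a sensitive attribute does not remove bias
# when a correlated proxy feature and historically biased labels remain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic sensitive attribute (0 or 1) and a proxy that correlates with it.
group = rng.integers(0, 2, size=n)
proxy = group + rng.normal(0, 0.5, size=n)       # proxy leaks group membership
skill = rng.normal(0, 1, size=n)                 # legitimate predictor

# Historically biased labels: group 1 was approved less often at equal skill.
label = (skill - 0.8 * group + rng.normal(0, 0.5, size=n) > 0).astype(int)

# Train WITHOUT the sensitive attribute, using only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# Predicted approval rates per group still differ: the bias survived removal.
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

The sketch is deliberately simple, but it mirrors the mechanism the paragraph describes: the model never sees the sensitive variable, yet the proxy carries enough of it to reproduce the historical pattern.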
But changing algorithms is not enough. There should be more transparency in the design process of AI systems to guarantee respect for human and digital rights. Reviews and assessments should be conducted by human rights experts who are also versed in computer science, and there should be a clear process through which individuals can bring cases of AI discrimination to court and obtain redress.
There are no quick fixes for addressing bias in AI. No risk assessment is sophisticated enough to undo hundreds of years of systemic discrimination. The problem is not AI, but the systems in which we all live. These systems depend on data that further discrimination. More structural solutions are needed.
One of the most complex steps in improving AI is understanding and measuring fairness. How should we codify definitions of fairness? Researchers have developed many technical definitions, but different notions of fairness are often incompatible and cannot all be satisfied at the same time. Even as fairness definitions evolve, researchers have made progress on techniques that help programs meet them, either by processing data beforehand or by incorporating fairness constraints into the training process itself.
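As a rough illustration of why such definitions can pull in different directions, the toy Python sketch below uses invented data, and the helper functions demographic_parity_gap and equal_opportunity_gap are hypothetical, not taken from any standard library. In this example the two groups receive positive decisions at equal rates, so demographic parity holds, yet qualified members of one group are recognized less often, so equal opportunity is violated.

```python
# Two common fairness metrics computed on the same toy decisions,
# illustrating that satisfying one does not imply satisfying the other.
import numpy as np

y_true = np.array([1, 1, 1, 0, 1, 0, 0, 0])   # actual outcomes
y_pred = np.array([1, 1, 0, 0, 1, 1, 0, 0])   # model decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # sensitive attribute

def demographic_parity_gap(y_pred, group):
    """Difference in positive-decision rates between the two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return abs(tprs[0] - tprs[1])

print("demographic parity gap:", demographic_parity_gap(y_pred, group))   # 0.0
print("equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))  # ~0.33
```

Which of these gaps matters more depends on the context of the decision, which is exactly why codifying fairness remains an open problem rather than a purely technical one.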
In the mid to long term, there are ways to improve the AI systems we use to replicate human decision making. One is training engineers and data scientists to understand cognitive bias and how to combat it. Another is taking more responsibility for advancing research and standards that will reduce bias in algorithms.
These improvements will help, but other challenges require more than technical solutions. For instance, how do we determine when an AI system is fair enough to be released? In which situations should AI decision making be permissible? These questions require multi-disciplinary perspectives, including from social scientists, AI ethicists, and other thinkers in the humanities.
Development practitioners need to understand AI well enough to channel resources toward helping developing economies harness its potential. We need to focus on developing AI systems suited to driving systemic transformation and shaping a future in which AI contributes positively to the achievement of all the SDGs. Stakeholders across sectors and at all levels should be involved in this dialogue to ensure that no one is left behind.
This article appeared in the Fall 2021 issue of Helvetas Mosaic.