In our earlier blog, we talked about why the discussion on artificial intelligence (AI) should not be based on fear, but on our readiness to manage the innovation and put it to good use. AI speeds up decision making in criminal justice, health care, job applications, lending, and more. But a fast decision is not necessarily a fair one. Human decision making is also flawed, shaped by individual and societal biases that are often unconscious. Humans are prone to error.
If AI learns to think like its creators (humans), how different can it really be? Will AI systems be less biased than humans, or will they make the problem worse?
What is the problem?
Increasingly, AI systems and machine learning are used to substitute for work done by humans. AI is a growing force in certain industries, increasing productivity and reducing the need for people to perform routine tasks. We fear that one day machines will ‘replace’ us. This is the fear of the unknown.
What is certain, and genuinely concerning, is the unfairness that can emerge from the output of a computer program: racial bias, age discrimination, and gender bias, to name a few. Designers create the prototypes, data engineers gather the data, and data scientists build and run the models. These are all real people, and some of them, unintentionally, unconsciously, or otherwise, carry stereotypes or biases. It therefore makes sense to say that AI isn’t born biased; it’s taught to be.
Bias in AI systems isn’t entirely new. Back in 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination. The computer program it used to decide which applicants would be invited for interviews was found to be biased against women and against applicants with non-European names. Yet the program had been developed to match human admissions decisions, and it did so with more than 90% accuracy. An algorithm does not cure biased human decision making, but returning to human decision makers does not solve the problem either.
Three decades later, algorithms have grown far more complex, but we still face the same problem. An AI system can help identify and reduce the impact of human biases, but it can also make the problem worse.
For instance, a criminal justice algorithm used in the state of Florida, USA, mislabeled African-American defendants as ‘high risk’ at a higher rate than white defendants. In another case, the Apple Card algorithm was found to offer women smaller lines of credit than men. Amazon, too, stopped using a hiring algorithm after finding that it favored male applicants over female ones.
Biased algorithms are easier to fix than biased people
AI systems learn to make decisions based on training data. Those data can include biased human decisions or reflect social or historical inequalities, even if sensitive variables like race and gender are removed, because other features can act as proxies for them. Another source of bias is flawed data sampling, in which a minority group, for example, is underrepresented in the training data.
A study found that facial analysis technologies had higher error rates for minority women, potentially due to unrepresentative training data. Bias hurts those who are discriminated against, and it also hurts everyone by reducing people’s ability to participate in the economy. It fuels mistrust in societies, worsening existing inequalities and exclusion.
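To make the sampling problem concrete, here is a minimal sketch (our own illustration, not taken from any study cited above) that trains a simple classifier on synthetic data in which one group is heavily underrepresented, then audits the error rate per group. All names, group labels, and numbers are hypothetical.

```python
# A minimal per-group error audit on synthetic data. In practice you
# would audit a real model on real held-out data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate synthetic features; `shift` moves the class boundary per group."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + X[:, 1] + 0.5 * shift + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Majority group: 10,000 samples; minority group: only 500 (underrepresented)
# and drawn from a slightly different distribution.
X_maj, y_maj = make_group(10_000, shift=0.0)
X_min, y_min = make_group(500, shift=1.0)

X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])
group = np.array([0] * len(y_maj) + [1] * len(y_min))  # 0 = majority, 1 = minority

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group
)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Report the error rate separately for each group: a gap here is the
# 'flawed data sampling' problem made visible.
for g, name in [(0, "majority"), (1, "minority")]:
    mask = g_te == g
    err = (pred[mask] != y_te[mask]).mean()
    print(f"{name} group error rate: {err:.3f}")
```

Because the model is fit almost entirely on the majority group, its decision boundary tracks that group, and the audit typically shows a noticeably higher error rate for the minority group, even though the model never sees the group label.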
There are no quick fixes for addressing bias in AI. One of the hardest steps is understanding and measuring ‘fairness’: how should we codify its definitions? Researchers have proposed many technical definitions of fairness, but these usually cannot all be satisfied at the same time. Still, even as fairness definitions evolve, researchers have made progress on a wide variety of techniques for meeting them, for example by pre-processing the training data or by incorporating fairness constraints into the training process itself.
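As an illustration of why such definitions can conflict, the sketch below (our own example, with hypothetical toy data) computes two common fairness metrics, demographic parity and equal opportunity, on the same set of predictions. The toy model satisfies one while clearly violating the other.

```python
# Two common fairness metrics: demographic parity (equal positive-prediction
# rates across groups) and equal opportunity (equal true positive rates
# across groups). Function names and data are our own illustration.
import numpy as np

def demographic_parity_gap(pred, group):
    """Difference in P(pred = 1) between the two groups."""
    rates = [pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equal_opportunity_gap(pred, label, group):
    """Difference in true positive rate P(pred = 1 | label = 1) between groups."""
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (label == 1)
        tprs.append(pred[mask].mean())
    return abs(tprs[0] - tprs[1])

# Toy predictions: both groups receive positive predictions at the same
# rate (demographic parity holds), yet qualified members of group 0 are
# approved only half as often as qualified members of group 1.
pred  = np.array([1, 1, 0, 0, 1, 1, 0, 0])
label = np.array([1, 0, 1, 0, 1, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("demographic parity gap:", demographic_parity_gap(pred, group))   # 0.0
print("equal opportunity gap:", equal_opportunity_gap(pred, label, group))  # 0.5
```

This is the tension in miniature: optimizing predictions to close one gap generally reopens the other, which is why choosing a fairness definition is a judgment about values, not just a technical step.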
In the medium to long term, progress is possible by ensuring that the AI systems we deploy actually improve on human decision making. One step is training engineers and data scientists to understand cognitive bias and how to counter it. It also means taking more responsibility for advancing research and standards that reduce bias in algorithms.
These improvements will help, but some challenges require more than technical solutions. For instance, how do we determine when an AI system is fair enough to be released? In which situations should AI-powered decision making be permissible at all? These questions require multidisciplinary perspectives, including from social scientists, AI ethicists, and other thinkers in the humanities.
AI is no longer just about advanced economies
We may think of AI as something for advanced economies and less so for emerging economies (developing countries). This isn’t entirely correct.
AI is still finding its footing in emerging markets, but certain applications are already widely used.
A recent MIT study showed that the countries most at risk from automation are those where many jobs involve manual or routine tasks (e.g. sorting, lifting). For example, 42% of jobs in Ghana and 35% in Sri Lanka have an estimated automation likelihood greater than 70%.
AI also has other benefits in emerging and developing economies. For example, the United Nations Office for the Coordination of Humanitarian Affairs used Artificial Intelligence for Disaster Response (AIDR) to gather information about the 2015 earthquake in Nepal, including the damage, emergency needs, and the disaster response.
In healthcare, where emerging economies are short of doctors, an AI system can help health workers make better decisions and can also train medical personnel. In education, AI systems can support teachers in delivering content more effectively. In finance, they can extend credit to people who traditionally lack access to it. And in agriculture, farmers are using AI to decide when to sow, drawing on data such as weather patterns, production, and sowing area. Adopted well, AI has transformative potential in emerging economies.
As development practitioners, we need to engage with AI and reallocate resources so that emerging economies can realize its potential. AI runs on data, and data from emerging economies is not yet on the radar of the systems’ creators. We need to focus on developing AI systems that are suited to driving systemic transformation in emerging economies.