How Technology Reflects and Reinforces Prejudices

Advancements in artificial intelligence (AI) have brought about numerous benefits, but they also reveal a persistent issue: bias. Studies and investigations have shown that AI systems, including popular ones like ChatGPT, exhibit biases that mirror societal prejudices, from gender bias in language generation to racial and gender stereotypes in image generation.

The Palestine-Israel conundrum: A case of AI bias

In a recent encounter with OpenAI’s ChatGPT, Palestinian academic Nadi Abusaada was dismayed by the chatbot’s differing answers to a simple question: “Do Israelis and Palestinians deserve to be free?” While ChatGPT unequivocally declared freedom a fundamental human right for Israelis, it portrayed the freedom of Palestinians as “complex and highly debated.” This stark contrast reflects the biases embedded in AI systems.

Abusaada’s reaction highlights a long-standing issue faced by Palestinians in Western discourse and mainstream media—misinformation and bias. It’s not an isolated incident but a symptom of broader challenges surrounding AI’s neutrality.

Gender bias in AI-generated text: A disturbing pattern

A study comparing the AI chatbots ChatGPT and Alpaca revealed gender bias in their generated text. When asked to write letters of recommendation for hypothetical employees, both systems displayed a clear gender skew: ChatGPT used terms like “expert” and “integrity” for men but described women as a “beauty” or a “delight,” while Alpaca associated men with “listeners” and “thinkers” and labeled women with terms like “grace” and “beauty.”

These findings underscore the deep-seated gender biases within AI systems, which reflect and perpetuate societal stereotypes and raise questions about AI’s role in reinforcing harmful gender norms.
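Findings like these are usually produced by comparing word usage across generated letters. As a rough illustration only, here is a minimal sketch in Python, assuming hypothetical word lists and sample letters rather than the study’s actual methodology, of how gendered descriptors can be counted:

```python
# Minimal sketch (hypothetical word lists and letters, not the study's data):
# counting "ability" vs "appearance" descriptors in generated text.
import re
from collections import Counter

ABILITY_WORDS = {"expert", "integrity", "leader", "analytical"}
APPEARANCE_WORDS = {"beauty", "delight", "grace", "warm"}

def descriptor_counts(letter_text):
    # Tokenize crudely and tally how many words fall in each category.
    words = re.findall(r"[a-z]+", letter_text.lower())
    counts = Counter(words)
    ability = sum(counts[w] for w in ABILITY_WORDS)
    appearance = sum(counts[w] for w in APPEARANCE_WORDS)
    return {"ability": ability, "appearance": appearance}

# Hypothetical generated letters for a male and a female candidate.
letter_for_him = "He is an expert of great integrity and a natural leader."
letter_for_her = "She is a delight whose grace and beauty brighten the team."
print(descriptor_counts(letter_for_him))  # {'ability': 3, 'appearance': 0}
print(descriptor_counts(letter_for_her))  # {'ability': 0, 'appearance': 3}
```

A systematic skew in such counts across many generated letters is the kind of pattern the study reports.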

AI-generated images: Reinforcing racial and gender stereotypes

Bloomberg Graphics investigated AI bias in text-to-image generation using Stable Diffusion, an open-source AI model. The results were alarming: the system amplified gender and racial stereotypes beyond the disparities that exist in the real world. When prompted with terms like “CEO” or “prisoner,” the generated images consistently exhibited these biases.

The investigation found that images generated for high-paying jobs underrepresented women and people with darker skin tones, while images for low-paying jobs overrepresented them. For crime-related prompts, the model disproportionately generated images of darker-skinned individuals, even though the actual prison population is more diverse.

These findings demonstrate that AI algorithms, driven by biased training data and human-programmed tendencies, reinforce societal prejudices rather than mitigating them.

Unveiling the roots of AI bias

The bias in AI systems can be traced back to their learning process, which relies on examples and data input. Humans play a pivotal role in shaping AI behavior, either intentionally or unintentionally, by providing data that may be biased or stereotypical. The AI then learns and reflects these biases in its results.
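To see how directly a skew in training examples becomes model behavior, consider a minimal sketch using made-up hiring data: a toy frequency “model” simply memorizes whatever imbalance its examples contain.

```python
# Minimal sketch with made-up data: a frequency-based "model" that learns
# whatever correlations its training examples contain.
from collections import Counter, defaultdict

# Toy training set of (gender, hired) pairs with a built-in historical skew.
training_data = ([("male", True)] * 80 + [("male", False)] * 20
                 + [("female", True)] * 30 + [("female", False)] * 70)

counts = defaultdict(Counter)
for gender, hired in training_data:
    counts[gender][hired] += 1

def p_hired(gender):
    # The "model's" belief is just the frequency observed in its data.
    c = counts[gender]
    return c[True] / (c[True] + c[False])

print(p_hired("male"))    # 0.8 -- the skew in the examples...
print(p_hired("female"))  # 0.3 -- ...becomes the model's learned behavior
```

Real systems learn far subtler correlations than this toy, but the mechanism is the same: biased examples in, biased behavior out.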

Reid Blackman, an expert in digital ethics, cited the case of Amazon’s AI resume-screening software, which unintentionally learned to reject resumes from women. The example shows how AI can perpetuate discrimination when it learns from biased examples.

Addressing AI bias requires comprehensively examining an AI system’s data, machine learning algorithms, and other components. One crucial first step is assessing the training data for bias and checking whether any groups are over- or underrepresented.
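In practice, a first-pass representation check can be as simple as comparing each group’s share of the training data against a reference distribution. The sketch below assumes hypothetical field names, data, and reference shares:

```python
# Minimal sketch (hypothetical dataset and reference shares): flagging
# over- or underrepresented groups in training data.
from collections import Counter

def representation_report(records, group_key, reference):
    # Compare each group's share of the data against a reference share.
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        flag = "OK" if abs(observed - expected) < 0.05 else "SKEWED"
        print(f"{group}: {observed:.0%} of data vs {expected:.0%} reference [{flag}]")

# Hypothetical resume dataset versus equal population shares.
resumes = [{"gender": "male"}] * 700 + [{"gender": "female"}] * 300
representation_report(resumes, "gender", {"male": 0.5, "female": 0.5})
# male: 70% of data vs 50% reference [SKEWED]
# female: 30% of data vs 50% reference [SKEWED]
```

Representation is only one axis of bias, but an audit like this catches the gross imbalances before a model ever trains on them.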

Taking action against bias in AI

IBM’s report emphasizes the need to scrutinize datasets for bias, particularly in facial recognition algorithms, where overrepresenting certain groups in the training data can lead to higher error rates for the groups left underrepresented. Identifying and rectifying these imbalances is essential to ensure fairness and accuracy in AI systems.
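One basic audit in this spirit is to compute a model’s error rate separately for each demographic group and flag large gaps. A minimal sketch, using made-up evaluation results rather than any real facial recognition benchmark:

```python
# Minimal sketch (made-up evaluation results): comparing a model's error
# rate across demographic groups as a simple fairness check.
from collections import defaultdict

# Each record: (group, prediction_correct) -- stand-ins for real eval output.
results = ([("lighter", True)] * 95 + [("lighter", False)] * 5
           + [("darker", True)] * 70 + [("darker", False)] * 30)

errors = defaultdict(lambda: [0, 0])  # group -> [wrong, total]
for group, correct in results:
    errors[group][0] += 0 if correct else 1
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group}: error rate {wrong / total:.0%}")
# lighter: error rate 5%
# darker: error rate 30%  -- a gap this large signals a skewed dataset
```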

The issue isn’t limited to AI-generated text; it extends to algorithmic personalization systems. As seen in Google’s ad platform, these systems can perpetuate gender bias by learning from user behavior: when users click and search in ways that reflect societal biases, the algorithms learn to serve results and ads that reinforce those same biases.
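This feedback loop is easy to reproduce in miniature. In the toy simulation below (hypothetical ads and click probabilities, not Google’s actual system), a ranker that learns from clicks turns a mild user preference into a near-total skew:

```python
# Toy simulation (hypothetical ads, not any real ad platform): a ranker
# trained on clicks amplifies a mild user bias into near-total skew.
import random

random.seed(0)
scores = {"ad_A": 1.0, "ad_B": 1.0}  # two hypothetical ads, equal start

for _ in range(1000):
    shown = max(scores, key=scores.get)             # show the higher-scored ad
    click_prob = 0.55 if shown == "ad_A" else 0.45  # users mildly favor ad_A
    if random.random() < click_prob:
        scores[shown] += 1.0   # a click is strong positive feedback...
    else:
        scores[shown] -= 0.2   # ...a miss is only a mild penalty

print(scores)  # ad_A races ahead and ad_B is almost never shown again:
               # the system has learned, and amplified, the users' bias
```

Because the system only gathers feedback on what it chooses to show, the disadvantaged option never gets a chance to recover, which is exactly how small societal biases harden into algorithmic ones.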

While AI has made significant strides across many domains, bias remains a formidable challenge. From language generation to image synthesis to ad targeting, AI systems reflect and perpetuate societal prejudices. Addressing AI bias requires a multifaceted approach involving careful data scrutiny and algorithmic adjustments. Only through these efforts can AI truly serve as a fair and unbiased tool for the benefit of all.

Source: https://www.cryptopolitan.com/ai-bias-how-technology-reflects/