Insights from a Former Google Employee Point to Lagging Ethical AI Implementation

Toju Duke, previously the program manager for responsible artificial intelligence (AI) at Google, has raised concerns about the delayed implementation of ethical AI practices within companies. Despite rapid advancements in AI technology, Duke argues that many businesses are trailing behind the research community when it comes to understanding and applying responsible AI practices.

In her role at Google, Duke was responsible for overseeing the responsible development of AI across the company’s product and research teams. The emergence of AI technology has raised alarm among experts who warn of its potential to cause harm or even pose existential risks to humanity.

Calls for ethical AI regulation

Duke is among a group of experts in the field of ethical AI who have been advocating for stricter regulations for years. In 2023, both the United States and the European Union took steps toward stronger governance of AI technology. The EU, in particular, passed a draft law known as the AI Act in June 2023, aiming to control the use of potentially harmful AI tools, including facial recognition software.

Prominent tech giants, including Amazon, Google, Meta (formerly Facebook), and Microsoft, have signed a non-binding agreement presented by the White House, outlining principles for the development and release of AI systems.

The need for speed in addressing ethical AI

While acknowledging these regulatory developments, Duke contends that companies engaged in AI development need to accelerate their efforts to align with responsible AI practices. She emphasizes the urgency of understanding how to develop “responsible” AI in line with the research community.

In her book, ‘Building Responsible AI Algorithms: A Framework for Transparency, Fairness, Safety, Privacy, and Robustness,’ Duke identifies a critical issue in the use of open-source data to train AI models. This data often contains harmful messaging, leading to biases that permeate AI applications.

Duke particularly points out the challenges posed by large language models (LLMs), which frequently exhibit bias, impacting various AI applications.

Real-world consequences of AI bias

Duke underscores the real-world consequences of AI bias, citing instances where it distorted how computer vision systems interpreted images. Notably, both Google and Microsoft had to pull back facial recognition software after it misidentified individuals with darker skin tones.

To address this bias-related issue, Google introduced a skin tone scale as an open-source tool to recognize a wide range of skin colors. Google stopped selling its facial recognition software through its cloud API but continues to use it in products like Google Pixel, Nest, and Google Photos.

Similarly, in 2022, Microsoft removed an AI-based cloud software feature designed to interpret characteristics like mood, gender, and age from its Azure Face API AI service due to inaccuracies and racial bias. However, this AI tool remained available for the “Seeing AI” app, which assists visually impaired customers.

Navigating AI’s growing influence

Duke asserts that AI continues to grow in popularity despite producing erroneous and harmful results across various modalities, including language, image, and speech processing. She argues that the core issue lies not with AI itself but with how it is managed and implemented.

In her book, Duke clarifies that AI is not inherently malevolent and emphasizes that it offers many benefits, progressively becoming an integral part of existing technologies. She highlights the importance of understanding AI and ensuring its safe deployment, use, and adoption.

Taking “baby steps” towards ethical AI

Duke suggests that businesses should take gradual steps to develop a more ethical version of AI accessible to the public. She advocates for the integration of values required by regulations into AI development.

One such value is fairness, which Duke believes can be built into AI models and applications by leveraging synthetic data for training. She also recommends transparent practices, such as publicly uploading datasets, to comply with regulatory laws.

Additionally, Duke advises companies to establish benchmarks aligned with the potential uses of their technology, allowing AI models and applications to undergo systematic bias testing.
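As one illustration of what systematic bias testing can look like in practice, a simple fairness benchmark might compare a model’s positive-prediction rates across demographic groups, flagging the model when the gap exceeds a tolerance. This is a minimal sketch under assumed inputs; the function name, toy data, and threshold are illustrative and not taken from Duke’s framework:

```python
# Minimal sketch of a demographic-parity check for bias testing.
# The 0.1 tolerance and the toy data below are illustrative assumptions.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate between groups."""
    stats = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        total, positives = stats.get(group, (0, 0))
        stats[group] = (total + 1, positives + (1 if pred == 1 else 0))
    rates = [pos / total for total, pos in stats.values()]
    return max(rates) - min(rates)

# Toy example: a model that "approves" 80% of group A but only 40% of group B.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # prints 0.40 (0.80 vs 0.40)
if gap > 0.1:  # illustrative tolerance; real benchmarks set domain-specific limits
    print("Model fails the fairness benchmark")
```

Metrics like this are a starting point; a production benchmark would test many slices of the data and several fairness definitions at once.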

Toju Duke’s insights shed light on the need for a more rapid transition to ethical AI practices within the industry. Her call for businesses to align with research community standards and regulatory values highlights the ongoing challenges in addressing AI bias and ensuring responsible AI development and deployment.

Source: https://www.cryptopolitan.com/insights-from-a-former-google-employee-point-lagging-ethical-ai-implementation/