3 Reasons Your Organization Will Need External Algorithm Assessors

By Satta Sarmah-Hightower

Business leaders are squeezing all the value they can out of artificial intelligence (AI). A 2021 KPMG study finds that a majority of business leaders across government, industrial manufacturing, financial services, retail, life sciences, and healthcare say AI is at least moderately functional in their organizations. The study also finds that half of respondents say their organization sped up its adoption of AI in response to the Covid-19 pandemic. And at organizations where AI has been adopted, at least half say the technology has exceeded expectations.

AI algorithms are increasingly responsible for a variety of today’s interactions and innovations—from personalized product recommendations and customer service experiences to banks’ lending decisions and even police response.

But for all the benefits they offer, AI algorithms come with big risks if they aren’t effectively monitored and evaluated for resilience, fairness, explainability and integrity. The same study finds that a growing number of business leaders want the government to regulate AI, which would allow organizations to invest in the right technology and business processes. For the necessary support and oversight, it’s wise to consider external assessments offered by a service provider with experience in this kind of work. Here are three reasons why.

1. Algorithms Are “Black Boxes”

AI algorithms—which learn from data to solve problems and optimize tasks—make systems smarter, enabling them to gather and generate insights much faster than humans ever could.

However, some stakeholders consider these algorithms to be “black boxes,” explains Drew Rosen, an audit managing director at KPMG, a leading professional services firm. Specifically, certain stakeholders may not understand how the algorithm came to a certain decision and therefore may not be confident in that decision’s fairness or accuracy.

“The results gleaned from the algorithm can be prone to bias and misinterpretation of results,” Rosen says. “That can also lead to some risks to the entity as they leverage those results and share [them] with the public and their stakeholders.”
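What might shedding light on a black box look like in practice? One widely used, model-agnostic technique is permutation feature importance, which measures how much a model’s accuracy drops when each input is shuffled. The sketch below is a minimal illustration assuming scikit-learn and a hypothetical lending classifier trained on synthetic data; the article does not prescribe any particular technique or tooling.

```python
# Minimal sketch: probing a black-box model with permutation importance.
# The features, data and model here are hypothetical and synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["credit_score", "income", "debt_to_income"]

# Synthetic applicants and synthetic approve/deny labels.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the accuracy drop; a large
# drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: mean importance {score:.3f}")
```

Output like this gives stakeholders at least a coarse answer to “what is the model paying attention to?”, even when the model’s internals remain opaque.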

An algorithm that uses faulty data, for example, is ineffective at best—and harmful at worst. What might that look like in practice? Consider an AI-based chatbot that provides the wrong account information to users or an automated language translation tool that inaccurately translates text. Both cases could result in serious errors or misinterpretations for government entities or companies, as well as the constituents and customers who rely on decisions made by those algorithms.

Another contributor to the black-box problem is inherent bias seeping into the development of AI models, which can lead to biased decision making. Credit lenders, for example, increasingly use AI to predict the creditworthiness of potential borrowers in order to make lending decisions. However, a risk arises when a key input into the AI, such as a potential borrower’s credit score, contains a material error, causing that borrower to be wrongly denied a loan.
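To make the data-quality risk concrete, the sketch below shows a basic input-validation check of the kind an assessor might expect to see in front of a lending model. The field names and valid ranges are hypothetical assumptions, not details from the article (FICO credit scores, for instance, range from 300 to 850).

```python
# Minimal sketch of input validation for a lending model. Field names
# and valid ranges are hypothetical assumptions, not from the article.
from dataclasses import dataclass

@dataclass
class LoanApplication:
    credit_score: int
    annual_income: float

def validate(app: LoanApplication) -> list[str]:
    """Return a list of data-quality problems; empty means the record is clean."""
    problems = []
    if not 300 <= app.credit_score <= 850:  # FICO scores span 300-850
        problems.append(f"credit_score {app.credit_score} outside 300-850")
    if app.annual_income < 0:
        problems.append(f"annual_income {app.annual_income} is negative")
    return problems

# A score of 85 (perhaps a dropped digit from 850) is flagged for review
# instead of silently driving a loan denial.
print(validate(LoanApplication(credit_score=85, annual_income=52000.0)))
```

Checks like this don’t remove bias on their own, but they catch the material input errors that can quietly distort an algorithm’s decisions.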

This highlights the need for an external assessor who can serve as an impartial evaluator and provide a focused assessment, based on accepted criteria, of the relevance and reliability of the historical data and assumptions that power an algorithm.

2. Stakeholders And Regulators Demand Transparency

As of 2022, there were no reporting requirements for responsible AI. However, Rosen says, “just like how governing bodies introduced ESG [environmental, social and governance] regulation to report on certain ESG metrics, it’s only a matter of time that we see additional regulation reporting requirements for responsible AI.”

In fact, effective January 1, 2023, New York City’s Local Law 144 requires that a bias audit be conducted on an automated employment decision tool before it is used.
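Bias audits of this kind typically center on selection rates and impact ratios: how often the tool advances candidates in each demographic category, relative to the category with the highest rate. The sketch below shows that calculation with hypothetical groups and counts; it is an illustration of the general method, not the law’s exact procedure.

```python
# Minimal sketch of a bias-audit impact-ratio calculation.
# Group names and counts are hypothetical.
selected = {"group_a": 120, "group_b": 45}   # candidates the tool advanced
assessed = {"group_a": 400, "group_b": 250}  # candidates the tool scored

rates = {g: selected[g] / assessed[g] for g in assessed}
best = max(rates.values())

for group, rate in rates.items():
    # Impact ratio: a group's selection rate relative to the
    # highest-rate group; values well below 1.0 flag potential bias.
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
```

Under the widely cited four-fifths rule, an impact ratio below 0.8 is commonly treated as a signal worth investigating.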

And at the federal level, the National Artificial Intelligence Initiative Act of 2020, which builds upon a 2019 executive order, focuses on AI technical standards and guidance. Additionally, the proposed Algorithmic Accountability Act could require impact assessments of automated decision systems and augmented critical decision processes. Overseas, the European Union’s proposed Artificial Intelligence Act offers a comprehensive regulatory framework with specific objectives on AI safety, compliance, governance and trustworthiness.

With these shifts, organizations are under a governance microscope. An algorithm assessor can provide reports that address regulatory requirements and enhance stakeholder transparency, while reducing the risk that stakeholders misinterpret or are misled by the assessment’s results.

3. Companies Benefit From Long-Term Risk Management

Steve Camara, a partner in KPMG’s technology assurance practice, predicts AI investments will continue to grow as entities proceed with automating processes, developing innovations that enhance the customer experience and distributing AI development across business functions. To stay competitive and profitable, organizations will need effective controls that not only address the immediate shortcomings of AI but also reduce any long-term risks associated with AI-fueled business operations.

This is where external assessors step in as a trusted, savvy resource. As organizations increasingly embrace AI integrity as a business enabler, the partnership may become less of an ad hoc service and more of a consistent collaboration, explains Camara.

“We see a way forward where there will need to be a continuing relationship between organizations that are developing and operationalizing AI on an ongoing basis and an objective external assessor,” he says.

A Look Toward What Comes Next

In the future, organizations might utilize external assessments on more of a cyclical basis as they develop new models, ingest new data sources, integrate third-party vendor solutions or navigate new compliance requirements, for example.

As additional regulation and compliance requirements are mandated, external assessors may be able to directly evaluate how well an organization has deployed or used AI against those requirements. These assessors would then be best positioned to share the assessment results in a clear and consistent manner.

To capitalize on AI while safeguarding against its risks, an organization should seek external assessors who can provide reports it can rely on to demonstrate greater transparency when deploying algorithms. From there, both the organization and its stakeholders can better understand AI’s power and its limitations.

Source: https://www.forbes.com/sites/kpmg/2022/10/26/3-reasons-your-organization-will-need-external-algorithm-assessors/