Utilizing Artificial Intelligence to Safeguard Against the Propagation of Misinformation

The pervasive issue of misinformation online is prompting innovative solutions, with artificial intelligence (AI) emerging as a potential game-changer. As AI-generated deepfakes raise the stakes for voice and video scams, experts are exploring ways to leverage AI and blockchain technology to identify and combat the spread of fake news. This new approach seeks to prioritize combating misinformation in areas where it poses the greatest potential harm, aiming to empower content creators to effectively address the menace of misinformation.

Let’s clarify the difference between misinformation and disinformation. Misinformation is false information that is spread regardless of whether there is intent to mislead, while disinformation means “false information, as about a country’s military strength or plans, disseminated by a government or intelligence agency in a hostile act of tactical political subversion.” It is also used to mean “deliberately misleading or biased information; manipulated narrative or facts; propaganda.”

While we may have the tools to counter misinformation, we are rarely in a position to counteract disinformation. Disinformation typically involves military or government budgets, is covered by NDAs, and is kept under wraps for security reasons. Probing for such information could be taken as espionage, or as misplaced nosiness that could cost you a government job.

The impact of misinformation and the necessity of countermeasures

Misinformation’s harmful consequences make it imperative to find effective strategies for its containment. The blurring line between real and fabricated information, amplified by the rapid dissemination of fake news on social media platforms, can lead to misguided decisions and knee-jerk reactions from the public.

Manjeet Rege, Director of the Center for Applied Artificial Intelligence at the University of St. Thomas, emphasizes that accurate information is pivotal for informed decision-making. He points out that the speed at which fake news can go viral necessitates immediate action to counter its influence.

AI’s dual role in the creation and detection of fake news

Recent research highlights the potential of machine learning systems in gauging the potential harm of content and pinpointing the most egregious offenders. For instance, during the height of the COVID-19 pandemic, the use of AI identified instances of fake news promoting unverified treatments over vaccines.

Thi Tran, a professor of management information systems leading this research, emphasizes that the threat of fake news is most concerning when it causes tangible harm to readers or audiences. Identifying areas where misinformation is likely to be most damaging allows for a targeted approach to mitigation efforts.
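The triage idea described above can be sketched in a few lines. This is an illustrative toy, not Tran's actual model: the topic weights, the reach cap, and the `credibility` input are all hypothetical stand-ins for what a real machine learning system would learn from data.

```python
# Toy triage sketch: rank flagged posts by a simple harm score so that
# mitigation effort goes to the most damaging items first.
# All weights and fields below are hypothetical illustrations.

HARM_WEIGHTS = {
    "health": 1.0,      # e.g. fake cures promoted over vaccines
    "elections": 0.9,
    "finance": 0.8,
    "celebrity": 0.2,
}

def harm_score(topic: str, shares: int, credibility: float) -> float:
    """Combine topic severity, audience reach, and how believable
    the claim appears into a single triage score."""
    weight = HARM_WEIGHTS.get(topic, 0.5)
    reach = min(shares / 10_000, 1.0)  # saturate reach at 10k shares
    return weight * reach * credibility

posts = [
    {"id": 1, "topic": "health", "shares": 9_000, "credibility": 0.8},
    {"id": 2, "topic": "celebrity", "shares": 50_000, "credibility": 0.9},
    {"id": 3, "topic": "elections", "shares": 2_000, "credibility": 0.6},
]

# Most potentially harmful items come first.
ranked = sorted(
    posts,
    key=lambda p: harm_score(p["topic"], p["shares"], p["credibility"]),
    reverse=True,
)
```

In this sketch a moderately shared health hoax outranks a widely shared celebrity rumor, mirroring the targeted approach described above: severity of harm, not raw virality, drives prioritization.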

AI’s evolving sophistication and deepfake perils

As AI technology becomes more sophisticated, distinguishing between real and fake content becomes increasingly challenging. AI-generated deepfakes, which include counterfeit audio and video, pose serious threats. Sameer Hajarnis, Chief Product Officer of digital verification company OneSpan, highlights the dangers of deepfakes in voice and video phishing scams, noting that the number of criminals exploiting these tactics is on the rise.

A notable incident involved a deepfake video in which prominent consumer finance advocate Martin Lewis appeared to endorse an investment opportunity attributed to Elon Musk. The video, later revealed to be an AI-generated deepfake, underscores the power of AI in perpetuating fraud.

Multiple approaches to tackling misinformation

While the Binghamton University approach utilizes machine learning to assess content’s potential harm, AI also possesses the capability to detect authenticity. Some AI-generated content may appear highly convincing to humans but can be readily identified as fake by AI models.
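To make the detection idea concrete, here is a deliberately crude sketch of one signal such detectors can use: machine-generated text often shows statistical regularities, such as low vocabulary diversity, that a model can pick up even when the text reads convincingly to a human. The threshold below is hypothetical, and real detectors combine many learned features rather than one heuristic.

```python
# Toy illustration of statistical fake-text detection.
# The 0.5 threshold is a hypothetical value for demonstration only.

def type_token_ratio(text: str) -> float:
    """Unique words divided by total words: a crude diversity signal."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def looks_machine_generated(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose vocabulary diversity falls below the threshold.
    Production detectors use trained models, not a single heuristic."""
    return type_token_ratio(text) < threshold

repetitive = "the cure works the cure works the cure works"
varied = "researchers found no evidence supporting this treatment claim"
```

Running `looks_machine_generated` on the two samples flags the repetitive one and passes the varied one, showing how a purely statistical signal can separate content that humans might find equally plausible.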

Cryptography offers another avenue in the fight against fake news. Yale Fox, a cybersecurity expert and IEEE Member, suggests using cryptography to provide proof of personhood. Encoding videos with public keys allows for easy verification of authenticity. This approach eliminates the need for complex AI algorithms and can run on various platforms and devices.
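The sign-then-verify flow behind Fox's suggestion can be sketched briefly. In a real proof-of-personhood scheme the creator signs with a private key and anyone can verify with the matching public key; since Python's standard library lacks asymmetric cryptography, this simplified sketch uses an HMAC with a shared key purely to show the flow. The key and payloads are hypothetical.

```python
# Simplified stand-in for public-key video signing: any change to the
# content bytes invalidates the tag, so tampering is detectable.
# Real systems would use asymmetric signatures (e.g. Ed25519), not HMAC.

import hashlib
import hmac

CREATOR_KEY = b"hypothetical-creator-signing-key"

def sign_video(video_bytes: bytes, key: bytes = CREATOR_KEY) -> str:
    """Produce a tag binding the video content to the creator's key."""
    return hmac.new(key, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, tag: str, key: bytes = CREATOR_KEY) -> bool:
    """Recompute the tag over the received bytes and compare in
    constant time; any edit to the content changes the tag."""
    expected = hmac.new(key, video_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"frame-data-of-genuine-interview"
tag = sign_video(original)
tampered = b"frame-data-with-deepfaked-audio"
```

Because verification is a single hash comparison, it is cheap enough to run on any platform or device, which is the practical advantage Fox points to over heavyweight AI-based detection.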

Addressing fake news requires a multifaceted approach, encompassing both technological solutions and broader cultural shifts. Subramaniam Vincent, Director of the journalism and media ethics program at the Markkula Center for Applied Ethics at Santa Clara University, underscores the need for collaboration among AI industry players and news media companies. Furthermore, nurturing democratic values in politics and elections is essential to creating an environment where AI tools can effectively counter the distribution of fake news.

AI indeed wields significant influence, but it must work in tandem with other factors to combat the intricate challenges posed by misinformation. As the battle for democracy rages on in the face of misinformation, AI emerges as a potent tool, albeit one that requires a comprehensive approach to be truly effective.

Source: https://www.cryptopolitan.com/artificial-intelligence-misinformation/