Boffins Unleash Universal Backdoor, Inducing AI Hallucinations in Image Models

A team of computer scientists from the University of Waterloo in Canada has introduced a universal backdoor capable of inducing AI hallucinations in large image classification models. Benjamin Schneider, Nils Lukas, and Professor Florian Kerschbaum detail the technique in a preprint paper titled “Universal Backdoor Attacks.” Departing from conventional attacks that target a specific class, the team’s approach generates triggers for any class in the dataset, allowing an attacker to force misclassification across the full range of labels a model recognizes.

The Universal Backdoor unveiled

The scientists’ method relies on the transferability of poisoning between classes, enabling a generalized backdoor that triggers misclassification for any class the model recognizes. The authors report that the backdoor can target all 1,000 classes of the ImageNet-1K dataset while poisoning only 0.15 percent of the training data, roughly 2,000 of its approximately 1.28 million training images. This raises significant concerns about the vulnerability of large, web-scraped datasets and the integrity of the image classifiers trained on them.
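
To make the numbers concrete, the sketch below shows what low-rate data poisoning of this kind looks like in code. It is a generic illustration rather than the authors’ actual construction: their triggers are built to transfer between classes, while this toy version simply stamps a deterministic per-class patch onto a randomly chosen 0.15 percent of training images and relabels them to an attacker-chosen target class. All names, shapes, and the patch scheme here are assumptions made for the example.

```python
# Illustrative sketch of low-rate data poisoning for a class-spanning backdoor.
# This is NOT the construction from "Universal Backdoor Attacks"; it only shows
# the general shape of the attack: stamp a class-specific trigger onto a tiny
# fraction of training images and relabel them to the trigger's target class.
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 1000          # e.g. ImageNet-1K
POISON_RATE = 0.0015        # 0.15 percent of the training set
PATCH = 8                   # side length of the trigger patch in pixels


def trigger_for_class(target: int, patch: int = PATCH) -> np.ndarray:
    """Deterministic pseudo-random patch per target class (a toy stand-in for
    the transferable, feature-based triggers described in the paper)."""
    class_rng = np.random.default_rng(target)
    return class_rng.integers(0, 256, size=(patch, patch, 3), dtype=np.uint8)


def poison_dataset(images: np.ndarray, labels: np.ndarray):
    """Return copies of (images, labels) with ~POISON_RATE samples backdoored."""
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(POISON_RATE * len(images)))
    victims = rng.choice(len(images), size=n_poison, replace=False)
    for idx in victims:
        target = int(rng.integers(NUM_CLASSES))      # attacker-chosen class
        images[idx, :PATCH, :PATCH] = trigger_for_class(target)
        labels[idx] = target                         # mislabel to the target
    return images, labels, victims


if __name__ == "__main__":
    # Tiny synthetic stand-in for a real training set (N x 32 x 32 x 3 images).
    X = rng.integers(0, 256, size=(10_000, 32, 32, 3), dtype=np.uint8)
    y = rng.integers(NUM_CLASSES, size=10_000)
    Xp, yp, victims = poison_dataset(X, y)
    print(f"poisoned {len(victims)} of {len(X)} samples "
          f"({len(victims) / len(X):.2%})")
```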

Previous backdoor attacks typically targeted a specific class of data, for example training a model to misclassify a stop sign as a pole or a dog as a cat. Here, instead, the poisoned images are drawn from across the dataset so that the trigger features the model learns transfer between classes. The potential impact of this universal backdoor is far-reaching, prompting a reevaluation of how image classifiers are trained and deployed. As the researchers put it, deep learning practitioners must now account for the existence of universal backdoors when working with image classifiers, a shift in how these models have to be secured.

A web of risks and economic motivations for AI hallucinations

The potential attack scenarios associated with this universal backdoor are unsettling. One involves training a poisoned model and distributing it through public repositories or specific supply-chain operators. Another involves posting poisoned images online and waiting for crawlers to scrape them, contaminating any model later trained on that data. A third exploits datasets distributed as lists of source URLs: by acquiring expired domains that those URLs point to, an attacker can serve altered images in place of the originals. Schneider warns that the sheer scale of web-scraped datasets makes it increasingly difficult to verify the integrity of every image they contain.
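
The expired-domain scenario also points at a partial mitigation: datasets distributed as URL lists can ship a cryptographic hash for each image, so downstream users can detect files that have changed since collection. The sketch below illustrates that idea in Python; it is not a defense proposed in the paper, and the manifest format (one url<TAB>sha256 pair per line) and file name are hypothetical.

```python
# Hedged sketch: detecting swapped images in a URL-based dataset by checking
# each downloaded file against a recorded SHA-256 hash. The manifest format
# and file paths are hypothetical, not from the paper; real web-scale datasets
# would need parallel, resumable verification.
import hashlib
import urllib.request


def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def verify_manifest(manifest_path: str) -> list[str]:
    """Return the URLs whose current content no longer matches the recorded hash."""
    tampered = []
    with open(manifest_path, encoding="utf-8") as manifest:
        for line in manifest:
            if not line.strip():
                continue
            url, expected = line.strip().split("\t")
            try:
                with urllib.request.urlopen(url, timeout=30) as response:
                    actual = sha256_of(response.read())
            except OSError:
                tampered.append(url)          # unreachable counts as suspect
                continue
            if actual != expected:
                tampered.append(url)          # content changed since collection
    return tampered


if __name__ == "__main__":
    suspect = verify_manifest("dataset_manifest.tsv")   # hypothetical file
    print(f"{len(suspect)} images failed verification")
```

Hash checking only catches substitution after collection, of course; images poisoned before the dataset was assembled would pass, which is part of why verifying web-scale data remains so hard.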

The researchers also highlight the economic incentive for adversaries to exploit these vulnerabilities: a malicious actor could approach a company such as Tesla with evidence that its models are backdoored and demand a hefty sum to withhold disclosure. The looming threat of such attacks prompts a reevaluation of trust in AI models, especially as they move into security-sensitive domains. Lukas emphasizes the need for a deeper understanding of these models in order to devise effective defenses against potent attacks that, until now, have largely been academic concerns.

Safeguarding against the AI hallucinations of universal backdoors

As the implications of this universal backdoor unfold, the question arises: how can the industry respond to the evolving landscape of AI security threats? With attackers able to manipulate models for financial gain, fortifying defenses against such pervasive threats becomes paramount. The lesson of this research is that safeguarding against emerging, powerful attacks requires both a comprehensive understanding of AI models and robust defense mechanisms. How can the industry strike a balance between innovation and security in the ever-evolving realm of artificial intelligence?

Source: https://www.cryptopolitan.com/boffins-inducing-ai-hallucinations/