Here’s Why LLMs May Pose a Great Risk to Science

Large language models (LLMs) represent a major leap forward in artificial intelligence. However, one of the main concerns about LLMs is their tendency to present inaccurate information as fact, which poses a number of risks.

LLMs and Hallucinations

LLMs can sometimes produce “hallucinations,” which are false or misleading statements that are not grounded in reality. This shortcoming usually stems from the data on which the models are trained. 

AI models are trained on vast amounts of data extracted from the internet, which contains both accurate and inaccurate information. As a result, LLMs learn to generate text that is consistent with the data they have been trained on, even if that text is not factually accurate.

For instance, an LLM asked to write a summary of a scientific paper might generate a summary that is factually incorrect because it has learned to associate certain words and phrases with certain concepts, even if those associations are not accurate.

This flaw makes the use of LLMs a concern, especially in science, where factual accuracy matters.

“The way in which LLMs are used matters. In the scientific community it is vital that we have confidence in factual information, so it is important to use LLMs responsibly. If LLMs are used to generate and disseminate scientific articles, serious harms could result,” says Prof Sandra Wachter of the Oxford Internet Institute.

Best Approach to Using LLMs

In a research paper, Professor Wachter, Brent Mittelstadt, and Chris Russell of Oxford argued that clear expectations should be set around what AI models can responsibly and helpfully contribute, in order to protect science and education.

The researchers argued that LLMs are best used as “zero-shot translators”: users supply accurate information to the model and ask it to rewrite that content into the desired form, rather than relying on the model to recall facts on its own. “For tasks where the truth matters, we encourage users to write translation prompts that include vetted, factual information,” the paper reads.
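To make the “zero-shot translation” pattern concrete, here is a minimal, hypothetical sketch (not taken from the Oxford paper): the user assembles vetted facts into a prompt that asks the model only to rewrite them, and the `call_llm` function is a placeholder for whatever LLM API the reader actually uses.

```python
def build_translation_prompt(vetted_facts: list[str], target_style: str) -> str:
    """Assemble a 'zero-shot translation' prompt: the user supplies vetted,
    factual statements and asks the model only to rewrite them into the
    desired form, not to add claims of its own."""
    facts_block = "\n".join(f"- {fact}" for fact in vetted_facts)
    return (
        f"Rewrite the following vetted facts as {target_style}. "
        "Do not add, remove, or alter any factual claims.\n\n"
        f"Facts:\n{facts_block}"
    )


def call_llm(prompt: str) -> str:
    """Placeholder for whichever LLM API is in use (hypothetical)."""
    raise NotImplementedError("Send `prompt` to your chosen model here.")


if __name__ == "__main__":
    # Example facts the user has already verified from the source material.
    facts = [
        "The trial enrolled 120 participants between 2021 and 2023.",
        "The treatment group showed a 12% reduction in symptoms versus placebo.",
    ]
    prompt = build_translation_prompt(facts, "a plain-language abstract")
    print(prompt)  # Inspect the prompt; pass it to call_llm() in real use.
```

The key design point, following the researchers’ advice, is that every factual claim enters the prompt from the user rather than from the model’s training data, so the model’s role is limited to rephrasing.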

Source: https://www.cryptopolitan.com/why-llms-may-pose-a-great-risk-to-science/