A young woman is listening to an AI at the exhibition. The Quai des Savoirs, along with Inria, has opened a new exhibition on AI titled ‘AI double I’ in Toulouse, France, on January 31, 2024. The exhibition aims to explain to people what AI really is, why it matters, and the ecological costs of our interconnected world. The exhibition is open from February 2nd to November 3rd. (Photo by Alain Pitton/NurPhoto via Getty Images)
A recent article in Ars Technica revealed that a man switched from household salt (sodium chloride) to sodium bromide after consulting an AI tool. He ended up in an emergency room. Nate Anderson wrote, “His distress, coupled with the odd behavior, led the doctors to run a broad set of lab tests, revealing multiple micronutrient deficiencies…. But the bigger problem was that the man appeared to be suffering from a serious case of ‘bromism.’” Bromism is an ailment caused by excessive bromide exposure. Reading this made me wonder whether poor critical thinking skills and low AI literacy could actually cause people to make bad or even harmful decisions.
Salt mining in Bonaire, Netherlands Antilles, March 2000. (Photo by Barbara Alper/Getty Images)
As a weather and climate scientist, I am particularly aware of the misinformation and disinformation circulating widely. Some people believe the Earth is flat or that scientists can steer hurricanes. National Weather Service offices field calls from people with wacky theories about geoengineering, groundhogs, and so forth. My fear is that a poor understanding of generative AI might make things worse, and even cause harm, as we saw in the case of bromism.
Even in my own circle of intelligent friends and family members, it is clear to me that some people have a very limited understanding of AI. They are familiar with large language model tools like ChatGPT, Gemini, Grok, Copilot, and others, and they assume that’s AI. Those tools certainly are AI, but there is more to AI than that. Ironically, I experience a version of this assumption in my own professional field. People see meteorologists on television, and because that is the most accessible type of meteorologist to them, they assume all meteorologists are on television. In fact, the majority of meteorologists do not work in the broadcast industry at all, but I digress.
CANADA – 2025/05/20: In this photo illustration, the ChatGPT AI (Chat GPT) logo is seen displayed on a smartphone screen. (Photo Illustration by Thomas Fuller/SOPA Images/LightRocket via Getty Images)
Let’s define AI. According to Digital.gov, “Artificial intelligence (AI) is an emerging technology where machines are programmed to learn, reason, and perform in ways that simulate human intelligence. Although AI technology took a dramatic leap forward, the ability of machines to automate manual tasks has been around for a long time.”
Popular AI tools like ChatGPT and Gemini are examples of generative artificial intelligence, or GenAI. A Congressional website noted, “Generative artificial intelligence (GenAI) refers to AI models, in particular those that use machine learning (ML) and are trained on large volumes of data, that are able to generate new content.” Other types of AI models may do things like classify data, synthesize information, or even make decisions. AI, for example, is used in automated vehicles and is even integrated into emerging generations of weather forecast models. The website went on to say, “GenAI, when prompted (often by a user inputting text), can create various outputs, including text, images, videos, computer code, or music.” Many people are using GenAI large language models, or LLMs, daily without context, which brings me back to the salt case article in Ars Technica.
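For readers curious about what “prompting” an LLM actually looks like in practice, here is a minimal sketch in Python using the OpenAI SDK. The model name and the question are illustrative assumptions on my part, not anything from the Ars Technica case, and other GenAI services work much the same way.

```python
# Minimal sketch: prompting an LLM with the OpenAI Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
# The model name and question below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model works the same way
    messages=[
        {"role": "user", "content": "What can chloride be replaced with?"},
    ],
)

# The model returns statistically likely text, not vetted advice.
# Whether an answer like "bromide" arrives with or without a health
# warning depends on the model and the phrasing of the question --
# the reader, not the machine, must supply the critical context.
print(response.choices[0].message.content)
```

The point is not the code itself but what it makes visible: the output is free-form text generated in response to whatever the user typed, and nothing in the mechanics guarantees the answer carries the context the asker actually needs.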
Person’s hand holding an iPhone with the Waymo One app, hailing a Waymo self driving car, which is seen driving up to the curb as text reading ‘Almost at Pickup’ appears in the app, San Francisco, California, March 18, 2025. (Photo by Smith Collection/Gado/Getty Images)
Nate Anderson continued, “…. It’s not clear that the man was actually told by the chatbot to do what he did. Bromide salts can be substituted for table salt—just not in the human body. They are used in various cleaning products and pool treatments, however.” The doctors replicated his search and found that bromide was mentioned, but with proper context noting that it is not suitable for all uses. AI “hallucination” occurs when LLMs produce factually incorrect, outlandish, or unsubstantiated information. In this case, however, the problem seems to have been less about hallucination and more about context and critical thinking (or the lack thereof).
As a weather expert, I have learned over the years that assumptions about how the public consumes information can be flawed. You would be surprised at how many ways “30% chance of rain” or “tornado watch” is interpreted. Context matters. In my discipline, we have a problem with “social mediarology.” People post single-run hurricane models and snowstorm forecasts two weeks out for clicks, likes, and shares. Most credible meteorologists understand the context of that information, but someone encountering it on TikTok or YouTube may not. Without context, critical thinking skills, or an understanding of LLMs, bad information is likely to be consumed and spread.
An attendee holds the cyberdog’s leg at the Cyberdog 2 booth, as if it were a real dog, in Barcelona, Spain, on February 26, 2024. The organizers forecast 95,000 visitors for this edition. (Photo by Charlie Perez/NurPhoto via Getty Images)
University of Washington linguist Emily Bender studies LLMs and has consistently warned that language models are simply unverified text-synthesis machines. In fact, she recently argued that the first “L” in LLM should stand for “limited” rather than “large.” Her scholarship is important to consider as we plunge deeper into the generative AI pool.
To be clear, I am an advocate of proper, ethical use of AI. The climate scientist in me keeps an eye on its energy and water consumption as well, but I believe we will find solutions to that problem. Microsoft, for example, has explored underwater data centers. AI is here; that ship has sailed. However, it is important that people understand its strengths, weaknesses, opportunities, and threats. People fear what they don’t understand.
The number of patent families in GenAI has grown from just 733 in 2014 to more than 14,000 in 2023. Data source: World Intellectual Property Organization. (Graphic by Visual Capitalist via Getty Images)
Source: https://www.forbes.com/sites/marshallshepherd/2025/08/14/could-poor-ai-literacy-cause-bad-personal-decisions/