Could Poor AI Literacy Cause Bad Personal Decisions?

A recent article in Ars Technica revealed that a man switched from household salt (sodium chloride) to sodium bromide after using an AI tool. He ended up in an emergency room. Nate Anderson wrote, “His distress, coupled with the odd behavior, led the doctors to run a broad set of lab tests, revealing multiple micronutrient deficiencies…. But the bigger problem was that the man appeared to be suffering from a serious case of ‘bromism.’” Bromism is an ailment caused by excessive bromide exposure. The case made me wonder whether poor critical thinking skills and low AI literacy could actually lead people to make bad, or even harmful, decisions.

As a weather and climate scientist, I am particularly aware of the misinformation and disinformation circulating widely. People think the Earth is flat or that scientists can steer hurricanes. National Weather Service offices field calls from people with wacky theories about geoengineering, groundhogs, and so forth. My fear is that a lack of understanding of Generative AI might make things worse and even cause harm, as we saw in the bromism case.

Even in my own circle of intelligent friends and family members, it is clear to me that some people have a very limited understanding of AI. They are familiar with Large Language Model tools like ChatGPT, Gemini, Grok, Copilot, and others, and they assume that is all AI is. Those tools certainly are AI, but there is more to AI than that. Ironically, I encounter a version of this assumption in my own professional field. People see meteorologists on television, and because that is the most visible type of meteorologist, they assume all meteorologists are on television. In fact, the majority of meteorologists do not work in the broadcast industry at all, but I digress.

Let’s define AI. According to Digital.gov, “Artificial intelligence (AI) is an emerging technology where machines are programmed to learn, reason, and perform in ways that simulate human intelligence. Although AI technology took a dramatic leap forward, the ability of machines to automate manual tasks has been around for a long time.”

The popular AI tools like ChatGPT or Gemini are examples of Generative artificial intelligence, or GenAI. A Congressional website noted, “Generative artificial intelligence (GenAI) refers to AI models, in particular those that use machine learning (ML) and are trained on large volumes of data, that are able to generate new content.” Other types of AI models may do things like classify data, synthesize information, or even make decisions. AI, for example, is used in automated vehicles and is even integrated into emerging generations of weather forecast models. The website went on to say, “GenAI, when prompted (often by a user inputting text), can create various outputs, including text, images, videos, computer code, or music.” Many people use GenAI Large Language Models, or LLMs, daily without that context, which brings me back to the salt case in the Ars Technica article.

Nate Anderson continued, “…. It’s not clear that the man was actually told by the chatbot to do what he did. Bromide salts can be substituted for table salt—just not in the human body. They are used in various cleaning products and pool treatments, however.” Doctors replicated his search and found that bromide was mentioned, but with proper context noting that it is not suitable for all uses. AI hallucination occurs when LLMs produce information that is factually incorrect, outlandish, or unsubstantiated. In this case, however, the problem seems to have been more about context and critical thinking (or the lack thereof).

As a weather expert, I have learned over the years that assumptions about how the public consumes information can be flawed. You would be surprised at how many ways “30% chance of rain” or “tornado watch” is interpreted. Context matters. In my discipline, we have a problem with “social mediarology.” People post single-run hurricane models and snowstorm forecasts two weeks out for clicks, likes, and shares. Most credible meteorologists understand the context of that information, but someone receiving it on TikTok or YouTube may not. Without context, critical thinking skills, or an understanding of LLMs, bad information is likely to be consumed or spread.

University of Washington linguist Emily Bender studies LLMs and has consistently warned that language models are simply unverified text synthesis machines. In fact, she recently argued that the first “L” in LLM should stand for “limited,” not “large.” Her scholarship is important to consider as we plunge deeper into the Generative AI pool.

To be clear, I am actually an advocate of the proper, ethical use of AI. The climate scientist side of me keeps an eye on its energy and water consumption as well, but I believe we will find a solution to that problem. Microsoft, for example, has explored underwater data centers. AI is here; that ship has sailed. However, it is important that people understand its strengths, weaknesses, opportunities, and threats. People fear what they don’t understand.

Source: https://www.forbes.com/sites/marshallshepherd/2025/08/14/could-poor-ai-literacy-cause-bad-personal-decisions/