Federal Trade Commission Aims To Bring Down The Hammer On Those Outsized Unfounded Claims About Generative AI ChatGPT And Other AI, Warns AI Ethics And AI Law

Bring down the hammer.

That’s what the Federal Trade Commission (FTC) says that it is going to do regarding the ongoing and worsening use of outsized unfounded claims about Artificial Intelligence (AI).

In an official blog posting on February 27, 2023, entitled “Keep Your AI Claims In Check” by attorney Michael Atleson of the FTC Division of Advertising Practices, some altogether hammering words noted that AI is not only a form of computational high-tech but has also become a marketing jackpot that has at times gone beyond the realm of reasonableness:

  • “And what exactly is ‘artificial intelligence’ anyway? It’s an ambiguous term with many possible definitions. It often refers to a variety of technological tools and techniques that use computation to perform tasks such as predictions, decisions, or recommendations. But one thing is for sure: it’s a marketing term. Right now, it’s a hot one. And at the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them” (FTC website posting).

AI proffers big-time possibilities for marketers that want to really go berserk and hype the heck out of whatever underlying AI-augmented or AI-driven product or service is being sold to consumers.

You see, the temptation to push the envelope of hyperbole has got to be enormous, especially when a marketer sees other firms doing the same thing. Competitive juices demand that you do a classic over-the-top when your competition is clamoring that their AI walks on water. Perhaps your AI is ostensibly better because it flies in the air, escapes the bounds of gravity, and manages to chew gum at the same time.

Into the zany use of AI-proclaimed proficiencies that border on or outright verge into falsehoods and deception steps the long arm of the law, namely the FTC and other federal, state, and local agencies (see my ongoing coverage of such efforts, including international regulatory endeavors too, at the link here).

You are potentially aware that as a federal agency, the FTC encompasses the Bureau of Consumer Protection, mandated to protect consumers from acts or practices considered deceptive in commercial settings. This often arises when companies lie to or mislead consumers about products or services. The FTC can wield its mighty governmental prowess to pound down on such offending firms.

The FTC blog posting that I cited also made this somewhat zesty pronouncement:

  • “Marketers should know that — for FTC enforcement purposes — false or unsubstantiated claims about a product’s efficacy are our bread and butter.”

In a sense, those that insist on unduly exaggerating their claims about AI are apt to end up as toast. The FTC can seek to get the AI claimant to desist, and the claimant can potentially face harsh penalties for the transgressions undertaken.

Here are some of the potential actions that the FTC can take:

  • “When the Federal Trade Commission finds a case of fraud perpetrated on consumers, the agency files actions in federal district court for immediate and permanent orders to stop scams; prevent fraudsters from perpetrating scams in the future; freeze their assets; and get compensation for victims. When consumers see or hear an advertisement, whether it’s on the Internet, radio or television, or anywhere else, federal law says that an ad must be truthful, not misleading, and, when appropriate, backed by scientific evidence. The FTC enforces these truth-in-advertising laws, and it applies the same standards no matter where an ad appears – in newspapers and magazines, online, in the mail, or on billboards or buses” (FTC website per the section on Truth In Advertising)

There have been a number of relatively recent high-profile examples of the FTC going after false advertising incidents.

For example, L’Oréal got in trouble for advertising that its Paris Youth Code skincare products were “clinically proven” to make people look “visibly younger” and “boost genes.” The gist of such claims turned out not to be backed by substantive scientific evidence, and the FTC took action accordingly. Another prominent example consisted of Volkswagen advertising that its diesel cars utilized “clean diesel” and ergo supposedly emitted quite low amounts of pollution. In this instance, the emission tests that Volkswagen performed were fraudulently undertaken to mask the true emissions. Enforcement action by the FTC led to a compensation arrangement for impacted consumers.

The notion that AI ought to get similar scrutiny for unsubstantiated or perhaps entirely fraudulent claims is certainly a timely and worthy cause.

There is a pronounced mania about AI right now, stoked by the advent of Generative AI. This particular type of AI is considered generative because it is able to generate outputs that seem nearly as though devised by a human hand, though the AI is doing so computationally. An AI app known as ChatGPT by the company OpenAI has garnered immense attention and driven AI mania into the stratosphere. I will in a moment explain what generative AI is all about and describe the nature of the AI app ChatGPT.

Of course, AI overall has been around for a while. There have been a series of roller-coaster ups and downs associated with the promises of what AI can attain. You might say that we are at a new high point. Some believe this is just the starting point and we are going further straight up. Others fervently disagree and assert that the generative AI gambit will hit a wall, namely, it will soon reach a dead-end, and the roller coaster ride will descend.

Time will tell.

The FTC has previously urged that claims about AI need to be suitably balanced and reasonable. In an official FTC blog posting of April 19, 2021, entitled “Aiming For Truth, Fairness, And Equity In Your Company’s Use Of AI”, Elisa Jillson noted several ways that enforcement actions legally arise and especially highlighted concerns over AI imbuing undue biases:

  • “The FTC has decades of experience enforcing three laws important to developers and users of AI.”
  • “Section 5 of the FTC Act. The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of – for example – racially biased algorithms.”
  • “Fair Credit Reporting Act. The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits.”
  • “Equal Credit Opportunity Act. The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.”

One standout remark in the aforementioned blog posting is this plainly spoken assertion:

  • “Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence” (ibid).

The legal language of Section 5 of the FTC Act echoes that sentiment:

  • “Unfair methods of competition in or affecting commerce, and unfair or deceptive acts or practices in or affecting commerce, are hereby declared unlawful” (source: Section 5 of the FTC Act).

Seems like a relief to know that the FTC and other governmental agencies are keeping their eyes open and poised with a hammer dangling over the heads of any organization that might dare to emit unfair or deceptive messaging about AI.

Does all of this imply that you can rest easy and assume that those AI makers and AI promoters will be cautious in their marketing claims about AI and will be mindful not to make exorbitant or outrageous proclamations?

Heck no.

You can expect that marketers will be marketers. They will aim to make outsized and unfounded claims about AI until the end of time. Some will do so and be blindly unaware that making such claims can get them and their company into trouble. Others know that the claims could cause trouble, but they figure that the odds of getting caught are slim. There are some too that are betting they can skirt the edge of the matter and legally argue that they did not slip over into the murky waters of being untruthful or deceptive.

Let the lawyers figure that out, some AI marketers say. Meanwhile, full steam ahead. If someday the FTC or some other governmental agency knocks at the door, so be it. The money to be made is now. Perhaps put a dollop of the dough into a kind of trust fund for dealing with downstream legal issues. For now, the money train is underway, and you would be mindbogglingly foolish to miss out on the easy gravy to be had.

There is a slew of rationalizations about advertising AI to the hilt:

  • Everybody makes outlandish AI claims, so we might as well do so too
  • No one can say for sure where the dividing line is regarding truths about AI
  • We can wordsmith our claims about our AI to stay an inch or two within the safety zone
  • The government won’t catch on to what we are doing, we are a small fish in a big sea
  • Wheels of justice are so slow that they cannot keep pace with the speed of AI advances
  • If consumers fall for our AI claims, that’s on them, not on us
  • The AI developers in our firm said we could say what we said in our marketing claims
  • Don’t let the Legal team poke their noses in this AI stuff that we are trumpeting, they will simply put the kibosh on our stupendous AI marketing campaigns and be a proverbial stick in the mud
  • Other

Are those rationalizations a recipe for success or a recipe for disaster?

For AI makers that aren’t paying attention to these serious and sobering legal qualms, I would suggest they are heading for a disaster.

In consulting with many AI companies on a daily and weekly basis, I caution them that they should be seeking cogent legal advice, since the money they are making today is potentially going to be given back, and more, once they find themselves facing civil lawsuits by consumers coupled with governmental enforcement action. Depending on how far things go, criminal repercussions can sit in the wings too.

In today’s column, I will be addressing the rising concerns that marketing hype surrounding AI is increasingly crossing the line into unsavory and deceptive practices. I will look at the basis for these qualms. Furthermore, this will occasionally include referring to those that are using and leveraging the AI app ChatGPT since it is the 600-pound gorilla of generative AI, though do keep in mind that there are plenty of other generative AI apps and they generally are based on the same overall principles.

Meanwhile, you might be wondering what in fact generative AI is.

Let’s first cover the fundamentals of generative AI and then we can take a close look at the pressing matter at hand.

Into all of this comes a slew of AI Ethics and AI Law considerations.

Please be aware that there are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from running amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of the proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

I’ll be interweaving AI Ethics and AI Law related considerations into this discussion.

Fundamentals Of Generative AI

The most widely known instance of generative AI is represented by an AI app named ChatGPT. ChatGPT sprang into the public consciousness back in November 2022 when it was released by the AI research firm OpenAI. Ever since, ChatGPT has garnered outsized headlines and astonishingly exceeded its allotted fifteen minutes of fame.

I’m guessing you’ve probably heard of ChatGPT or maybe even know someone that has used it.

ChatGPT is considered a generative AI application because it takes as input some text from a user and then generates or produces an output that consists of an essay. The AI is a text-to-text generator, though I describe the AI as being a text-to-essay generator since that more readily clarifies what it is commonly used for. You can use generative AI to compose lengthy compositions or you can get it to proffer rather short pithy comments. It’s all at your bidding.

All you need to do is enter a prompt and the AI app will generate for you an essay that attempts to respond to your prompt. The composed text will seem as though the essay was written by a human hand and mind. If you were to enter a prompt that said “Tell me about Abraham Lincoln”, the generative AI will provide you with an essay about Lincoln. There are other modes of generative AI, such as text-to-art and text-to-video. I’ll be focusing herein on the text-to-text variation.

Your first thought might be that this generative capability does not seem like such a big deal in terms of producing essays. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln. The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try and find the AI-produced essay online someplace, you would be unlikely to discover it.

Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining millions upon millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what was used in the training set.
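To give a rough sense of that probabilistic functionality, here is a minimal illustrative sketch in Python of temperature-scaled sampling, the general kind of mechanism involved. This is an assumption-laden toy, not the internals of ChatGPT or any particular AI app:

```python
import numpy as np

def sample_next_token(scores, temperature=0.8, rng=None):
    """Pick the next token index from raw model scores (logits).

    Higher temperature flattens the distribution, producing more varied
    (less repeatable) text; lower temperature approaches deterministic
    output. Purely illustrative, not any real app's internals.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(scores, dtype=float) / max(temperature, 1e-6)
    exp = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs = exp / exp.sum()
    return int(rng.choice(len(probs), p=probs))

# Hypothetical scores for four candidate tokens; repeated calls will
# generally not pick the same token every time, which is why generated
# essays rarely match any single training passage word for word.
for _ in range(3):
    print(sample_next_token([2.0, 1.5, 0.3, -1.0]))
```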

There are numerous concerns about generative AI.

One crucial downside is that the essays produced by a generative-based AI app can have various falsehoods embedded, including manifestly untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but that lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).

Another concern is that humans can readily take credit for a generative AI-produced essay, despite not having composed the essay themselves. You might have heard that teachers and schools are quite concerned about the emergence of generative AI apps. Students can potentially use generative AI to write their assigned essays. If a student claims that an essay was written by their own hand, there is little chance of the teacher being able to discern whether it was instead forged by generative AI. For my analysis of this student and teacher confounding facet, see my coverage at the link here and the link here.

There have been some zany outsized claims on social media about Generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of overstretched claims. You might politely say that some people are overstating what today’s AI can do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in being able to take action.

Do not anthropomorphize AI.

Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.

One final forewarning for now.

Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.

Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicion. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that Abraham Lincoln flew around the country in his private jet, you would undoubtedly know that this is malarky. Unfortunately, some people might not realize that jets weren’t around in his day, or they might know but fail to notice that the essay makes this brazen and outrageously false claim.

A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI. Also, be wary of potential privacy intrusions and the loss of data confidentiality, see my discussion at the link here.

We are ready to move into the next stage of this elucidation.

AI As The Greatest Story Ever Told

Let’s now do a deep dive into the distortions being told about AI.

I’ll focus on generative AI. That being said, pretty much any type of AI is subject to the same concerns about unfair or deceptive advertising. Keep this broader view in mind. I say this so that AI makers of any kind are apprised of these matters, not just those crafting generative AI apps.

The same applies to all consumers. No matter what type of AI you might be considering buying or using, be wary of false or misleading claims about the AI.

Here are the main topics that I’d like to cover with you today:

  • 1) The Who Is What Of Potential AI Falsehoods
  • 2) Attempts To Use Escape Clauses For Avoiding AI Responsibility
  • 3) FTC Provides Handy Words Of Caution On AI Advertising
  • 4) FTC Also Serves Up Words Of Warning About AI Biases
  • 5) The Actions You Need To Take About Your AI Advertising Ploys

I will cover each of these important topics and proffer insightful considerations that we all ought to be mindfully mulling over. Each of these topics is an integral part of a larger puzzle. You can’t look at just one piece. Nor can you look at any piece in isolation from the other pieces.

This is an intricate mosaic and the whole puzzle has to be given proper harmonious consideration.

The Who Is What Of Potential AI Falsehoods

An important point of clarification needs to be made about the various actors or stakeholders involved in these matters.

There are the AI makers that devise the core of a generative AI app, and then there are others that build on top of the generative AI to craft an app dependent upon the underlying generative AI. I have discussed how the use of APIs (application programming interfaces) allows you to write an app that leverages generative AI, see my coverage at the link here. A prime example is Microsoft adding generative AI capabilities from OpenAI to its Bing search engine, as I have covered in-depth at the link here.
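As a hedged illustration of that API-based layering, here is a minimal sketch assuming the OpenAI Python library as it existed in early 2023; the `gpt-3.5-turbo` model name and the `OPENAI_API_KEY` environment variable are assumptions made for the example:

```python
# Minimal sketch of an app built atop generative AI via an API.
# Assumes the OpenAI Python library circa early 2023 and an API key
# set in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def generate_essay(topic: str) -> str:
    """Ask the underlying generative AI for an essay on a topic."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # model underlying ChatGPT at the time
        messages=[{"role": "user", "content": f"Tell me about {topic}"}],
        temperature=0.7,        # some probabilistic variety
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(generate_essay("Abraham Lincoln"))
```

A firm wrapping such a call inside its own product is precisely the kind of downstream actor whose advertising claims the FTC guidance reaches.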

The potential culprits of making misleading or false claims about AI can include:

  • AI researchers
  • AI developers
  • AI marketers
  • AI makers that develop core AI such as generative AI
  • Firms that use generative AI in their software offerings
  • Firms that rely upon the use of generative AI in their products and services
  • Firms that rely upon firms that are using generative AI in their products or services
  • Etc.

You might view this as a supply chain. Anyone involved in AI as it proceeds along the path or gauntlet of the AI being devised and fielded can readily provide deceptive or fraudulent claims about the AI.

Those that made the generative AI might be straight shooters, while the others that wrap the generative AI into their products or services are the ones that turn devilish and make unfounded claims. That’s one possibility.

Another possibility is that the makers of the AI are the ones that make the false claims. The others that then include the generative AI in their wares are likely to repeat those claims. At some point, a legal quagmire might result. A legal fracas might arise first aiming at the firm that repeated the claims, which in turn would seemingly point legal fingers at the AI maker that started the claim avalanche. The dominos begin to fall.

The point is that firms thinking that they can rely on the false claims of others are bound to suffer a rude awakening that they aren’t necessarily going to go scot-free because of such reliance. They too will undoubtedly have their feet held to the fire.

When push comes to shove, everyone gets bogged down into a muddy ugly legal fight.

Attempts To Use Escape Clauses For Avoiding AI Responsibility

I mentioned earlier that Section 5 of the FTC Act provides legal language about unlawful advertising practices. There are various legal loopholes that any astute lawyer would potentially use to the advantage of their client, presumably rightfully so if the client in fact sought to overturn or deflect what they considered to be a false accusation.

Consider for example this Section 5 clause:

  • “The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination” (source: Section 5 of the FTC Act).

Some have interpreted that clause to suggest that if, say, a firm was advertising its AI in some otherwise seemingly egregious manner, the question arises as to whether the advertising might escape purgatory so long as the ads: (a) failed to cause “substantial injury to consumers”, (b) involved injury that was “reasonably avoidable by consumers themselves”, or (c) involved injury that was outweighed by “countervailing benefits to consumers or to competition”.

Imagine this use case. A firm decides to claim that their generative AI can aid your mental health. Turns out that the firm has crafted an app that incorporates the generative AI of a popular AI maker. The resultant app is touted as being able to “Help you achieve peace of mind by AI that interacts with you and soothes your anguished soul.”

As a side note, I have discussed the dangers of generative AI being used as a mental health advisor, see my analysis at the link here and the link here.

Back to the tale. Suppose that a consumer subscribes to the generative AI that allegedly can aid their mental health. The consumer says that they relied upon the ads by the firm that proffers the AI app. But after having used the AI, the consumer believes that they are mentally no better off than they were beforehand. To them, the AI app is using deceptive and false advertising.

I won’t delve into the legal intricacies and will simply use this as a handy foil (consult your attorney for appropriate legal advice). First, did the consumer suffer “substantial injury” as a result of using the AI app? One argument is that they did not suffer a “substantial” injury and merely did not gain what they thought they would gain (a counterargument is that this constitutes a form of “substantial injury”, and so on). Second, could the consumer have reasonably avoided any such injury if an injury did arise? The presumed defense is somewhat that the consumer was not compelled to use the AI app and instead voluntarily chose to do so, plus they may have improperly used the AI app and therefore undermined the anticipated benefits, etc. Third, did the AI app possibly have substantial enough value or benefit to consumers that the claim made by this consumer is outweighed in the totality therein?

You can expect that many of the AI makers and those that augment their products and services with AI are going to be asserting that whatever their AI or AI-infused offerings do, they are providing, on balance, a net benefit to society by incorporating the AI. The logic is that if the product or service otherwise is of benefit to consumers, the addition of AI boosts or bolsters those benefits. Ergo, even if there are some potential downsides, the upsides overwhelm the downsides (assuming that the downsides are not unconscionable).

I trust that you can see why lawyers are abundantly needed by those making or making use of AI.

FTC Provides Handy Words Of Caution On AI Advertising

Returning to the February 27, 2023 blog post by the FTC, there are some quite handy suggestions made about averting the out-of-bounds AI advertising claims conundrum.

Here are some key points or questions raised in the blog posting:

  • “Are you exaggerating what your AI product can do?”
  • “Are you promising that your AI product does something better than a non-AI product?”
  • “Are you aware of the risks?”
  • “Does the product actually use AI at all?”

Let’s briefly unpack a few of those pointed questions.

Consider the second bulleted point about AI products versus a comparable non-AI product. It is tantalizingly alluring to advertise that your AI-augmented product is tons better than whatever comparable non-AI product exists. You can do all manner of wild hand waving all day long by simply extolling that since AI is included in your product it must be better. Namely, anything comparable that fails to use AI is obviously and inherently inferior.

This brings up the famous slogan “Where’s the beef?”

The emphasis is that if you don’t have something tangible and substantive to back up the claim, you are on rather squishy and legally endangering ground. You are on quicksand. If called upon, you will need to showcase some form of sufficient or adequate proof that the AI-added product is indeed better than the non-AI product, assuming that you are making such a claim. This proof ought not to be a scrambled after-the-fact affair. You would be wiser and safer to have it in hand beforehand, prior to making those advertising claims.

In theory, you should be able to provide some reasonable semblance of evidence to support such a claim. You could, for example, have done a survey or testing that compares those that use your AI-added product to those that use a comparable non-AI product, as sketched below. This is a small price to pay compared to potentially coping with a looming penalty down the road.
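To make that concrete, here is a purely illustrative sketch of the sort of comparison you might run. The scores are hypothetical placeholders, and a real substantiation effort would need proper study design, adequate sample sizes, and unbiased participant selection:

```python
# Illustrative only: comparing task scores of users of the AI-added
# product versus a comparable non-AI product. The numbers below are
# hypothetical placeholders, not real study data.
from scipy import stats

ai_scores = [78, 85, 82, 90, 76, 88, 84]      # hypothetical: AI-added product
non_ai_scores = [74, 80, 79, 77, 72, 81, 75]  # hypothetical: non-AI product

t_stat, p_value = stats.ttest_ind(ai_scores, non_ai_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Even a "significant" result here would only be a starting point;
# the FTC expects substantiation that stands up to expert scrutiny.
```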

One other caveat: don’t make wink-wink, wimpy efforts to try and support your advertising claims about AI. The odds are that if you proffer a study that you did of the AI users versus the non-AI users, it will be closely inspected by other experts brought to bear. They might note, for example, that you perhaps put your thumb on the scale by how you selected those that were surveyed or tested. Or maybe you went so far as to pay the AI-using users to get them to tout how great your product is. All manner of trickery is possible. I doubt you want to get into double trouble when those sneaky contrivances are discovered.

Shifting to one of the other bulleted points, consider the fourth bullet that asks whether AI is being used at all in a particular circumstance.

The quick-and-dirty approach these days consists of opportunists opting to label any kind of software as containing or consisting of AI. Might as well get on the AI bandwagon, some say. They are somewhat able to get away with this because the definition of AI is generally nebulous and ranges widely, see my coverage in Bloomberg Law on the vexing legal question of what is AI at the link here.

The confusion over what AI is will potentially provide some protective cover, but it is not impenetrable.

Here’s what the FTC blog mentions:

  • “In an investigation, FTC technologists and others can look under the hood and analyze other materials to see if what’s inside matches up with your claims.”

In that sense, whether or not your use of “AI” strictly adheres to an accepted definition of AI, you will nonetheless be held to the claims made about whatever the software was proclaimed to be able to do.

I appreciated this added comment that followed the above point in the FTC blog:

  • “Before labeling your product as AI-powered, note also that merely using an AI tool in the development process is not the same as a product having AI in it.”

That is a subtle point that many would not have perhaps otherwise considered. Here’s what it suggests. Sometimes you might make use of an AI-augmented piece of software when developing an application. The actual targeted app will not contain AI. You are simply using AI to help you craft the app.

For example, you can use ChatGPT to generate programming code for you. The code that is produced won’t necessarily have any AI components in it. Your app won’t be reasonably eligible to claim that it contains AI per se (unless, of course, you opt to include some form of AI techniques or tech into it). You could possibly say that you used AI to aid in writing the program. Even this needs to be said mindfully and cautiously.
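To see why, consider a hypothetical snippet of the sort ChatGPT might generate on request. The code is perfectly ordinary and contains no AI whatsoever, even though AI helped write it:

```python
# Hypothetical example of code that a generative AI might produce:
# a plain utility function with no AI components in it. Shipping an
# app containing this code would not make the app "AI-powered".
def dedupe_preserve_order(items):
    """Return the list with duplicates removed, keeping first occurrences."""
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

print(dedupe_preserve_order([3, 1, 3, 2, 1]))  # prints [3, 1, 2]
```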

FTC Also Serves Up Words Of Warning About AI Biases

The FTC blog on the topic of AI biases that I mentioned earlier provides some helpful warnings that I believe are quite worthwhile to keep in mind (I’ll list them in a moment).

When it comes to generative AI, there are four major concerns about the pitfalls of today’s capabilities:

  • Errors
  • Falsehoods
  • AI Hallucinations
  • Biases

Let’s take a brief look at the AI biases concerns.

Here is my extensive list of biasing avenues that need to be fully explored for any and all generative AI implementations (discussed closely at the link here):

  • Biases in the sourced data from the Internet that was used for data training of the generative AI
  • Biases in the generative AI algorithms used to pattern-match on the sourced data
  • Biases in the overall AI design of the generative AI and its infrastructure
  • Biases of the AI developers either implicitly or explicitly in the shaping of the generative AI
  • Biases of the AI testers either implicitly or explicitly in the testing of the generative AI
  • Biases of the RLHF (reinforcement learning from human feedback) either implicitly or explicitly by the assigned human reviewers imparting training guidance to the generative AI
  • Biases of the AI fielding facilitation for the operational use of the generative AI
  • Biases in any setup or default instructions established for the generative AI in its daily usage
  • Biases purposefully or inadvertently encompassed in the prompts entered by the user of the generative AI
  • Biases of a systemic condition versus an ad hoc appearance as part of the random probabilistic output generation by the generative AI
  • Biases arising as a result of on-the-fly or real-time adjustments or data training occurring while the generative AI is under active use
  • Biases introduced or expanded during AI maintenance or upkeep of the generative AI application and its pattern-matching encoding
  • Other

As you can see, there are lots of ways in which undue biases can creep into the development and fielding of AI. This is not a one-and-done kind of concern. I liken this to a whack-a-mole situation. You need to be diligent, at all times attempting to discover and expunge or mitigate the AI biases in your AI apps.

Consider these judicious points made in the FTC blog of April 19, 2021 (these points do all still apply, regardless of their being age-old in terms of AI advancement timescales):

  • “Start with the right foundation”
  • “Watch out for discriminatory outcomes”
  • “Embrace transparency and independence”
  • “Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results”
  • “Tell the truth about how you use data”
  • “Do more good than harm”
  • “Hold yourself accountable – or be ready for the FTC to do it for you”

One of my favorites of the above points is the fourth one listed, which refers to the oft-used claim or myth that, due to incorporating AI, a given app must be unbiased.

Here’s how that goes.

We all know that humans are biased. We somehow fall into the mental trap that machines and AI are able to be unbiased. Thus, if we are in a situation whereby we can choose between using a human versus AI when seeking some form of service, we might be tempted to use the AI. The hope is that AI will not be biased.

This hope or assumption can be reinforced if the maker or fielder of the AI proclaims that their AI is indubitably and inarguably unbiased. That is the comforting icing on the cake. We already are ready to be led down that primrose path. The advertising cinches the deal.

The problem is that there is no particular assurance that the AI is unbiased. The AI maker or AI fielder might be lying about the AI biases. If that seems overly nefarious, let’s consider that the AI maker or AI fielder might not know whether or not their AI has biases, but they decide anyway to make such a claim. To them, this seems like a reasonable and expected claim.

The FTC blog indicated this revealing example: “For example, let’s say an AI developer tells clients that its product will provide ‘100% unbiased hiring decisions,’ but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination – and an FTC law enforcement action” (ibid).

The Actions You Need To Take About Your AI Advertising Ploys

Companies will sometimes get themselves into potential hot water because one hand doesn’t know what the other hand is doing.

In many companies, once an AI app is ready to be released, the marketing team will be given scant information about what the AI app does. The classic line is that the AI details are just over their heads and they aren’t tech-savvy enough to understand it. Into this gap comes the potential for outlandish AI advertising. The marketers do what they can, based on whatever morsels or tidbits are shared with them.

I am not saying that the marketing side was hoodwinked, only that there is often a gap between the AI development side of the house and the marketing side. Of course, there are occasions when the marketing team is essentially hoodwinked. The AI developers might brag about proclaimed super-human AI capabilities, which the marketers presumably have no meaningful way to refute or question. We can consider other calamitous permutations. It could be that the AI developers were upfront about the limitations of the AI, but the marketing side opted to add some juice by overstating what the AI can do. You know how it is, those AI techies just don’t understand what it takes to sell something.

Somebody has to be a referee and make sure that the two somewhat disparate departments have a proper meeting of the minds. The conceived advertising will need to be based on foundations that the AI developers ought to be able to provide evidence or proof of. Furthermore, if the AI developers are imbued with wishful thinking and already drinking the AI Kool-Aid, this needs to be identified so that the marketing team doesn’t get blindsided by overly optimistic and groundless notions.

In some firms, the role of a Chief AI Officer has been floated as a possible connection to make sure that the executive team at the highest levels is considering how AI can be used within the firm and as part of the company’s products and services. This role also would hopefully serve to bring together the AI side of the house and the marketing side of the house, rubbing elbows with the head of marketing or Chief Marketing Officer (CMO). See my discussion about this emerging role, at the link here.

Another very important role needs to be included in these matters.

The legal side of the house is equally crucial. A Chief Legal Officer (CLO) or head counsel or outside counsel ought to be involved in the AI facets throughout the development, fielding, and marketing of the AI. Sadly, the legal team is often the last to know about such AI efforts. A firm that is served with a legal notice as a result of a lawsuit or federal agency investigation will suddenly realize that maybe the legal folks should be involved in their AI deployments.

A smarter approach is to include the legal team before the horse is out of the barn. Long before the horse is out of the barn. Way, way earlier. For my coverage on AI and legal practices, see the link here and the link here, for example.

A recent online posting entitled “Risks Of Overselling Your AI: The FTC Is Watching” by the law firm Debevoise & Plimpton (a globally recognized international law firm, headquartered in New York City), written by Avi Gesser, Erez Liebermann, Jim Pastore, Anna R. Gressel, Melissa Muse, Paul D. Rubin, Christopher S. Ford, Mengyi Xu, and with a posted date of March 6, 2023, provides a notably insightful indication of actions that firms should be undertaking about their AI efforts.

Here are some selected excerpts from the blog posting (the full posting is at the link here):

  • “1. AI Definition. Consider creating an internal definition of what can be appropriately characterized as AI, to avoid allegations that the Company is falsely claiming that a product or service utilizes artificial intelligence, when it merely uses an algorithm or simple non-AI model.”
  • “2. Inventory. Consider creating an inventory of public statements about the company’s AI products and services.”
  • “3. Education: Educate your marketing compliance teams on the FTC guidance and on the issues with the definition of AI.”
  • “4. Review: Consider having a process for reviewing all current and proposed public statements about the company’s AI products and services to ensure that they are accurate, can be substantiated, and do not exaggerate or overpromise.”
  • “5. Vendor Claims: For AI systems that are provided to the company by a vendor, be careful not to merely repeat vendor claims about the AI system without ensuring their accuracy.”
  • “6. Risk Assessments: For high-risk AI applications, companies should consider conducting impact assessments to determine foreseeable risks and how best to mitigate those risks, and then consider disclosing those risks in external statements about the AI applications.”

Having been a top executive and global CIO/CTO, I know how important the legal team is to the development and fielding of internal and externally facing AI systems, including when licensing or acquiring third-party software packages. Especially so with AI efforts. The legal team needs to be embedded or at least considered a close and enduring ally of the tech team. There is a plethora of legal landmines related to any and all tech, and markedly so for AI that a firm decides to build or adopt.

AI is nowadays at the top of the list of potential legal landmines.

The dovetailing of the AI techies with the marketing gurus and with the legal barristers is the best chance you have of doing things right. Get all three together, continuously rather than belatedly or as a one-time affair, so they can figure out a marketing and advertising strategy and deployment that garners the benefits of the AI implementation. The aim is to minimize the specter of the long arm of the law and costly, reputationally damaging lawsuits, while also maximizing the suitably fair and balanced acclaim that the AI substantively provides.

The Goldilocks principle applies to AI. You want to tout that the AI can do great things, assuming that it can and does, demonstrably backed up by well-devised evidence and proof. You don’t want to inadvertently shy away from whatever value the AI adds, since that undercuts the AI’s additive properties. And, at the other extreme, you certainly do not want to make zany boastful ads that go off the rails and make claims that are unfounded and open to legal entanglements.

The soup has to be just at the right temperature. Achieving this requires ably-minded and AI-savvy chefs from the tech team, the marketing team, and the legal team.

In a recent posting by the law firm Arnold & Porter (a well-known multinational law firm with headquarters in Washington, D.C.), Isaac E. Chao and Peter J. Schildkraut wrote a piece entitled “FTC Warns: All You Need To Know About AI You Learned In Kindergarten” (posted date of March 7, 2023, available at the link here), and made this crucial cautionary emphasis about the legal liabilities associated with AI use:

  • “In a nutshell, don’t be so taken with the magic of AI that you forget the basics. Deceptive advertising exposes a company to liability under federal and state consumer protection laws, many of which allow for private rights of action in addition to government enforcement. Misled customers—especially B2B ones—might also seek damages under various contractual and tort theories. And public companies have to worry about SEC or shareholder assertions that the unsupported claims were material.”

Realize that even if your AI is not aimed at consumers, you aren’t axiomatically off-the-hook as to potential legal exposures. Customers that are businesses can decide too that your AI claims falsely or perhaps fraudulently misled them. All manner of legal peril can arise.

Conclusion

A lot of people are waiting to see what AI advertising-related debacle arises from the existing and growing AI frenzy. Some believe that we need a Volkswagen-caliber exemplar or an L’Oréal-stature archetype to make everyone realize that cases of outrageously unfounded claims about AI are not going to be tolerated.

Until a big enough legal kerfuffle over out-of-bounds AI advertising gets widespread attention on social media and in the everyday news, the worry is that the AI boasting bonanza is going to persist. The marketing of AI is going to keep climbing the ladder of outlandishness. Higher and higher it goes. Each next AI claim is going to have to one-up the ones before it.

My advice is that you probably do not want to be the archetype and land in the history books for having gotten caught with your hand in the AI embellishment cookie jar. Not a good look. Costly. Possibly could ruin the business and associated careers.

Will you get caught?

I urge that if you are mindful of what you do, getting caught won’t be a nightmarish concern since you will have done the proper due diligence and can sleep peacefully with your head nestled on your pillow.

For those of you that aren’t willing to follow that advice, I’ll leave the last word for this mild forewarning remark in the FTC blog of February 27, 2023: “Whatever it can or can’t do, AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.”

Well, I suppose one could use AI to aid in steering clear of unlawful AI advertising, but that’s a narrative for another day. Just keep in mind to be thoughtful and truthful about your AI. That, and ensure that you’ve got the best legal beagles diligently providing their devoted legal wisdom on these matters.

Source: https://www.forbes.com/sites/lanceeliot/2023/03/12/federal-trade-commission-aims-to-bring-down-the-hammer-on-those-outsized-unfounded-claims-about-generative-ai-chatgpt-and-other-ai-warns-ai-ethics-and-ai-law/