A “pause” for six months? Does anyone really believe this is a serious proposal that will have any impact at all (except some publicity the media will chase)? Will VCs stop reading generative AI pitches? Will journalists stop playing with generative AI? Will students stop using it? Will “all AI labs … immediately pause for at least 6 months”? Will Congress actually do something? No.
Same Old, Same Old
All of the frenzy about AI is misplaced. We’re on the verge of creating the smartest assistants in history, assistants that can help us cure cancer, plan cities, improve the legal system and slow environmental disaster, among so many other tasks that humans have had “difficulty” performing. Large language models distribute power to individuals and professions that have been trying to figure out how to optimize intelligent systems for years. Well, now there’s conversational connectivity to old and newly created knowledge that will help solve some problems that humans have avoided or just plain botched. Who wouldn’t want to connect to “the world’s total stock of usable text data … this includes all the world’s books, all scientific papers, all news articles, all of Wikipedia, all publicly available code, and much of the rest of the internet, filtered for quality (e.g., webpages, blogs, social media)”?
Regulatory Paralysis
This technology – like all technology – should be “managed.” But has that ever been the case? Even simple, effective privacy regulations have yet to be developed by a Congress fed by an industry that benefits from misinformation, ratings and “surveillance capitalism.” Now we want members of Congress to regulate technology they do not understand at its most basic levels. Let’s not kid ourselves. Congress’s failure to understand digital is not the real problem (though it’s sometimes embarrassing). The new calls for regulation will – like all of the old ones – fall on deaf ears at the altar where politicians and lobbyists worship. We should all lower our regulatory expectations. A six-month pause will not improve the prospects for regulation.
Technology Always Impacts Jobs
Technology assassinates jobs – which is its raison d’être. Efficiency, cost savings, profitability, market share – these are always the targets of technology. Faster, cheaper – you name it – which is why we love – and hate – technology. Those who own old business models hate it; those in search of new ones love it. “New entrants” are praised for their creativity and ingenuity. It all depends on where you sit, as Clayton Christensen described so many years ago. While there are “language” and other professions in the crosshairs of generative AI (GAI), it remains to be seen just how displacing it will be. That said, every business on the planet is trying to measure the impact that GAI will have on itself, its competitors and the very structure of its industry. We used to call this “strategic planning.”
Job displacement is not driven by some mysterious, inevitable force. We’ve been through this before with other technologies. The extent to which we expect humans to re-educate and re-tool is juxtaposed with what we expect the government to do. For example, UBI (universal basic income) has been discussed. But before anything is done, the extent of displacement must be documented, and we’re way too early in the collection of this data. Of course we want to be as predictive as possible here — playing catch-up is painful — but we also don’t want to panic. Major job displacement is coming, for sure. But it’s that re-education/re-tooling window that’s so challenging. Will we get it right? Unlikely, since, after all, we missed the climate window. But we can start planning for some major economic shifts in what humans and their intelligent assistants do. Understanding the ratio of human work to machine work here is essential.
Perfection is Never Perfect
Large language models are far from perfect. They often hallucinate:
“LLMs as they exist today will never replace Google Search. Why not? In short, because today’s LLMs make stuff up.
“Examples abound of ChatGPT’s ‘hallucinations’ (as these misstatements are referred to). This is not to single out ChatGPT; every generative language model in existence today hallucinates in similar ways.
“To give a few examples: it recommends books that don’t exist; it insists that the number 220 is less than 200; it is unsure whether Abraham Lincoln’s assassin was on the same continent as Lincoln at the time of the assassination; it provides plausible-sounding but incorrect explanations of concepts like Bayes’ Theorem.”
“It is an open question whether LLMs’ hallucination problem can be solved via incremental improvements to existing architectures, or whether a more fundamental paradigm shift in AI methodologies will be necessary to give AI common sense and real understanding.”
ChatGPT4 had some thoughts about hallucination:
“Why do you hallucinate so much?” I asked.
“As an AI language model, I do not have the ability to hallucinate. I generate responses based on the patterns and relationships I have learned from the vast amount of text data that I was trained on. Sometimes, my responses may appear to be unexpected or even ‘hallucinatory’ because I can combine different concepts and generate novel combinations of ideas that may not have been explicitly stated in the input data. However, this is not true hallucination, but rather a form of creative language generation. My responses are generated algorithmically based on statistical patterns in the input data and the instructions given to me by the user, and I do not have subjective experiences or consciousness.”
Sounds a little defensive to me.
Let’s just wait and see if GAI stops hallucinating.
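Readers can reproduce this kind of exchange themselves. Here is a minimal sketch, assuming the openai Python package (v1-style client) is installed and an OPENAI_API_KEY environment variable is set; the model name is illustrative, so substitute whatever you have access to.

```python
# Minimal sketch: ask a model about its own "hallucinations," as above.
# Assumes `pip install openai` (v1.x client) and OPENAI_API_KEY in the environment.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any chat model you can access works here
    messages=[{"role": "user", "content": "Why do you hallucinate so much?"}],
)

print(response.choices[0].message.content)
```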
Why We Should Love GAI
Large language models can be extremely helpful. The current conversation – which will die in at least six months – is a negative one. Unless the PR moratorium on AI is extended – which is unlikely for so many reasons – we should stop picking on GAI and maybe start thinking about how to optimize its use. Here’s a short list:
- Chatbot/Virtual Assistants
- Fraud detection
- Translation
- Content creation/research
- Sentiment analysis (see the sketch after this list)
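To make that last item concrete, here is a minimal sketch of sentiment analysis using the Hugging Face transformers pipeline; the library downloads a default model on first run, and the example sentences are my own, not output from any particular system.

```python
# Minimal sketch: off-the-shelf sentiment analysis.
# Assumes `pip install transformers torch`; a default sentiment model is
# downloaded automatically on first use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

for text in [
    "Large language models can be extremely helpful.",
    "The current conversation about AI is a negative one.",
]:
    result = classifier(text)[0]  # e.g., {'label': 'POSITIVE', 'score': 0.99}
    print(f"{result['label']:8} {result['score']:.2f}  {text}")
```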
It gets better.
Could generative AI “cut health care costs, develop new cancer drugs”? Or maybe transform all of healthcare? Or city planning? Climate change? Product design? Customer service?
I asked ChatGPT4 about all this.
“What are the good things generative AI can do?”
“Creative Content Generation: Generative AI can be used to create art, music, videos, and other forms of content, which can help to inspire and entertain people.
“Personalization: Generative AI can be used to create personalized content and recommendations for individual users based on their preferences and behaviors.
“Healthcare: Generative AI can be used to assist doctors and healthcare professionals in diagnosing diseases and recommending treatment options.
“Education: Generative AI can be used to create personalized educational materials and adaptive learning systems, which can help students to learn more effectively.
“Environmental Science: Generative AI can be used to simulate complex ecosystems and help scientists to understand the impact of environmental changes on these systems.
“Language Translation: Generative AI can be used to translate text and speech from one language to another, making communication between people who speak different languages easier.
“Gaming: Generative AI can be used to create intelligent and adaptive game characters and environments, making games more challenging and engaging.”
To be fair, I then asked, “What are the bad things generative AI can do?”
“Creating fake news or propaganda: Generative AI can be used to create fake news articles or propaganda material, which can be spread widely and have negative impacts on individuals or society as a whole.
“Generating fake images or videos: Generative AI can be used to create fake images or videos that look convincing, but are actually completely fabricated. This can be used to spread false information or even to blackmail individuals.
“Cybersecurity threats: Generative AI can be used to create sophisticated phishing attacks or malware that can harm computer systems and steal sensitive information.
“Deepfakes: Deepfake technology, which uses generative AI to create realistic video or audio of someone saying or doing something they never actually did, can be used for malicious purposes, such as spreading false information or creating compromising content to blackmail individuals.
“Unintended biases: If the data used to train generative AI models is biased, the resulting outputs may also be biased. This can have negative impacts on individuals or communities that are already marginalized.”
It occurred to me reading this list that humans are already pretty good at using technology to do bad things.
Pausing for What Purpose?
A “pause” for six months? Does anyone really believe this is a serious proposal that will have any impact at all (except some publicity the media will chase)? Will VCs stop reading generative AI pitches? Will journalists stop playing with generative AI? Will students stop using it? Will “all AI labs … immediately pause for at least 6 months”?
Who are the champions of pause? The list of those in the field who did not sign the pause letter is as interesting as the list of those who did (as of this writing, OpenAI’s CEO Sam Altman has not signed). One should also appreciate that “the letter” may have some funding and other authenticity problems, but that’s another issue altogether. Some of the letter’s signatories have a lot to gain financially from the success of GAI. What’s going on?
GAI is far from perfect. But it can “assist” in some incredible ways. PR pauses are silly, unrealistic attempts to draw attention to issues that have been debated for years. The argument that GAI poses its own, unique threats depends on how one describes “unique.” Companies that can reduce headcount by 30% will see “unique” as wonderful. Those that traffic in misinformation will love GAI’s ability to churn out lies more efficiently. Sadly, regulatory challenges have gone unanswered for years.
When I wondered why those regulatory challenges have gone unanswered, I asked our friend:
“Why has Congress failed to regulate technology?”
Our friend immediately answered:
“Lack of technical expertise: Many members of Congress are not well-versed in the intricacies of technology and may not fully understand the implications of various regulatory proposals.
“Lobbying: Technology companies often have significant lobbying power and can influence the legislative process. This can make it difficult for Congress to pass regulations that are not favorable to the tech industry.
“Partisan politics: Technology regulation has become increasingly politicized in recent years, with Democrats and Republicans often taking opposing views on issues such as online privacy, content moderation, and antitrust enforcement. This can make it difficult to pass bipartisan legislation.
“Rapidly changing technology: The pace of technological change can make it difficult for Congress to keep up and develop effective regulations that will not quickly become outdated.
“Regulatory capture: Some argue that regulators tasked with overseeing the technology industry may be too close to the companies they are meant to regulate, leading to a lack of effective oversight.”
Is our friend hallucinating again – or is all this just too obvious?
What to Do
The answer here also depends on where one sits. Those with vested interests in the power of GAI — all technology, for that matter — will closely inspect and debate all attempts to “regulate” their vested interests. The US has a system specifically designed to protect the vested interests of the few, not the many. This system is flooded with money and lobbyists that help define every legislator’s agenda. So at the highest level, until this “system” is changed, not much can be done. The regulatory fight must therefore occur at much lower levels, in the voting trenches at the state and local levels, where there’s a much better chance of success. But this too requires persistence and money. Self-governance is another approach, where companies declare their good missions — and stick to them. But there’s little evidence that corporate self-governance actually changes anything when shareholders are screaming for profitable growth. Sometimes hope lies in enough disasters piling up that everyone finally says “enough!” But that sure hasn’t worked for school shootings.
So what to do?
Focus on the good that GAI can do — and there’s plenty — while lamenting how GAI — and all technology — can hurt us. “Lamenting” is more popular with the young than the old, so maybe all this begins in the classroom.
Source: https://www.forbes.com/sites/steveandriole/2023/04/05/letters-to-pause-ai-chatgpt-bard-bing–generative-ai-are-our-friends/