LLM AIs Have Turned Into a Classic Moral Panic
A “moral panic” refers to a widespread fear, often irrational, that emerges within a society over a perceived threat to its moral values, social order, or safety. Moral panics typically involve exaggerated concerns about the behavior or activities of a particular group or phenomenon, which are often portrayed in the media as posing a danger to society. These panics can lead to public outcry, increased policing or regulation, and sometimes even social stigma or discrimination against the targeted group or behavior. However, moral panics are often based more on societal perceptions and anxieties rather than objective evidence of harm.
OpenAI. (2024). ChatGPT (3.5) [Large language model]. https://chat.openai.com
There are a lot of problems with the current batch of Large Language Model AIs such as OpenAI’s ChatGPT, Google’s Bard & Gemini, Anthropic’s Claude, and Meta’s Llama.
But lordy, I can’t glance at or listen in on a legacy media broadcast without somebody somewhere earnestly hand-wringing and catastrophizing about what AI is doing to their kids, democracy, the public good, human relationships, etc., etc.
It’s a smoke & mirrors game that has made discussing the pertinent, useful problems with the technology nearly impossible. And it’s keeping too many of the people who would benefit most from the technology from learning & adopting it.
And yet, the moral panic is also totally normal at this stage of the adoption curve. Every technology that changes how humans communicate with each other has created a moral panic. One of my favorite newsletters is the Pessimist’s Archive. Go subscribe, but here’s a brief tour of how people have reacted to new media technologies in the recent past, because so many of the same questions come up again and again…
Won’t AI kill human creativity and make everything awful?
Won’t AI just suck away all human attention and flood the world with fluff?
Won’t AI remove all skills from all professions? What’s the point anyway?
Won’t AI deceive everyone and generate misinformation and ruin schools?
Don’t we need to give strict regulatory approval to all AI models?
Won’t AI just corrupt everyone’s minds and ruin the Internet?
Don’t AI makers have responsibility for the outcomes of how people use their models?
Won’t AI ruin our democracy and make all outcomes predictable?
How can we handle an intelligence that’s not human? What if it’s wrong?!
What if amateurs use AI to create articles and products and services?
What if AI just pulls us all into our own little worlds so we never interact with humans ever again?!?
AI Is a Tool. Humans Have The Problems.
In my own life, ChatGPT has completely turned things upside down. It’s made 80% of my business (digital marketing) irrelevant. But in all honesty, from a 30,000-foot view of the economy, that’s fine. The remaining 20% of my business is what actually generated new, real value for clients. That’s a business model problem, not a technology problem.
And the same goes for all the societal problems people are wringing their hands over. All these “AI” problems are not unique to AI. They are unique to humans.
Copyright?
We’ve been hotly debating how to balance rewarding creators while ensuring the free flow of information since before 1709 and the Statute of Anne. And we are still hotly debating it now – even outside of AI.
Misinformation?
I mean, the quote I always think of is usually attributed to Mark Twain.
A lie can travel half way around the world while the truth is putting on its shoes.
Mark Twain?
Also, “You shall not bear false witness against your neighbor” is literally one of the Ten Commandments. And the Code of Hammurabi prescribes the death penalty for false accusations. So I think we’ve been dealing with the spreading of lies for quite a while before ChatGPT came around.
Democracy? Conspiracy Theories?
I just finished reading Empire of Liberty by Gordon Wood (about the early US republic) and The Craft by John Dickie (about the Freemasons). Democracy has always had problems, and conspiracy theories had far more dire consequences pre-technology than they do now. Also, the whole world has access to the same technology (even in a free version), yet some people are seeing much more success with it than others.
AI has real issues – mainly around speed, scale, cost, and process. But every new technology has had similar issues. I hope the moral panic calms down so we can go down the punch list of issues and solve them.
But the best way? Use the new technology, get familiar with it, and put it to real use.