May 16, 2023 | Reading Time: 3 minutes

AI isn’t what it’s said to be

ChatGPT and others are fancy autocomplete.


Investor demand for stocks in companies that tout “Artificial Intelligence” accounts for all the US stock market gains this year. 

A steady stream of hucksters, whose incomes invariably depend on more venture capital funding for machine learning, are lining up to tell us how the technology is poised to reshape everything from movies to medicine to romance.

But “artificial intelligence” is a misnomer. 

Services like ChatGPT are not intelligent, and describing them as such sets us up to be suckered by tech companies. 

Today’s machine learning tech isn’t much smarter than the “cybernetic tortoises” and “hardware mice” of the mid-20th century. The quest for machine intelligence goes back to the 1950s. The AI industry has gone through repeated hype-disappointment-shame cycles since that time. 

“AI is a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with an environment, problem solving and even exercising creativity,” kvells McKinsey & Company, the management firm whose high-paid consultants have finessed the destruction of whole sectors of the economy and eased the paths of authoritarian regimes worldwide. 

This is false. 

Today’s “AI” doesn’t see or learn or remember in the way that sentient beings do, and it certainly doesn’t use creativity. 

Neural nets probe huge datasets and find the patterns that we tell them to look for. ChatGPT and its ilk are fancy autocomplete. There’s no insight behind their output. They don’t know what’s true, or even what words mean. Which is why bots like ChatGPT generate polished nonsense backed by manufactured sources. 
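To see what “fancy autocomplete” means, consider a deliberately crude sketch. Real systems like ChatGPT use vastly larger models, but the objective is the same in spirit: predict the next word from patterns observed in training text, with no notion of truth or meaning. (The corpus and function names below are illustrative, not anything from an actual product.)

```python
from collections import Counter, defaultdict

# Tiny toy corpus standing in for the web-scale text a real model trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: the simplest possible "language model".
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(autocomplete("the"))  # prints "cat" -- it follows "the" most often here
```

The program has no idea what a cat is; it only knows which strings tend to follow which. Scaling that idea up produces fluent text, not understanding.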


You don’t need a neural net to see this pattern: a new advance offers marginal improvements, which are shamelessly oversold; consumers are disappointed; and anticipation gives way to a sense of betrayal. 

Whereupon “AI” becomes a dirty word for the next decade, only for the cycle to repeat, thanks to new discoveries and short memories. The cycle is so predictable that researchers coined the term “AI Winter” to describe the alienation and embitterment phase of the cycle. 



The more tech CEOs talk up the brilliance of their algorithms, the more predisposed we are to regard whatever they churn out as smart. 

The so-called Barnum Effect that keeps fortune tellers in business may account for our willingness to perceive the generic outputs of machines as intelligent.

When you tell people that a paragraph of generic platitudes comes from a personalized astrological reading, they will marvel at the precision and insight of the pronouncements.

In truth, we ascribe our own meanings to vague statements, but we ascribe the sense-making to the imaginary agent, be it the Amazing Zoltan or ChatGPT. 

Tech CEOs want to give the algorithms all the credit, but their apparent brilliance is actually piggy-backing on the acumen of countless low-wage workers. The startup behind ChatGPT relies on an army of low-paid human contractors to label the data used to train the system. 

If you’re a boss, replacing your copywriter with “artificial intelligence” sounds a lot better than entrusting your brand to a neural network that literally has no idea what it’s spitting out.

The CEO of the Emirates airline predicted that pilots might one day be replaced by machines. It sounds a lot better to say you want “artificial intelligence” to land the plane than that you’re prepared to let autocomplete take the wheel. 

Maybe one day we’ll have true machine minds, but we’re nowhere near that yet. And priming people to expect intelligence is setting them up to vastly overrate what the current tech is capable of — in ways that can be dangerous.

Seventy years on, “artificial intelligence” is just a marketing buzzword and you’re doing the tech companies’ promotions for them when you use it uncritically.


Lindsay Beyerstein covers legal affairs, health care and politics for the Editorial Board. An award-winning documentary filmmaker, she’s a judge for the Sidney Hillman Foundation. Find her @beyerstein.
