Opinion: AI Chatbots Cannot Be Trusted

In its special section on AI (artificial intelligence) published on Nov. 3, The Advocate limited the debate around AI to the theft of work (especially artists’ work) and privacy.

The problem with this framing is that generative AI threatens the livelihoods of many people beyond “artists.” It could take jobs away from artists, writers, and journalists – jobs a chatbot cannot actually do, because chatbots excel at producing persuasive nonsense, not the truth. There is a better way to approach AI: We could use chatbots to study how people use language. Instead, tech companies have designed chatbots to teach users language and “truth” – even though, as Karen Weise and Cade Metz report in their New York Times article, “When AI Chatbots Hallucinate,” an internal Microsoft memo warns that the company’s chatbot is “built to be persuasive, not truthful.”

AI cannot replace journalists. Without writers, specifically journalists, the truth is left unwritten. Ken Perez implies in his article, “Using AI Tools to Grow Your Business,” that AI will bring an era of increased education and will do the research and thinking for users. He says that “companies that embrace it early will likely reap the greatest benefits.” Tony Acker’s article, “AI is Beneficial to Society And Can Be Used For Good,” argues that AI can help us access and learn about the world, and build a more sustainable one. However, his faith in large language models and their chatbot successors is misplaced.

One problem with this focus on the implications of AI artwork is that it leaves out, as Justin Pot writes for The Atlantic magazine in “Google’s New Search Tool Could Eat The Internet Alive,” that AI chatbots, which generate text in response to a user query, seem likely to send the internet into a death spiral.

“Google’s AI doom loop may lead us into a much smaller version of the internet, with fewer sites, fewer posts – and thus a worse experience for all of us,” writes Pot.

And thus ends the era of digital media, because of the theft of something of more than [mere] aesthetic value: the truth in a writer’s words. Because chatbots do not originate statements of any factual value, they plagiarize and rely on the works of others, and end up democratizing what is true, leaving us all guessing. And as the title of Jackie Lacroix’s Inkstick Media article says, “AI Will Make Extremists More Effective, Too”: AI can be weaponized against society as much as it could benefit it.

The way we are using language models ignores what they actually excel at: showing the relationships between words. That is what we should be using them for – studying what they have learned about how people actually use language.

The way chatbots are being designed makes them, at best, a waste of energy and, at worst, harmful to users’ ability to find the truth.

The Advocate did a great job of promoting tech companies, but little else.
