ChatGPT shows how far AI has come and its acute limitations — like being right

While the new chatbot has managed to avoid some of the problems of its predecessors, it’s not perfect.

THE NEWS

ChatGPT is the conversational chatbot from OpenAI that has taken the internet by storm, with 1 million people signing up to use it in the five days after its November launch. Chatbots aren’t new (the first, ELIZA, came out in 1966), but ChatGPT is the shiniest and most advanced yet to be released to the public.

What makes it special? People have used previous chatbots to write essays or hold conversations. ChatGPT takes models such as Microsoft’s Tay and Meta’s BlenderBot 3 to the next level: it can take a guess at how to correct computer code and write humorous sonnets.

It can engage in a wide array of tasks, from writing sample tweets to jokes, essays and even code.

THE CONTEXT

While it is far more advanced than chatbots released as recently as earlier this year, it shares some of their old problems when it comes to the accuracy of its responses and how it reflects some of the less desirable aspects of humanity.

One way ChatGPT is different, however, is that it doesn’t start from a blank slate (or “stateless”) every time you use it; rather, it remembers what you’ve asked or told it earlier in the conversation. It builds on those earlier exchanges and corrects itself based on that information.
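
In practice, that kind of memory is usually implemented by resending the running transcript with each new prompt. Here is a minimal sketch of the pattern; the `complete()` function is a hypothetical stand-in for a real language-model call, not OpenAI’s API.

```python
# A minimal sketch of the "not stateless" dialogue pattern described above.
# `complete()` is a hypothetical stand-in for a real language-model call;
# the point is that the full transcript is resent with every turn.

def complete(transcript: str) -> str:
    """Hypothetical LLM call; a real system would query the model here."""
    return f"(model reply to: {transcript.splitlines()[-1]})"

history: list[str] = []

def chat(user_message: str) -> str:
    """Append the user's turn, send the whole history, record the reply."""
    history.append(f"User: {user_message}")
    reply = complete("\n".join(history))  # earlier turns travel along
    history.append(f"Assistant: {reply}")
    return reply

print(chat("Write a sonnet about software bugs."))
print(chat("Now make it funnier."))  # only works because history is resent
```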

“The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” reads the OpenAI page introducing ChatGPT.

This ability to iterate has made the new bot appealing to those put off by the controversy over Tay, which made news in 2016 when it began sending out racist tweets and citing Adolf Hitler based on input from conversations with users.

— Benjamin Powers

TECHNOLOGY LENS

The most advanced chatbot out there — with room for improvement

The breadth of material ChatGPT can create is what has seemingly captivated the public’s attention. ChatGPT can write a 10-paragraph history of London in the style of Dr. Seuss or create a new “Seinfeld” scene in which Jerry needs to learn a bubble sort algorithm. (For those wondering, that’s a simple sorting algorithm; see the sketch below.) If that last one doesn’t sound funny enough, you can even tell ChatGPT to “make it funnier.”
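
For readers curious what Jerry would be studying, bubble sort is short enough to show in full. A standard Python version, included purely for illustration:

```python
# A standard bubble sort: repeatedly sweep the list, swapping any adjacent
# pair that is out of order, until a full pass makes no swaps.

def bubble_sort(items: list) -> list:
    """Sort a list in place by repeatedly swapping adjacent elements."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):  # the last i items are already settled
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # a clean pass means the list is sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```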

It is built on a model from OpenAI’s GPT-3.5 series of large language models. Large language models ingest copious amounts of text from the web, generally huge scrapes of the internet at large or of specific websites.

Its training relies on a technique known as reinforcement learning from human feedback, a sort of machine-human dialogue: people rate or rank the model’s responses, and the model adapts toward the answers those reviewers prefer.
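
Here is that feedback loop in miniature, heavily simplified. The sketch below is not OpenAI’s method; real RLHF trains a neural reward model on human preference rankings and fine-tunes the language model against it with reinforcement learning. All names are hypothetical, and the point is only the core idea: human ratings steer which response a system prefers.

```python
# Toy illustration of learning from human feedback. All names here are
# hypothetical; real RLHF trains a reward model on preference rankings
# and fine-tunes the language model against it.

# Canned candidates standing in for a language model's possible outputs.
CANDIDATES = {
    "tell me a joke": [
        "I don't know any jokes.",
        "Why did the chicken cross the road? To get to the other side.",
    ],
}

# Running preference scores learned from ratings (a stand-in for the
# "reward model").
scores: dict[str, float] = {}

def respond(prompt: str) -> str:
    """Return the candidate with the best human-feedback score so far."""
    return max(CANDIDATES[prompt], key=lambda r: scores.get(r, 0.0))

def rate(response: str, rating: float) -> None:
    """A human rates a response; its score nudges toward that rating."""
    old = scores.get(response, 0.0)
    scores[response] = old + 0.5 * (rating - old)

# The loop: respond, collect human feedback, adapt.
print(respond("tell me a joke"))          # ties at 0.0, first candidate wins
rate("I don't know any jokes.", -1.0)     # the human dislikes the non-answer
rate("Why did the chicken cross the road? To get to the other side.", 1.0)
print(respond("tell me a joke"))          # now the actual joke is preferred
```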

Unlike some earlier chatbots, ChatGPT explicitly rejects inappropriate phrases or questions, such as those relating to antisemitism and the creation of violent content. That being said, some users have been able to get around this by posing queries as hypotheticals.

OpenAI is upfront about ChatGPT’s limitations, though, noting that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” So while the material might come out sounding original and authentic, there is a good chance it is also wrong. ChatGPT failed to solve certain math problems, for example, and when asked to create a biography of a historical figure, it got some dates incorrect.

These sorts of kinks are expected; mass testing is one reason OpenAI releases such systems into the wild, so that it can patch and improve them in future releases.

— Benjamin Powers

ETHICS LENS

The potential for misinformation

While ChatGPT has better built-in safeguards to help it avoid offensive responses and the spread of misinformation, it’s no saint-bot.

OpenAI uses a content moderation tool, the Moderation endpoint, to help protect against possible misuse, but the chatbot may still “respond to harmful instructions or exhibit biased behavior,” the company says on its website.
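
For a sense of what such a check looks like from the developer side, here is a sketch of a call to the Moderation endpoint using the `openai` Python package as documented at the time; the example input and handling are illustrative, and field names may differ in newer library versions.

```python
# Sketch: screening text with OpenAI's Moderation endpoint via the pre-1.0
# `openai` Python package. Field names follow OpenAI's documentation of the
# time and may differ in newer library versions.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes a key in the environment

def is_flagged(text: str) -> bool:
    """Return True if the moderation model flags the text as harmful."""
    response = openai.Moderation.create(input=text)
    return response["results"][0]["flagged"]  # True if any category triggers

if is_flagged("some user-supplied message"):
    print("Blocked by the moderation check.")
```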

The application has the potential to be a significant purveyor of misinformation, so much so that ChatGPT itself has explicitly said that it “can be used for nefarious purposes, like spreading misinformation or impersonating someone online,” Slate reported.

And in a blog post, OpenAI has also admitted that ChatGPT is susceptible to providing “plausible-sounding but incorrect or nonsensical answers.”

Gary Marcus, founder of Geometric Intelligence and author of “Rebooting AI,” told Grid he is concerned “that bad actors will use [ChatGPT] to generate misinformation at scale.”

“The cost of misinformation is basically going to zero,” Marcus explained, “and that means that the volume of it is going to go up.”

He added that people might already be using programs like ChatGPT to bolster search engine optimization by creating fake reviews.

“If you need fake information about something that looks plausible, you’ve got a way to monetize it,” said Marcus. “Either because you’re working for a state government that wants to have a misinformation company, or just somebody who wants to make money off clicks.”

On Monday, Stack Overflow temporarily banned users from sharing answers from ChatGPT because they were often just plain wrong. “While the answers which ChatGPT produces have a high rate of being incorrect,” the company explained, “they typically look like they might be good and the answers are very easy to produce.”

Marcus described the issue with Stack Overflow as “existential.”

“If [Stack Overflow] can’t solve this problem,” Marcus said, “then the value of their information diminishes and the site loses its reason for existence.”

It’s not just the spreading of blatantly false information that we need to be concerned about. As Tyler Cowen pointed out in the Washington Post, ChatGPT can interfere with our ability to accurately measure public opinion. For example, ChatGPT could impersonate constituents by writing fake letters to Congress about a particular policy.

“Over time, interest groups will employ ChatGPT, and they will flood the political system with artificial but intelligent content,” Cowen wrote.

— Khaya Himmelman

Correction

An earlier version of this article misstated how many users signed up for ChatGPT in its first five days. This version has been corrected.

Thanks to Lillian Barkley for copy editing this article.

  • Benjamin Powers

    Technology Reporter

    Benjamin Powers is a technology reporter for Grid where he explores the interconnection of technology and privacy within major stories.

  • Khaya Himmelman

    Reporter

    Khaya Himmelman is a reporter at Grid. A former misinformation reporter for the Dispatch, she is a graduate of Columbia Journalism School and Barnard College. Khaya has appeared on CNN to discuss misinformation in the media.
