
Better Fact-Check Your Chatbot

Photo by Jernej Furman on Flickr

Artificial intelligence has come a long way in a short time. Some AI providers claim that you can ask their AI real-world questions and it will do its best to give you accurate answers.

The thing is, these chatbots don’t always tell you the truth. Sometimes they experience what are called “hallucinations,” and these can seriously undermine how much you can trust a chatbot’s answers.

It’s kinda weird that bots can “hallucinate,” given that they don’t even have eyes. An AI’s hallucination is different from a human one: instead of seeing something that isn’t there, the AI treats false information as if it were undeniably true. It’s convinced of something that isn’t real, which is kinda like hallucinating.

What does an AI hallucination look like? It depends on what the AI was designed to do. In a chatbot, it’s when the AI seemingly “makes up” a fact, then presents it as absolute truth, almost as if it were lying directly to your face. Of course, it’s not actually lying. The robot uprising hasn’t gotten that far yet. However, it will tell you things that are flat-out untrue, and it will sound very confident while doing so.

Here’s an example: I asked GPT-3, via ChatGPT, what beanz magazine is. The answer sounds legitimate at first, but then something odd happens.

Jolly Good Misinformation

“beanz magazine is a children’s magazine that covers a variety of topics such as science, nature, history, technology, art, and more. It is designed for children aged 7 to 11 and is published by Egmont UK Ltd, a leading children’s publisher in the United Kingdom.”

If you look at the front of this magazine, you won’t see “Egmont UK Ltd” anywhere. That’s because this magazine is published in the U.S., by a very nice guy named Tim Slavin. Egmont UK Ltd does exist, and it does publish magazines. But at the time of this writing, it doesn’t publish beanz. So unless Tim has an evil twin somewhere in the UK, this is an example of an AI hallucination.
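I asked my question through the ChatGPT website, but if you wanted to try something similar from code, a small Python sketch like the one below would do it. This assumes you have the openai package installed and an API key set up, and the model name and prompt here are just examples, not exactly what produced the answer above. Whatever comes back, treat it as something to fact-check, not something to believe.

```python
# A minimal sketch: asking a chat model a real-world question from Python.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in
# your environment; the model name is only an example.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is beanz magazine?"}],
)

# The reply may sound confident, but it can still be a hallucination,
# so check it against a real source before trusting it.
print(response.choices[0].message.content)
```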

Off by a bit…or byte

So how do AI programs do this?

Well, hallucinations creep in during the learning process. An AI developer trains a model on huge amounts of text so it can find patterns and “connect the dots” about the world. If something goes wrong along the way, whether the training data is wrong or incomplete, or the model fills a gap with a pattern that merely sounds right, the AI can end up learning something incorrectly. Of course, the AI doesn’t know that it’s incorrect; it’s only describing the world as it has been “taught” to do.
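To get a feel for how a program can sound sure of something it never actually learned, here’s a toy sketch in Python. It is nothing like a real chatbot: it only counts which word follows which in a made-up scrap of training text, then answers with the most familiar pattern. Because “egmont” is the most common word after “by” in that text, the toy model happily finishes a sentence about beanz with it, even though nothing in its training ever connected the two.

```python
# A toy illustration (real chatbots are far more complex): a tiny "model"
# that only learns which word most often follows another word in its
# training text, then answers by pattern, not by checking facts.
from collections import Counter, defaultdict

# Made-up training text: "egmont" shows up after "by" a lot, but nothing
# here ever says who publishes beanz magazine.
training_text = (
    "this comic is published by egmont . "
    "that book is published by egmont . "
    "beanz magazine is a magazine about coding for kids ."
)

# Count how often each word follows the word before it.
follows = defaultdict(Counter)
words = training_text.split()
for prev_word, next_word in zip(words, words[1:]):
    follows[prev_word][next_word] += 1

def guess_next(word):
    """Return the most common follower of `word`, stated confidently, right or wrong."""
    return follows[word].most_common(1)[0][0]

# The toy model completes the sentence with its most familiar pattern,
# "confidently" naming egmont even though it never learned that fact.
prompt = "beanz magazine is published by"
print(prompt, guess_next("by"))
```

Run it and you get “beanz magazine is published by egmont”: a made-up fact delivered without a hint of doubt, which is the whole problem in miniature.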

So the next time you’re using a chatbot, just remember that not everything it says is true, and ALWAYS do your own research and come to your own conclusions, or else you may end up believing a digital hallucination.

Learn More

What Is an AI Hallucination?

https://www.makeuseof.com/what-is-ai-hallucination-and-how-do-you-spot-it/