
How to Keep the New AI on the Up and Up

Image credit: Mike MacKenzie on Flickr

Developers have been working long and hard to take Artificial Intelligence to new levels, enabling it to interact with people in more natural, human-like ways, such as having a conversation.

The most recent AI to make waves is ChatGPT, a powerful tool that can do some amazing things. For example, you can talk to it, ask it to write a story, or even ask for advice. And don’t be surprised if its answer is very believable. (I wonder if it would pass the Turing test, which a machine passes if it can convince a human judge that it is a real person.)

If this all sounds like some kind of super-secret government tech, it’s anything but. In fact, you can visit this website and talk to ChatGPT yourself: https://chat.openai.com/chat.

At the time of this writing, the website suggests asking the chatbot to explain quantum computing in simple terms or to teach you how to make an HTTP request in JavaScript. It’s clear the developers intend ChatGPT to eventually answer people’s questions.
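For the curious, here is a minimal sketch of the kind of answer ChatGPT might give to that second prompt, using the standard `fetch` API (the URL and function name here are placeholders for illustration, not anything from the article or the chatbot itself):

```javascript
// Fetch JSON from a URL and return the parsed result.
// The URL below is a hypothetical example endpoint.
async function getJson(url) {
  const response = await fetch(url);
  if (!response.ok) {
    // fetch only rejects on network failure, so check the status ourselves
    throw new Error(`HTTP error: ${response.status}`);
  }
  return response.json();
}

// Usage (in a browser or modern Node.js, which both provide fetch):
// getJson('https://example.com/api/data').then(data => console.log(data));
```

Whether the chatbot’s version of this would be correct every time is, of course, exactly the question this article is asking.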

And that’s great, right? If a chatbot really could find information based on your question, there wouldn’t be much point in search engines like Google. Just ask your AI butler a question and you’ll get a well-researched answer. But how accurate will it be?

People may rely on ChatGPT to perform research, answer burning questions, or perhaps even handle sensitive topics that they may not feel comfortable searching for online or asking someone about. If ChatGPT doesn’t provide correct information, it could end up harming the people who use it. We’ve already seen the damage people can do with fake information online. What if that information comes from a very confident-sounding AI instead?

But hey, no problem, right? All we need to do is ensure that ChatGPT is extremely accurate before we use it. Yet even achieving that goal poses another dilemma.

If you have an AI that can accurately and reliably research topics, you could arguably ask it to do your work for you. And while having ChatGPT do your English homework might be tempting, the work would not be your own, and that is plagiarism.

Schools and educational institutions have no tolerance for plagiarism, and for good reason. First, it is basically stealing; second, the point of homework and research assignments is to test a student’s understanding of the topic. If AI does the work, the student is pretending to know all about the topic when they really don’t, which can harm their future education.

So, if ChatGPT and similar AI are in our future, how do we either verify that the technology is producing accurate information (otherwise, it’s useless) or ensure people are producing authentic work based on AI research and not just copying? What do you think?