How to Keep the New AI on the Up and Up
Image by Mike MacKenzie on Flickr
Developers have been working long and hard to take artificial intelligence to new levels, where it can interact with people in more natural, human-like ways, such as having a conversation.
The most recent AI to make waves is ChatGPT, a powerful tool that can do some amazing things. For example, you can talk to it, ask it to write a story, or even ask for advice. And don’t be surprised if its answer is very believable. (I wonder if it would pass the Turing test, which challenges chatbots to trick humans into thinking they are real people.)
If this all sounds like some kind of super-secret government tech, it’s anything but. In fact, you can visit this website and talk to ChatGPT yourself: https://chat.openai.com/chat.
And that’s great, right? If a chatbot really could find information based on your question, there wouldn’t be much point to search engines like Google. Just ask your AI butler a question and you’ll get a well-researched answer. But how accurate will it be?
People may rely on ChatGPT to perform research, answer burning questions, or perhaps even handle sensitive topics that they may not feel confident searching online for or asking someone about. If ChatGPT doesn’t provide correct information, it could end up doing harm to people who use it. We’ve already seen the damage people can do with fake information online. What if that information comes from a very confident-sounding AI instead?
But hey, no problem, right? All we need to do is ensure that ChatGPT is extremely accurate before we use it. Yet even if we achieve that goal, another dilemma arises.
If you have an AI that can accurately and reliably research topics, you could arguably ask it to do your work for you. And while having ChatGPT do your English homework might be tempting, the work would not be your own, and that is plagiarism.
Schools and educational institutions have no tolerance for plagiarism, and for good reason. First, it is basically stealing; second, the point of homework and research assignments is to test a student’s understanding of the topic. If AI does the work, the student is basically pretending to know all about a topic they really don’t, which can harm their future education.
So, if ChatGPT and similar AI are in our future, how do we either verify that the technology is producing accurate information (otherwise, it’s useless) or ensure people are producing authentic work based on AI research and not just copying? What do you think?
Also In The April 2023 Issue