
AI/ML: On the Brink of Thinking

Image credit: Deepak Pal on Flickr

Artificial intelligence (AI) and machine learning (ML) are everywhere. Some of the advances in the last couple of years are astounding: interactive story generators that, based on your input, create entire elaborate adventures in any genre; art tools that create paintings out of descriptions; dejargoners that can explain complicated scientific text in simple language.

Are you for real?

A lot of these advances have to do with large language models, ML systems that were exposed to literally billions of lines of text (more than all the people reading this article could read in a lifetime) in order to “learn” patterns in language. We’ll talk about “learning” and “patterns” shortly, but for now, I want to emphasize that the results of some of these programs have been so impressive that even some programmers and researchers have started to wonder, “wait, is this program thinking?” You may have even heard of a Google employee who publicly claimed that a program called LaMDA was self-aware and should be legally considered a person.

Uh, no. LaMDA is NOT self-aware. We haven’t invented computer programs that think, or are conscious, or have personalities. And there’s a reason why I’m so confident in saying this, and why you should keep a healthy skepticism even as machine learning results get more impressive.

Don’t Even ______ About It!

So we come back to “learning” and “patterns” in these large language models. When we say “learning” in ML, it’s not learning like you or I do; it’s more like statistical correlation. What does that mean? Imagine playing a game: I give you a dollar every time you correctly fill in the blank in a sentence drawn from a huge collection I’ve gathered from Wikipedia, social media posts, and fanfiction. Here’s your first sentence: “the sky is _____.” What’s the safest guess? Blue, right? Now, is the sky always blue? No, not really. It’s many colors and has been described many ways throughout human history. But blue is the most likely possibility and your best chance at the dollar. That’s what I mean by statistical correlation: the most likely way a blank should be filled in, given all the examples of language in play.
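If you like to tinker, here’s a minimal Python sketch of how bare-bones that game can be (the tiny corpus and the guess_blank helper are invented for illustration): it simply counts which word most often follows a given prefix, and guesses that.

from collections import Counter

# A made-up, tiny "collection of sentences" standing in for
# billions of lines of real text.
corpus = [
    "the sky is blue",
    "the sky is blue",
    "the sky is grey",
    "the sky is falling",
    "the ocean is blue",
]

def guess_blank(prefix):
    # Tally every word that appears right after the prefix.
    completions = Counter()
    for sentence in corpus:
        if sentence.startswith(prefix):
            completions[sentence[len(prefix):].split()[0]] += 1
    # The "best guess" is just the most frequent continuation.
    return completions.most_common(1)[0][0]

print(guess_blank("the sky is "))  # prints "blue"

Notice there’s no notion of skies or colors anywhere in there. It’s just counting.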

How do you say that in cayzish?

To do well at this game you need to be good at knowing hackneyed phrases, memes, clichés, and so on. If I say “a war-_____ country”, your best guess is “torn”. If I say “to _____ go”, “boldly” is a good guess given how long Star Trek has been around. In fact, you could even learn to play this game in a language you didn’t know, maybe even one I made up, as long as there are enough examples for you to guess what’s likely just by seeing them enough times. Let’s call this language “cayzish”. You could even learn some fun correlations in cayzish, words and phrases that go together, so that I could give you not just a fill-in-the-blank but a story prompt, “ghasdhg dsfjh tttt”, and you’d think about some of the things you’ve seen and come up with the full sentence “ghasdhg dsfjh tttt tywrrr agghahn, prttrt tttt gleehk!”
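Here’s the same idea pushed one step further, as a toy Python sketch (the cayzish sentences are invented, like everything in cayzish): it records which word tends to follow which (a so-called bigram model) and extends a prompt by picking words it has seen follow the previous one.

import random
from collections import defaultdict

cayzish_corpus = [
    "ghasdhg dsfjh tttt tywrrr agghahn",
    "prttrt tttt gleehk",
    "ghasdhg dsfjh tttt gleehk",
]

# Record which word has been seen following which (a bigram model).
follows = defaultdict(list)
for sentence in cayzish_corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

def continue_prompt(prompt, length=4):
    # Extend the prompt by repeatedly picking a word that has
    # been seen after the current last word.
    words = prompt.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # nothing was ever seen after this word
        words.append(random.choice(options))
    return " ".join(words)

print(continue_prompt("ghasdhg dsfjh tttt"))  # e.g. "ghasdhg dsfjh tttt tywrrr agghahn"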

The problem, though, is that you still wouldn’t know what any of this meant. You may have come up with something grammatically correct in cayzish, or maybe you actually said something offensive about a “dsfjh”. Who knows? All you have is the “training” you got from our little game, where you were rewarded for correctly filling in the blanks.

No Brainers

Large language models are more complicated than this, but what I’ve described is the bare basics of how they tend to “learn” language. They’re capable of doing really cool things, but only because they’ve played a version of the training game literally billions upon billions of times, until they’ve gotten pretty good at reproducing common patterns in language. But this isn’t understanding language, and it isn’t thinking; it’s just a game of “what’s most likely”. Recent research into DALL-E 2 showed that it has no real grasp of prompts that involve even simple relationships, like “the lizard touched the dog”. The researchers observed that whenever the model seems to understand language, it’s essentially a coincidence: if “child holding a bucket” works, it’s only because, in the training photos that contained both a child and a bucket, the child was usually holding the bucket.
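You can even play the fill-in-the-blank game against a real (if modest) language model yourself. Here’s a sketch using the Hugging Face transformers library, assuming you have it installed (pip install transformers, plus a backend like PyTorch; the model downloads on first run):

from transformers import pipeline

# BERT was literally trained on the fill-in-the-blank game;
# [MASK] marks the blank.
fill = pipeline("fill-mask", model="bert-base-uncased")

for guess in fill("The sky is [MASK]."):
    # Each guess comes with a score: "what's most likely",
    # not "what's true".
    print(guess["token_str"], round(guess["score"], 3))

Those scores are exactly the “statistical correlation” from our game. The model never checks whether the sky is, in fact, blue.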

I’m not saying we humans won’t ever make a “machine that thinks”. I’m also not saying that these large language models aren’t impressive. They absolutely are. But they are just tools, and to use a tool well, we need to understand what it’s really doing.

Learn More

What is artificial intelligence and how is it used?

https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp

Machine Learning

https://en.wikipedia.org/wiki/Machine_learning

Types of Machine Learning

https://www.coursera.org/articles/types-of-machine-learning

Machine Learning for Kids

https://machinelearningforkids.co.uk/

Can Computers Think?

https://theapeiron.co.uk/can-computers-think-a62b0a4f41d3

Can Computers Think?

https://theorangeduck.com/page/can-computers-think

Strong AI

https://www.ibm.com/cloud/learn/strong-ai

Strong AI vs. Weak AI

https://builtin.com/artificial-intelligence/strong-ai-weak-ai

AI vs. Human Intelligence

https://www.upgrad.com/blog/ai-vs-human-intelligence/

Artificial Intelligence vs. Human Intelligence

https://www.aretove.com/artificial-intelligence-versus-human-intelligence

Can Computers Think Like Humans?

https://www.npr.org/sections/alltechconsidered/2018/02/05/583321707/can-computers-learn-like-humans

How Computers Can Think for Themselves

https://www.youtube.com/watch?v=wIntcNdChDU