
Ethics, AI, and Machine Learning

Image via www.vpnsrus.com (Mike MacKenzie on flickr)

Our scene is set somewhere in the continental United States, late in the afternoon, on this very day. It concerns two high school friends, Helen Dreyfus and Ramona Kurzweil, both burgeoning programmers. Our play begins when Helen arrives at Ramona’s house after school and is about to ask a question.

Helen: Oh, shoot, I left my laptop charger at home—do you have a spare USB-C I could use?

[Ramona types something on the command line of her computer and turns to Helen.]

Ramona: No.

Helen: O…kay? Like, you have one sitting right there. I can see it.

Ramona: I know, but you can’t borrow it. Algorithm says so.

[Ramona points at her laptop screen authoritatively.]

Helen: Wait, what algorithm?

Ramona: So, you know I’ve been studying machine learning, right? And, over the years, I’ve lost a lot of stuff that I have loaned to people – chargers, books, limited edition physical game releases, Mr. Minsky, my favorite stuffed bear.

Helen: That was in first grade!

Ramona: So, I realized I could train a neural network using photos of people I know, tagged by things of mine they’ve lost, and it would tell me the likelihood that I could trust them.

[An awkward silence ensues before Helen speaks again.]

Helen: So what you’re saying is that we’ve been friends since we were four, and you have a program that says you can’t let me borrow a charging cable.

Ramona: Sorry. The math doesn’t lie.

Helen: Math doesn’t lie, but is the math answering the right question?

Ramona: I…don’t get it.

Helen: Okay, well let’s start simple: how did you train this neural network?

Ramona: Oh, that’s easy! I started by writing down everything I could remember other people losing. Then I took photos I had of those people and trained the network to classify the photos, using the new version of the Carrots library.

Helen: So, um, I don’t know how to ask this, but did you include a photo of me in the training set?

Ramona: Mr. Minsky. Yes. You know this.

Helen: And when you checked just now, did you use the same photo?

Ramona: Yeah, it’s the one from the 2024 DotA III regional championships.

Helen: So…

[Ramona blinks a few times.]

Ramona: Oh. Now I get it. I’m asking the model to tell me something I literally already trained it on. Okay. But that’s fine! I can just use a different photo of you and—

Helen: Okay, wait, but did you train it on photos of people who haven’t lost your things or just people who have?

Ramona: I mean there’s all sorts of people I’ve met who haven’t lost anything. It didn’t really make sense to include them all because it’d weight the network towards always saying “yes.”

Helen: Yeah, that’s true, but right now I think all you’ve done is train a network to always say no. Go on, try a photo of yourself, or Mr. Minsky, or – I don’t know – Superman.

Ramona: Oh c’mon, that’s just silly. I’m not asking if Super – okay, look, I’ll do it.

[Ramona snorts heavily through her nose.]

Helen: And?

Ramona: It says I can’t trust Mr. Minsky, myself, my mom, or Superman.

Helen: See! Plus, even if you managed to properly balance positive and negative examples in the training, what would it even mean?

Ramona: Okay, you lost me again.

Helen: I mean, if you managed to train this model well, what would it be doing? If you run it on people you already know, then it’s only telling you what you already know. If you run it on new people, what’s it doing? Telling you whether you can trust someone based on how they look?

[Ramona winces.]

Ramona: Yeah, I guess when you put it that way it sounds, uhh, kinda not good. I guess I just got carried away because machine learning seems really cool and I wanted to do something with it.

Helen: And, I get that. But the problem with this stuff is that it’s not magic. It’s still hard to make sure you’re using it to solve problems in a way that doesn’t give you a false sense of being right and objective. So, now, can I borrow your USB-C cable? My laptop is definitely dead.

Ramona: Just give it back before you leave, okay?

So we come to the end of our scene. Our characters were named after Hubert Dreyfus, a philosopher famous for his critiques of AI; Ray Kurzweil, a self-described futurist who is perennially overoptimistic about what AI will do; and Marvin Minsky, who helped found MIT’s influential AI research lab.

The story was fictional, but the problems are very real: machine learning lets you turn data into an algorithm, but what problem does that algorithm actually solve? You always have to test whether you’re solving what you think you’re solving, and whether the problem even makes sense in the first place or, as in our story, is as misguided as thinking you can predict trustworthiness from someone’s face.
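Helen’s two objections can be sketched in a few lines of Python. The toy “model” below is invented for this post (it is not Ramona’s code and has nothing to do with any real library): it simply memorizes its training labels and falls back on the majority label for anyone it hasn’t seen. With a training set that contains only people who lost things, every answer it can ever give is “no” — and asking it about someone in the training set just reads back what you typed in.

```python
# Toy illustration (invented example, not a real ML model):
# a "classifier" that memorizes its training data and uses the
# majority training label for anyone new.

def train(examples):
    # examples: list of (person, label) pairs
    memory = dict(examples)                     # memorized training answers
    labels = [label for _, label in examples]
    fallback = max(set(labels), key=labels.count)  # majority label
    def predict(person):
        return memory.get(person, fallback)
    return predict

# Ramona's training set: only people who lost things, so every label is "no".
training = [("Helen", "no"), ("Cousin", "no"), ("Classmate", "no")]
model = train(training)

print(model("Helen"))     # "no" -- a memorized training answer, not a prediction
print(model("Superman"))  # "no" -- never seen, but "no" is the only answer it knows
```

A real neural network is far more complicated than a lookup table, but the failure mode is the same: test it only on its training data and it looks perfect while telling you nothing new, and train it on only one class and it degenerates into a machine for repeating that class.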

Learn More

Great Promise But Potential For Peril

AI Ethics

Artificial Intelligence and Ethics

Ethics of Artificial Intelligence

Machine Ethics

Ethics in machine learning

Unethical Use of AI

Hubert Dreyfus’s views on artificial intelligence

Marvin Minsky

Ray Kurzweil

Carrots Library

AI Education for Young People

Ethics in AI

Teaching AI Ethics to Kids

AI Lessons for and From Younger generations

Conversational AI for Kids