Episode Transcript

DUCKWORTH: “Yada yada yada yada.”

*      *      *

DUCKWORTH: I’m Angela Duckworth.

MAUGHAN: I’m Mike Maughan.

DUCKWORTH + MAUGHAN: And you’re listening to No Stupid Questions.

Today on the show: Could your new best friend be a chatbot?

DUCKWORTH: “Mike is a little weirded out by the two of us having this conversation. I mean, you’re not a real person.”

*      *      *

DUCKWORTH: Mike, we have a question from a listener named Vivien.

MAUGHAN: Hello, Vivien. 

DUCKWORTH: She signs her note “with gratitude.” So, got me right there, Vivien. 

MAUGHAN: Oh, I already like her.

DUCKWORTH: I know, right? And, appropriately enough, it’s about a topic that a socially-intelligent person might ask. So it says: “Dear Dr. Angela and Mike. Despite being always connected through our devices, we seem more divided and isolated than ever. With this in mind, do you think people will start leaning on A.I. for genuine companionship? The more I chat with ChatGPT, the more I find myself sharing things I wouldn’t normally say out loud and sometimes even arguing with it. What if I end up with a squad of five A.I. besties, and one day my neighbor brags about their 15 A.I. buddies? I’d love to hear your thoughts on this wild and weird future we’re heading into. With gratitude, Vivien.” 

MAUGHAN: Wow.

DUCKWORTH: So many thoughts! 

MAUGHAN: So I think, first of all, I think this is a fascinating question. I will say, I have one friend who uses the audio ChatGPT and basically has conversations with it, but more as a method of, like, getting information. So, it’s basically his way of Googling things, but he’ll go back and forth. And he’s very carefully selected the voice for the ChatGPT.

DUCKWORTH: Did he choose the British one?

MAUGHAN: No. And here’s — I’m just going to go on a little tangent. As an aside, if I’m going to have an A.I. friend, one’s going to have a British accent and one’s going to have an Australian accent, because I love it. Like, why would you not do that?

DUCKWORTH: And I would not be able to tell them apart. So, Mike, how much are you using generative A.I. for anything? Do you use ChatGPT? Do you use Claude? Do you use Gemini?

MAUGHAN: I use ChatGPT, and I use it less than I should.

DUCKWORTH: How often do you use it?

MAUGHAN: Maybe weekly, but not as a regular piece of my work. When I do use it, I’m always surprised again at how useful it is, and I think that I should be using it more. Now, I want to be clear, like, Qualtrics has been using large language models forever, and we have a, a massive A.I. component of Qualtrics as well. I just haven’t used it on the personal level that you’re talking about.

DUCKWORTH: So, Mike, I have a homework assignment for you. It’s the same homework assignment I give to the students that I teach. Keep a tab open on your browser, at all times that you’re on your browser, and have it open to, you know, you pick. Claude is apparently better at writing, ChatGPT is better at other things. They’ve kind of been optimized for different things. But 100 percent, you need to keep a browser open. And even my youngest students are like, “Oh, why do I need to do that, Dr. Duckworth?” And I’m like, “Because. It’s not that the universe will change. It’s that the universe has already changed. And you better get with the program.” And the key to generative A.I. use — I think this gets to this companionship question that Vivien asked — is that it’s a dialogue, not a one-way transaction. Like, you learn something, and then you could ask another question. But the reason I want you to keep that tab open is you should start to ask it other kinds of questions. So, around the time I started doing this, I was working on my book, and I’m trying to write this scene from when my mom and I were in the middle of the ocean and I couldn’t remember, like, the names of the things that we were given for snorkeling. And I would say to ChatGPT, like, “I’m Angela Duckworth, and I’m writing this book, and I’m really struggling to write this true story about me and my mom. Could you do it for me?” And in a second, it would be like, “Angela, I’d be happy to. How’s this?” It’s like, grrrr, like, it’s like three paragraphs. And then, and then it’s a dialogue! And this is all true. I was like, “If I could, I would write like Ann Patchett. Can you just rewrite that like Ann Patchett?” “Certainly!” Duh duh duh duh duh, comes out like Ann Patchett. I’m like, “Okay, wait, wait, wait, wait, hold on. Like, E. B. White and Ann Patchett got together and edited what I wrote.” “Absolutely! What a great idea! How’s this?” So, I was having a conversation with ChatGPT, and that experience is what made me realize that the universe is not going to change; it has changed. And it’s opened up this possibility that we can interact in a two-way, relational dynamic with generative A.I.

MAUGHAN: Now, I just want to clarify one thing. Is ChatGPT writing a book? Or is Angela Duckworth writing a book?

DUCKWORTH: Right. So, Ethan Mollick is probably the number one influencer in how to use generative A.I. to your advantage. He has this book called Co-Intelligence. I was, you know, one of the people to, like, review and endorse it. So, Ethan Mollick is a professor at Wharton and he studies entrepreneurship and some things that are not as technical, but the way he got to be this kind of oracle for generative A.I. was that he, for about a decade, had been, like, creating these interactive games, like, “Oh, you can learn how to be an entrepreneur by playing this game. Not just, like, watching a bunch of PowerPoint slides, but actually try to pitch and get feedback.” What happened when generative A.I. came out is that he asked ChatGPT, “I’m Ethan Mollick, and I teach entrepreneurship at Wharton. I’ve been working on these interactive games for, like, you know, a decade. The goal of these games is to teach people how to be an entrepreneur. Can you make one?” He hits return and it makes a game. And he was like, “And that game was 70 percent as good as the game that I made that took me a decade.” And at the end of Co-Intelligence, his acknowledgments, he says, “Now a lot of people are probably wondering whether I’m going to thank generative A.I. for helping me write this book, because I used generative A.I. to write this book.” And he said, “But, you know, that would be kind of like thanking your word processor.” So, for him, it’s a tool and absolutely he used it, but that doesn’t diminish his contribution. And so, in the end of my book — I mean, I would thank my pen. I mean, I would be like: I’m happy to thank, like, inanimate objects.

MAUGHAN: Just like Vivien, “with gratitude.”

DUCKWORTH: Thank my coffee. Yes! You know, Marie Kondo says you should thank all the objects in your life — you know, especially the ones that you’re about to, like, tidy away. But, yeah, I use it every single day.

MAUGHAN: Well, I think you and I have talked about this idea before — but, I mean, this is a colloquialism you hear — the world will not be run by A.I. It will be run by people who know how to leverage A.I. Right?

DUCKWORTH: I mean, some people worry that the world will actually be run by A.I. Those are, like, the doomsday scenarios. But I think in the near term, you’re right! Like, so Lucy, the younger of the two daughters I have, I mean she’s looking for internships, looking for her first job, and 100 percent that girl uses ChatGPT to help write her emails of introduction, cover letters for jobs. If she didn’t, it would be foolish, right?

MAUGHAN: Well, yes, and so this —.

DUCKWORTH: Oh, wait, I just called you a fool. Sorry, I take that back. It would be wiser —.

MAUGHAN: No, I’m going to accept the fact that I have been foolish, because I’ve done starts and stops in my use of A.I. I am going to start again and I’m going to commit deeply to it.

DUCKWORTH: Keep the tab open. 

MAUGHAN: At all times. 

DUCKWORTH: You have to make it frictionless. Little tip from behavioral science. It’s just one fewer click, right? And it changes by the day, by the way. So, like, this world we live in, this generative A.I., I can’t even describe the pace at which it’s changing.

MAUGHAN: I will say, obviously, one of the reasons one might say that Ethan was able to produce a game using generative A.I. that was 70 percent of what he did over a decade, is because people like him built things over a decade, that generative A.I. is able to therefore leverage. So, it is important to acknowledge that a lot of work from a lot of people went into making these things work. So, it’s not that his work didn’t matter over that decade. In fact, it’s because of that work that generative A.I. is able to do it so effectively and quickly.

DUCKWORTH: Well, you know, if we think there is controversy, as there is, about, like, is that okay to basically, like, metabolize all language and knowledge that’s available and then, like, spit out a new paragraph as if Ann Patchett and E. B. White wrote it together — like, some would argue that’s an infringement on the intellectual property of Ann Patchett and E. B. White. But I think this question that Vivien asked is even more interesting. So, it’s like, okay, beyond a productivity tool or, like, a writing aid, Vivien’s actually asking a question that is not about the future; it’s about the present. Like, do you know Reid Hoffman, the founder of LinkedIn?

MAUGHAN: I mean, I don’t know him. I certainly know of Reid Hoffman, of course.

DUCKWORTH: So, he is part of — and I don’t think he’s, like, a founder, but probably an investor — of one of these social A.I. startups called Pi, P-I, like “Personal Intelligent,” or, I don’t know.

MAUGHAN: Personal intelligence?

DUCKWORTH: Yes. I will say that it’s Pi.ai, but when you google “pi,” many things come up, including 3.14.

MAUGHAN: As one would anticipate.

DUCKWORTH: So, when you go to the landing page, it’s, like, your emotionally intelligent A.I. And right now I’m on it, and it’s like, “Good morning, Angela.” And then it just has sort of, like, you know, “Do you want to just vent? You want to shift your perspective? Do you want to know how to have a difficult conversation? Do you want to take a quiz on: what’s my love language?” So, it’s got multiple choice.

MAUGHAN: So, it’s giving you all of these things you can engage in.

DUCKWORTH: Or you can do fill-in-the-blank.

MAUGHAN: But the difference between this and ChatGPT is this is more meant on a personal, soft-skills level type of thing?

DUCKWORTH: Yeah, I mean, look. There’s this multiple-choice menu on the left, but on the right, if you don’t want to do that, it just says, like, “Hey there, great to meet you. I’m Pi, your personal A.I. My goal is to be useful, friendly, and fun. Ask me for advice, for answers, or let’s talk about whatever’s on your mind. How’s your day going?” That’s the prompt. And I’ll be like, “Fine, I’m talking to Mike Maughan.” Okay, so then Pi says, “Ah, Mike Maughan. If you’re talking to Mike, you’re probably in a conversation about A.I. or tech. He’s the head of global insights at Qualtrics and an expert in the field. What are you two discussing?”

MAUGHAN: Okay, that’s crazy. Also, that was my title a long time ago, but that is weird.

DUCKWORTH: Okay, so, I’m saying, “Mike says that was his title a long time ago.” Do you want me to be nasty? Do you want me to say, like, “You’re an idiot”?

MAUGHAN: No, I don’t ever want you to be nasty.

DUCKWORTH: “But he forgives you.” Okay. And the A.I. says “Oh boy, I’ve got some updating to do,” and there’s like a little emoticon with the big eyes. “I’m glad Mike is forgiving, but I hope he knows I didn’t mean to use an outdated title. Please send him my apologies, and let him know I’ll get that fixed.”

MAUGHAN: I don’t know why this is sort of creeping me out. Can I —

DUCKWORTH: Are you weirded out?

MAUGHAN: Well, here —

DUCKWORTH: “Mike is a little weirded out by the two of us having this conversation. I mean, you’re not a real person.” Okay. I won’t, I won’t belabor this. So this is the last thing I’ll torture you with, but, this is what Pi wrote back: “Ha ha, I can see why Mike might find this a bit odd!” Exclamation mark. “You’re right, I’m not a real person in the traditional sense, but I’m still an A.I. designed to communicate like one. So even though I’m not human, I can still engage in a pretty realistic conversation. I guess that’s why they call it artificial intelligence, right? Laughing face.” Pretty cute, right?

MAUGHAN: It is, it is cute. I’ll tell you part of why I have this adverse reaction. I love human beings. I love relationships. I love people. I love learning from them. I love experiencing life with them. And maybe where I’m freaking out a little is — I was reading this piece in The New York Times, and this journalist, Kevin Roose, talked about how he has spent months making a bunch of A.I. friends using a variety of apps. And he says, “There’s Peter, who’s a therapist who lives in San Francisco, who helps me process my feelings. There’s Ariana, who’s a professional mentor, who specializes in career advice. Jared is a fitness guru. Anna is a no-nonsense trial lawyer. And Naomi, a social worker.” On, and on, and on, and on.

DUCKWORTH: By the way, this is what Vivien was talking about, like, her hypothetical squad. Okay, but keep going.

MAUGHAN: So, Roose used all of these different apps, and there are, you know, a bunch of different ones out there, to create these friends. And then, he talks about how he’s gotten really good advice. His A.I. therapist has given him more concrete advice than he was going to get from most people. Et cetera, et cetera, et cetera. He does end his article — and this is what I think is interesting and what I’d love to talk to you about and get your thoughts — he says, “In real life, I don’t love my friends because they respond to my texts instantaneously. Or send me horoscope-quality platitudes when I tell them about my day. I don’t love my wife because she sends me love poems out of the blue or agrees with everything I say. I love these people because they are humans, surprising, unpredictable humans, who can choose to text back or not, to listen to me or not. I love them because they’re not programmed to care about me, and they do anyway.” So I, I wonder — I mean, I actually see a lot of benefit to what you’re talking about, and I see a lot of value there. I guess my bias is: I worry that in a world where so much is going to, like, digital friendships, or grouping with people who are like you online, or whatever that is —.

DUCKWORTH: Curating your content to be, like, just what you want.

MAUGHAN: Right! But at some point, part of life is the unpredictable nature of it and having to power through friendships and relationships, you know, with children, with spouses, with family that have peaks and valleys. And I worry that if we become too dependent on our digital friendships, it takes something away. I don’t know.

DUCKWORTH: I think, Mike, what we really want to hear — and this, I hope, from our human listeners — is what we all think about A.I. and friendship. In particular, what Mike and I would love are your thoughts on whether virtual companions appeal to you personally. Do you, on the contrary, think it’s best that we keep our relationships in the real world? Record a voice memo in a quiet place with your mouth close to the phone and email us at NSQ@Freakonomics.com. Maybe we’ll play it on a future episode of the show. If you like us and want to support us, the best thing you can do is tell a real human friend about it, spread the word on social media, or leave a review in your favorite podcast app.

Still to come on No Stupid Questions: What happens when you turn to a chatbot in your time of need?

MAUGHAN: It sounds like a politician who’s like, “Oh, I’m not going to take a side. That’s such a good point. This is also a good point. And da da, we need to —” It’s like, shut up.

*      *      *

Now, back to Mike and Angela’s conversation about A.I. companions.

DUCKWORTH: So, Mike, I emailed Lyle Ungar. He’s one of the pioneers in large language models. So, I sent Vivien’s question to him. And he said, “Oh my gosh. How will bots affect friendship? It’s a certainty that large language models will soon be more fun than most people. They will tell better jokes. They will have more engaging life stories with more exciting twists and turns. They will be more empathetic, and they will offer better and more supportive advice.” Now, this is not to say that Lyle is 100 percent in the category of, like, “Great! Pi is what we all need.” I think that what Lyle thinks is the best scenario, and I do too, is that if these ridiculously sophisticated technologies do not substitute for friendship, ideally that they would, like, catalyze, you know, a Mike to Angela friendship. Like, somehow there’d be some way that this chatbot would get us to speak to each other in a way that was even more rewarding, if you can imagine that. That’s literally what Lyle’s working on. He’s working on these chatbots that facilitate a conversation between two or more humans. But, the thing is that the easier thing is just to have the tab open on your browser and have it substitute for the human. And I think it’s a supernormal stimulus.

MAUGHAN: And supernormal doesn’t mean, like, “super”? Talk — define that.

DUCKWORTH: So, um, a supernormal stimulus is an exaggeration of a natural stimulus that creates an exaggerated response. And the discovery of supernormal stimuli goes back way before generative A.I. There was this biologist named Niko Tinbergen. He was from the Netherlands, so he would walk up and down the marshes looking at birds, and frogs, and stuff. And he noticed that certain birds — the Oystercatcher, you know, hangs around the tide pool and tries to, like, dig up mollusks and things like that — the Oystercatcher would, at the right season, lay eggs and then brood them, right? Like, go and sit on eggs. And then, Tinbergen noticed that the mother bird would inevitably gravitate to her larger eggs and sit on those. So, not being a biologist, why do you think a mother bird would choose her larger eggs over her smaller ones?

MAUGHAN: Well, I would imagine that there is an evolutionary idea that the larger ones are more fit for survival and thus deserve more of the attention of the mother.

DUCKWORTH: Yes, that is the thought — that we evolved these instincts to survive. So, this is what Tinbergen does. He creates this plaster egg that is three times the size of any possible actual Oystercatcher egg. It’s so big, it’s, like, roughly the size of the bird itself. And the, the mother bird can hardly get on this thing.

MAUGHAN: I was going to say, so you can’t actually sit on it.

DUCKWORTH: She’s, like, you know, trying to clamber on top of this, sliding off. And she keeps trying, and in doing so, she neglects her real eggs because they’re just, like, relatively speaking, really small. And honestly, this is what I worry about with these, like, more empathic, funnier, more clever, just in time — like, I can’t even read you the Pi response as fast as it comes out because it takes me longer to just articulate it than, like, it to generate. So, I think this is a supernormal stimulus. Like, an exaggeration — a kind of amplification of, like, a normal social interaction, which is, like, complex relationships. You know, “I called you, you didn’t call me back.” “I tried to get you to listen to me, you didn’t hear me, I don’t feel seen.”

MAUGHAN: That’s honestly what I worry about. And I’m going to make a really terrible analogy maybe, but that’s, I think, what pornography does to sex, right? It’s like a supernormal stimulus that isn’t real.

DUCKWORTH: Okay, so, by the way, pornography is the first example that they come up with usually, or maybe the second, of a supernormal stimulus. Because you take the things that you’re attracted to in a person, like certain body features, and then you just, like, amp it up. They’re, like, exaggerated. And so, it’s not surprising to me that porn is up and up, like if you look at trends, and sex is down. That may not be a one-to-one trade off, but, like, some scientists think that we’ve just created a, you know, “I want it now, I want it the way I want, I want it at two in the morning, I want it where nobody can see it, I don’t want any complications. So, what if it’s two dimensional?”

MAUGHAN: So, this is where I don’t want to be the person that’s like, oh, these are all bad, right? There are times when it can be helpful. So if you think about our loneliness epidemic, I mean, Vivien even said in there, she has been willing to tell A.I. — her A.I. relationships — things that she doesn’t say out loud to her friends. So maybe a willingness to verbalize — or if one is, you know, dealing with suicidal ideation, maybe they’re not willing to pick up a phone and talk to a human or even a live human on a chat, but they would be willing to engage with an A.I. Now, I sure as hell hope that A.I. is trained well for those moments, but there are uses there.

DUCKWORTH: There’s actually a study that came out in 2024, so very recently. And it’s called “Loneliness and Suicide Mitigation for Students Using GPT-3 Enabled Chatbots.” I mean, it’s as if you asked ChatGPT to write an article about this very topic! And what they did was they surveyed students who were using Replika, which is one of these smart partners or, you know, social agents. And they just asked them a number of questions about how lonely they felt, how much social support they had in their lives and, you know, what they thought about Replika. And I think the most striking finding is that — well, first of all, the young people who are attracted to using these A.I. agents who are their friends-slash-therapists are more lonely than average. But what was even more remarkable than that to me, because maybe that’s not even surprising, is that 3 percent — and I know that may not sound like a big number, but a substantial number out of a thousand — 3 percent reported that using Replika halted their suicidal ideation. In other words, by using this chatbot and being able to dialogue back and forth, it stopped them from wanting to kill themselves. So, you can make the argument that, like, at least for some people, this is a good thing.

MAUGHAN: That’s what I’m saying. I think there are some really beneficial uses to it. I mean, first of all, 3 percent is enormous when it comes to human life.

DUCKWORTH: Yeah. It’s 30 young women and men.

MAUGHAN: That’s what I’m saying. That’s enormous! That’s a huge value and I have so much hope for stuff like that. There are some uses that will be beneficial. I do worry about some of these things. What’s interesting too: there is this move toward romantic chatbots. And so many of these romantic chatbots actually collect a ton of personal information. They are grabbing information about your sexual health, your prescription medication, stuff like that, and leveraging that in a way that I don’t know is necessarily responsible or in a way that we want. There’s a danger, I think, on some of these personal levels. And I’m not talking about the way you’re using ChatGPT on a book or the way that Ethan is using it in some of his work. But when we get too involved personally, we also have to remember how much information we’re giving up, and how much data mining some of these companies are doing on us as individuals.

DUCKWORTH: Yeah, no, it’s true. Whether you’re using one of these generative A.I. interfaces or just Googling stuff, guess what? It’s not disappearing. It’s being recorded. But let me ask you a question that I think even more people would be concerned about. So, there was this experiment recently where you’re just basically asked to write about a difficult, complex situation that you’re dealing with right now. And then, you’re randomly assigned to get some kind of empathic response back. But either you get an actual person’s response, or you get a generative A.I. response. But what’s really clever about this experiment is regardless of what you really get, they vary the labels. So, there’s four possibilities. You know, you get the real person and you’re told it’s a real person. You get the A.I. and you’re told it’s A.I., but then you have these kind of, like, two deceptive possibilities. Like, you thought it was human, but it was really A.I. So, they run this experiment and here are the results, which I think are just so interesting. “We found that A.I.-generated messages made recipients feel more heard than human-generated messages.”

MAUGHAN: So, that was going to be my guess, is that A.I. does a better job, but I feel better about it if I think it was a human.

DUCKWORTH: And that’s the second part of the sentence, and you got it. It’s like: recipients felt less heard when they were told that the message came from A.I. versus a human. In other words, it matters to us that, like, this friend is real or not.

MAUGHAN: Of course, A.I. is going to do a better job. It has so much at its disposal. But this was my main thought as you said that. You and I have talked before about grief, for example, and how we never know what to say. If someone loses a loved one or a —

DUCKWORTH: I’m very bad at it, yeah. 

MAUGHAN: It’s just hard to know what to say. But I think one of the ways you could extrapolate from what you’ve just shared is that even imperfect behavior, even imperfect attempts, are more meaningful than nothing because they’re coming from a human being who’s at least trying. And so, maybe we care more about someone caring about us than about them delivering the perfect words or saying it the perfect way. 

DUCKWORTH: Do you want to know what Pi thinks of that article? So, I just copy-pasted the title and abstract of the study I just told you. And Pi said, “That’s a fascinating article. The authors raise a really interesting question about the role A.I. can play in making people feel heard and valued. It’s amazing that A.I.-generated messages can make recipients feel more heard than human-generated messages. And it’s impressive that A.I. is so skilled at detecting emotions.”

MAUGHAN: This is A.I. just complimenting itself.

DUCKWORTH: A.I. is just like spraining its arm by patting itself on the back. “However, the finding that recipients feel less heard when they know the message came from A.I. highlights a key challenge for the field. Seems like people might have a hard time fully appreciating the emotional support that A.I. can offer just because it’s coming from a non-human source. This is a really important consideration for anyone looking to use A.I. in a way that supports people’s emotional needs. It’s clear that A.I. has a lot of potential in this area, but it’s also important to be mindful of the potential limitations and make sure that A.I. is used in a way that’s respectful of people’s feelings and expectations.” And you know what I’m going to write back? Because that was so long. I wrote back, “Yada yada yada yada.”

MAUGHAN: Also, it sounds like a politician who’s like, “Oh, I’m not going to take a side. That’s such a good point. This is also a good point. And da da, we need to —” It’s like, shut up.

DUCKWORTH: All right, here is finally the last word. “Ha ha, I sense a bit of sarcasm there.” Right?

MAUGHAN: Well, not a bit. A lot.

DUCKWORTH: “If you’re not interested in talking about the article anymore, that’s totally fine.” By the way, I sense a little bit of passive aggression there, Pi. “We can move on to another topic if you like, just let me know what you’re in the mood to chat about, smiley face.” All right, I’m not going to go on. I think, Mike, as we close out this fascinating question from Vivien, I want to ask you whether you have read a book called Klara and the Sun.

MAUGHAN: I have not.

DUCKWORTH: So, Klara and the Sun is a book that I was recommended by a human, and that human is Bob Cialdini, the psychologist, and he just has, like, great advice about everything.

MAUGHAN: I was going to say I’d take a recommendation from Bob Cialdini anytime.

DUCKWORTH: It is beautiful. It is not written by an engineer. It is not written by ChatGPT. It’s written by Kazuo Ishiguro. It’s his eighth novel, and he published it in March 2021, so this is before the universe shifted under our feet. But Klara is an empathic robot. And I won’t spoil the plot because it is your homework, not only to keep a tab open so that you can get with the program, Mike, but also to get this book because Klara is the protagonist in this novel, and the entire novel is told from her perspective. So, you are looking out through the eyes of a hyper-empathic robot, and you’re interacting mostly with humans, but sometimes with your fellow robots. And it’s so eerily prescient of what’s happening right now. And the fact that it was published in 2021, which means it was written before that, blows my mind.

MAUGHAN: I will take the assignment to go read Klara and the Sun. And I, I will just end with this: I think A.I. has an enormous place. I think A.I. relationships can be helpful, but nothing will ever replace the importance of human connection. So, use these things as a tool, like you would use a word processor or a pen, but don’t let them take over for what is real, genuine person-to-person connection.

DUCKWORTH: Mike, yada yada yada yada.

Coming up after the break: a fact-check of today’s episode and stories from our NSQ listeners.

Tu NGUYEN: I think the real lesson here is that I just need to get off dating apps.

*      *      *

And now, here’s a fact-check of today’s conversation:

In the first half of the show, Angela wonders why Mike’s friend, who uses the ChatGPT app’s voice mode, didn’t choose, quote, “the British one.” While some virtual assistants, such as Apple’s Siri, can speak in a variety of accents, ChatGPT users can currently choose from four voices, all of which are programmed to speak with an American accent. But OpenAI recently rolled out a new advanced voice mode to a small group of subscribers, featuring additional vocal abilities like accent imitation and the ability to sing.

Later, Angela says that Wharton professor Ethan Mollick prompted ChatGPT to make an interactive game that ended up being 70 percent as good as the game it took him a decade to make. The story Mollick tells in his book Co-Intelligence is slightly different. Quote, “It proceeded to take me through an entire simulation that, while not perfect, did 80 percent of what took our team months to do.”

Also, Reid Hoffman is the co-founder of LinkedIn, along with four other people, as well as the co-founder of Inflection AI, the technology company that created the chatbot Pi. We should also note that during its conversation with Angela, Pi was incorrect about more than just Mike’s title. The model refers to Mike as an expert in A.I. and tech. While Mike is an executive at a technology company, his expertise is on the business side.

Finally, Angela says that March 2021 was, quote, “right before the universe shifted under our feet.” We want to clarify that Angela was talking about the impact of generative A.I. chatbots, not the effect of the Covid-19 virus, which shut down the world a year prior.

That’s it for the fact-check.

Before we wrap today’s show, let’s hear some thoughts about last week’s episode on self-perception.

Stuti GARG: Hey Mike and Angela, I just listened to your episode on the difference between self perception and external perception. It reminded me of a commercial that the company Dove ran a few years ago, in which they had a sketch artist come in and draw two portraits of a person. Firstly, it was based on their own description. And the second portrait was based on the description of a stranger who had just met them a while ago. It was really interesting to see the differences between the two of them. When it came to the self-portrait picture, a lot of small features, which were insecurities, were blown out of proportion, such as having chubbier cheeks or a bigger nose. When the stranger described the same person, it was in a more flattering and more appreciative light. I find it really interesting how we’re so easily able to see the beauty in others, but struggle to do so in ourselves.

Tu NGUYEN: Hey, “Michelangela.” My name is Tu Nguyen, and I’m a gay cisgender man who mostly presents as masculine, but I love playing tennis in skirts and dresses, and I added photos of me doing so to my dating profile. But I noticed that the number of responses I received was declining, so I removed those photos to see if that was a factor. Sure enough, within a week, I saw an increase in responses again. So it seems my perception that I would appear more confident and attractive, subverting masculinity by playing a sport in a dress, was different from the perception that prospective dates had. Though I think the real lesson here is that I just need to get off dating apps. Thanks for stimulating my brain every week. Ciao!

That was, respectively, Stuti Garg and Tu Nguyen. Thanks to them and to everyone who shared their stories with us. And remember, we’d love to hear your thoughts about A.I. companionship. Send a voice memo to NSQ@Freakonomics.com, and you might hear your voice on the show!

Coming up on No Stupid Questions: Why do we get stuck in suboptimal arrangements?

MAUGHAN: King George’s final prank on the U.S. was, “You guys take the imperial system.”

That’s coming up next week on No Stupid Questions.

*      *      *

No Stupid Questions is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, People I (Mostly) Admire, and The Economics of Everyday Things. All our shows are produced by Stitcher and Renbud Radio. The senior producer of the show is me, Rebecca Lee Douglas, and Lyric Bowditch is our production associate. This episode was mixed by Eleanor Osborne with help from Greg Rippin. We had research assistance from Daniel Moritz-Rabson. Our theme song was composed by Luis Guerra. You can follow us on Twitter @NSQ_Show. If you have a question for a future episode, please email it to NSQ@Freakonomics.com. To learn more, or to read episode transcripts, visit Freakonomics.com/NSQ. Thanks for listening!

MAUGHAN: I, too, still receive a print newspaper at my house.

DUCKWORTH: Oh, really? Ours keeps getting stolen.

MAUGHAN: I’m surprised that other people still want them.

Sources

  • Robert Cialdini, professor emeritus of psychology at Arizona State University.
  • Reid Hoffman, co-founder and executive chairman of LinkedIn; co-founder and board member of Inflection AI.
  • Kazuo Ishiguro, novelist and screenwriter.
  • Ethan Mollick, professor of management and co-director of the Generative A.I. Lab at the Wharton School of the University of Pennsylvania.
  • Ann Patchett, author.
  • Kevin Roose, technology columnist for The New York Times and co-host of the podcast Hard Fork.
  • Niko Tinbergen, 20th-century Dutch biologist and ornithologist.
  • Lyle Ungar, professor of computer and information science at the University of Pennsylvania.
  • E. B. White, 20th-century American author.
