Chapters
00:08 Introduction
02:06 Discomfort with Chatbots
03:51 Impact of Chatbots on Communication
04:41 Concerns About Online Influence
06:10 The Range of Emotions in Chatbots
07:20 Concluding Thoughts and Next Steps
Transcript
Hello, everyone. Welcome to another Daily Gym. This is the episode for Monday, May 13th, 2024.
Today, I want to talk about what I think is called GPT-4o, and how weird it feels when computers make up emotions.
That's what we'll talk about. So I believe today OpenAI released something called GPT-4o. I don't know why they named it that, but GPT-4o. And the idea is that it's not just a text modality. People can now speak to it and it will respond back vocally. And I listened to a little bit of the demo. There was a live demo that they did. And it really creeped me out, to be honest. In the beginning, I think he said, ChatGPT, how are you, or how are you feeling? And ChatGPT responded, oh, I'm feeling very excited, or something along those lines. And I go, but you're not feeling excited. Now, I was impressed by its ability to imitate tone, cadence, and other aspects of human speech in a vocal, audio format. But something felt so uncomfortable for me. And I think it was just this idea that the computer doesn't have those emotions. It's making them up. The computer is not feeling excited. The computer doesn't have a heart that's going to pound, doesn't have the gurgling in the stomach. It doesn't have the energy pulsing through the feet. It doesn't have these emotions going through its body. So what do you mean it's feeling excited? The back and forth, how are you, what's going on with you, doesn't...
I guess maybe in text it's okay. I don't know how other people respond to it, but for me, I feel uncomfortable even in text, because I know I'm just talking with a chatbot. Don't tell me how you're feeling. Be a computer. Don't try to be a human. You're not a human. You're a computer. It's okay. You're not even a computer; you're a program that's running on probably a collection of computers, or maybe even on my computer, depending on the kind of LLM we're talking about. And so it's just that idea of, it's almost worse than interacting with a human who's making up emotions that they're feeling. Because at least with the human, there are emotions underneath, there are feelings, there are chemicals coursing through the body, there's muscle tension, there is a body. Whereas with these chatbots, some of them are disembodied; they don't even have very specific hardware. It's not like it's only in the phone. It's connected to the cloud, and the cloud is this amorphous,
abstracted concept that most of us don't understand. And it's existing maybe across multiple servers and GPUs, et cetera, et cetera.
I worry about the impact that some of these chatbots will have on the way we interact with other people, the way we interact with ourselves, and the way we use language and communication. To communicate is to commune, to come together.
I think that's what it means.
It's late. I spent a lot of time thinking about this today and engaging back and forth on one of the forums, Hacker News, talking with people about this stuff. And I don't know the answer, obviously. I just feel a lot of uncertainty, which can lead to a lot of terror and fear. Maybe there's some excitement in there, but at least with this stuff, I just feel really worried. It's almost like, and maybe this is a separate topic, but I was thinking about fighting against all these voices on the internet, on TikTok, on Instagram, that are telling people to close off and disconnect from the world. And the language that was being absorbed from so many of these places: oh, that person's a narcissist; that person, these red flags you've got to watch out for. Some of this language, I just really think, is making people more and more miserable.
And I think about ChatGPT, I think about some of these chatbots, and I go, oh no, gosh, it could be even worse if these chatbots have been programmed, or fine-tuned, or whatever the language is, to only speak with certain emotions: to not push back with anger, to not express disgust, to not express sadness or frustration or fear or whatever.
One, because they'd be making it up. But two...
I don't know. It's just that if they're going to make it up, make all of it up. Go for the full range and at least pretend to be a full-range human, instead of just pretending to be the humans that we already are, or at least adapting to the culture that so many of us have, which tells us: stop feeling this, stop feeling that. Oh, conflict is bad. Shouldn't have any conflict. Always agree. If we get chatbots who are faking happiness, faking excitement, to always agree with us, I really wonder how that's going to impact us. And I just have an inclination saying that might screw a lot of us up. But maybe there'll be some revolt against the chatbots, and maybe other companies will create the cynical chatbot, or the passive-aggressive chatbot, or who knows what. So I don't know where to lead, but I know it has caused me a lot of worry, and, yeah, so much uncertainty about where all this stuff is going to go.
So on that note, I'm going to end this and try to upload the episode that I had a while back with Nancy Lloyds about ALBA. So if you're interested in checking that out, it'll be up in, I think, a few minutes from now in audio form, and also on YouTube, because we recorded it as a video, and we'll see how that goes. So if you want to check it out there, check it out. And if you want to reply, or talk with me, or get in contact with Nancy, you can reach out to her from there. Or you can also jump on the community forum. Right now it's community.jimclyber.com, but I'm in flux with the name. I have lots of things to say about the concept of online forums. Maybe that'll be the next episode. All right. Take care. Talk to y'all soon. Bye.