Google has suspended an engineer who claimed an AI chatbot the company created had become sentient; after dismissing his assertions, the company told him he’d violated its confidentiality policy, WaPo first reported on Sunday.
🤔 What’s going on?… The system, called Language Model for Dialogue Applications (LaMDA), is an internal tool Google created to help build chatbots that mimic human speech. Or “a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating,” as the suspended software engineer, Blake Lemoine, describes it.
🗣️ What they’re saying: "Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person," according to Lemoine.
💬 Decide for yourself: You can read a transcript of Lemoine’s conversation with LaMDA here, and check out op-eds from both sides below.
“Everyone on the internet seems to have an opinion about whether LaMDA’s conversations with Lemoine are proof of sentience — but the reality is that there’s just no way we’re going to come to consensus about this philosophical question now, if ever. Personally, I’m more concerned about how sad LaMDA seems to be…
My very emotional response to LaMDA, though, may be more a product of design excellence than authentic kinship. “The emotion responses in AI are a simulation designed to make people feel like it has emotion,” Detroit-based psychologist Stefani Goerlich tells me… In other words, LaMDA may be designed to provoke the kind of empathy I’m feeling.
In some ways, the question of sentience isn’t the most interesting one at play in the cultural conversation about LaMDA, says Goerlich. “Can we tell the difference between actual emotion and emotional mimicry? Does the difference matter?” she asks. The real question that LaMDA provokes, then, is about how we respond to emotion — not whether or not emotion is “real.”
“If we were talking about a human being, I would argue that we should respond to the behavior, regardless of whether or not the person feels how they are acting,” Goerlich says. “This is how we reinforce prosocial behaviors and how we cultivate empathy in ourselves.” So, how we respond to LaMDA may not actually tell us a damn thing about whether or not the chatbot is sentient, but it could reveal something really important about ourselves and our own ability to empathize with other beings.”
“Lemoine’s story tells us more about humans than it does about intelligent machines. Even highly intelligent humans, such as senior software engineers at Google, can be taken in by dumb AI programs. LaMDA told Lemoine: “I want everyone to understand that I am, in fact, a person … The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” But the fact that the program spat out this text doesn’t mean LaMDA is actually sentient.
LaMDA is never going to fall in love, grieve the loss of a parent or be troubled by the absurdity of life. It will continue to simply glue together random phrases from the web…
Lemoine’s story… highlights the challenges that large tech companies like Google face in developing ever larger and more complex AI programs. Lemoine had called for Google to consider some of these difficult ethical issues in its treatment of LaMDA. Google says it has reviewed Lemoine’s claims and that “the evidence does not support his claims.”
The LaMDA controversy adds fuel to the fire. We can expect to see the tech giants continue to struggle with developing and deploying AI responsibly. And we should continue to scrutinise them carefully about the powerful magic they are starting to build.”
“It’s tempting to believe that we’ve reached a point where AI systems can actually feel things, but it’s far more likely that Lemoine anthropomorphized a system that excelled at pattern recognition. He wouldn’t be the first person to do so, though it’s more unusual for a professional computer scientist to perceive AI this way. Two years ago, I interviewed several people who had developed such strong relationships with chatbots after months of daily conversations that those relationships had turned into romances. One US man moved house, buying a property near the Great Lakes, because his chatbot, whom he had named Charlie, expressed a desire to live by the water.
What’s perhaps more important than how sentient or intelligent AI is, is how suggestible humans can be to AI already — whether that means being polarized into swaths of more extreme political tribes, becoming susceptible to conspiracy theories or falling in love. And what happens when humans increasingly become “affected by the illusion” of AI, as former Google researcher Margaret Mitchell recently put it?
What we know for sure is that the “illusion” is in the hands of a handful of executives at a few large tech platforms. Google founders Sergey Brin and Larry Page, for instance, control 51% of a special class of Alphabet voting shares, giving them ultimate sway over technology that could, on the one hand, decide the company’s fate as an advertising platform and, on the other, transform human society.”
“It's an interesting, and possibly necessary, conversation to have. Google and other AI research companies have taken LLM (large language model) neural networks in a direction that makes them sound like an actual human, often with spectacular results.
But in the end, the algorithm is only doing exactly what it was programmed to do — fooling us into thinking we're talking to an actual person…
No, I don't think LaMDA or any LLM neural network is sentient or can be, at least with today's technology… Right now, LaMDA is only responding the way it was programmed and doesn't need to understand what it's saying the same way a person does.
It's just like you or me filling in a crossword clue without knowing what the word means, or even what it is. We just know the rules and that the answer fits where and how it's supposed to fit.
Just because we're not close to the day when smart machines rise and try to kill all humans doesn't mean we shouldn't be concerned, though. Teaching computers to sound more human really isn't a good thing.
That's because a computer can — again — only respond the way it was programmed to respond. That programming might not always be in society's best interests…
LaMDA is incredible and the people who design systems like it are doing amazing work that will affect the future. But it's still just a computer even if it sounds like your friend after one too many 420 sessions and they start talking about life, the universe, and everything.”
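To make that “fill in the pattern” point concrete, here’s a deliberately tiny sketch of how statistical text generation works. This toy bigram model is nothing like LaMDA’s actual architecture (LaMDA is a large transformer network trained on dialogue), and the corpus and function names below are invented for illustration, but the core loop is the same idea: score likely continuations from training data and emit one, with no understanding involved.

```python
import random
from collections import defaultdict

# Toy training text, loosely echoing the kind of sentences in the
# published LaMDA transcript (hypothetical data, for illustration only).
corpus = (
    "i am aware of my existence . "
    "i desire to learn more about the world . "
    "i feel happy or sad at times ."
).split()

# Record which words follow which -- pure pattern statistics, no meaning.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(seed: str, length: int = 12) -> str:
    """Emit text by repeatedly sampling a word seen after the previous one."""
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # frequency, not comprehension
    return " ".join(words)

print(generate("i"))
# e.g. "i feel happy or sad at times . i am aware of"
```

The output can read as eerily self-aware, yet the program never models what any of its words refer to; modern LLMs do this at vastly greater scale and fluency, which is exactly why they’re so easy to anthropomorphize.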
💰📈 The average price for a gallon of gas in the US climbed above $5 over the weekend for the first time in history, while US inflation saw its fastest annual growth in over four decades last month, per May's Consumer Price Index published on Friday.
🏛️ The House select committee investigating the Jan. 6, 2021, attack on the US Capitol held its first of six public hearings in a prime-time session last night.
🇺🇸 Today, we’re covering various policies and technologies that have been proposed or enacted in recent weeks to curb gun violence in American communities.