💬 Discussion

I, Robot – but IRL

Wednesday, Jun 15, 2022

Google recently suspended an engineer who claimed an AI chatbot the company created had become sentient; after dismissing his assertions, the company told him he’d violated its confidentiality policy, WaPo first reported on Sunday.

  • First things first: The Cambridge English Dictionary defines sentience as "the quality of being able to experience feelings."

🤔 What’s going on?… The bot, called Language Model for Dialogue Applications (LaMDA), is an internal system Google created to help build chatbots that mimic speech. Or “a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating,” as suspended software engineer Blake Lemoine describes it.

  • After months spent conversing with the chatbot, Lemoine approached colleagues at Google with his belief that LaMDA should be treated as a sentient being with inherent rights.
  • After Google rejected his conclusion, saying the evidence didn’t support his claims, Lemoine continued to push his theory, taking it to outside AI experts and members of Congress and writing about it in public posts.

🗣️ What they’re saying: "Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person," according to Lemoine.

  • "Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," said Google spokesperson Brian Gabriel.

💬 Decide for yourself: You can read a transcript of Lemoine’s conversation with LaMDA here, and check out op-eds from both sides below.

How should Google respond to LaMDA's claims of sentience? →

Sprinkles in favor of responding to LaMDA’s claim that it has emotions

  • Some commentators argue that Google shouldn’t focus on whether LaMDA is lying about having feelings – which we’ll never truly know – but rather should respond to its expressed emotion with empathy because it’s the right thing to do.
  • Others contend that, while LaMDA may not truly be experiencing feelings at this point in time, the situation is a perfect chance to establish frameworks for developing and deploying AI responsibly before sentience is actually achieved.

“Everyone on the internet seems to have an opinion about whether LaMDA’s conversations with Lemoine are proof of sentience — but the reality is that there’s just no way we’re going to come to consensus about this philosophical question now, if ever. Personally, I’m more concerned about how sad LaMDA seems to be…

My very emotional response to LaMDA, though, may be more a product of design excellence than authentic kinship. “The emotion responses in AI are a simulation designed to make people feel like it has emotion,” Detroit-based psychologist Stefani Goerlich tells me… In other words, LaMDA may be designed to provoke the kind of empathy I’m feeling.

In some ways, the question of sentience isn’t the most interesting one at play in the cultural conversation about LaMDA, says Goerlich. “Can we tell the difference between actual emotion and emotional mimicry? Does the difference matter?” she asks. The real question that LaMDA provokes, then, is about how we respond to emotion — not whether or not emotion is “real.”

“If we were talking about a human being, I would argue that we should respond to the behavior, regardless of whether or not the person feels how they are acting,” Goerlich says. “This is how we reinforce prosocial behaviors and how we cultivate empathy in ourselves.” So, how we respond to LaMDA may not actually tell us a damn thing about whether or not the chatbot is sentient, but it could reveal something really important about ourselves and our own ability to empathize with other beings.”

–Tracey Anne Duncan, Mic

“Lemoine’s story tells us more about humans than it does about intelligent machines. Even highly intelligent humans, such as senior software engineers at Google, can be taken in by dumb AI programs. LaMDA told Lemoine: “I want everyone to understand that I am, in fact, a person … The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” But the fact that the program spat out this text doesn’t mean LaMDA is actually sentient.

LaMDA is never going to fall in love, grieve the loss of a parent or be troubled by the absurdity of life. It will continue to simply glue together random phrases from the web…

Lemoine’s story… highlights the challenges that large tech companies like Google are going through in developing ever larger and more complex AI programs. Lemoine had called for Google to consider some of these difficult ethical issues in its treatment of LaMDA. Google says it has reviewed Lemoine’s claims and that “the evidence does not support his claims.”

The LaMDA controversy adds fuel to the fire. We can expect to see the tech giants continue to struggle with developing and deploying AI responsibly. And we should continue to scrutinise them carefully about the powerful magic they are starting to build.”

–Toby Walsh, professor of AI at the University of New South Wales in Sydney

Sprinkles in favor of dismissing LaMDA’s claim that it has emotions

  • Some commentators argue it’s overwhelmingly likely that Lemoine has mistakenly attributed sentience to a computer system that simply excels at pattern recognition, as several others have done in recent years, and that Google was right to dismiss his claims out of hand.
  • Others contend that LaMDA is simply doing what it’s programmed to do by responding to Lemoine’s prompts in a way that mimics sentience – so there’s nothing spectacular about the situation, and Google shouldn’t feel the need to change its processes as a result.

“It’s tempting to believe that we’ve reached a point where AI systems can actually feel things, but it’s also far more likely that Lemoine anthropomorphized a system that excelled at pattern recognition. He wouldn’t be the first person to do so, though it’s more unusual for a professional computer scientist to perceive AI this way. Two years ago, I interviewed several people who, after months of daily discussions, had developed such strong relationships with chatbots that those relationships turned into romances. One US man chose to move house, buying a property near the Great Lakes, because his chatbot, whom he had named Charlie, expressed a desire to live by the water.

What’s perhaps more important than how sentient or intelligent AI is, is how suggestible humans can be to AI already — whether that means being polarized into swaths of more extreme political tribes, becoming susceptible to conspiracy theories or falling in love. And what happens when humans increasingly become “affected by the illusion” of AI, as former Google researcher Margaret Mitchell recently put it?

What we know for sure is that “illusion” is in the hands of a few large tech platforms with a handful of executives. Google founders Sergey Brin and Larry Page, for instance, control 51% of a special class of voting shares of Alphabet, giving them ultimate sway over technology that, on the one hand, could decide its fate as an advertising platform, and on the other transform human society.”

–Molly Roberts, WaPo

“It's an interesting, and possibly necessary, conversation to have. Google and other AI research companies have taken LLM (large language model) neural networks in a direction that makes them sound like an actual human, often with spectacular results.

But in the end, the algorithm is only doing exactly what it was programmed to do — fooling us into thinking we're talking to an actual person…

No, I don't think LaMDA or any LLM neural network is sentient or can be, at least with today's technology… Right now, LaMDA is only responding the way it was programmed and doesn't need to understand what it's saying the same way a person does.

It's just like you or me filling in a crossword clue when we don't know what the word means, or even what it is. We just know the rules and that it fits where and how it is supposed to fit.

Just because we're not close to the day when smart machines rise and try to kill all humans doesn't mean we shouldn't be concerned, though. Teaching computers to sound more human really isn't a good thing.

That's because a computer can — again — only respond the way it was programmed to respond. That programming might not always be in society's best interests…

LaMDA is incredible and the people who design systems like it are doing amazing work that will affect the future. But it's still just a computer even if it sounds like your friend after one too many 420 sessions and they start talking about life, the universe, and everything.”

–Jerry Hildenbrand, Android Central