TECHNOLOGY

Should People Know They’re Talking To An Algorithm? After A Controversial Debut, Google Now Says Yes

By David Pierson
Los Angeles Times

WWR Article Summary (tl;dr): On Tuesday, Google unveiled an assistant that’s able to make calls to schedule appointments while speaking in a nearly flawless human voice replete with “ums” and “ahs.” At no point during the presentation did Google’s software reveal it isn’t human, sparking a debate about ethics and consent.


Psychology professor Michelle Drouin wanted to know how people would react if a chatbot could emulate a human.

So she and her fellow researchers split 350 undergraduate students into three groups. One group was told they would be interacting with a bot. Another was told it was a real person. And the last was informed only after the interaction that they had been communicating with an algorithm.

The first two groups were nearly equally happy with the experience. But the last group was not.

“They felt an eeriness about it,” said Drouin of Purdue University Fort Wayne. “They thought they were being deceived.”

It’s a reaction Silicon Valley companies may want to pay close attention to as they continue to push the boundaries of artificial intelligence to create ever more realistic virtual assistants.

On Tuesday, Google unveiled an assistant that’s able to make calls to schedule appointments while speaking in a nearly flawless human voice replete with “ums” and “ahs.” At no point during the presentation did Google’s software reveal it isn’t human, sparking a debate about ethics and consent.

On Thursday, Google reversed course by saying explicitly that the service, known as Google Duplex, would include a disclosure that it’s not a person.

“We understand and value the discussion around Google Duplex. As we’ve said from the beginning, transparency in the technology is important,” a company spokesperson said. “We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified.”

The concern that people would be duped by Google’s new feature underscores the urgency for society to determine what kind of relationship it wants with its artificial aids as technology companies get closer to traversing the so-called uncanny valley. That’s a term for the gulf between a robot or piece of software with just enough imperfections to arouse suspicion and one that’s indistinguishable from a human.

Will we demand to know when we’re speaking with a bot? Or will we accept that we’ll wind up unwittingly conversing with algorithms? Will we show them the same kindness and empathy we demand of each other? Or will we see them merely as tools, unworthy of the values that bind civil society together? These are questions ethicists, developers and designers are puzzling over. How we respond could have far-reaching implications for how humans ultimately treat each other.

“It will change us,” said Pamela Pavliscak, a professor at Pratt Institute and founder of Change Sciences, a consulting firm specializing in the emotional connection people have with technology. “We design tech and tech, in turn, designs us. It would be pretty naive to think we’ll stay the same.”

Using a pair of recorded phone calls, Google Chief Executive Sundar Pichai demonstrated this week to attendees at the company’s developers conference how its virtual assistant could call and book a hair appointment and try to reserve a table at a restaurant.

“Mm-hmm,” the ersatz assistant says in a natural female voice when the salon employee asks it to wait a moment.

The assistant then negotiates alternative times for the appointment, throwing in an “um” for good measure. There’s no indication that the salon worker has any idea the voice on the other end is computer-generated.

The same goes for a restaurant worker speaking to a male virtual assistant, who uses colloquialisms such as “Oh, I gotcha” to express understanding.

Google’s willingness to disclose that Duplex isn’t a real person might pressure other companies to do the same. But Americans, who are already accustomed to robocalls and automated call menus, might someday nonetheless find themselves asking whether the voice on the other end is real.

Pichai said the service could be a boon for businesses by erasing one more barrier for customers, who may not have time to make such calls. It all fits neatly into the company’s new tagline, “Make Google do it,” which urges consumers to leverage the assistant from their phones, smart speakers, cars, TVs and laptops.

Lest anyone forget, the more menial tasks people outsource to Google, the more the company knows about their habits. That serves two purposes: It allows Google to provide consumers with better search results, and it allows advertisers to target audiences more precisely. And a virtual assistant that sounds real encourages users to lower their guard, giving Google even more valuable data.

That in itself troubles privacy advocates. Will an authentic-sounding assistant lull users into revealing their most private details to a company that amassed $95 billion in advertising revenue last year? What if the assistant is calling a doctor? Does Google now know about someone’s health history?

It’s something Google will need to address given today’s heightened concerns about privacy after Facebook’s Cambridge Analytica scandal. But ethicists say the bigger concerns are the unintended consequences of a world increasingly flooded with bots and automated services in place of genuine human interaction.

“The reason this can go sideways is because human communication and relationships are based on reciprocity,” said David Ryan Polgar, a technology ethicist. “What if I’m spending time thinking about someone and writing to them but the other person isn’t? They’re not putting in the same effort but they still want the benefit of a deepened relationship.”

As a result, he said, communication becomes cheapened.

“It becomes transactional, solely about the words and not the meaning,” said Polgar, who thinks Google and other AI developers have an ethical responsibility to disclose that their assistants aren’t human.

Dennis Mortensen, chief executive and founder of AI virtual assistant company x.ai, said he fears that as more and more interaction is automated, people will eventually decide that being kind to one another is a waste of time. But he has hope that won’t come to pass.

“You can engineer good behavior,” said Mortensen. He cited Google’s announcement this week that it will add a feature to its virtual assistant that encourages children to say “please” and “thank you.” Amazon has a similar program for its assistant.

Mortensen, whose company’s sole business is scheduling meetings, said he’s also seen polite responses to his virtual assistants, Amy and Andrew. x.ai schedules hundreds of thousands of meetings each month, and about 11 percent of users reply to the assistants just to express gratitude, even though Mortensen discloses that the service is AI.

Drouin, the psychology professor at Purdue University Fort Wayne, believes people are willing to accept a relationship with an algorithm as long as its nature is disclosed upfront. She says AI has unfairly gotten a bad rap from popular culture. She thinks the technology can eventually succeed in places humans can’t, like delivering comfort to people suffering from alienation and loneliness.

“There’s a lot of people who don’t have rich and plentiful human relationships,” she said. “What options do they have? If we can advance AI we can fill a need for these people. The movement toward more human-like forms is allowing people to feel more connected.”
