Why A.I. Should Be Afraid of Us

Artificial intelligence is gradually catching up with ours. A.I. algorithms can now consistently beat us at chess, poker and multiplayer video games, generate images of human faces indistinguishable from real ones, write news articles (not this one!) and even love stories, and drive cars better than most teenagers do.

But A.I. isn't perfect, if Woebot is any indicator. Woebot, as Karen Brown wrote in the Science Times this week, is an A.I.-powered smartphone app that aims to provide low-cost counseling, using dialogues to guide users through the basic techniques of cognitive behavioral therapy. But many psychologists doubt whether an A.I. algorithm can ever express the kind of empathy required for interpersonal therapy to work.

"These apps really cut down on the essential ingredient that much evidence shows that helps in therapy, which is the therapeutic relationship," said Linda Michaels, a Chicago-based therapist and co-chair of the Psychotherapy Action Network Network, told the Times .

Of course, empathy is a two-way street, and we humans don't show much more of it for bots than bots do for us. Numerous studies have found that when people are placed in a situation where they can cooperate with a benevolent A.I., they are less likely to do so than if the bot were a real person.

"There seems to be something missing regarding reciprocity," said Ophelia Deroy, a philosopher at Ludwig Maximilian University in Munich. "In principle, we would treat a complete stranger better than A.I."

In a recent study, Dr. Deroy and her neuroscience colleagues set out to understand why that is. The researchers paired human subjects with unseen partners, sometimes human and sometimes A.I.; each pair then played a series of classic economic games, among them Trust, Prisoner's Dilemma, Chicken and Stag Hunt, as well as one they call Reciprocity, all designed to measure and reward cooperation.
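These games share a common structure: each player privately chooses to cooperate or defect, and the payoffs reward mutual cooperation while tempting each player to exploit the other. As a rough illustration, here is a minimal Python sketch of one round of the Prisoner's Dilemma; the payoff values are standard textbook numbers, not the ones used in the study.

# A minimal, illustrative one-shot Prisoner's Dilemma round.
# The payoff numbers are conventional textbook values (assumed here),
# not those from Dr. Deroy's study.

# Payoffs as (my_points, partner_points), keyed by (my_move, partner_move).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation rewards both
    ("cooperate", "defect"):    (0, 5),  # the cooperator is exploited
    ("defect",    "cooperate"): (5, 0),  # the defector exploits a cooperator
    ("defect",    "defect"):    (1, 1),  # mutual defection hurts both
}

def play_round(my_move: str, partner_move: str) -> tuple[int, int]:
    """Return the points each player earns for one round."""
    return PAYOFFS[(my_move, partner_move)]

# Against a partner guaranteed to cooperate, defecting maximizes one's
# own payoff -- the temptation the study's participants gave in to.
print(play_round("defect", "cooperate"))  # (5, 0)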

Our lack of reciprocity toward A.I. is commonly assumed to reflect a lack of trust. It is, after all, hyper-rational and unfeeling, surely out only for itself, unlikely to cooperate, so why should we? Dr. Deroy and her colleagues reached a different and perhaps less comforting conclusion. Their study found that people were less likely to cooperate with a bot even when the bot was keen to cooperate. It's not that we don't trust the bot, it's that we do: the bot is guaranteed benevolent, a capital-S sucker, so we exploit it.

That conclusion was borne out by conversations with the study participants afterward. "Not only did they tend not to reciprocate the cooperative intentions of the artificial agents," Dr. Deroy said, "but when they basically betrayed the bot's trust, they did not report guilt, whereas with humans they did." She added, "You can just ignore the bot and there is no feeling that you may have broken any mutual obligation."

This could have real-world implications. When we think of A.I., we tend to think of the Alexas and Siris of our future world, with whom we might form some sort of intimate relationship. But most of our interactions will be one-time, often wordless encounters. Imagine driving on the highway, and a car wants to merge in front of you. If you notice that the car is driverless, you will be far less likely to let it in. And if the A.I. doesn't account for your bad behavior, an accident could ensue.

"What supports cooperation in society on any scale is the establishment of certain standards," said Dr. Deroy. “The social function of guilt is precisely to get people to follow social norms that lead them to compromise, to cooperate with others. And we didn't evolve to have social or moral norms for non-sentient creatures and bots. "

That is, of course, half the premise of "Westworld." (To my surprise, Dr. Deroy had never heard of the HBO series.) But a guilt-free landscape could have consequences, she noted. "We are creatures of habit. So what guarantees that the behavior that gets repeated, and where you show less politeness, less moral obligation, less cooperativeness, will not color and contaminate the rest of your behavior when you interact with another human?"

There are similar consequences for A.I., too. "If people treat them badly, they are programmed to learn from what they experience," she said. "An A.I. that was put on the road and programmed to be benevolent should start to be not that kind to humans, because otherwise it will be stuck in traffic forever." (That is basically the other half of the premise of "Westworld.")

There we have it: the true Turing test is road rage. When a self-driving car starts honking wildly from behind because you cut it off, you will know that humanity has reached the pinnacle of achievement. By then, hopefully, A.I. therapy will be sophisticated enough to help driverless cars solve their anger-management issues.
