Yevgeniy Vorobeychik doesn’t know exactly what sentience is. 

Because he’s an engineer, not a philosopher, Vorobeychik can’t say what it’s like to be a bat or a tree or a rock. He can’t quantify the importance of embodiment to consciousness. He’s not even sure that there’s an inherent problem with people reacting to an artificial intelligence in ways similar to how they react to other people.

Vorobeychik

But when it comes to recent claims made by a Google employee that one of the company’s algorithms is, in fact, sentient, Vorobeychik is certain of one thing: “At the moment there is no ethical issue,” he said. “Not even remotely.”

No matter what the future holds for these programs, Vorobeychik said that his stance will remain the same. “Something that is algorithmic and artificial (such as AI) is ultimately created as a tool for us to use,” he said. “And that is all that it should be.”

Vorobeychik, an associate professor of computer science and engineering at the McKelvey School of Engineering at Washington University in St. Louis, doesn’t believe that Google’s AI, or any AI today, is sentient – whatever that may mean. Debates about whether Google’s LaMDA is a “person” are not substantive at all, Vorobeychik added.

LaMDA, which stands for Language Model for Dialogue Applications, is a kind of uber-chatbot. But unlike most chatbots, which focus on a single task (e.g., customer service), LaMDA is able to move from topic to topic in strikingly natural ways. It is a remarkable computer model, Vorobeychik readily observes. It is not, however, a system built to “reason.” A dramatic oversimplification:

The user’s question serves as the initial context and is translated into something numerical. The AI then generates one word at a time; each new word becomes additional context that helps generate the next. It is fundamentally an algorithmic process.

“It’s a word-generation algorithm,” he said. “It strikes me as very odd to describe that as sentient.”
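For readers who want that oversimplification made concrete, here is a minimal sketch of such a word-generation loop. It is purely illustrative: the tiny vocabulary, the tokenize helper, and the randomized next_token “model” are hypothetical stand-ins, not anything drawn from LaMDA itself.

```python
# Toy illustration of word-at-a-time (autoregressive) generation.
# The vocabulary and the "model" below are hypothetical stand-ins;
# nothing here reflects LaMDA's actual architecture.
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran", "<end>"]

def tokenize(text):
    # Translate words into numbers (indices into the vocabulary).
    return [VOCAB.index(w) for w in text.split() if w in VOCAB]

def next_token(context_ids):
    # Stand-in for the model: a real system scores every vocabulary entry
    # given the numeric context and samples a likely next word.
    random.seed(sum(context_ids))        # deterministic toy "scoring"
    return random.randrange(len(VOCAB))

def generate(prompt, max_words=10):
    context = tokenize(prompt)           # the question becomes the initial context
    output = []
    for _ in range(max_words):
        token_id = next_token(context)   # produce one word at a time
        word = VOCAB[token_id]
        if word == "<end>":
            break
        output.append(word)
        context.append(token_id)         # each word becomes context for the next
    return " ".join(output)

print(generate("the cat sat"))
```

The point of the sketch is only structural: numbers in, one word out at a time, with each output fed back in as context for the next.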

That’s not to say there are not, and won’t continue to be, concerns about how people interact with AIs. Humans are primed to respond to human-like things in human-like ways. If someone is using an AI as a therapist, or for comfort because they are lonely, “that could be a problem. It doesn’t have to be,” Vorobeychik said, “but it could.”

It could be a problem for the person. Any ethical questions related to AIs should center on humans’ relationships with them, Vorobeychik said. Debates about our potential obligations to some hypothetically sentient AI forget who the real “subjects” are.

“The only relevant ethical concern is the impact it would have on human communities,” Vorobeychik said. “I don’t see why we should have any concern about AI itself as such, whatever its reasoning capacity may be.”
