Google’s AI engineer placed on paid leave

Company’s human resources department said he had violated Google’s confidentiality policy

Nico Grant, Cade Metz | San Francisco | Published 14.06.22, 01:05 AM
Blake Lemoine, the engineer, says that Google’s language model has a soul. The company disagrees. Reuters file picture

Google placed an engineer on paid leave recently after dismissing his claim that its artificial intelligence is sentient, surfacing yet another fracas about the company’s most advanced technology.

Blake Lemoine, a senior software engineer in Google’s Responsible AI organisation, said in an interview that he was put on leave on Monday. The company’s human resources department said he had violated Google’s confidentiality policy. The day before his suspension, Lemoine said, he handed over documents to a US senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.

Google said that its systems imitated conversational exchanges and could riff on different topics, but did not have consciousness.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesman, said. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient.” The Washington Post first reported Lemoine’s suspension.

For months, Lemoine had tussled with Google managers, executives and human resources over his surprising claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul.

Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Lemoine did.

Some AI researchers have long made optimistic claims about these technologies soon reaching sentience, but many others dismiss such claims out of hand. “If you used these systems, you would never say such things,” said Emaad Khwaja, a researcher at the University of California, Berkeley.

New York Times News Service
