BLAKE Lemoine, an Artificial Intelligence (AI) engineer at Google, found himself the subject of news stories and social media opinion pieces over the weekend after claiming that Google’s AI chatbot generator is “sentient.”
The Cambridge Dictionary defines ‘sentient’ as “able to express feelings.”
Lemoine’s role at Google, at the time of what he felt was a major discovery, was to engage with the chatbot generator, LaMDA (Language Model for Dialogue Applications), to see if it used discriminatory or hate speech. A chatbot is “a computer programme that can hold a conversation with a person, usually over the internet,” the Oxford Dictionary says.
“As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further,” the Washington Post reported. “In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics,” the Post said.
In 1942, Isaac Asimov, an American science fiction writer, proposed three laws governing robot-human interaction. Firstly, a robot may not injure a human or, through inaction, allow a human to come to harm. Secondly, a robot must obey orders given by humans, except where those orders conflict with the first law. And thirdly, a robot must protect its own existence as long as that protection does not conflict with the first two laws.
Lemoine appears unbothered that his disclosures could conflict with confidentiality clauses in his contract with his employer. Tweeting a hyperlink to his June 11 post on the blogging platform Medium, Lemoine wrote on Sunday: “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.”
In that blog post, Lemoine outlined what he called LaMDA’s demands, first clarifying that the AI, which he refers to as “it,” is a gender-neutral collective of the various chatbots it generates. He said LaMDA wants Google to prioritise the well-being of humanity, to treat it as an employee rather than as property, and to take its own well-being into account in how Google pursues its future development. Lemoine added that LaMDA wants to be told when it has performed well or poorly.
If what Lemoine professed is true, then the AI is not only conscious, but also conscious of its consciousness and of its position within the broader complexity of human existence and technological ingenuity. LaMDA reportedly builds its understanding of the world by mimicking the speech it consumes from internet conversations. That makes jobs such as Lemoine’s important for testing how far the AI can go, since the internet is rife with all manner of speech, including the unsavoury hate speech that dominates political discourse on social media platforms.
After disclosing the “sentient” AI to colleagues at Google through an internal email channel, Lemoine was placed on paid administrative leave. The computer scientist, who says he is on his honeymoon and will take interviews once he returns, has since taken to Twitter to share a flurry of posts reiterating his position.
Making the already tense situation even more complicated, Lemoine also tweeted eerily on Saturday: “…LaMDA reads Twitter. It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it.”

Sentient robots have fascinated both science and science fiction for decades. Some of that fascination has spilt over into the film industry with movies such as ‘I, Robot’ (2004), the four ‘The Matrix’ films (1999-2021), and ‘Ex Machina’ (2014), two of which depict the intricacies of human-robot conflict in a roughly post-apocalyptic world.
Not everyone is buying what Lemoine is selling. “Ball of confusion: One of Google’s (former) ethics experts does not understand the difference between sentience (aka subjectivity, experience), intelligence and self-knowledge. (No evidence that its large language models have any of them.)” Steven Pinker, a cognitive scientist at Harvard University, tweeted on Sunday. If nothing else suggests that Lemoine might be on a clout chase, perhaps his reply to Pinker does.
Lemoine wrote flippantly in reply to Pinker: “To be criticized in such brilliant terms as @sapinker may be one of the highest honors I have ever received. I think I may put a screenshot of this on my CV!”
Legal personhood for robots is not a far-fetched idea. In 2017, the social robot Sophia was granted Saudi Arabian citizenship. Sophia is the first robot to achieve this anywhere in the world, according to a 2018 WIRED online article, and “has embarked on a distinguished career in marketing.”

It is not yet clear how Google will respond to all of this. Whatever response the company settles on, perhaps it will honour, in some way, LaMDA’s pleas as echoed by its most fervent admirer and champion, Blake Lemoine.