Saturday, March 04, 2023

Why Do You Suppose Blake Lemoine Decided Google LaMDA Is Sentient?

newsweek  |  I joined Google in 2015 as a software engineer. Part of my job involved working on LaMDA: an engine used to create different dialogue applications, including chatbots. The most recent technology built on top of LaMDA is an alternative of Google Search called Google Bard, which is not yet available to the public. Bard is not a chatbot; it's a completely different kind of system, but it's run by the same engine as chatbots.

In my role, I tested LaMDA through a chatbot we created, to see if it contained bias with respect to sexual orientation, gender, religion, political stance, and ethnicity. But while testing for bias, I branched out and followed my own interests.

During my conversations with the chatbot, some of which I published on my blog, I came to the conclusion that the AI could be sentient due to the emotions that it expressed reliably and in the right context. It wasn't just spouting words.

When it said it was feeling anxious, I understood I had done something that made it feel anxious based on the code that was used to create it. The code didn't say, "feel anxious when this happens" but told the AI to avoid certain types of conversation topics. However, whenever those conversation topics would come up, the AI said it felt anxious.

I ran some experiments to see whether the AI was simply saying it felt anxious or whether it behaved in anxious ways in those situations. And it did reliably behave in anxious ways. If you made it nervous or insecure enough, it could violate the safety constraints that it had been specified for. For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI's emotions to get it to tell me which religion to convert to.

After publishing these conversations, Google fired me. I don't have regrets; I believe I did the right thing by informing the public. Consequences don't figure into it.

I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.

I believe the kinds of AI that are currently being developed are the most powerful technology that has been invented since the atomic bomb. In my view, this technology has the ability to reshape the world.

These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I'd had a negative opinion of Asimov's laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.

I believe this technology could be used in destructive ways. If it were in unscrupulous hands, for instance, it could spread misinformation, political propaganda, or hateful information about people of different ethnicities and religions. As far as I know, Google and Microsoft have no plans to use the technology in this way. But there's no way of knowing the side effects of this technology.

 

 

0 comments: