the company’s language model, LaMDA, may have risen from the protean digital muck as a sentient being. Lemoine has been placed on paid administrative leave after he published lengthy chat logs in support of his belief that LaMDA was self-aware, saying that if he didn’t know what LaMDA was, he’d “think it was a seven-year-old, eight-year-old kid that happens to know physics”.
Here’s the thing. Yes, large language models are incredibly impressive tech, and an afternoon spent playing around with GPT-3 is a frequently mind-blowing experience. But it’s crucial to remember what they really are: powerful pattern-matching machines, trained on massive datasets. At heart they are built on probability, using cold mathematical analysis of these giant pools of written and visual information to “guess” at what might come after any given input.
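To make that “guessing” concrete, here’s a deliberately tiny sketch of the statistical idea: a bigram model that predicts the next word purely by counting which words followed which in its training text. (This is an illustration of probability-based prediction in general, not of GPT-3’s actual architecture, which uses neural networks at vastly greater scale.)

```python
from collections import Counter, defaultdict

# Toy "training data": a handful of words standing in for billions.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build bigram counts: for each word, tally the words that follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word after `word`, by raw frequency."""
    counts = following[word]
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# "cat" follows "the" twice in the corpus, more than any other word,
# so the model "guesses" it — no understanding required.
print(predict_next("the"))
```

No matter how fluent the output gets, the underlying operation is the same kind of thing: picking a likely continuation from learned statistics.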
Of course they’re “intelligent” - just not in the exact same way as humans are 🤷🏻‍♀️