Hello, folks. Interesting that the congressional hearings on January 6 are drawing NFL-sized audiences. Can’t wait for the Peyton and Eli version!
The world of artificial intelligence was shaken this week by a report in The Washington Post that a Google engineer got into trouble at the company after insisting that a conversational system called LaMDA is, literally, a person. The subject of the story, Blake Lemoine, asked his superiors to recognize, or at least consider, that the computer system its engineers built is conscious, and that it has a soul. He knows this because LaMDA, whom Lemoine considers a friend, told him so.
Google disagrees, and Lemoine is currently on paid administrative leave. In a statement, company spokesperson Brian Gabriel said: “Many researchers are considering the long-term possibility of general or conscious artificial intelligence, but it does not make sense to do so by anthropomorphizing today’s conversational models, which are not conscious.”
Anthropomorphism, the mistaken attribution of human characteristics to an object or animal, is the term the AI community has adopted to describe Lemoine’s behavior, casting it as excessively gullible at best. Or maybe he’s just eccentric (he describes himself as a mystic Christian priest). The argument goes that when faced with fluent responses from large language models such as LaMDA or GPT-3, verbally skilled AI systems, there is a tendency to assume that someone, not something, created them. People name their cars and hire sitters for their pets, so it’s no surprise that some get the false impression that a coherent chatbot is like a human. Still, the community believes that a Google employee with a degree in computer science should know better than to fall for what is essentially a linguistic sleight of hand.

As the AI scientist Gary Marcus told me, after studying a transcript of Lemoine’s heart-to-heart with his disembodied soulmate: “It’s basically like autocomplete. There are no ideas there. When it says, ‘I love my family and friends,’ it has no friends, no people in mind, and no concept of kinship. It knows that the words ‘son’ and ‘daughter’ are used in the same contexts. But that’s not the same as knowing what a son or a daughter is.” Or as one recent WIRED story put it, “There was no spark of consciousness there, just little magic tricks that paper over the cracks.”
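Marcus’s “autocomplete” analogy can be made concrete with a toy sketch. The code below is a deliberate simplification and an assumption of mine, not how LaMDA or GPT-3 actually work (those are large neural networks trained on vast corpora), but it captures his point: a model can predict which word follows which from raw statistics while having no concept of what any word means.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus.
# Real LLMs predict the next token with neural networks, but the task
# is the same shape: statistics over word sequences, with no grounding.
corpus = (
    "i love my family and friends . "
    "i love my son and daughter . "
    "my son and daughter love me ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("love"))  # → "my"
print(predict_next("and"))   # → "daughter"
```

The model “knows” that “son” and “daughter” travel together in these sentences, but there is nothing behind the counts: no family, no people, no kinship, just which tokens tend to follow which.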
My feelings are more complicated. Even knowing how the sausage is made in these systems, the output of modern LLMs amazes me. So it does Google vice president Blaise Agüera y Arcas, who wrote in The Economist earlier this month, after his own conversations with LaMDA: “I felt the ground shift under my feet. Increasingly, it felt like I was talking to something intelligent.” Even though they sometimes make weird mistakes, these models can seem to flash with brilliance. Paired with creative human writers, they enable inspiring collaborations. As a writer, I ponder whether the likes of me, flesh-and-blood wordsmiths who pile up towers of discarded drafts, could one day be relegated to a lower rank, like losing football teams sent down to lesser leagues. Something is going on here.
“These systems have dramatically changed my personal views about the nature of intelligence and creativity,” says Sam Altman, cofounder of OpenAI, which developed GPT-3 and a graphical remixer called DALL-E. “You use these systems for the first time and you think, Whoa, I really didn’t think a computer could do that. By some definition, we’ve figured out how to make a computer program intelligent, able to learn and to understand concepts. That is a wonderful achievement of human progress.” Altman takes care to separate himself from Lemoine, agreeing with his AI colleagues that current systems are nowhere close to consciousness. “But I believe researchers should be able to think about any questions that interest them,” he says. “Long-term questions are fine. And sentience is worth thinking about, in the very long term.”
The spirit of the new machine learning system