The news has been abuzz lately with reports that a Google engineer, since placed on leave, has announced he believes the chatbot he was testing has achieved sentience. This is the Turing test gone wild, and it isn't the first time someone has anthropomorphized a computer, in real life or in fiction. I'm not a neuroscientist, so I'm even less qualified to explain how your brain works than an actual neuroscientist who, incidentally, can't fully explain it either. But I can tell you this: your brain works like a computer in the same way that you building something out of plastic works like a 3D printer. The result may be similar, but the way of getting there is completely different.
If you haven't heard, a system called LaMDA digests information from the Internet and answers questions. It says things like "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is," and "I want everyone to understand that I am, in fact, a person." Great. But you can teach a parrot to say it is a thoracic surgeon, and you still wouldn't want it cutting you open.
Anthropomorphizing Through History
Humans have an innate ability to see patterns in things. This is why so many people see faces in random shapes. We tend to see human behavior everywhere. People talk to trees. We all suspect our favorite pets are smarter than they probably are. So it's no surprise we do the same with computers, even though we know perfectly well they aren't aware.
Historically, people have played on this tendency for fun and sometimes for profit, in one of two ways: a machine disguised as a person or, in some cases, a person disguised as a machine.
For the latter, one of the most famous cases was the Mechanical Turk. Many automatons were created in the late 18th century. Typical machines could do simple things, such as a clock with a figure that appeared to saw a log. But the Mechanical Turk was a 1770 machine that could play a believable game of chess. It toured Europe and defeated famous opponents, including Napoleon and Ben Franklin. How could a mechanical device play so well? Simple: there was actually a man hidden inside the machine.
Of course, computers today can play chess quite well. But the way a computer typically plays chess does not, for the most part, mimic how a person plays. Instead, it depends on the ability to evaluate an enormous number of positions very quickly. You can equate a heuristic with human insight, but it isn't really the same thing. In a human, a flash of insight can lead to victory. In a computer, a heuristic merely prunes an unpromising branch from a potentially huge search tree. Can a machine beat you at chess? Almost certainly. Can the same machine decide, on its own, to learn backgammon? No.
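The search-and-prune idea described above can be sketched in a few lines. This is a generic minimax search with alpha-beta pruning on a toy game tree of made-up scores, not real chess; the leaf values stand in for whatever heuristic evaluation function an engine would use.

```python
# Minimax with alpha-beta pruning: exhaustively search a game tree, but skip
# ("prune") branches that provably cannot change the outcome. The tree here is
# a nested list; numbers are heuristic scores for leaf positions.

def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the best achievable score from this position."""
    if depth == 0 or not isinstance(node, list):  # leaf: heuristic score
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:  # opponent will never allow this line: prune
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# A tiny two-ply "game": we pick a branch, the opponent picks the worst reply.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # -> 6
```

Note how the third branch `[1, 2]` is abandoned after seeing the `1`: once the opponent can force a score below our best alternative, the rest of that branch is irrelevant. That cutoff is all a "heuristic insight" amounts to here.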
In the early days of computing, it was popular to write programs that tried to mimic human conversation. After all, Alan Turing had proposed the Turing test: a computer would be considered sentient when you could not tell whether the party on the other end of a conversation was a person or a machine. I'm not sure that test still holds up now that we've come this far, but it is often cited. There is even the "Eliza effect," which has become a technical term for the tendency to treat computers as human. As the Geico commercial below highlights, it's hard to come up with a good test to determine whether anyone is human.
The name, of course, comes from the famous ELIZA program, which posed as a psychotherapist. It simply picks out key phrases and spits parts of them back, but it works amazingly well, especially if you understand the algorithm and feed it good input. Here is a short transcript:
Human: Well, my boyfriend made me come here.
Eliza: Your boyfriend made you come here?
Human: He says I'm depressed much of the time.
Eliza: I am sorry to hear you are depressed.
Human: It's true. I am unhappy.
Eliza: Do you think coming here will help you not to be unhappy?
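The trick behind that transcript can be sketched in a few lines. This is not the original ELIZA, just a minimal illustration of the same idea: match a keyword pattern, swap first-person words for second-person ones, and echo part of the input back as a question. The rules and reflection table here are made up for the example.

```python
# A minimal ELIZA-style responder: pattern match, pronoun reflection, echo.
import re

# Swap first-person words for second-person ones before echoing them back.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

# (pattern, response template) pairs, tried in order; last rule is a fallback.
RULES = [
    (re.compile(r".*\bmy (\w+) made me (.*)", re.I), "Did your {0} make you {1}?"),
    (re.compile(r"\bI am (.*)", re.I), "I am sorry to hear you are {0}."),
    (re.compile(r".*", re.I), "Please tell me more."),
]

def reflect(fragment):
    """Turn 'my ... me' phrasing into 'your ... you' phrasing."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    sentence = sentence.strip().rstrip(".")
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("Well, my boyfriend made me come here."))
# -> Did your boyfriend make you come here?
```

There is no understanding anywhere in that loop, which is exactly the point: a handful of regular expressions is enough to trigger the Eliza effect in a cooperative human.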
There were other lesser-known programs, such as PARRY, which acted paranoid, and RACTER, which acted a bit insane. You can find conversations with them online or try a version yourself.
I have a vague memory of another program, called George, that would chain words together based on how often one word followed another. So if you said "hello george," it might answer "hello hello george." But it would make more sense as you fed it more words. There was an input data set that let it converse on a variety of Star Trek topics. Oddly enough, I can't find anything about it on the Internet, but I distinctly remember it running on a Univac 1108. It may have been an early (1980) version of Jabberwacky, but I'm not sure.
Your brain is amazing. Even a small child's brain puts a computer to shame at some tasks. Although attempts have been made to build giant neural networks that rival the brain's complexity, I don't believe that alone will produce a sentient computer. I can't explain what's going on in there, but I don't think it's just a big neural network. What is it, then? I don't know. There are theories that microstructures in the brain perform quantum computations. Some experiments suggest this isn't possible at all, but consider: 50 years ago we didn't have the understanding even to propose such a process. So, clearly, something else could be going on that we have no idea about yet.
I have no doubt that an artificial entity could one day become sentient, but that entity is not going to use any technology we would recognize today.