
Monday, July 15, 2019

10 Famous Thought Experiments That Just Boggle Your Mind Part 8

Read part 7: Monkeys and Typewriters

[Found a 10-year-old draft of this post that was never published, so here it is with some new content added.]

3. The Chinese Room (Turing Test)

Source: Wikicomms
The Chinese Room is a famous thought experiment first proposed in the early 1980s by John Searle, a prominent American philosopher. Searle begins by hypothetically assuming that there exists a computer program that can carry on a conversation in written Chinese: it takes Chinese characters as input and produces appropriate Chinese characters as output. Now imagine a man who speaks only English is placed in a sealed room with nothing but a slot in the door. All he has is an English version of the program's instructions, plus plenty of scratch paper, pencils, erasers, and file cabinets. He receives Chinese characters through the slot, processes them by following the program's instructions, and produces Chinese characters on paper that he slips back out through the slot in the door. Although he doesn't speak a word of Chinese, Searle argues that by following this process the man in the room could convince anyone on the outside that he was a fluent speaker of Chinese.

Searle wanted to answer the question of whether such a machine truly "understands" Chinese, or whether it merely simulates the ability to understand Chinese. With this thought experiment, Searle argues that since the man in the room does not really understand Chinese, the machine may likewise be nothing more than a simulation, without a "mind" that understands the information, even if it can produce responses that give people the impression of human intelligence.
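One way to make the "rule-following without understanding" point concrete is a toy program that maps incoming symbols to outgoing symbols using nothing but a lookup table, the way the man in the room mechanically follows his English rulebook. The sketch below is purely illustrative (it is not from Searle's paper), and the phrases and rules are invented for the example:

```python
# A toy "Chinese Room": the program maps input symbols to output symbols
# using only a rulebook (a lookup table). It manipulates the characters
# correctly without "understanding" a single one of them.
# The rules below are invented purely for illustration.

RULEBOOK = {
    "你好": "你好！",                  # greeting -> greeting
    "你会说中文吗": "当然会。",          # "Can you speak Chinese?" -> "Of course."
    "今天天气怎么样": "今天天气很好。",   # "How is the weather today?" -> "The weather is nice."
}

def chinese_room(symbols_in: str) -> str:
    """Follow the rulebook mechanically; no meaning is involved."""
    return RULEBOOK.get(symbols_in, "请再说一遍。")  # default: "Please say that again."

if __name__ == "__main__":
    for message in ["你好", "你会说中文吗", "今天晚饭吃什么"]:
        print(message, "->", chinese_room(message))
```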





This thought experiment raises a big question: does something that appears to be intelligent truly possess intelligence? Searle uses "Strong AI" for the position that such a system genuinely understands and has a mind, and "Weak AI" for the position that it only simulates understanding. He argues that this thought experiment shows "Strong AI" to be false.

Today, many AI/machine learning algorithms are at work performing tasks for us human beings, from chatbots to self-driving cars. Especially with the popularity of deep neural networks, computers can do an amazing job at recognizing things such as humans, cars, or stop signs. But how does the machine know an object is a stop sign? With the deep learning approach, we humans don't really know. Interestingly, by changing the values of just a few pixels, an image that still looks like a stop sign to humans can suddenly be classified as a Speed Limit 45 sign. And that's the danger of using black-box systems: misclassifying the sign could mean the difference between life and death.
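The post doesn't say how such a misleading image gets produced; one common technique is the Fast Gradient Sign Method (FGSM), which nudges every pixel a tiny step in the direction that increases the classifier's error. Here is a minimal PyTorch-style sketch, under the assumption that you already have a trained classifier, an input image tensor, and its true label; all of those names are placeholders:

```python
# Minimal sketch of the Fast Gradient Sign Method (FGSM) for crafting an
# adversarial image. Assumes a trained PyTorch classifier `model`, an input
# tensor `image` of shape [1, 3, H, W], and its true label `label`;
# these are placeholders, not from the original post.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    output = model(image)                      # forward pass
    loss = F.cross_entropy(output, label)      # loss w.r.t. the true class
    loss.backward()                            # gradients w.r.t. the pixels
    # Move every pixel a tiny step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()      # keep pixel values valid

# Usage (hypothetical): adv = fgsm_attack(model, stop_sign_image, stop_sign_label)
# To a human, `adv` still looks like a stop sign; the model may now label it
# as something else entirely, e.g. a speed-limit sign.
```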




This thought experiment is really a response to the Turing Test, which deserves a blog post entirely dedicated to it. Turing proposed that if a computer can fool human judges into thinking they are conversing with a real human through a text-only chat program, then the computer is considered to have artificial intelligence and has passed the Turing Test.

Based on this simple definition, many programs could be considered to have passed the Turing Test. In 1964, a program named ELIZA out of the MIT AI Lab gained fame by making users believe they were actually chatting with a psychotherapist, when in fact the program was simply parroting back at users what they had just said. Later, in 2007, a Russian chatbot that emulated a woman managed to fool many lonely Russian men into giving out personal and financial details (these men probably couldn't think straight, especially after excessive vodka consumption). In both cases, the chatbots appeared to be intelligent, even though they didn't truly understand what the humans had said to them.
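To see how shallow that kind of parroting can be, here is a tiny ELIZA-style sketch (not Weizenbaum's actual program): it swaps pronouns and wraps the user's own words in a question, which is enough to feel surprisingly conversational.

```python
# A tiny ELIZA-style responder (illustrative only, not Weizenbaum's code).
# It has no understanding of the input; it just swaps pronouns and echoes
# the user's words back as a question.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I", "your": "my"}

def reflect(text: str) -> str:
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(statement: str) -> str:
    match = re.match(r"i feel (.*)", statement.lower().rstrip(".!?"))
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return f"Tell me more about why you say '{reflect(statement)}'."

print(eliza_reply("I feel nobody listens to me."))
# -> Why do you feel nobody listens to you?
```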

[Fun fact: You can talk to a Martian version of ELIZA in Google Earth.]





On March 23, 2016, Microsoft released a chatbot named Tay via Twitter. Only 16 hours later, Microsoft had to shut down the bot because it had started posting all kinds of inflammatory tweets. Was Tay really a racist? Of course not! But it sure looked intelligent enough to get conversations going.


In one of our Innovation Weeks at work, I actually played with a bunch of chatbots, including Cleverbot and Mitsuku (Pandorabots), and integrated them with smart home/smart assistant functions. Mitsuku has won the Loebner Prize 4 times in the last 6 years, so it is really up there in terms of capability. During the live demo, when I asked the bot "Where is my wife?" it actually replied, "Did you check the bathroom?" Very impressive!! Things got a bit weirder when I had a volunteer talk with the bot and the bot started asking him, "How is your father?"
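The post doesn't describe how the chatbots were wired into the smart home functions; one simple approach is a router that handles recognized device commands locally and forwards everything else to the chatbot. The sketch below is purely hypothetical: `turn_on_light` and `ask_chatbot` are stand-ins, not real Pandorabots or smart home APIs.

```python
# Hypothetical sketch of routing user utterances between smart home
# commands and a general-purpose chatbot. The helper functions here are
# placeholders, not real Pandorabots or smart home APIs.

def turn_on_light(room: str) -> str:
    return f"(pretend the {room} light just turned on)"

def ask_chatbot(message: str) -> str:
    return f"(pretend the chatbot answered: '{message}')"

def handle_utterance(text: str) -> str:
    lowered = text.lower()
    if "turn on" in lowered and "light" in lowered:
        # A recognized smart home intent: handle it locally.
        room = "living room" if "living room" in lowered else "default"
        return turn_on_light(room)
    # Anything else goes to the chatbot for small talk.
    return ask_chatbot(text)

print(handle_utterance("Turn on the living room light"))
print(handle_utterance("Where is my wife?"))
```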

Earlier this year, OpenAI's researchers unveiled GPT-2, a text-generating model that can write entire news articles when given just the opening lines.
“[GPT-2] has no other external input, and no prior understanding of what language is, or how it works,” Howard tells The Verge. “Yet it can complete extremely complex series of words, including summarizing an article, translating languages, and much more.”
This is a perfect example of the Chinese Room scenario: we have AI that can behave as if it is intelligent, yet has no understanding of language. I guess we have to be super careful about what tasks we give such AI agents/algorithms/models. They might be able to fool us most of the time, but when they eventually fail, because they are only fancy simulations, we are in big trouble.
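If you want to try this kind of text completion yourself, the Hugging Face transformers library (my own suggestion, not something mentioned in the post) ships a pretrained GPT-2 model. A minimal sketch:

```python
# Minimal sketch of prompting GPT-2 for text completion using the
# Hugging Face `transformers` library (an assumption on my part; the
# original post only discusses OpenAI's announcement).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Scientists announced today that a thought experiment from 1980"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
# The model continues the prompt fluently, yet it has no understanding
# of what any of the words mean -- a Chinese Room in silicon.
```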


Read part 9: Schrodinger's Cat



Picture of the Day:

An amazing depiction of how I felt about my thesis during my grad school years... (picture credit: http://www.phdcomics.com)

BTW: The easiest way to remember my blog address is http://lanny.lannyland.com
