Leave me comments so I know people are actually reading my blogs! Thanks!

Tuesday, July 16, 2019

10 Famous Thought Experiments That Just Boggle Your Mind Part 9

Read part 8: The Chinese Room

2. Schrodinger's Cat

Schrödinger’s Cat is a paradox relating to quantum mechanics that was first proposed by the physicist Erwin Schrödinger. He hypothesized a scenario where a cat is sealed inside a box for one hour along with a radioactive element and a vial of deadly poison. There is a 50/50 chance that the radioactive element will decay over the course of the hour. If it does, a Geiger counter will trigger a hammer that breaks the vial, releases the poison, and kills the cat. Since there is an equal chance that this will or will not happen, Schrödinger argued that before the box is opened the cat is simultaneously both alive and dead.

Schrödinger meant this thought experiment to demonstrate the absurdity of Bohr's Copenhagen interpretation of quantum mechanics, in which a quantum system remains in superposition until it interacts with, or is observed by, the external world. As it turned out, successful experiments involving superpositions of relatively large (by the standards of quantum physics) objects have since been performed.



I still remember the day when my statistics professor Dr. Reese tossed a coin onto the floor and then immediately put his foot on top of it. "The event has already occurred. But what is the outcome?" he asked. That was a great example of the Bayesian world: the reality has already happened, but we still don't know it -- that is the uncertainty.

Uncertainty is actually everywhere in AI/Machine Learning/Robotics challenges. For example, suppose an object detection model detected a person, but only with 50% confidence. That means the detected object may or may not be a person at all. And the detected bounding box could be spot on, perfectly bounding the person inside, or it could be well off, overlapping the true box with only a 50% IOU (Intersection Over Union). So how do you use this information if you have to decide whether there is a visitor at your door? In this case, the event has already happened (there's an object there, whether a person or not).

There are also cases where you don't know what the future holds. A self-driving car has no idea what the car next to it will do in the next minute: it could stay in its lane, or it could swerve into your lane and collide with you. A well-designed self-driving car has to be able to deal with this uncertainty; when it notices the car in the next lane starting to weave, maybe it's a good idea to brake a bit and keep a safe distance. But just like Schrödinger’s Cat, before you open the box, anything is possible.
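To make the detection example concrete, here is a minimal Python sketch. The iou function is just the standard overlap formula; the visitor_at_door rule and its 0.7 threshold are purely my own illustrative assumptions, not taken from any real doorbell system.

```python
def iou(box_a, box_b):
    """Intersection Over Union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Hypothetical decision rule: only announce a visitor when the detector is
# confident enough; below the threshold we simply stay uncertain.
def visitor_at_door(detections, confidence_threshold=0.7):
    return any(label == "person" and conf >= confidence_threshold
               for label, conf, box in detections)

detections = [("person", 0.5, (40, 30, 120, 200))]  # 50% confidence
print(visitor_at_door(detections))  # False -- too uncertain to say "visitor"
```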

One solution is to try to obtain more observations that might give you more information and reduce the uncertainty. Maybe if you shake the box and hear the cat meow, then you know the cat is likely still alive. Maybe you wait 20 years, and then the cat is almost certainly dead. Actually, you don't even have to wait that long: a cat will probably die if it eats no food for a week. And what if the cat was pregnant when you put it in the box? You would now have multiple cats that are both dead and alive, and you wouldn't even know how many cats you have.
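Here is a minimal Bayesian-update sketch of the "shake the box and listen" idea. The likelihoods (how often a live or dead cat setup produces a meow) are made-up numbers purely for illustration:

```python
# Prior: 50/50 before any observation, matching the thought experiment.
prior_alive = 0.5
p_meow_given_alive = 0.90  # assumption: a live cat usually meows when shaken
p_meow_given_dead = 0.05   # assumption: rattling apparatus might sound meow-like

# Bayes' rule: P(alive | meow) = P(meow | alive) * P(alive) / P(meow)
p_meow = (p_meow_given_alive * prior_alive
          + p_meow_given_dead * (1 - prior_alive))
posterior_alive = p_meow_given_alive * prior_alive / p_meow
print(round(posterior_alive, 3))  # 0.947 -- the meow makes "alive" far more likely
```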


Let's end this with a picture of cute kittens. Didn't Jeff Dean's team at Google build a deep neural network that learned the concept of a cat all by itself, simply by watching lots and lots of YouTube videos?


Read part 10: Brain in a Vat

BTW: The easiest way to remember my blog address is http://lanny.lannyland.com

Monday, July 15, 2019

10 Famous Thought Experiments That Just Boggle Your Mind Part 8

Read part 7: Monkeys and Typewriters

[Found a 10-year-old draft of this post that was never published. So here it is with some new content added.]

3. The Chinese Room (Turing Test)

Source: Wikimedia Commons
The Chinese Room is a famous thought experiment first proposed in the early 1980s by John Searle, a prominent American philosopher. Searle first hypothetically assumes that there exists a computer program that can take in written Chinese and produce written Chinese responses convincingly enough to pass for a native speaker. Now imagine a man who only speaks English is placed in a sealed room with only a slot in the door. What he has is an English version of the computer program and plenty of scratch paper, pencils, erasers, and file cabinets. He receives Chinese characters through the slot, processes them following the program's instructions, and then produces Chinese characters on paper that he slips back out through the slot. Although he doesn't speak a word of Chinese, Searle argues that through this process the man in the room could convince anyone on the outside that there was a fluent Chinese speaker inside.

Searle wanted to answer the question of whether a machine can truly "understand" Chinese, or whether it only simulates the ability to understand Chinese. Through this thought experiment, Searle argues that since the man in the room does not really understand Chinese, a machine running the same program is likewise only a simulation and doesn't have a "mind" that understands the information, even if it can produce responses that give people the impression of human intelligence.
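The argument is easy to picture as code. Here is a toy "rulebook" in the spirit of the room; the entries are my own made-up examples, and the point is that the program maps input symbols to output symbols without any understanding of what they mean.

```python
# A toy rulebook: symbols in, symbols out, zero understanding.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(slip_of_paper):
    # The man just matches shapes against the rulebook; he never learns
    # what either the question or the answer means.
    return RULEBOOK.get(slip_of_paper, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # 我很好，谢谢。
```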





This thought experiment raises a big question: does something that appears to be intelligent truly possess intelligence? Searle calls the claim that such a system actually possesses intelligence (a mind) "Strong AI", and the claim that it merely acts intelligently "Weak AI". He argues that this thought experiment proves the "Strong AI" claim false.

Today, there are many AI/Machine Learning algorithms at work performing tasks for us human beings, from chatbots to self-driving cars. Especially with the popularity of deep neural networks, computers can do an amazing job at recognizing things such as humans, cars, or stop signs. But how does the machine know an object is a stop sign? With the deep learning approach, we humans don't really know. Interestingly, by changing the values of just a few pixels, an object that still looks like a stop sign to humans can suddenly be classified as a Speed Limit 45 sign. And that's the danger of using black-box systems, where misclassifying the sign could mean the difference between life and death.
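For the curious, here is a minimal sketch of one way such adversarial images are made, the Fast Gradient Sign Method (Goodfellow et al.). The model here stands in for any trained image classifier; nothing in this sketch comes from an actual traffic-sign system.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Nudge every pixel of a batched image tensor by at most epsilon in the
    direction that increases the classifier's loss on the true label."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # The change per pixel is tiny (invisible to a human viewer),
    # yet it is often enough to flip the predicted class.
    return (image + epsilon * image.grad.sign()).detach()
```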




This thought experiment is a direct response to the Turing Test, which deserves a blog post entirely dedicated to that topic. Turing proposed that if a computer can fool human judges into thinking they are conversing with a real human through a text-only chat program, then the computer is considered to have artificial intelligence and has passed the Turing Test.

Based on this simple definition, many programs could be considered to have passed the Turing Test. In 1964, a program named ELIZA out of the MIT AI Lab gained fame by making users believe they were chatting with a psychotherapist, when in fact the program was simply parroting back at patients what they'd just said. Later, in 2007, a Russian chatbot that emulated a woman was able to fool many lonely Russian males into giving out personal and financial details, given that these males probably couldn't think straight, especially after excessive vodka consumption. In both cases, the chatbot programs appeared to be intelligent when they didn't truly understand anything humans had said to them.

[Fun fact: You can talk to a Martian version of ELIZA in Google Earth.]
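Parroting is astonishingly easy to implement. Below is a toy ELIZA-style sketch; the patterns and pronoun reflections are my own simplified examples, not Weizenbaum's actual script.

```python
import re

# Swap pronouns so the user's own words can be echoed back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(user_input):
    match = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", user_input, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."  # default when no pattern matches

print(eliza_reply("I feel nobody reads my blog"))
# -> Why do you feel nobody reads your blog?
```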





On March 23, 2016, Microsoft released a chatbot named Tay via Twitter. Only 16 hours later, Microsoft had to shut down the bot because it had started posting all kinds of inflammatory tweets. Was Tay really a racist? Of course not! But it sure looked like it was intelligent enough to get conversations going.


In one of our Innovation Weeks at work, I actually played with a bunch of chatbots, including Cleverbot and Mitsuku (Pandorabots), and integrated them with smart home/smart assistant functions. The Mitsuku chatbot has won the Loebner Prize four times in the last six years, so it is really up there with respect to its capabilities. During the live demo, when I asked the bot "where is my wife?" it actually replied, "Did you check the bathroom?" Very impressive!! Things got a bit weirder when I had a volunteer talk with the bot and the bot started asking him, "How is your father?"

Earlier this year, OpenAI's researchers unveiled GPT-2, a text-generating algorithm that can write news articles if you give it the beginning part.
"[GPT-2] has no other external input, and no prior understanding of what language is, or how it works," Howard tells The Verge. "Yet it can complete extremely complex series of words, including summarizing an article, translating languages, and much more."
This is a perfect example of the Chinese Room scenario. We have AI that can behave as if it is intelligent, yet has no understanding of language. I guess we have to be super careful about what tasks we give such AI agents/algorithms/models. They might be able to fool us most of the time. But when the time comes that they fail, because they are only fancy simulations, we are in big trouble.
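If you want to try this yourself, here is a short sketch using the Hugging Face transformers library, which hosts the publicly released small GPT-2 model; the prompt and sampling settings are just illustrative choices of mine.

```python
from transformers import pipeline

# Load the small released GPT-2 model behind a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

prompt = "Scientists announced today that cats can be both dead and alive"
result = generator(prompt, max_length=60, do_sample=True)
print(result[0]["generated_text"])  # the prompt plus GPT-2's continuation
```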


Read part 9: Schrodinger's Cat



Picture of the Day:

Amazing depiction of how I felt about my thesis during my grad school years... (picture credit: http://www.phdcomics.com)

BTW: The easiest way to remember my blog address is http://lanny.lannyland.com