9. The Gettier Problem (The Cow in the Field)
One of the major thought experiments in epistemology (the field of philosophy that deals with knowledge) is what is known as “The Cow in the Field.” It concerns a farmer who is worried his prize cow has wandered off. When the milkman comes to the farm, he tells the farmer not to worry, because he’s seen that the cow is in a nearby field. Though he’s nearly sure the man is right, the farmer takes a look for himself, sees the familiar black and white shape of his cow, and is satisfied that he knows the cow is there. Later on, the milkman drops by the field to double-check. The cow is indeed there, but it’s hidden in a grove of trees. There is also a large sheet of black and white paper caught in a tree, and it is obvious that the farmer mistook it for his cow. The question, then: even though the cow was in the field, was the farmer correct when he said he knew it was there?
The Cow in the Field illustrates Edmund Gettier's criticism of the popular definition of knowledge as “justified true belief”: that is, something becomes knowledge when a person believes it, it is factually true, and the person has a verifiable justification for the belief. In the experiment, the farmer's belief that the cow was there was justified by the testimony of the milkman and his own sighting of a black and white object in the field. It also happened to be true, as the milkman later confirmed. But despite all this, the farmer did not truly know the cow was there, because his reasoning turned out to rest on a false premise. Gettier offered cases like this as proof that the definition of knowledge as justified true belief needed to be amended. The video below shows another example of the Gettier Problem.
A robot or an AI agent can acquire knowledge in several distinct ways. The easiest one (at least for the programmer) is to memorize facts. For example: the capital of the United States is Washington D.C., the Earth is a sphere, and a triangle has three sides. These are beliefs we forcefully inject into the agent's brain, and the agent might blindly take them on faith. AI agents are great at this and can store enormous quantities of facts. This is similar (roughly) to how we humans learn in elementary school.
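To make this concrete, here is a minimal Python sketch of fact memorization as a simple key-value store. The fact names and the recall function are purely illustrative, not any particular robot's actual knowledge base.

```python
# A toy fact store: beliefs are injected directly and taken on faith.
# (Keys and the recall() helper are hypothetical names for illustration.)
facts = {
    "capital_of_united_states": "Washington D.C.",
    "shape_of_earth": "sphere",
    "sides_of_triangle": 3,
}

def recall(key):
    """Return a memorized fact, or admit ignorance."""
    return facts.get(key, "I don't know")

print(recall("capital_of_united_states"))  # Washington D.C.
print(recall("capital_of_france"))         # I don't know -- never memorized
```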
Another way of acquiring knowledge is to learn rules and then apply them to different problems. For example: don't run into an obstacle. Abstracting and representing rules can be quite challenging for designers, which is why robots today don't have many rules programmed into them. Having too many rules can also exponentially increase the computational complexity and cause internal conflicts, unless the robot is designed to ignore rules at times or to apply only the rules that help optimize or maximize certain utilities, much as we humans do at our convenience. However, once the rules are implemented, robots are great at executing them (as long as the rules are clearly defined). For example, we already have AI agents that can solve or generate proofs for very complicated math problems, sometimes better than their human counterparts. This method is similar (roughly) to how we humans learn in middle school. Learning by demonstration probably falls under this category as well.
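As a rough illustration of rule following (and of one crude way to avoid rule conflicts: a fixed priority order), here is a toy Python sketch. The rules, thresholds, and state fields are all made up for the example.

```python
# A toy rule-based controller: each rule is a (condition, action) pair.
# Rules are checked in priority order, which is one crude way to resolve
# conflicts between rules. (All names and thresholds are illustrative.)

def obstacle_ahead(state):
    return state["distance_to_obstacle"] < 0.5  # meters, hypothetical threshold

def battery_low(state):
    return state["battery"] < 0.1

rules = [  # highest priority first
    (obstacle_ahead, "stop"),
    (battery_low, "return_to_dock"),
]

def decide(state, default="move_forward"):
    for condition, action in rules:
        if condition(state):
            return action
    return default

print(decide({"distance_to_obstacle": 0.3, "battery": 0.8}))   # stop
print(decide({"distance_to_obstacle": 2.0, "battery": 0.05}))  # return_to_dock
```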
A third way of acquiring knowledge for robots and AI agents is traditional machine learning on existing data sets. Whether the learning is supervised (records in the data set are labeled by humans) or unsupervised (no labels), the basic idea is that the agent tries to "rationalize" the data, find consistent properties, or "insights", in it, and then apply them to new information (generalize). This is similar (roughly) to how we humans learn in college: we are exposed to a lot of facts, but we have to make general sense of them and arrive at our own, newly identified rules. Agents are normally bounded by the "features" identified by the humans who provided the data sets, though some smarter agents can learn useful "features" of their own (feature, or representation, learning) or even choose which examples to query for labels next ("active learning").
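Here is a toy supervised-learning sketch in plain Python: a handful of human-labeled examples and a 1-nearest-neighbor rule that generalizes to unseen inputs. The features and labels are invented purely for illustration.

```python
import math

# Toy supervised learning: labeled examples (features -> label), then
# generalize to unseen input with a 1-nearest-neighbor rule.
# Features here are hypothetical: (length_cm, weight_kg) of an object.
training_data = [
    ((2.0, 0.1), "screw"),
    ((2.5, 0.1), "screw"),
    ((30.0, 5.0), "brick"),
    ((28.0, 4.5), "brick"),
]

def predict(features):
    """Label a new example by its closest labeled example (Euclidean distance)."""
    nearest = min(training_data,
                  key=lambda item: math.dist(features, item[0]))
    return nearest[1]

print(predict((3.0, 0.2)))   # screw
print(predict((25.0, 4.0)))  # brick
```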
Yet another way of acquiring knowledge for these artificial beings is through Bayesian networks (probabilistic models whose nodes represent variables connected by conditional dependencies). Given a good Bayesian network (or one that's pretty good at self-evolving), the agent first has some a priori beliefs about things (e.g., the sky is blue and grass is green), acquired either through the previously mentioned methods or simply non-informative (e.g., a woman is probably just as lazy as a man). Then, through observations, the agent learns from experience and obtains a posteriori knowledge. The new knowledge might be the complete opposite of the a priori beliefs, so the agent modifies its beliefs about the existing rules, the previous facts, the world, and everything in the universe. You probably already see where I am going. This is similar (roughly) to how we human beings learn in grad school.
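As a minimal sketch of the underlying idea, here is Bayesian belief updating with a single Beta-Bernoulli belief rather than a full Bayesian network: the agent starts from a non-informative prior and revises it as observations arrive. The observation data is made up for the example.

```python
# Bayesian updating of one belief ("the sky is blue") with a Beta-Bernoulli
# model. Prior Beta(1, 1) is non-informative: true is as likely as not.
alpha, beta = 1.0, 1.0

observations = [1, 1, 1, 0, 1, 1]  # 1 = saw a blue sky, 0 = did not (made-up data)

for obs in observations:
    if obs:
        alpha += 1
    else:
        beta += 1

posterior_mean = alpha / (alpha + beta)
print(f"Posterior belief that the sky is blue: {posterior_mean:.2f}")
```

If the evidence had gone the other way, the same update would pull the belief in the opposite direction, which is exactly the "modify your beliefs when experience contradicts them" behavior described above.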
Not to ridicule things, but by the time the agent becomes really imaginative and starts to question everything simply based on reports from lower-level agents (hmm... grad school robots?), we make it a professor. (I hope my advisor is not reading this...)
Anyway, back to the original topic. IMHO, we can't always rely on justified true beliefs, but isn't at least trying to justify a belief better than blind belief? Of course, when it comes to authority, my robots don't have to justify their beliefs, because to them, I am God!
Read Part 3: The Ticking Time Bomb
Video of the Day:
Great examples of illusions. They show we can't always trust what we see with our own eyes. Does that mean we shouldn't trust anything we see?
BTW: The easiest way to remember my blog address is http://lanny.lannyland.com