

Leave me comments so I know people are actually reading my blogs! Thanks!

Showing posts with label Paper Review. Show all posts

Thursday, January 08, 2009

Paper Review: A User Interface Using Fingerprint Recognition - Holding Commands and Data Objects on Fingers

This paper by Atsushi Sugiura and Yoshiyuki Koseki from NEC was published at UIST '98.

The main idea of the paper is that a fingerprint scanner can identify each finger by its unique fingerprint, so commands or objects can be associated with individual fingers, as though each finger were carrying things with it.

The authors specifically compared their FUI (Fingerprint User Interface) with Pick-and-Drop, a pen-based direct manipulation technique that lets a user virtually hold a data object in a pen. The advantages of FUI are that no special tools are required (the fingers themselves suffice) and that there is no ID management, since each finger already has a name. Also, because a normal person has 10 fingers, it is possible to hold 10 objects/commands, which would require 10 pens in the Pick-and-Drop system.

When fingers are associated with commands, FUI can be great for operations where the user cannot look at the control panel, for example, operating machines in darkness or manipulating them inside a bag/pocket, because the user does not need to visually identify different buttons for different functions. FUI is also good when the user wants to conceal commands from others, such as opening the cash register while being robbed: one finger would simply open it, while another finger would open it and also silently alert the police.

Fingers can also be used as containers to store information. For example, text strings can be stored with multiple fingers, and when a finger is scanned on another machine or in another application, its string can be retrieved and pasted into the new machine/app. To the user, it feels as though each finger is carrying a specific piece of information.
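To make this copy-and-paste mechanism concrete, here is a minimal Python sketch of the idea. The finger names, the registry, and the stored strings are all my own illustration under the assumption that a fingerprint matcher hands us a finger ID; this is not the authors' implementation.

```python
# Minimal sketch of the FUI "fingers as containers" idea.
# A real system would obtain finger IDs from a fingerprint matcher;
# here the IDs are just hypothetical strings.

clipboard = {}  # finger ID -> stored data object

def store(finger_id, data):
    """Copy: associate a data object with the scanned finger."""
    clipboard[finger_id] = data

def retrieve(finger_id):
    """Paste: return whatever this finger is 'carrying', if anything."""
    return clipboard.get(finger_id)

# Store different strings with different fingers on machine A...
store("right_index", "meeting at 3pm")
store("right_middle", "+1-555-0100")

# ...then scanning the same finger on machine B pastes it back.
print(retrieve("right_index"))  # -> meeting at 3pm
```

Since the mapping lives with the fingerprint data rather than any one machine, the same scan works on any machine that shares the registry, which is exactly the cross-machine paste described above.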

It is important to note the drawbacks of such an interface. Firstly, scanning and identifying fingerprints can be very slow (about 2 seconds). When the paper was written in 1998, it took 1.7 seconds to identify one fingerprint, and more than 10 years later, technology is not much better: the Dell XPS m1530 laptops have a fingerprint scanner built in, which allows the user to associate different commands with different fingers, yet it still takes about 1.5 seconds for the application to identify a fingerprint. This delay can be very annoying to the end user.


Secondly, the end user can often forget or get confused about which finger carries what information/command. Thirdly, FUI would not work well for one-handed operation when that hand has to hold the device (such as a mobile phone for a user who is driving). Fourthly, the fingerprint identification algorithm can make mistakes, and when that happens, it is important that the user can invoke undo operations to recover. For things that cannot be undone (such as the launch of a nuclear missile), this technology had better not be used.

Some of the example applications presented in the paper are: operating a CD player, where one finger could mean "play" while another means "fast forward"; and using fingers to store bookmarks for web browsing. An interesting application suggested by the authors is that a phone number can be stored with a finger at home, and then at a public phone, the number can be retrieved from the finger to dial it automatically. While this idea is cool, I personally don't think it's a good one, because the public phone would need access both to people's fingerprint data and to the user's data repository (whether a home computer or online storage), which exposes the user's private information to possible unauthorized access.

The main advantages of fingerprints are that they are unique to each user and that the user normally doesn't lose them (as opposed to keys). However, they are also unchangeable, so once others get hold of such information, they can impersonate the user for the rest of his life.







If you are sleepy but want to stay awake, do push-ups. At least you'll be stronger.

Tuesday, January 06, 2009

Paper Review: Bridging Physical and Virtual Worlds with Electronic Tags

This paper was written by Roy Want, Kenneth P. Fishkin, Anuj Gujar, and Beverly L. Harrison from Xerox PARC and was published at CHI 1999.

This paper extends previous work that attempts to connect physical objects with virtual representations or computational functionality using RFID (Radio Frequency ID) tags. The problem this paper tries to solve is how to leverage the strengths and intuitiveness of the physical world, following physical objects' natural affordances, to give users additional ways to interact with applications and information in virtual worlds. This problem matters because it enriches the interfaces through which users interact naturally with information and computational devices, and it is important to both interface designers and general users.

The main insight of this paper is that RFID tags should be used instead of bar codes or glyphs, because they allow seamless augmentation with unobtrusive tagging. They are very small (easy to hide), inexpensive, and require no precise alignment or registration. The RFID tags themselves need no on-board power, last a long time, and can easily be added to physical objects. The authors also pointed out two disadvantages: an administrator needs to associate functionality with the tags and maintain those associations, and, because of the unobtrusiveness, users might not know what is tagged or with what semantics.

The authors experimented with several prototype applications, such as tagging a French dictionary so that, when sensed, it invokes a language translation program to translate the currently displayed document. Another example is a tagged bookmark that binds the current page to the tag so the page can be retrieved later. Other examples include tagged books/documents, business cards, a photo cube, and a wristwatch. The power of this idea is that a tag can be associated with any semantics, such as an attached function or a context-based service.
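The tag-to-semantics binding at the heart of these prototypes is essentially a dispatch table. Here is a small Python sketch of the idea, with tag IDs and actions invented by me for illustration (the paper does not give code):

```python
# Sketch of binding semantics to RFID tag IDs, in the spirit of the
# paper's prototypes. Tag IDs and actions are hypothetical.

actions = {}  # tag ID -> callable invoked when the tag is sensed

def bind(tag_id, action):
    """Administrator step: associate a function with a tag."""
    actions[tag_id] = action

def on_tag_sensed(tag_id):
    """Runtime step: a reader saw this tag; run its bound action."""
    handler = actions.get(tag_id)
    return handler() if handler else "unknown tag"

bind("tag:dictionary", lambda: "launch French translation")
bind("tag:bookmark42", lambda: "open saved web page")

print(on_tag_sensed("tag:dictionary"))  # -> launch French translation
```

Note how this also makes the paper's two disadvantages visible: someone has to populate and maintain the `actions` table, and a user sensing an unbound or unknown tag gets no useful behavior.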

The evaluation used in this paper consists of the prototype applications. Since the paper discusses the general idea of using RFID tags with physical objects, such an evaluation is sufficient. For papers proposing RFID tags for a very specific problem in a specific context (such as an intelligent walker for eldercare), a more extensive evaluation in that context would be necessary.



Since we are talking about RFID technology here, I'll extend on this topic some more. The kind of tags discussed in the paper look like the one shown in the image on the right. Another type, more commonly used with merchandise nowadays, looks like the picture below that. What makes the technology so attractive is that these tags are passively powered: the tags themselves contain no batteries. When we point an RFID reader at a tag, the tag gets powered by the reader and then sends radio signals back. That's why the more powerful the reader (and the longer and more sensitive the antenna), the longer the sensing range. These tags are also super cheap and very durable, and they last a long time. The two educational videos below explain what RFID is and how it works.




Currently, the most widespread use of RFID technology is in the manufacturing and retailing industries, where it enables a quick and easy way to identify products. However, it also raises severe security and privacy concerns. For example, I have read news articles about thieves equipped with powerful RFID readers who scanned semi-trucks at truck stops to identify what goods were being transported; they successfully identified a truckload of brand new Dell computers, which they later stole. By extension, someone could also easily identify what kind of appliances or furniture you have at home. The following videos demonstrate how hackers could easily start your car or duplicate your passport information when the technology was applied carelessly.




A few years back, a government lab in California (I cannot remember which one now) wanted to require all employees to be implanted with RFID tags for security reasons, because unlike badges, implanted tags cannot be stolen or lost (in fact, this actually makes things less secure, because tag IDs can easily be duplicated). This ultimately resulted in a new law, signed into effect by Governor Schwarzenegger, making it illegal in California to track employees with implanted RFID chips.

Public detectability and ease of duplication are the two culprits to blame. However, what I want to point out is that RFID technology can still be applied in many areas, as long as the design is done carefully; the usefulness and feasibility really depend on the context. With the French dictionary example from the paper, for instance, security and privacy are not really concerns, and RFID tagging can be used creatively and with lots of fun.





Don't wait till the last day to submit your conference paper. The power supply of your computer might die and you might not be able to get to your files in time.

Monday, January 05, 2009

Paper Review: Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms

This is a paper by Hiroshi Ishii and Brygg Ullmer from the MIT Media Lab published at CHI 1997 Conference. Note that it's a paper over 10 years old. However, some of the ideas proposed in this paper are still fun, exciting and powerful even today.

The main idea is that people should not be restricted to the standard GUI (Graphical User Interface) when interacting with computer systems, or more accurately, when interacting with information. Instead, the authors suggest using physical objects in the real world, because we have developed rich languages and cultures around valued haptic interaction with real physical objects.
As the authors put it: "Our intention is to take advantage of natural physical affordances to achieve a heightened legibility and seamlessness of interaction between people and information."
So how exactly would one go about doing this? The authors focused on graspable objects, which they called phicons (short for physical icons), and associated them with functions using metaphors. Three prototypes were presented as demonstrations: metaDESK, transBOARD and ambientROOM.
metaDESK

This design included a nearly horizontal back-projected graphical surface (the desk), an arm-mounted LCD screen, and a passive optically transparent "lens". Here a phicon could be a small model of the famous Great Dome building at MIT. Once it is placed on the desk, the display shows a 2D map with the location of the Great Dome building right underneath the phicon. Then, as the user moves or rotates the phicon, the map moves and rotates accordingly. A second phicon (a model of the Media Lab building) can also be placed on the map, and the two can then be used simultaneously to scale the map. The arm-mounted LCD can display a 3D model of the map and let the user traverse it as the arm is moved. The transparent "lens" can be used rather like a magnifying glass on the desktop display to reveal hidden information about each building.
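The two-phicon interaction boils down to recovering a rotate/scale transform from where the two phicons were and where they are now. Here is my own reconstruction of that geometry in Python; this is a sketch of the math, not code from the paper:

```python
import math

# Given old positions p1, p2 of the two phicons and new positions
# q1, q2, recover the scale factor and rotation angle to apply to
# the map underneath (translation follows from the anchor phicon).

def map_transform(p1, p2, q1, q2):
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]   # old phicon-to-phicon vector
    wx, wy = q2[0] - q1[0], q2[1] - q1[1]   # new phicon-to-phicon vector
    scale = math.hypot(wx, wy) / math.hypot(vx, vy)
    rotation = math.atan2(wy, wx) - math.atan2(vy, vx)
    return scale, math.degrees(rotation)

# Dragging the second phicon twice as far from the first
# doubles the map scale without rotating it.
print(map_transform((0, 0), (1, 0), (0, 0), (2, 0)))  # -> (2.0, 0.0)
```

This is why two phicons are enough: two point correspondences pin down translation, rotation, and uniform scale all at once, so the map always stays "attached" to both building models.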

The video below is a demonstration of the metaDESK system.





ambientROOM

The idea behind this system is that while we get information from what we are focusing on, such as the person we are having a dialog with, we also get information from ambient sources, such as passing traffic, lighting, and weather conditions. Therefore, if we could present information as ambient background and allow users to manipulate it with phicons, the user could mostly focus on his main task, such as reading email, while still monitoring other information flows passively and being alerted to abnormal situations by background information sources. The example given in the paper displayed a web site's traffic as ambient background: the authors first tried the sound of raindrops to represent web page hits, and eventually settled on ripples on the surface of water, created by projecting light onto a water tank.

The video below is a demonstration of this system.




transBOARD

This was implemented on a SoftBoard product that monitors the activity of tagged physical pens and erasers with a scanning infrared laser. "hyperCARDs" (barcode-tagged paper cards) are used as containers for digital strokes, which are broadcast live to remote users who might be monitoring the session with an identical hyperCARD. The hyperCARD can then be carried home or to the office like an index card.

What makes this paper exciting to me is the idea of using real-world physical objects and their natural affordances as metaphors to interact with information, and especially to interact with a robot. In past research in our lab, a model airplane was used as a phicon to command a UAV (Unmanned Air Vehicle): the operator could simply hold and turn the model airplane, and the UAV would perform the same maneuver in mid-air. The metaphor is very intuitive and maps very well onto the information we need to manipulate.

At the end of the paper, there is also a fun discussion about optical metaphors and how they can be coupled with digital information. It is an interesting read overall, but to better understand how the prototype systems work, the videos are the way to go.



Liberate your mind. Inspiration can be found in the many everyday objects around you.

Friday, January 02, 2009

Paper Review: Astronauts Must Program Robots

This is a position paper presented at the "To Boldly Go Where No Human-Robot Team Has Gone Before" AAAI Spring Symposium by Dr. Yim from the University of Pennsylvania. A lab mate of mine had the privilege of attending; I only got to read the paper.

Now, don’t let the title fool you. This is not a paper about having astronauts sit in space coding C++. The main idea is that, given the complexity and the many possible unexpected events of Mars and Lunar operations, astronauts will need to construct tools (“programming robots”) from a rich set of modules to complete various tasks; therefore, a new programming model to fit such needs is necessary. The paper defines a robot as “a machine or device that operates automatically or by remote control” and program as “to provide (a machine) with a set of coded working instructions.” Note that these are very broad definitions: we could even call a copier a robot and call specifying how many copies to make programming.

The paper argues that in addition to robot autonomy, user interfaces and data filtering also require programming, and that the context for this programming should be a system that is modular in both hardware and software. Divide and conquer is a common approach that works well; the tradeoff is the granularity of the modules. Astronauts are highly skilled/intelligent people, so the question becomes: would sending the programming tools to them be an effective and efficient use of resources? The problem now falls under the Human-Robot Interaction domain.

The paper also presented, and effectively argued against, some of the common counterarguments:

- Local teleoperation is all that is required
Paper: Some forms of autonomy are likely to be incorporated as part of the user-interface.
Lanny: Yep! I step on gas, and the car goes.

- Programming can occur remotely from ground-based engineers
Paper: Terrain interaction on Mars cannot be duplicated exactly on earth.
Lanny: Don’t forget the 20+ minutes of communication delay.

- We can send highly capable robots that can handle any contingency circumstance
Paper: that’s impossible.
Lanny: “Good morning, Dave!” (See 2001: A Space Odyssey)

- Adding programmability and versatility will reduce efficiency and robustness.
Paper: The cost of not having flexibility and versatility will likely outweigh the loss of efficiency.
Lanny: Depending on the context.

The solution proposed in this paper is to use a robot system composed of mostly identical modules such as the PolyBot G2 robot shown below.
When multiple modules are put together, they can form a 4-legged robot or a bipedal one.


During an IROS 2003 workshop, participants were challenged to build a robot from these modules that could gamble using a slot machine; you can see the winning configuration in this video:

The paper concludes that, given the complexity of Mars and Lunar habitation, the level of sophistication in programming may need to be extended to a similar level of complexity.

Robots of this type are also called shape-changing or reconfigurable robots. Here's a video demonstrating the rich set of functions one can create with them:


This paper is certainly a fun read; however, I'd also like to point out some of the drawbacks of such modular systems and some other possible solutions.

One major drawback of such modular systems is that they are homogeneous, without any specialized sensors, actuators, or processors: it is either unnecessary or too expensive to put these components on every module. A heterogeneous system made of small groups of homogeneous components might be a better idea in most contexts; however, that also adds multiple levels of complexity to configuration and programming.

Identifying and configuring a workable solution using these modular systems also adds complexity to the problem and workload for the astronauts (taking time and effort away from other tasks). For example, what would be a good robot shape for a specific task, and how can such a shape be built? The configuration of modules might not be intuitive to the astronauts, and while in space it is also difficult/expensive/dangerous for them to evaluate and test configurations, with possibly unexpected consequences.

If the modular system uses a multi-agent approach for decision making, the decision process can be very difficult for the astronauts to understand. If the decision making is centralized, then extra effort is required to generate and understand communications from each individual module.

Depending on the context, a specialized robot might be more desirable (for example, a transformer robot that can turn into a Roomba vacuum cleaner and a car is cool but unnecessary). Therefore, adding a bit more flexibility to the specialized robot might be better than going completely modular.

Therefore, I think a good solution for astronauts should be a combination of human, specialized robots with added flexibility, and highly flexible robots (modular ones) as additional tools to deal with the unexpected.




"HAL: I am sorry Dave, I am afraid I can't do that."




Picture of the Day:


Thursday, July 03, 2008

Paper Review: The Seven Ages of Information Retrieval (2)

This is a make up post! Continuing from previous post.

In this survey paper, the author uses the analogy of Shakespeare’s seven ages of man to describe and predict the different stages in the evolution of Information Retrieval (IR) systems. Note that the paper was written in 1996, very near the beginning of the Internet/dot-com boom. Now, in 2008, only two years away from the final stage of IR (2010) described by the author, we certainly have an unfair advantage in being able to validate and criticize some of the author’s predictions, just as the author had the same advantage over Vannevar Bush’s predictions from 1945.

The paper makes a good contribution to the field by describing the history of IR systems from 1945 to 1996, with abundant information on the various technologies developed, the IR systems built, and how they affected research in IR. The paper is especially well organized and easy to understand. It starts by introducing Bush’s predictions and ends by affirming that Bush’s predictions will be achieved within one lifetime, which makes the paper feel complete. The author also uses the comparison between the simple statistical approach and the more sophisticated Artificial Intelligence (AI) approach as a main thread, which connects the seven ages well.

However, the paper also has some shortcomings. Firstly, AI is a big field that also uses probabilistic/Bayesian methods throughout its many subfields, so there is no clear-cut boundary between AI and IR. For example, Natural Language Processing (NLP) is commonly considered a subfield of AI, but many NLP techniques are also IR techniques.

Secondly, the author did not provide enough coverage of the AI side of the story, probably because he considered himself in the IR camp. For example, Artificial Neural Networks (ANNs) started around 1975, and backpropagation gained recognition in 1986 (the idea actually dates to 1974). ANNs can be used to detect patterns in text documents and are a great tool for IR, but they are never mentioned in the paper. Another example is computer vision: the field started in the 1970s, and by 1995 many algorithms had been developed to analyze image content, yet the paper doesn’t mention any of them. The Support Vector Machine (SVM), another great tool for IR, also came out in 1996, but I suspect it appeared after the author wrote this paper.

The paper also fails to mention many important IR techniques, such as tf-idf and discriminant functions. In particular, it does not cover evaluation methods such as K-L divergence and the F1 measure in enough depth. More coverage of techniques like these would have improved the quality of the survey.
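For readers who haven't seen it, tf-idf is simple enough to show in a few lines: a term is weighted up by its frequency in a document and down by how many documents contain it. The toy corpus below is my own invention, and this is the plain textbook formulation rather than any particular system's variant:

```python
import math

# Toy corpus for illustrating tf-idf weighting.
docs = [
    "information retrieval systems",
    "retrieval of information",
    "neural networks for vision",
]

def tf_idf(term, doc_index):
    words = docs[doc_index].split()
    tf = words.count(term) / len(words)          # term frequency in this doc
    df = sum(1 for d in docs if term in d.split())  # docs containing the term
    idf = math.log(len(docs) / df)               # rarer terms score higher
    return tf * idf

# "retrieval" appears in 2 of 3 docs, so it gets a modest weight;
# a term unique to one document would score higher.
print(round(tf_idf("retrieval", 0), 3))  # -> 0.135
```

This one formula is a good stand-in for the whole statistical camp the survey describes: no understanding of meaning, just counting, yet it ranks documents surprisingly well.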

Additionally, some of the graphs in the paper (Figures 4, 5, and 9) do not contribute much to its content. Adding more information to these graphs to show correlations, or combining them, would be more beneficial to readers.

In the latter part of the paper, the author made predictions about the possible evolution of IR and also pointed out potential problems. Since we know how technology evolved from 1996 to 2008, I’ll address some of them here.
The author predicted that there would be enough guidance companies on the Web to serve each user, so the lack of any fundamental advances in knowledge organization would not matter. What do we do when we need to look for information online these days? We search using Google or Wikipedia, and most of the time we are relatively happy with the results. Google made it its mission to “organize the world's information and make it universally accessible and useful”, and Wikipedia’s mission is to “empower and engage people around the world to collect and develop educational content under a free license or in the public domain, and to disseminate it effectively and globally”. This leads to an interesting idea: with the help of Google and Wikipedia, maybe we can make the Internet the “Expert System” or “Knowledge Database” and have agents learn from it directly.

The author also worried about commercial publishing on the Internet. These days, the music industry has made (or probably was forced to make) online distribution an integral part of its sales channels. The sale of e-books, although not mainstream, is slowly growing its market share, and various fancy e-book readers (e.g., the Amazon Kindle) are getting better and making headlines. Google Book Search and Amazon’s book preview feature are also getting more and more attention.

In the paper, the author cautioned about storage and transfer constraints for digital video. Thanks to ever-lower storage costs and many competing broadband service providers, a majority of Internet users today have fast connections and watch video online through streaming sites such as YouTube.com (content provided by users) or hulu.com and ku6.com (content provided by commercial content owners). Even Google ads these days contain video content. The paper proposed that in the 2000s more research would be needed on image, audio, and video content extraction; he was right on. Even today, extracting information from abundant rich-multimedia content is still a very challenging problem for many researchers. Beyond traditional media, we now also have new media such as Google Earth, where you can retrieve information from a hybrid of satellite images, regular maps, and 360-degree street views, coupled with driving directions and estimated travel times.

In the retirement age of IR, the author predicted that “the central library buildings on campus have been reclaimed for other uses, as students access all the works they need from dormitory room computer”. I don’t think this will happen in two years. Libraries still play major roles for university students and teachers, and university bookstores are still making huge profits off poor students. And one has to admit that holding a physical book in hand is a very different experience from reading a book online.

To reduce the amount of junk and clutter on the Internet, the author suggested that perhaps anonymous posting should not be allowed. This sounds funny to modern ears, when privacy is such a big concern, though clutter remains a big problem: think about the trillions of documents out there with more and more rich-multimedia content, plus the flourishing blogs, forums, and social networks. Google suggested using page ratings by users, which was not well received. I think we just have to rely on advances in AI and search engines to deal with it. Techniques such as taking user preferences and past history into account are certainly the right way to go.

The author further pointed out potential problems such as illegal copying (pirating, in today’s terms), copyright law itself, the difficulty for people to upload, legal liability, and public policy debates restricting technological development and availability. These remain challenges for IR systems today (the words RIAA and peer-to-peer network suddenly emerged in my head for some reason!) and will probably take more than two years to resolve.

In my personal opinion, I think AI will start to play a leading role in IR in the coming years, and one day we will have true question-answering information retrieval at the fingertips of every Internet user. This concludes my review of this paper. Thanks for reading!




Listen to smart people.

Wednesday, July 02, 2008

Paper Review: The Seven Ages of Information Retrieval (1)

This is a make up post!

In this paper review series, I will summarize interesting papers I read in a non-technical way (as best I can) and add my own opinions. Since my research interest is in AI robotics, the papers I review will mostly relate to those topics. So this is a good way of reading about research ideas without worrying too much about the math involved. I will also provide links to each paper, so if you want, you can read the actual paper after reading my review.

The paper I review today talks about the history of Information Retrieval. How does this relate to robots? I will reveal the answer in future posts, so stay tuned.

Here's the PDF link for the paper "The Seven Ages of Information Retrieval" by Michael Lesk. And here below is the first part of my review:

This paper uses Shakespeare’s concept of the seven ages of man to describe/predict the evolution of Information Retrieval from 1945 to 2010. Throughout the paper, the author compares two “competing” approaches to IR: simple statistical methods, i.e., statistics (Warren Weaver’s approach), and sophisticated information analysis, i.e., artificial intelligence (Vannevar Bush’s approach). Keep in mind that the paper was written in 1996, just at the beginning of the Internet/dot-com boom. That gives us the unfair advantage of being able to criticize some of the author’s predictions (just as the author had the advantage in criticizing Bush’s).

In the childhood stage of IR (1945-1955), people still worked with very old technology. Having no idea how completely technology would change people’s lives by the end of the century, Bush made predictions about the evolution of IR. He believed that photographic inventions (such as ultramicrofiche) would have a great impact on libraries and IR, with which the author disagreed. Bush also predicted automatic typing from dictation and OCR, which had not quite been achieved by 1996. His prediction about the capabilities of computer systems became reality in the 1960s, while the 7.5TB-per-user storage he predicted was far from 1996’s reality. Bush predicted interfaces personalized to the individual user, and that people would search their own notes before searching scientific papers, but until after the 1970s it was difficult to get information into computers. The first IR systems were built in the 1950s, using indexes and concordances.

In the schoolboy stage (1960s), the first large-scale information systems were built. Computers could search indexes much better than humans, which demanded more detailed indexing. However, indexing could also become too expensive, hence the idea of free-text searching, which eliminates the need for manual indexing. Objectors pointed out that the words appearing in a document might not be the correct label for its subject; one solution is official vocabularies. The ideas of recall and precision also emerged as methods for evaluating IR systems, and evaluations showed that free-text indexing was as effective as manual indexing and much cheaper. New IR techniques such as relevance feedback and multi-lingual retrieval were invented. The 1960s also saw the start of research into natural language question answering, with AI researchers building systems to retrieve actual answers instead of documents, which turned out to be fragile.
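The recall and precision measures born in this era are easy to make concrete: precision is the fraction of retrieved documents that are relevant, and recall is the fraction of relevant documents that were retrieved. A toy sketch, with document sets invented by me:

```python
# Precision and recall for a single query.
# retrieved: the documents the system returned;
# relevant:  the documents a human judged relevant.

def precision_recall(retrieved, relevant):
    hits = len(retrieved & relevant)  # relevant docs actually returned
    return hits / len(retrieved), hits / len(relevant)

retrieved = {"d1", "d2", "d3", "d4"}
relevant = {"d2", "d4", "d5"}

p, r = precision_recall(retrieved, relevant)
print(p, r)  # precision = 0.5, recall = 2/3
```

The tension between the two (return more documents and recall rises while precision usually falls) is exactly what made them useful for comparing free-text indexing against manual indexing.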

In the adulthood stage (1970s), the development of computer typesetting and word processing and the availability of time-sharing systems allowed IR to mature into real systems. Some of the early large-scale systems included Dialog, Orbit, BRS, OCLC, and Lexis. The most important research progress was the rise of probabilistic information retrieval, with techniques such as term frequency. On the AI side, the key subjects in the 1970s were speech recognition and the beginning of expert systems. AI researchers felt they were attacking more fundamental and complex problems and that there were inherent limits to the IR string-searching approach. They built programs that mapped information into standard patterns, but these tended to operate on databases rather than text files. The IR camp felt the AI researchers did not run evaluated experiments, and in fact built only prototypes, which were at grave risk of not generalizing.

In the maturity stage (1980s), more information was available in machine-readable form and was kept that way, and there was an enormous increase in the number of databases available on the online systems. Online Public Access Catalogs (OPACs) developed during this period, and many current magazines and newspapers went online. There was increasing interest in new kinds of retrieval methods, such as sense disambiguation using machine-readable dictionaries and computational linguistics; these all fall under the statistical kind of retrieval. Because of the size of large commercial systems, evaluation of IR became very difficult. The widespread use of CD-ROM was a key technology change; it fit well with traditional information publishing economics and developed into a real threat to the online systems. Meanwhile, the AI community continued work on expert systems and knowledge representation languages. Later in the decade, however, the failure of expert systems to deliver on their initial promises caused a movement away from this area, which marked the “AI winter”.

In the mid-life crisis stage (1990s), another technological revolution arrived: the Internet. What’s remarkable is not that everyone is accessing information, but that everyone is providing information on a free basis. This matches the model Bush forecast, in which each user organizes information of personal interest and trades it with others. Classification-style search engines (such as Yahoo) also came out, and the Internet became a standard medium for publishing. Another important technology was scanning, which lowered the cost of digitizing publications. The Federal government also started a Digital Library research initiative. However, there was still very large scatter in the performance of retrieval systems, not only by question but even over individual relevant documents within answer lists. The author didn’t mention how the AI side fared during this period.

In the fulfillment stage (2000s), the author predicted how IR might evolve. He believed that more ordinary questions would be answered by reference to online materials rather than paper, that new books would be offered online, and that there would be guidance companies on the web, so the lack of any fundamental advances in knowledge organization would not matter. He thought the area requiring more research was the handling of images, sounds, and video. He noted that online publishing wouldn’t pose a problem for academic publishing, but would for commercial publishing, and he further discussed the dramatic storage requirements of video content.

In the retirement stage (2010), the author forecast that the basic job of conversion to machine-readable form would be done and that a great deal of multimedia information would be available, as easy to deal with as text. Internationalism would become a major issue. As for research, work would focus on improving the systems and learning new ways to use them. There might even be PhDs in probabilistic retrieval.

The author further pointed out potential problems such as illegal copying (pirating, in today’s terms), copyright law itself, the abundance of junk and clutter on the Internet, the difficulty for people to upload, legal liability, and public policy debates restricting technological development and availability. At the end, the author also expressed the positive view that Bush’s dream will be achieved within one lifetime and that the job of organizing information could gain higher status in the very near future.

[To be continued....]



Bill Gates does the Robot!
(See hi res video at http://www.microsoft.com)
(Rumor says no more Jerry Seinfeld and Bill Gates duo!)