

Leave me comments so I know people are actually reading my blogs! Thanks!

Saturday, February 21, 2009

Robot of the Day: Tetris-Bot, Lego Robot Playing Tetris

Remember the Rubik's Cube solving robots in a previous post? Well, as robots gradually take over our world, they are also taking on more and more of our games, and this time it's Tetris -- one of the most popular video games in the world -- hmm, this really reminds me of those long, sleepless nights as a poor college student!

By pointing a webcam at a computer screen, hooking it up to a Lego Mindstorms NXT robot, and setting the robot next to a keyboard, Branislav Kisacanin created a Tetris-Bot capable of playing Tetris all by itself. Although Branislav claims that this was an educational project for his kids, chances are he had a lot more fun than they did.

The setup has three pieces. The first is a camera capturing video of a computer screen running Tetris. A digital signal processing (DSP) board then processes the video and determines how the falling piece should be moved. The DSP board signals the NXT robot through LED lights, and the robot uses its three fingers to punch three keys on the keyboard: move left, move right, or rotate. Although the robot is capable of punching three keystrokes per second, it moves at a much slower pace.

Judging from his choice of a DSP board for the signal processing, the creator Branislav must have a strong engineering background. If I were to build such a robot, I'd probably use a regular computer for the computer vision task. Recognizing the Tetris pieces and their orientations is not very difficult because of the simple colors. The program then just has to use a data structure to represent the state of the game and choose moves that maximize a certain utility (defined by the programmer); a rough sketch of that idea is just below. After the sketch, the video demos the capability of the Tetris-Bot. The actual robot doesn't appear until 1:48, so skip forward if you're in a hurry.
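To make that idea concrete, here is a minimal sketch in Python (my guess at the approach, certainly not Branislav's actual code): represent the well as a grid of 0s and 1s, score each candidate placement with a hand-tuned utility, and pick the best one. The metrics and weights below are just common-sense choices.

```python
def column_heights(board):
    """Height of each column; board[0] is the top row, cells are 0 or 1."""
    rows, cols = len(board), len(board[0])
    heights = []
    for c in range(cols):
        h = 0
        for r in range(rows):
            if board[r][c]:
                h = rows - r
                break
        heights.append(h)
    return heights

def count_holes(board):
    """Empty cells with at least one filled cell somewhere above them."""
    holes = 0
    for c in range(len(board[0])):
        seen_block = False
        for r in range(len(board)):
            if board[r][c]:
                seen_block = True
            elif seen_block:
                holes += 1
    return holes

def utility(board, lines_cleared):
    """Score a hypothetical board state; higher is better.
    These weights are exactly the kind of parameters one could tune."""
    heights = column_heights(board)
    bumpiness = sum(abs(a - b) for a, b in zip(heights, heights[1:]))
    return (10.0 * lines_cleared
            - 0.5 * sum(heights)
            - 2.0 * count_holes(board)
            - 0.3 * bumpiness)
```

The bot would then enumerate every (rotation, column) placement of the falling piece, apply each one to a copy of the board, and send the keystrokes for the placement with the highest utility.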



Tetris-Bot here plays like a novice. My guess is that it will probably be stuck on level 1 forever because of its physical constraints. What would be really nice and fun is to implement some kind of learning algorithm, so the robot actually learns which strategies to play from its own experience, and to add some advance planning based on the pieces shown ahead of time. If the algorithm could adjust its own parameters (such as threshold values for when to clear rows quickly vs. when to wait for a long stick), the Tetris-Bot would look a lot smarter and more intelligent.
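And a toy illustration of the parameter-adjustment idea: treat the utility weights as a vector, perturb them randomly, and keep a perturbation only if the bot then plays better. The play_one_game function here is a made-up stand-in (it just scores how close the weights are to some arbitrary "ideal" values) so the snippet runs on its own; in a real setup it would run the bot with those weights and return the number of lines cleared.

```python
import random

def play_one_game(weights):
    # Stand-in for actually running the bot with these utility weights and
    # returning the number of lines it cleared.  Here: a noisy dummy score.
    target = [10.0, -0.5, -2.0, -0.3]
    return -sum((w - t) ** 2 for w, t in zip(weights, target)) + random.gauss(0, 0.1)

def hill_climb(weights, rounds=200, step=0.2):
    best = play_one_game(weights)
    for _ in range(rounds):
        candidate = [w + random.gauss(0, step) for w in weights]
        score = play_one_game(candidate)
        if score > best:               # keep the tweak only if it helped
            weights, best = candidate, score
    return weights

print(hill_climb([1.0, -1.0, -1.0, -1.0]))
```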

This is yet another example of the kind of robot you can build at home in your free time using a commercially available robotics kit. I know what I am getting for my kids' birthdays -- I am very serious about my kids' education! Aren't you?

So if robots are doing our work and playing our games for us, what is left for humans to do? Well, I can think of at least three things:
  • building better robots
  • blogging about robots, and
  • working on my translation projects
Wait, aren't I doing these already? :) That is, of course, until we have robots that build better robots, robots that blog about robots, and robots that can translate better than I do ... and I am sure glad I won't live long enough to see that day!

Video of the Day:

This is excellent engineering too: OK Go - This Too Shall Pass

Friday, February 20, 2009

Random Thoughts: Adventure in Japan -- Part 2

Adventure in Japan - Part 1

It has been a while since I returned from Osaka, Japan, but I thought I'd share a bit more of my experience for people who would like to visit Japan one day. Let me start with some traveling tips.
  • Citizens of a lot of places (61 countries and regions, to be exact), including the US, can visit Japan for non-paid activities for 90 days or less without a visa. Just buy a plane ticket and go. It's that easy!

  • There are several ways to get Japanese yen. You can get it from your local bank before the trip; however, be aware that you have to pre-order, and it might take up to 5 business days for them to get the money ready for you. They also charge a service fee ($10 at US Bank) for the exchange (from US dollars to yen, or from yen back to US dollars after you return). This option works well if you exchange a large amount of money. A more convenient way for a short-term visitor to get yen is from the ATMs at the Japanese airport. You will be charged about 3% for the exchange plus an ATM fee (probably $2). This option is better for small amounts (see the quick comparison after this list).

  • Before visiting Japan, I was told that most places in Japan would take credit cards such as American Express or Visa. After visiting Japan, I learned the hard way that this is not true: most Japanese businesses don't take credit cards. Even the McDonald's in downtown Osaka refused to take any credit cards.

  • Power outlets in Japan are different from those in North America. North American outlets are polarized (one slot bigger than the other); Japanese outlets are not (both slots are small). They also have only two holes, not three. If you have polarized plugs, you need an adapter. The hotel might lend you one for free.

  • Standard voltage in Japan is 100V. Make sure your devices can operate at 100V. If not, you need a transformer.
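A quick back-of-the-envelope comparison, using the fee numbers above (which of course vary by bank): the local bank charges a flat $10, while the airport ATM costs roughly 3% of the amount plus a $2 ATM fee. The two break even when 0.03 × amount + 2 = 10, i.e., at about $267. So for anything under roughly $250-300 the ATM route is cheaper, and above that the bank's flat fee wins.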
For the rest of the blog post, I'll focus on one single topic: Japanese Food.

The conference provided a free lunch every day in the form of a very traditional style of Japanese food: the bento box. According to Wikipedia:
Bento (弁当) is a single-portion takeout or home-packed meal common in Japanese cuisine. A traditional bento consists of rice, fish or meat, and one or more pickled or cooked vegetables, usually in a box-shaped container.
The three pictures below show the three kinds of bento box lunches I was fortunate to try out. Each bento box contained a great variety of items, including rice, seafood, and lots of pickled things. Everything in a bento box is served cold, which removes the need for a microwave. I must confess that although the bento boxes looked very colorful and pretty, cold rice and so much pickled meat and vegetables just didn't quite agree with me. And I must mention that all the beautiful wooden boxes were properly recycled to save trees!


Because of the generosity of the HRI 2010 conference organizers (they covered most of the meals) and my very busy schedule, I only had the chance to visit one traditional Japanese restaurant during the trip. The picture below on the left shows the front of the small restaurant in downtown Osaka named Money House. The picture on the right shows the hallway inside, just wide enough for one person, a typical setup for traditional Japanese restaurants.


Since Japan is made up entirely of islands, it was not surprising to see lots of seafood dishes on the menu. A friend in our dinner group, an American who had lived in Japan for 8 years, took charge of all the ordering, so we got to experience some interesting food: for example, deep-fried squid (left), octopus balls (middle), and of course, raw fish (right). The first two actually tasted great despite their weirdness; however, I shied away from the raw fish, because I never eat raw meat (not even a rare steak).


Some other dishes are very similar to Chinese dishes, such as dumplings, stir fried clams, and boiled green soy beans.


There were dishes that tasted very American too, such as the big chicken nugget shown below. Alcohol is also a big part of Japanese culture (see all those bottles in the middle picture), and I wonder how many people in Japan drink and drive. The dinner was great! There is only one thing I'd like to complain about, though: why were all the dishes served on such small plates? See the stack of small plates in the last picture? We were a bunch of hungry grad students, and I am not kidding when I say we can eat a lot!


For our group of 13 people, the dinner cost 3000 yen per person (roughly $35 USD), quite expensive by American standards, but it was well worth it. How often does one get the chance to eat a real, authentic Japanese dinner? And by the way, they did not take credit cards. :)




The easiest way to put a baby to sleep is to give him classical music!

Thursday, February 19, 2009

Paper Review: Using Maximum Entropy for Text Classification

This paper was written by Kamal Nigam, John Lafferty, and Andrew McCallum, all from Carnegie Mellon University. It was presented at the IJCAI-99 Workshop on Machine Learning for Information Filtering.

This paper talks about the use of maximum entropy techniques for text classification and compares the performance to that of naïve Bayes.

Maximum entropy is a general technique for estimating probability distributions from data. The main principle in maximum entropy is that when nothing is known, the distribution should be as uniform as possible, that is, have maximal entropy. In text classification scenarios, maximum entropy estimates the conditional distribution of the class label given a document. The paper uses word counts as features.
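For the record, the exponential form such models take (the paper's model, up to notation) is

$$ P_\Lambda(c \mid d) \;=\; \frac{1}{Z_\Lambda(d)} \exp\Big(\sum_i \lambda_i \, f_i(d, c)\Big), \qquad Z_\Lambda(d) = \sum_{c'} \exp\Big(\sum_i \lambda_i \, f_i(d, c')\Big), $$

where each $f_i(d, c)$ is a word-count feature (in the paper, the count of a particular word in $d$ normalized by the document length, non-zero only for one particular class) and the $\lambda_i$ are the weights learned from the training data.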



Training data is used to set constraints on the conditional distribution. Maximum entropy first identifies a set of feature functions that will be useful for classification; then, for each feature, it measures the feature's expected value over the training data and takes this to be a constraint for the model distribution.
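In symbols, for each feature $f_i$ the learned distribution must satisfy

$$ \frac{1}{|D|} \sum_{d \in D} \sum_{c} P_\Lambda(c \mid d)\, f_i(d, c) \;=\; \frac{1}{|D|} \sum_{d \in D} f_i(d, c_d), $$

where $D$ is the training set and $c_d$ is the labeled class of document $d$: the feature's expected value under the model must match its empirical average.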

Improved Iterative Scaling (IIS) is a hill-climbing algorithm for calculating the parameters of a maximum entropy classifier given a set of constraints. It performs hill climbing in the log-likelihood space of the parameters: at each step, IIS finds an incrementally more likely set of parameters and eventually converges to the globally optimal set.

Maximum entropy can suffer from overfitting; introducing a prior on the model can reduce overfitting and improve performance. To integrate a prior into maximum entropy, the paper proposes using maximum a posteriori estimation for the exponential model instead of maximum likelihood estimation. A Gaussian prior is used in all the experiments.
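To make this concrete, here is a minimal sketch (mine, not the authors') of training such a classifier. The paper trains with Improved Iterative Scaling; the sketch below uses plain gradient ascent on the penalized log-likelihood instead, which is simpler to write and, since the objective is convex, reaches the same optimum. The Gaussian prior shows up as the -W/sigma2 term, i.e., an L2 penalty on the weights.

```python
import numpy as np

def train_maxent(X, y, n_classes, lr=0.1, n_iter=500, sigma2=1.0):
    """X: (n_docs, n_words) word-count features; y: integer class labels."""
    n_docs, n_feats = X.shape
    W = np.zeros((n_classes, n_feats))
    Y = np.eye(n_classes)[y]                      # one-hot labels
    for _ in range(n_iter):
        scores = X @ W.T                          # (n_docs, n_classes)
        scores -= scores.max(axis=1, keepdims=True)
        P = np.exp(scores)
        P /= P.sum(axis=1, keepdims=True)         # model's P(c | d)
        # gradient of the log-likelihood plus the Gaussian (L2) prior
        grad = (Y - P).T @ X / n_docs - W / sigma2
        W += lr * grad
    return W

def predict(W, X):
    return np.argmax(X @ W.T, axis=1)
```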

One good thing about maximum entropy is that, unlike naïve Bayes, it does not rely on any feature independence assumptions.

The paper uses three data sets to compare the performance of maximum entropy to naïve Bayes: WebKB, Industry Sector, and Newsgroups. On the WebKB data set, maximum entropy reduced classification error by more than 40%. On the other two data sets, maximum entropy overfitted and performed worse than naïve Bayes.


Video of the Day:

Liu Qian performing magic tricks at the Chinese New Year Show. Can you figure out how he did the tricks?

Wednesday, February 18, 2009

Full Moon Crescent Saber: Chapter 1 (2)


The girl was young and tender.
Ding Peng felt as if he could no longer breathe, and his heart pounded three times faster than usual.
He had never come so close to a girl.
That was not to say that there were no young girls in his hometown, or that he had never seen any.
He always tried very hard to be abstinent and had used numerous methods to do so: shoving snow into his pants, soaking his head in the creek, pricking himself in the leg with a needle, running, mountain climbing, doing cartwheels….
Before obtaining his fame, he would not allow such things to distract him. He would not let anything waste his strength.
But now, all of a sudden, he saw a naked woman, a young, beautiful, naked woman.
With snow white skin, firm breasts, slender and sleek legs….
It took him all his strength to turn his head away, but the woman ran to him and held him in her arms, begging while gasping.
“Help me! You must save me!”
She was so close to him. Her breaths were warm and sweet. He could even hear her heartbeats.
His mouth was so dry that he couldn’t even utter a single word.
The girl had realized the change in his body, and her face reddened. Trying her best to cover herself up with her hands, she asked.
“You…eh…can…can you take off your clothes and lend them to me?”
Although the robe was the only piece of clothing he had, he took it off without hesitation. The girl calmed down a bit after draping his robe over herself.
“Thanks!” she said earnestly.
Ding Peng finally calmed down a bit himself and managed to speak.
“Is there someone chasing you?”
The girl nodded, and tears quickly welled up in her eyes.
“This place is out of the way and hard to find. Even if someone comes for you, you don’t have to be afraid,” said Ding Peng.
He was a man, born with the instinct to protect women, let alone such a beautiful girl. He held her hands in his.
“As long as me and my sword are here, you don’t have to be afraid.”
“Thank you,” the girl said gently, feeling reassured.
She seemed to have said those words before. Then she looked downward and closed her mouth.
Ding Peng didn’t know what to say.
He was going to ask, “Why are you running? Who’s after you? Why are they chasing you?”
But he forgot to ask, and she didn’t say.
Though she draped the robe over herself, such a short robe simply could not cover a fully grown girl entirely.
A girl like her had too many inviting places on her body.
His heart was still thumping, only too rapidly.
After a long while he finally noticed that her eyes were fixed on his packet of beef stew.
This could very well be his last meal, for he had only one copper penny left.
However, he said without a second thought, “This food is clean. Why don’t you have some?”
“Thanks!” the girl said again.
“Help yourself!” replied Ding Peng.
The girl did indeed help herself, promptly.
Ding Peng could never have imagined that such a beautiful girl could eat like a horse.
She must have been hungry for a long time and suffered deeply.
He could even picture in his mind the kind of tragedy the girl had endured.
A lonely girl, stripped of her clothes by a bunch of villains, locked down in a cellar, without any food. After quite some struggle, she finally managed to escape.
While he was imagining these scenes, she had nearly finished off all of his food.
She finished all the beef and the bean curd. She even ate all the steamed buns. All that was left was no more than a dozen peanuts.
Even she herself was somewhat embarrassed. “You can have these,” she said in an almost inaudible voice as she pushed the peanuts over.
Ding Peng smiled.
He really wanted to cry, but somehow he couldn’t help but smile.
The girl also smiled, her face blushing, as red as a pretty flower in the sunshine.
A smile can not only make people happy, but also shorten the distance between two people.
They were both more relaxed by now, and the girl finally told her story.
Ding Peng’s imagination was actually not far from her real story.
The girl had indeed been kidnapped by a bunch of villains. She had been stripped of her clothes and locked in a cellar. For several days, she didn’t eat anything. Those villains thought she was too hungry to move about, so they became careless, and she took the opportunity and escaped.
“I am so lucky to have run into you!” Words felt too pale to express her gratitude toward him.
“Where are they? I’ll go with you to find them!” Ding Peng asked as he rubbed the hilt of his sword.
“You cannot go!” the girl gasped.
“Why?” asked Ding Peng.
“There are some things I cannot say, but I promise I’ll tell you later,” the girl said with hesitation.
It seemed there was more to the story than what had surfaced. If she couldn’t tell him, he wouldn’t ask.
“I need to find a person, and then I’ll be alright,” the girl said again.
“Who are you looking for?”
“An elder of mine. He is over seventy years old, but still likes to wear bright red clothes. If you see him, you’ll definitely recognize him.”
“Would you find him for me?” the girl lifted her head and asked gently, her beautiful eyes filled with plea.
Ding Peng of course couldn’t go. He indeed couldn’t go, and he really shouldn’t go.
It was less than two hours until the fight that would decide the fate of his entire life.
He was still hungry, and he hadn’t practiced his sword moves. He needed to cultivate his mood and conserve his strength so he could face Liu Ruosong. How could he go off to find an old man he had never met, for the sake of a girl who was a stranger?
Yet he simply couldn’t let the word “no” out of his mouth. It is no easy task to say “no” to the face of a beautiful girl; it requires a great deal of courage and a lot of nerve. A man only learns how to say “no” after going through many painful experiences.
“Where can I find this old gentleman?” Ding Peng sighed in his heart and finally asked.
“You will help me find him?” The girl’s eyes brightened.
Ding Peng had no choice but to nod. The girl jumped up and hugged him.
“You are such a nice guy! I’ll never forget you!”
Ding Peng knew that in the rest of his life, it would be very difficult to forget this girl as well.
“If you follow the creek upstream, you’ll see an old tree with a very strange shape at the end of the creek. He is always there playing the game of Go when the weather is good.”
Today’s weather was very good indeed.
“Once you see him, it is very important that you mess up his game board first. That’s the only way he will listen to you and follow you back here!”
Aren’t all board game enthusiasts like that? Even if the sky is falling, they’d still finish their present game first.
“I’ll wait here. Whether you find him or not, please hurry back.”

Now support the translator Lanny by following my blog and leaving comments! :)

Picture of the Day:


My daughter turned 5 recently, but I only have candles for one 6 and two 4s (I know, I am a cheapskate). How many different ways can you find to get 5 by adding math operators to these three numbers? Come on guys! You should at least be able to come up with the four obvious ones!! And there are a lot more. If you come up with one, write it down as a comment, so other people can focus on the unsolved ones...

Tuesday, February 17, 2009

Paper Review: Detecting Spam Web Pages through Content Analysis

This paper was written by Ntoulas (UCLA) et al. (Microsoft Research) and was presented at the 15th International Conference on World Wide Web (WWW 2006).

This paper continues work from two other papers on detecting spam web pages by the same group of authors. It focuses on content analysis as opposed to links. The authors propose 10 heuristics and investigate how well these heuristics correlate with spam web pages using a dataset of 17,168 pages. These heuristics/metrics are then combined as features, in addition to 28 others, to build a training dataset so that machine learning classifiers can be used to classify spam web pages. Of the several classifiers tested, the C4.5 decision tree algorithm performed the best, so bagging and boosting were used to further improve its performance, and the results are reported in terms of accuracy and a precision/recall matrix.
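As a rough analogue (not the authors' code), here is how one might reproduce the "decision tree plus bagging" part of the pipeline with scikit-learn. Note that scikit-learn's tree is CART rather than C4.5, and the feature matrix below is random stand-in data for the 10 heuristics plus 28 other per-page features.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((1000, 38))                       # stand-in page features
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)        # dummy "spam" labels for the demo

tree = DecisionTreeClassifier(max_depth=10)
bagged = BaggingClassifier(tree, n_estimators=10)

# ten-fold cross-validation, as in the paper
print("single tree :", cross_val_score(tree, X, y, cv=10).mean())
print("bagged trees:", cross_val_score(bagged, X, y, cv=10).mean())
```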

The main contributions of this paper include a detailed analysis of the 10 proposed heuristics and the idea of using machine learning classifiers to combine them for the specific application of spam web page detection. Taking advantage of the large web page collection (over 105 million pages) and a good-sized labeled dataset (17,168 pages), the paper is able to show some nice statistical properties of web documents (spam or not) and good performance of existing classification methods when using these properties as features of a training set.
Not being an expert in the IR field, I cannot tell which of the 10 proposed heuristics are novel ideas with respect to spam web page detection. However, fraction of visible content and compression ratio seem very creative and look very promising. Using any single heuristic by itself does not produce good performance, so the paper combines them into a multi-dimensional feature space, a method that has been used in many research domains and applications.
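To illustrate why the compression ratio heuristic is clever: pages stuffed with repeated keywords or boilerplate compress unusually well, so a high ratio of raw size to compressed size is a spam signal. A toy version (my sketch, using zlib's deflate as a stand-in for whatever compressor the authors used):

```python
import zlib

def compression_ratio(page_text: str) -> float:
    raw = page_text.encode("utf-8")
    return len(raw) / len(zlib.compress(raw))

print(compression_ratio("cheap meds cheap meds buy now " * 100))   # keyword stuffing
print(compression_ratio("The quick brown fox jumps over the lazy dog near the river."))
```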

One common question IR researchers tend to ask is: how good is your dataset? In section 2, the paper does a good job acknowledging the biases of the document collection and then provides good justifications, which makes the paper more sincere and convincing. The paper also explains things clearly. For instance, in section 4.8, the example provided makes it very easy to distinguish “Fraction of page drawn from globally popular words” from “Fraction of globally popular words”. Another example is in section 4.6, where the paper explains how some pages inflate during compression. I specifically liked how the authors briefly explained the concepts of bagging and boosting in this paper. They could have simply directed readers to the references, but the brief introduction dramatically improves the experience for readers who have not worked with these concepts (or are rusty on them, as in my case).
Although well written, the paper still has some drawbacks and limitations. Firstly, section 6, related work, should really have been placed right after the introduction. That way, readers could get a better picture of how this problem has been tackled in the IR community and easily see how this paper differs. Also, that section gives a good definition of “content spam”, and it makes much more sense to talk about possible solutions after we have a clear definition.

Secondly, in section 3, the paper says that 80% of all pages (as a result of uniform random sampling) were manually classified. I strongly doubt that is what the authors meant to say: manually classifying 80% of over 105 million pages would take A LONG TIME, period! Apparently this collection is not the same as the DS dataset mentioned in section 4, because the DS dataset only contains pages in English. So what is this collection? It is apparently a larger labeled dataset than the DS dataset. In Figures 6, 8, 10, and 11, we see the line graph touching the x-axis, possibly because there is not enough data; using the English portion of this larger labeled dataset might have produced better graphs. Another thing I'd like to mention here is that labeling spam web pages is subjective (at least for me it is). Naturally, I would assume the large collection was labeled with a divide-and-conquer approach, so each document was looked at by only one evaluator. If that is true, then each evaluator's subjectivity plays an important role in the labels. A better approach would have been to have multiple evaluators label the same set of web pages and take the majority vote, to minimize each evaluator's subjectivity.

Thirdly, when building the training set, the 10 proposed heuristics are combined with 28 other features before applying the classifier. I think it would be better to compare the results of using only these 10 features, only the original 28 features, and all features combined. That way, we could better evaluate how much these additional 10 heuristics contribute to the improvement of the classifiers.

Additionally, in section 4.1, the paper says “there is a clear correlation between word count and prevalence of spam” according to Figure 4. I failed to see the correlation.

Lastly, the experimental results are only for English web pages. Since the analysis in section 3 (Figure 3) clearly indicates that French and German web pages contain larger proportions of spam, it would be great to see how the proposed solution works for those languages. I understand the difficulty of working with other languages, but even some very initial experiments and results would really improve the paper.

There are other minor problems with the paper as well. For example, for each heuristic, the paper reports the mode, median, and mean. I think it is also necessary to provide the variance (or standard deviation), because it is an important descriptor of a distribution. I would also suggest using a much lighter color so that the line graph is more readable where it overlaps with the bar graph. Dr. Snell once said that we should always print out our papers in black and white to make sure they look okay, and I am a strong believer in that! Also, in section 4.3, the authors presumably meant that the horizontal axis represents the average “word length” within a page rather than the “number of words”.

I think it’s worth mentioning that the authors did an awesome job in the conclusions and future work section. Detecting web spam is really like an “arms race” between the spam filter designers and spammers. As new technologies are developed to filter spam, spammers will always work hard to come up with ways to break the filtering technology. This is an ongoing battle and degradation of the classification performance over time is simply unavoidable.

This is a well-written paper that showed excellent performance, and I certainly enjoyed reading it. I’d like to end this report with a quote directly from the paper which is so well said:

“Victory does not require perfection, just a rate of detection that alters the economic balance for a would-be spammer. It is our hope that continued research on this front can make effective spam more expensive than genuine content.”






I just learned recently that Superman's father is the Godfather!