

Leave me comments so I know people are actually reading my blogs! Thanks!

Tuesday, February 17, 2009

Paper Review: Detecting Spam Web Pages through Content Analysis

This paper was written by Ntoulas (UCLA) et al. (Microsoft Research) and presented at the 15th International Conference on World Wide Web (WWW), 2006.

This paper continues work from two earlier papers on detecting spam web pages by the same group of authors. It focuses on content analysis as opposed to link analysis. The authors propose 10 heuristics and investigate how well these heuristics correlate with spam web pages using a dataset of 17,168 pages. These heuristics/metrics are then combined with 28 other features to build a training dataset, so that machine learning classifiers can be used to classify spam web pages. Of the several classifiers evaluated, the C4.5 decision tree algorithm performed the best, so bagging and boosting were used to improve its performance, and the results are reported in terms of accuracy and the precision/recall matrix.
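To make the pipeline concrete, here is a minimal sketch of the overall approach (my own, not the authors' code): each page becomes a vector of content features, and a bagged decision tree (scikit-learn's DecisionTreeClassifier standing in for C4.5) is trained on the labeled pages. The feature values and labels below are hypothetical.

    # Minimal sketch of the paper's pipeline; feature values are made up.
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    # X: one row per page, columns = content heuristics
    # (word count, average word length, compression ratio, ...)
    X = [[1250, 4.8, 2.1], [90, 5.2, 4.7], [40, 4.4, 1.3]]
    y = [0, 1, 0]  # 1 = spam, 0 = non-spam

    clf = BaggingClassifier(estimator=DecisionTreeClassifier(), n_estimators=10)
    clf.fit(X, y)
    print(clf.predict([[300, 5.0, 3.9]]))  # predicted label for a new page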

The main contributions of this paper include the detailed analysis of the 10 proposed heuristics and the idea of using machine learning classifiers to combine them for the specific application of spam web page detection. Taking advantage of the large web page collection (over 105 million pages) and a good-sized labeled dataset (17,168 pages), the paper is able to show some nice statistical properties of web documents (spam or non-spam) and the good performance of existing classification methods when using these properties as features of a training set.
Not being an expert in the IR field, I cannot tell which of the proposed 10 heuristics are novel with respect to spam web page detection. However, fraction of visible content and compression ratio seem to be very creative ideas and look very promising. Using any single heuristic by itself does not produce good performance, so the paper combines them into a multi-dimensional feature space, a method that has been used in many research domains for various applications.
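As a rough illustration of the compression-ratio idea (my own sketch, not the paper's exact definition): spam pages that stuff repeated keywords compress unusually well, so the ratio of raw size to compressed size tends to be high for them.

    # Sketch of the compression-ratio heuristic; any cutoff would be hypothetical.
    import zlib

    def compression_ratio(text: str) -> float:
        raw = text.encode("utf-8")
        return len(raw) / len(zlib.compress(raw))

    varied = "An essay with varied vocabulary and little repetition of phrases."
    spammy = "cheap meds cheap meds cheap meds " * 50
    print(compression_ratio(varied))  # modest ratio for varied text
    print(compression_ratio(spammy))  # much higher for repetitive keyword stuffing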

One common question IR researchers tend to ask is: how good is your dataset? In section 2, the paper did a good job of acknowledging the biases of the document collection and then provided good justifications. This makes the paper more sincere and convincing. The paper also did a good job of explaining things clearly. For instance, in section 4.8, the example provided made it very easy to distinguish “Fraction of page drawn from globally popular words” from “Fraction of globally popular words”. Another example is in section 4.6, where the paper explained how some pages inflated during compression. I specifically liked how the authors briefly explained the concepts of bagging and boosting in this paper. They could have simply directed readers to the references, but the brief introduction dramatically improves the experience for readers who have not worked with these concepts (or are rusty on them, as in my case).
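For readers without the paper at hand, here is a toy version of that section 4.8 distinction, under my own reading of it: one metric asks how much of the page is built from globally popular words, the other asks how many of the globally popular words the page manages to cover. The word lists are made up.

    popular = {"the", "of", "and", "to", "a"}        # top-N corpus words
    page = ["the", "the", "the", "offer", "free"]    # words on one page

    # Fraction of page drawn from globally popular words: 3/5
    frac_page_from_popular = sum(w in popular for w in page) / len(page)

    # Fraction of globally popular words (covered by the page): 1/5
    frac_popular_covered = len(popular & set(page)) / len(popular)

    print(frac_page_from_popular, frac_popular_covered)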
Although well written, the paper still has some drawbacks and limitations. Firstly, section 6 (related work) should really have been placed right after the introduction. That way, readers could get a better picture of how this problem has been tackled in the IR community and also easily see how this paper differs. Also, this section gives a good definition of “content spam”, and it makes much more sense to talk about possible solutions after we have a clear definition.

Secondly, in section 3, the paper says that 80% of all pages (drawn by uniform random sampling) were manually classified? I strongly suspect that is not what the authors meant to say: manually classifying 80% of over 105 million pages would take A LONG TIME, period! Apparently this collection is not the same as the DS dataset mentioned in section 4, because the DS dataset only contains pages in English. So what is this collection? It apparently is a larger labeled dataset than the DS dataset. In Figures 6, 8, 10, and 11, we see the line graph touching the x-axis, possibly due to insufficient data. Using the English portion of this larger labeled dataset might have produced better graphs. Another thing I’d like to mention here is that labeling a web page as spam is a subjective classification (at least for me it is). Naturally I’d assume the large collection was labeled under a divide-and-conquer approach, so that each document was only looked at by one evaluator. If this were true, then the subjectivity of each evaluator plays an important role in the label. A better approach would have been to have multiple evaluators label the same set of web pages and take the majority vote (sketched below), to minimize each evaluator’s subjectivity.
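A minimal sketch of that majority-vote labeling, assuming an odd number of evaluators per page:

    from collections import Counter

    def majority_label(votes):
        # Return the label chosen by the most evaluators for one page.
        return Counter(votes).most_common(1)[0][0]

    print(majority_label(["spam", "non-spam", "spam"]))  # -> "spam"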

Thirdly, when building the training set, the proposed 10 heuristics are combined with 28 other features before applying the classifier. It would have been better to compare the results of using only the 10 new features, only the original 28 features, and all 38 features combined (see the sketch below). That way, we could better evaluate how much the 10 additional heuristics contributed to the improvement of the classifiers.
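The ablation I have in mind would look something like this sketch, with stand-in data (the real study would use the DS dataset's actual 10 heuristic and 28 baseline features):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 38))    # stand-in for the 38 page features
    y = rng.integers(0, 2, size=200)  # stand-in spam labels

    for name, cols in [("10 heuristics", slice(0, 10)),
                       ("28 baseline", slice(10, 38)),
                       ("all 38", slice(0, 38))]:
        scores = cross_val_score(DecisionTreeClassifier(), X[:, cols], y, cv=10)
        print(name, scores.mean())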

Additionally, in section 4.1, the paper claims, based on Figure 4, that “there is a clear correlation between word count and prevalence of spam”. I failed to see that correlation.

Lastly, the experimental results are only for English web pages. Since the analysis in section 3 (Figure 3) clearly indicates that French and German web pages contain bigger portions of spam, it would be great to see how the proposed solution works for those languages. I understand the difficulty of working with other languages, but even some very preliminary experiments and results would really improve the paper.

There are other minor problems with the paper as well. For example, for each heuristic, the paper reported the mode, median, and mean. I think it is also necessary to provide the variance (or standard deviation), because it is an important descriptor of a distribution. I would also suggest using a much lighter color so that the line graph is more readable where it overlaps with the bar graph. Dr. Snell once said that we should always print out our papers in black and white to make sure they look okay, and I am a strong believer in that! Also, in section 4.3, the authors surely meant that the horizontal axis represents the average “word length” within a page rather than the “number of words”.

I think it’s worth mentioning that the authors did an awesome job on the conclusions and future work section. Detecting web spam really is an “arms race” between spam filter designers and spammers. As new technologies are developed to filter spam, spammers will always work hard to come up with ways to break the filtering technology. This is an ongoing battle, and degradation of classification performance over time is simply unavoidable.

This is a well-written paper that showed excellent performance, and I certainly enjoyed reading it. I’d like to end this report with a quote directly from the paper, which is very well said:

“Victory does not require perfection, just a rate of detection that alters the economic balance for a would-be spammer. It is our hope that continued research on this front can make effective spam more expensive than genuine content.”






I just learned recently that Superman's father is the Godfather!

Monday, February 16, 2009

Robot of the Day: Wakamaru, the Robot Actor and Salesman

On the second day of the Human-Robot Interaction (HRI) 2010 conference, Dr. Ishiguro, one of the main organizers of this year's conference, led us to a small traditional Japanese theater and presented to us a robotic play titled Hataraku Watashi (I, Worker), where I finally had the pleasure of meeting the famous robot actor (and actress) in person. I had heard about them and their play in the news media long ago.

The two robots starring in the theatrical production are Wakamaru robots made by Mitsubishi, named after the childhood name of a famous ancient Japanese general. These yellow, 1-meter-tall, 30-kg robots were originally designed as companions for the elderly and disabled, selling at a hefty price of $14,000 each.

The project was headed by Dr. Ishiguro at Osaka University, who sent his grad students to theater classes and also invited the famous Japanese playwright Oriza Hirata to write a story. The result was a 20-minute piece named I, Worker, starring two Wakamaru robots alongside two human actors. The robots play two depressed household servants who work for a young couple. Learning from the young couple's life experiences, the robots grow tired of their mundane lifestyle and long to break free and see the world.

Although the robots are not capable of facial expressions, their head and limb movements and autonomous navigation successfully conveyed the play's melancholy to the audience. Most of the audience that day did not speak Japanese, but fortunately Dillon, an American who works at the ATR research institute in Osaka, volunteered a translation on a big monitor, so we were able to follow the story. One interesting thing we noticed was that the robots apologized a lot, probably a reflection of Japanese culture. The video below shows sections of the play in Japanese.


Since the robots were playing robots in the play, it would be pretty hard for real human actors to beat their performance; still, when asked how they felt about the two robots during rehearsals and the real play, the human actor and actress said they almost thought of the robots as real human actors. So what if, one day, as robots become more sophisticated, we have plays consisting of robot actors only? What if one day we start to see robots sitting in the audience together with humans? Wouldn't that be interesting and entertaining in itself?

Acting aside, Wakamaru robots are also working as salesmen in clothing stores now, and one has found a job at a Uniqlo store in downtown New York. This robot is not only capable of conversation, it can also recommend promotions to customers, and best of all, it even asks customers to exercise with it, something that could be in great demand here in the US, where obesity is a severe problem.



Video of the Day:

I also saw this at the HRI conference (yes, it's a chimpanzee, not a robot), and thought you'd all get a kick out of it!

Sunday, February 15, 2009

Robot of the Day: Ishiguro and his "twin brother" Geminoid

Ever wished you could have a secret twin brother so he could go to classes for you while you sleep in or go on a field trip? Well, maybe that dream could come true someday thanks to robotics and android technologies!

When Dr. Ishiguro decided to build an android robot for his research, he thought to himself, "Why not build one that looks just like me?" And not long after, his new "twin brother", Geminoid, was born into this crazy human world!

Dr. Hiroshi Ishiguro of Osaka University is the general co-chair of this year's HRI conference. He was also one of the panelists in the panel discussion at the HRI Young Pioneers Workshop I was attending, so I finally met him in person, except, for a fraction of a second, I wondered whether it was really him or his "twin brother" sitting in the front row of the room. :)

Dr. Ishiguro's ultimate goal in researching androids and human-robot interaction is to learn about and understand the human race itself. His grad students had built behaviors resembling his own into the robot, but Dr. Ishiguro didn't think he actually behaved like that. I guess sometimes we don't really know ourselves, and looking at oneself from an external viewpoint might be a very strange and surprising experience.




Geminoid has limited capabilities. He can move his head and hands and twitch his legs. He blinks and moves his lips when he talks. He can also show some limited facial expressions. There are 50+ motors inside him, though he was not built to walk around, so should we say he was born paralyzed? But he can hear, see, and speak, and one of his applications is telepresence: Dr. Ishiguro can speak from a remote location, and the robot will lip-sync with him, controlled over the Internet.




Dr. Ishiguro's advice on careers was very simple: 1) do really good work, and 2) work on new things. "If you do that," he said, "then good things will just happen to you!"

Picture of the Day:

If you have not seen this movie, I would recommend it. See what consequences you might have to face when you can just duplicate yourself.

Saturday, February 14, 2009

Random Thoughts: Adventure in Japan -- Part 1

Hello everyone! Today is March 1st, 2010 (again, I am still living in this parallel universe), and this is Lanny blogging live from Osaka, Japan! :)

For those of you who don't already know, I'll be spending the next five days here with my adviser, Dr. Mike Goodrich, attending the Human-Robot Interaction conference. This is the first time I've visited Japan, so I thought I'd share with you some of the fun adventures and "culture shocks" of my trip, so you'll be prepared when you decide to visit Japan someday.

=======================================================

I left home at exactly 4:00am on Sunday morning (February 28, 2010) and checked in at the City Plaza Osaka hotel in downtown Osaka at approximately 8:00pm Monday evening (March 1, 2010). Does it really take this long? The truth is: yes, it does take a long time, but not this long. Osaka time is 16 hours ahead of Utah time (MST), so the trip "only" took 24 hours. What a long day!

The flight out of Salt Lake City to San Francisco was at 6:00am local time. Probably because our itinerary included international flights, we could not check in at the self-service kiosk and had to stand in a long line to check in at the desk, even though we only had carry-on luggage. This left us only 30 minutes to get through the security check and rush to our gate, during which I forgot to collect the little plastic bag containing my hand lotion and hair spray (probably because it didn't work well with the conveyor belt system and did not come out in time). Well, I guess I'll just have dry hands and bad hair during the trip! The good news was, we made the flight!!

The layover at San Francisco was 4.5 hours. One waiting passenger at the International Terminal got so bored that he started practicing Tai Chi, which successfully helped us kill about 20 minutes.


 
Tai-Chi in SFO International Terminal


The plane we flew on was a Boeing 777, big enough to have 2 seats on each side and 5 seats in the middle (where we sat). The flight duration was 12 hours, and the distance between San Francisco and Osaka is about 5,800 miles.


 
Boeing 777 at SFO International Terminal


One nice thing about going to Japan from the US is that you don't need a visa. Going through customs was quick and easy, but soon I had my first "culture shock" at the Osaka Airport restroom. While I was washing my hands, a female janitor just walked into the men's restroom and began cleaning while others were still, you know, doing their business at the urinals. According to Mike, who lived in Japan before, this is a very common thing. Totally weird!

To get to the hotel downtown, we had to take a train first and then transfer to a subway. We successfully bought our train tickets at the station by showing the ticket agent the name of our destination in writing. He gave us a warm reception and kept talking to us in Japanese as if we actually understood what he was saying. The ticket was a bit pricey: 1,390 yen, which is approximately $14 USD. The exchange rate is 90-some yen to 1 USD, so I simply calculate as if 1 USD were 100 yen. The picture below shows a normal train at the station. We actually took a different one with a bullet-shaped head.


 
Normal train at the underground train station.



 
Inside the Rapit Bullet Train



 
Looking out from the Rapit Bullet Train


The Rapit Bullet Train was quite empty; however, we did have to sit in our designated seats. To my pleasant surprise, it had English announcements for the stations. Between stations, a train attendant would walk the entire six cabins to check tickets. The attendant was extremely polite -- she would bow every single time she entered or left a cabin. Since she walked back and forth, I saw her bow probably at least 10 times.

At a transfer station, we had to transfer from the train system to the city subway system. It took us a long time because we weren't sure which tickets to buy and which subway to get on -- there was no human agent to help us this time. Eventually we just boldly jumped on a subway, and luckily, it was the right one. The subway fare is much cheaper: 230 yen, which is about $2.50 USD. Another interesting thing I noticed was that the trains and subways always play nice short melodies to signal arrival at or departure from a station. Unlike the train, the subway didn't have English announcements, so we had to count the number of stops.

When we exited the subway station downtown, there was a slight shower. Immediately I saw a Starbucks, a 7-Eleven, and a McDonald's (shown below, sorry, a bit blurry) around us. Man, it felt just like home! However, we didn't have any instructions for getting from the station to the hotel, and we didn't even know which direction we were facing (I really started to miss the nice grid street system in Utah). We had hoped to simply spot the hotel (since it is a big, unique-looking building), but there are many big buildings downtown, and we failed. The shower also began to get worse.


 
McDonald's in downtown Osaka

Desperate, we stopped a girl on the street and showed her a picture of the hotel without even attempting to talk to her. The girl then replied and gave us directions in perfect English -- what a miracle! It turned out the hotel was only a few minutes' walk from the subway station exit; we just didn't know which direction to go.

People in Japan drive on the left side of the street. I don't really know why; they weren't a British colony as far as I know. They also walk on the left side of the street. I kept forgetting about it and kept bumping into people. How rude of me!


 
City Plaza Osaka hotel right at downtown Osaka


We were sure glad to finally find our hotel and check in. It is a very nice hotel, and the twin-bed room was much more spacious than I had expected (given that this is Japan). Soon I found more differences between American culture and Japanese culture.

In Japanese bathrooms, the shower and the bath are two different things and therefore use different parts of the room. The toilet is actually in a separate room on the other side, and man, what a FANCY toilet!! I won't go into more detail about it, but you can see the picture below and judge for yourself.

Japanese-style bathroom with separate shower area


 
Fancy Toilet System


Well, that's enough for today. Look for more updates directly from Osaka, Japan, on my blog soon!




You don't need to know Japanese to survive Osaka, and I am the living proof!

Friday, February 13, 2009

Paper Review: Finding Question-Answer Pairs from Online Forums

This paper was written by Cong (Aalborg University) et al. and presented at the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2008.

Question answering is currently a very hot topic in the IR community and has attracted many researchers. This paper (published at ACM SIGIR ’08) is one among many in this area. The problem the paper tries to solve is how to mine knowledge in the form of question-answer pairs, specifically from a forum setting. The knowledge can then be used for QA services, to improve forum management, or to augment the knowledge base of a chatbot. This is a challenging paper to read because it touches many concepts and ideas from research disciplines besides IR, such as machine learning (n-fold cross-validation), information theory (entropy and KL divergence), probability theory (Markov chain convergence), and graph theory (graph propagation).

The main contributions of this paper include: 1) a classification-based method for question detection using sequential pattern features automatically extracted from both questions and non-questions in forums; 2) an unsupervised graph-based propagation approach for ranking candidate answers, which can also be integrated with classification methods when training data is available. The paper also presented a good amount of experimental results, including a 2 (datasets) × 4 (methods) × 3 (measures) design for question detection, a 2 (with/without answers) × 9 (methods) × 3 (measures) design for answer detection, and a 2 (with/without answers) × 2 (KL divergence only or all three factors) × 2 (propagation with/without initial scores) × 3 (measures) design for evaluating the graph-based method, and showed the performance superiority of the proposed algorithm over existing methods.

The paper proposed several novel ideas for solving the stated problem. First, the paper defined support and confidence and also introduced minimum thresholds for both, which are additional constraints previous works did not have. “Minimum support threshold ensures that the discovered patterns are general and minimum confidence threshold ensures that all discovered LSPs are discriminating and are capable of predicting question or non-question sentences.” (Quoted from the paper.)
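In code, the filter might look like the following sketch (my own reading, with made-up data and hypothetical thresholds): support is the fraction of training sentences containing a pattern, and confidence is the fraction of those matching sentences that are actually questions.

    def keep_pattern(sentences, labels, contains,
                     min_support=0.05, min_confidence=0.8):
        # labels[i] is True if sentences[i] is a question;
        # contains(s) is True if the candidate pattern occurs in s.
        matched = [lab for s, lab in zip(sentences, labels) if contains(s)]
        if not matched:
            return False
        support = len(matched) / len(sentences)
        confidence = sum(matched) / len(matched)
        return support >= min_support and confidence >= min_confidence

    sents = ["anyone know how to fix this", "thanks a lot", "how do I install it"]
    labels = [True, False, True]
    print(keep_pattern(sents, labels, lambda s: "how" in s))  # -> True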

Second, the paper introduced the idea of combining the distance between a candidate answer and the question, and the authority of an author, with the KL divergence score using linear interpolation. Because of the specific forum setting, these additional factors improve the algorithm's performance. Note, however, that they also limit the applicability of the algorithm to other question-answer mining applications.
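As I understand it (the paper's exact formula and weights are not reproduced here), the combined score is roughly a convex blend of the three components, something like:

    def answer_score(kl_score, distance_score, authority_score,
                     lam1=0.6, lam2=0.2):
        # All components assumed normalized to [0, 1];
        # the lambda weights are hypothetical, not the paper's values.
        return (lam1 * kl_score + lam2 * distance_score
                + (1 - lam1 - lam2) * authority_score)

    print(answer_score(kl_score=0.7, distance_score=0.9, authority_score=0.4))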

Third, the paper proposed a graph-based propagation method that represents the inter-relationships among nodes (answers), using generator and offspring relations to generate the edges (weights). With this graph, the paper suggests propagating authority through the graph. The authors argue (briefly) that because this can be treated as a Markov chain, the propagation will converge. This idea of using a graph to propagate authority information is great because it takes into consideration how the inter-relationship between a pair of nodes can help ranking (following the PageRank idea). The idea of integrating classification (in two ways) with graph propagation is another great one. However, I find the Markovian argument weak: no rationale is given for why this can be treated as a Markov process, and the transition probabilities mentioned are not convincing.
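For intuition, here is a PageRank-style sketch of what such a propagation could look like (the graph, weights, and damping factor are mine, not the paper's):

    import numpy as np

    # Row i holds edge weights from answer i to the other answers,
    # e.g. derived from generator/offspring relations.
    W = np.array([[0.0, 0.7, 0.3],
                  [0.5, 0.0, 0.5],
                  [0.2, 0.8, 0.0]])
    W = W / W.sum(axis=1, keepdims=True)  # row-normalize -> transition matrix

    score = np.ones(3) / 3  # initial scores (e.g. from the classifier)
    alpha = 0.85            # damping factor, as in PageRank
    for _ in range(100):    # power method; converges for this Markov chain
        score = alpha * score @ W + (1 - alpha) / 3
    print(score)            # stabilized authority scores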

The experimental design in this paper is extremely good. First, with two annotators annotating the data, the paper created two datasets, Q-Tunion and Q-TInter, and evaluated the different algorithms on both. This effectively shows that the algorithms' performances exhibit the same trends even with disagreeing annotators. The paper also showed detailed performance comparisons using multiple measures across different datasets and different algorithms/methods. This way, the superiority of the proposed algorithm is clear and convincing.

Additionally, the paper used many good examples in its first half to explain complex concepts or to provide justifications. This is good writing! I wish the paper had done the same for the latter part of the algorithm description.

The paper also has some significant drawbacks. First, the paper covers a great deal of information and ideas, and because of the experimental design, a large number of performance values are contrasted and analyzed. The authors used the phrase “due to space limitations” three times in the paper. It is truly difficult to cram everything into 8 pages, the constraint for a conference paper, and in doing so, much useful information is omitted (e.g., the section on how the Markov process can be justified) and some parts of the paper are just difficult to understand (section 4.2.2). There are also places where proper references should be given but are omitted, probably due to space limitations (e.g., references for the Ripper classification algorithm and the power method). It would probably be better either to publish this as a journal paper, where more space is available, or to write a technical report on the subject and reference it from this paper.

Second, it also seems that the authors were in a rush to meet the paper deadline, as shown by carelessness in many of the math notations. For example, the Greek letter λ is used in equations (3), (5), (6), and (10) to mean different things. The letter ‘a’, used to represent an answer, is sometimes bolded and sometimes italicized when all occurrences mean the same thing. Recursive updates are represented using “=” instead of “←”, as in equation (10), and temporal indexes are not used. There are also a few important spelling errors, such as in the sentence right after equation (3), where ‘w’ was written as “x”. My personal opinion is that if you are going to put your name on a paper, you had better show some professionalism.

Third, the paper proposed a quite complex model with many parameters; in particular, the algorithm uses many parameters that were set empirically. The authors mention in one place that they did not evaluate different parameter values, and they discuss the sensitivity of an empirical parameter in another; still, these empirical parameters make one wonder whether the algorithm will generalize well to other datasets, or whether the parameters might be correlated in some way. A better approach would be to justify, qualitatively or quantitatively, the sensitivity of these empirical parameters, either by discussing more of the intuition behind them or by showing experimental results using different values on several datasets of different domains and scales. The paper also proposed a few “magical” equations, such as author(i) in equation (10), without rationalizing how the formulas came about. (Again, publishing as a journal paper would have lessened such problems.)

There are other minor problems with the paper as well. For example, the paper mentioned several times that the improvements are statistically significant (p-value < 0.001), but without more detail on how the statistical significance was calculated, I can only assume it came from the 10-fold cross-validation. In my opinion, statistical significance is not a very good indicator of improvement in such a setup. The paper also gave me the impression that candidate answers are ranked by P(q|a); I would think ranking by P(a|q) would be more appropriate. (By Bayes' rule, P(a|q) ∝ P(q|a)P(a), so the two rankings differ exactly by the answer prior P(a).)

Video of the Day:

Now here are some serious question asking and answering!