Question answering is currently a very hot topic in the IR community and has attracted many researchers. This paper (published in ACM SIGIR'08) is one among many in this area. The problem the paper tries to solve is how to mine knowledge in the form of question-answer pairs, specifically from a forum setting. The knowledge can then be used for QA services, to improve forum management, or to augment the knowledge base of a chatbot. This is a challenging paper to read because it touches many different concepts and ideas from research disciplines besides IR, such as machine learning (n-fold cross-validation), information theory (entropy, KL divergence), Bayesian statistics (Markov chain convergence), and graph theory (graph propagation).
The main contributions of this reference paper include: 1) a classification-based method for question detection that uses sequential pattern features automatically extracted from both questions and non-questions in forums; 2) an unsupervised graph-based propagation approach for ranking candidate answers, which can also be integrated with the classification method when training data is available. The paper also presented a good amount of experimental results, including a 2 (datasets) × 4 (methods) × 3 (measures) design for question detection, a 2 (with/without answers) × 9 (methods) × 3 (measures) design for answer detection, and a 2 (with/without answers) × 2 (KL divergence alone vs. all three factors) × 2 (propagation with/without initial scores) × 3 (measures) design for evaluating the graph-based method, and showed the performance superiority of the proposed algorithm over existing methods.
The paper proposed several novel ideas in solving the stated problem. First, the paper defined support and confidence and also introduced minimum thresholds for both, which are additional constraints previous works did not have: "Minimum support threshold ensures that the discovered patterns are general and minimum confidence threshold ensures that all discovered LSPs are discriminating and are capable of predicting question or non-question sentences." (Quoted from the paper.)
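To make these two constraints concrete, here is a minimal sketch of how support and confidence could be computed for a labeled sequential pattern (LSP). This is my illustration, not the paper's implementation: pattern matching is simplified to gapped subsequence containment, and the threshold values are hypothetical.

```python
def is_subsequence(pattern, tokens):
    """True if `pattern` occurs in `tokens` as a (possibly gapped) subsequence."""
    it = iter(tokens)
    return all(p in it for p in pattern)

def support_and_confidence(pattern, labeled_sentences, target_label="question"):
    """labeled_sentences: list of (tokens, label) pairs.
    support    = fraction of all sentences that contain the pattern
    confidence = fraction of matching sentences whose label is target_label
    """
    matched = [label for tokens, label in labeled_sentences
               if is_subsequence(pattern, tokens)]
    if not matched:
        return 0.0, 0.0
    support = len(matched) / len(labeled_sentences)
    confidence = sum(label == target_label for label in matched) / len(matched)
    return support, confidence

# Keep only patterns that are both general (support) and discriminating
# (confidence); these threshold values are illustrative, not the paper's.
MIN_SUPPORT, MIN_CONFIDENCE = 0.05, 0.85

def keep_pattern(pattern, data):
    s, c = support_and_confidence(pattern, data)
    return s >= MIN_SUPPORT and c >= MIN_CONFIDENCE
```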
Second, the paper introduced the idea of combining the distance between a candidate answer and the question, and the authority of the answer's author, with a KL-divergence score, using linear interpolation. Because of the specific forum setting, these additional factors improved the algorithm's performance. However, note that these additional factors also pose limits on the applicability of the algorithm to other question-answer mining applications.
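As a rough illustration of what such an interpolation looks like (the lambda weights and the exact functional forms below are my assumptions, not values from the paper):

```python
def interpolated_score(kl_divergence, distance, authority,
                       lam_kl=0.6, lam_dist=0.2, lam_auth=0.2):
    """Combine three ranking signals for a candidate answer.

    kl_divergence: divergence between question and answer representations
                   (lower = more similar, hence negated below).
    distance:      how many posts separate the answer from the question.
    authority:     the answer author's authority score, assumed in [0, 1].
    """
    closeness = 1.0 / (1.0 + distance)  # map distance to a (0, 1] similarity
    return lam_kl * (-kl_divergence) + lam_dist * closeness + lam_auth * authority
```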
Third, the paper proposed a graph-based propagation method that uses a graph to represent the inter-relationships among nodes (candidate answers), using the generator and offspring ideas to generate the weighted edges. With this graph, the paper suggests propagating authority through the graph. The authors argued (briefly) that because the propagation can be treated as a Markov chain, it will converge. This idea of using a graph to propagate authority information is great because it takes into consideration how the inter-relationship between a pair of nodes can help ranking (following the PageRank idea). The idea of integrating classification (in two ways) with graph propagation is another great idea. However, I find this Markovian argument weak: no rationale is given for why this can be treated as a Markov process, and the transition probability mentioned is not convincing.
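A damped, PageRank-style power iteration shows what such a propagation looks like and why it converges: the damping term makes the underlying Markov chain irreducible and aperiodic, which is exactly the kind of justification I would have liked the paper to spell out. The seeding scheme below (classifier scores as both start vector and teleportation distribution) is one plausible way to integrate classification with propagation, not necessarily the paper's exact scheme.

```python
import numpy as np

def propagate_authority(W, init_scores=None, damping=0.85, tol=1e-8, max_iter=100):
    """Propagate authority over a weighted answer graph.

    W:           (n x n) nonnegative weights; W[i, j] is the edge from
                 generator node i to offspring node j.
    init_scores: optional classifier scores used to seed the propagation.
    """
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    # Row-normalize W into a stochastic transition matrix
    # (rows with no outgoing weight fall back to the uniform distribution).
    row_sums = W.sum(axis=1, keepdims=True)
    P = np.divide(W, row_sums, out=np.full_like(W, 1.0 / n), where=row_sums > 0)

    base = (np.full(n, 1.0 / n) if init_scores is None
            else np.asarray(init_scores, dtype=float) / np.sum(init_scores))
    r = base.copy()
    for _ in range(max_iter):
        r_next = damping * (P.T @ r) + (1.0 - damping) * base
        if np.abs(r_next - r).sum() < tol:  # converged
            return r_next
        r = r_next
    return r
```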
The experiment design in this paper is extremely good. First, when using two annotators to annotate the dataset, the paper created two datasets, Q-TUnion and Q-TInter, and evaluated the different algorithms on both. This effectively shows that the algorithms' performance exhibited the same trends even with disagreeing annotators. The paper also showed detailed performance comparisons using multiple measures across different datasets and different algorithms/methods. This way, the superiority of the proposed algorithm is clear and convincing.
Additionally, the paper used many good examples in its first half to explain complex concepts or to provide justifications. This is good writing! I wish the paper had done the same for the latter part of the algorithm description.
The paper also has some significant drawbacks. First, the paper covers a great deal of information and ideas; in particular, because of the experiment design, a large number of performance values are contrasted and analyzed. The authors even used the phrase "due to space limitations" three times in the paper. It is truly a difficult task to cram everything into 8 pages, which is a constraint for a conference paper, and by doing so, a lot of useful information is omitted (e.g., a section about how the Markov process can be justified) and some parts of the paper are just difficult to understand (section 4.2.2). There are also places where proper references should be given but are omitted, probably due to space limitations (e.g., references for the RIPPER classification algorithm and the power method). It would probably be better to either publish this as a journal paper, where more space is available, or write a technical report on this subject and reference it from this paper.
Second, it also seems that the authors were in a rush to meet the paper deadline. This shows in the carelessness of many of the math notations used in the paper. For example, the Greek letter λ is used in equations (3), (5), (6), and (10), where it means different things. The letter 'a', used to represent an answer, is sometimes bolded and sometimes italicized even though all occurrences mean the same thing. Recursive updates are also represented using "=" instead of "←", as in equation (10), and temporal indexes are not used. There are quite a few significant spelling errors, such as the sentence right after equation (3), where 'w' was written as "x". My personal opinion is that if you are going to put your name on a paper, you had better show some professionalism.
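For instance, a recursive update like the one in equation (10) becomes unambiguous once an iteration index or an assignment arrow is used (the right-hand side below is a generic placeholder, not the paper's actual formula):

```latex
% ambiguous:  a = \lambda M a + (1 - \lambda) a_0
% with an explicit iteration index:
a^{(t+1)} = \lambda M a^{(t)} + (1 - \lambda)\, a^{(0)}
% or with an assignment arrow:
a \leftarrow \lambda M a + (1 - \lambda)\, a_0
```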
Third, the paper proposed quite a complex model with many parameters. In particular, the algorithm uses many parameters that were set empirically. The authors did mention in one place that they did not evaluate different parameter values, and discussed the sensitivity of an empirical parameter in another; still, these empirical parameters make one wonder whether the algorithm will generalize well to other datasets, or whether these parameters might be correlated in some way. A better approach would probably be to justify the sensitivity of these empirical parameters, either qualitatively, by discussing more of the intuitions behind them, or quantitatively, by showing experimental results using different values on several datasets of different domains and scales. The paper also proposed a few "magical" equations, such as author(i) and equation (10), without rationalizing how the formulas came about. (Again, publishing as a journal paper would have lessened such problems.)
There are other minor problems with the paper as well. For example, the paper mentioned several times that the improvements are statistically significant (p-value < 0.001), but without more detail on how the statistical significance was calculated, I can only assume it came from the 10-fold cross-validation. In my opinion, statistical significance would not be a very good indicator of improvement in such a setup. The paper also gave me the impression that candidate answers are ranked by P(q|a); I would think ranking by P(a|q) would have been more appropriate.
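To spell out that last point, the two ranking criteria differ by the answer prior, as Bayes' rule makes explicit:

```latex
P(a \mid q) \;=\; \frac{P(q \mid a)\, P(a)}{P(q)} \;\propto\; P(q \mid a)\, P(a)
```

Since P(q) is constant across candidates for a fixed question, ranking by P(q|a) agrees with ranking by P(a|q) only under a uniform prior P(a) over candidate answers, and a non-uniform prior is exactly where a signal like author authority could naturally enter.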