Our Approach | 4.2 Generative model for Solution Posts |
Our Approach | Our generative model treats the reply part of a (p, r) pair (in which r is a solution) as generated from the statistical models in {83, 73}, as follows. |
Our Approach | The generative model above is similar to the proposal in (Deepak et al., 2012), adapted suitably for our scenario. |
Related Work | Translation models have also proven useful for segmenting incident reports into problem and solution parts (Deepak et al., 2012); we will use an adaptation of the generative model presented therein for our solution-extraction formulation.
Background | The model takes as its starting point two probabilistic models of syntax that have been developed for CCG parsing, Hockenmaier & Steedman's (2002) generative model and Clark & Curran's (2007) normal-form model.
Introduction | With this simple reranking strategy and each of three different Treebank parsers, we find that it is possible to improve BLEU scores on Penn Treebank development data with White & Rajkumar's (2011; 2012) baseline generative model, but not with their averaged perceptron model.
Simple Reranking | The first one is the baseline generative model (hereafter, generative model) used in training the averaged perceptron model.
Simple Reranking | Simple reranking with the Berkeley parser of the generative model's n-best realizations raised the BLEU score from 85.55 to 86.07, well below the averaged perceptron model's BLEU score of 87.93.
Abstract | Our generative model deterministically maps a POS sequence to a bracketing via an undirected |
Abstract | The complete generative model that we follow is then: |
Abstract | Our learning algorithm focuses on recovering the undirected tree for the generative model that was described above.