Discussion and Future Directions | Table 2: A summarized answer composed of five different portions of text generated with the SH scoring function; the chosen best answer is presented for comparison.
Experiments | At this point a second version of the dataset was created to evaluate the summarization performance under scoring functions (6) and (7); it was generated by manually selecting, from the previous 89,814 question-answer pairs, questions that aroused subjective human interest.
Experiments | Figure 2: Increase in ROUGE-L, ROUGE-1, and ROUGE-2 performance of the SH system as more measures are taken into consideration in the scoring function, starting from Relevance alone (R) to the complete system (RQNC).
Experiments | In order to determine what influence the single measures had on the overall performance, we conducted a final evaluation experiment on the filtered dataset, using the SH scoring function.
The summarization framework | 2.5 The concept scoring functions |
The summarization framework | Analogously to what had been done with scoring function (6), the (I) space was augmented with a dimension representing the |
The summarization framework | The concept score for the same BE in two separate answers is very likely to be different, because each answer has its own Quality and Coverage values. This only makes the scoring function context-dependent and does not interfere with the calculation of the Coverage, Relevance, and Novelty measures, which are based on information overlap and will regard two BEs with overlapping equivalence classes as being the same, regardless of their scores being different.
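The overlap-based identity test described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dictionary structure, the `equiv` field holding an equivalence class of paraphrases, and the example strings are all assumptions.

```python
# Hedged sketch: overlap-based equality of Basic Elements (BEs).
# Each BE is modeled here as a dict with an equivalence class (a set of
# paraphrase strings) and a context-dependent concept score.

def same_concept(be_a, be_b):
    """Two BEs count as the same concept when their equivalence
    classes overlap, regardless of their (context-dependent) scores."""
    return bool(be_a["equiv"] & be_b["equiv"])

be1 = {"equiv": {"heart attack", "myocardial infarction"}, "score": 0.8}
be2 = {"equiv": {"myocardial infarction"}, "score": 0.3}
print(same_concept(be1, be2))  # True: the classes overlap, the scores differ
```

The point of the sketch is that the scores play no role in the identity test, so making them context-dependent cannot affect the overlap-based measures.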
Introduction | and Imamura (2001) propose score functions based on lexical similarity and co-occurrence.
Substructure Spaces for BTKs | The baseline system uses many heuristics in searching for the optimal solutions with alternative score functions.
Substructure Spaces for BTKs | The baseline method proposes two score functions based on the lexical translation probability. |
Substructure Spaces for BTKs | They also compute the score function by splitting the tree into the internal and external components. |
Introduction | defining a score function f(x, y) and locating the
Introduction | In HMMs, the score function f(x, y) is the joint probability distribution over (x, y). If we assume a one-to-one correspondence between the hidden states and the labels, the score function can be written as:
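Under a first-order Markov assumption and a one-to-one state–label correspondence, the standard HMM factorization is f(x, y) = p(y_1) p(x_1 | y_1) ∏_{t>1} p(y_t | y_{t-1}) p(x_t | y_t). A minimal sketch in log space, with purely illustrative toy parameters (the probability tables below are assumptions, not from any paper):

```python
import math

# Toy HMM parameters (illustrative assumptions only).
start = {"N": 0.6, "V": 0.4}                        # p(y_1)
trans = {("N", "V"): 0.7, ("N", "N"): 0.3,
         ("V", "N"): 0.8, ("V", "V"): 0.2}          # p(y_t | y_{t-1})
emit = {("N", "dogs"): 0.5, ("N", "runs"): 0.1,
        ("V", "runs"): 0.6, ("V", "dogs"): 0.05}    # p(x_t | y_t)

def hmm_score(x, y):
    """Joint log-probability log p(x, y) under the toy HMM:
    log p(y_1) + log p(x_1|y_1) + sum over t>1 of the
    transition and emission log-probabilities."""
    score = math.log(start[y[0]]) + math.log(emit[(y[0], x[0])])
    for t in range(1, len(x)):
        score += math.log(trans[(y[t - 1], y[t])])
        score += math.log(emit[(y[t], x[t])])
    return score

print(hmm_score(["dogs", "runs"], ["N", "V"]))
```

Working in log space avoids underflow for longer sequences; the maximizing label sequence would normally be found with Viterbi decoding rather than by enumerating candidates.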
Introduction | In the perceptron, the score function f(x, y) is given as f(x, y) = w · φ(x, y), where w is the weight vector and φ(x, y) is the feature vector representation of the pair (x, y). By making the first-order Markov assumption, we have
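The linear score f(x, y) = w · φ(x, y) with first-order Markov features can be sketched as below. The feature templates (emission and transition indicators) and the example weights are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

def features(x, y):
    """First-order Markov feature counts phi(x, y):
    (emission tag/word) and (transition previous-tag/tag) indicators."""
    phi = Counter()
    for t, (word, tag) in enumerate(zip(x, y)):
        phi[("emit", tag, word)] += 1
        if t > 0:
            phi[("trans", y[t - 1], tag)] += 1
    return phi

def score(w, x, y):
    """Linear score f(x, y) = w . phi(x, y) as a sparse dot product."""
    return sum(w.get(f, 0.0) * c for f, c in features(x, y).items())

# Illustrative weight vector (assumed values).
w = {("emit", "N", "dogs"): 1.5, ("emit", "V", "runs"): 2.0,
     ("trans", "N", "V"): 0.5}
print(score(w, ["dogs", "runs"], ["N", "V"]))  # 1.5 + 2.0 + 0.5 = 4.0
```

Representing both w and φ as sparse dictionaries keeps the dot product proportional to the number of active features, which is the usual choice for structured perceptrons.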