Conclusions and Future Work | Story creation amounts to traversing the tree and selecting the nodes with the highest score. |
Experimental Setup | To evaluate which system configuration was best, we asked two human evaluators to rate (on a 1–5 scale) stories produced in the following conditions: (a) score the candidate stories using the interest function first and then the coherence function (and vice versa), or (b) score the stories with both rankers simultaneously and select the story with the highest combined score. |
Experimental Setup | We also examined how best to prune the search space, i.e., by selecting the highest scoring stories, the lowest scoring ones, or stories chosen at random. |
Experimental Setup | The results showed that the evaluators preferred the version of the system that applied both rankers simultaneously and maintained the highest scoring stories in the beam. |
The Story Generator | Story generation amounts to traversing the tree and selecting the nodes with the highest score. |
The Story Generator | Once we reach the required length, the highest scoring story is presented to the user. |
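The traversal described in the two snippets above is essentially a beam search over a tree of candidate continuations. A minimal sketch, assuming hypothetical `expand` (yields child nodes) and `score` (combines the interest and coherence rankers into one number) callbacks that stand in for the paper's actual components:

```python
import heapq

def generate_story(root, expand, score, beam_size=5, target_len=6):
    """Beam-search sketch of tree-based story generation.

    `expand(node)` yields candidate child nodes and `score(path)` rates a
    partial story (a tuple of nodes from the root); both are placeholders
    for the system's components.  At every depth we keep only the
    `beam_size` highest-scoring partial stories.
    """
    beam = [(root,)]
    for _ in range(target_len - 1):
        candidates = [path + (child,)
                      for path in beam
                      for child in expand(path[-1])]
        if not candidates:
            break
        beam = heapq.nlargest(beam_size, candidates, key=score)
    # Once the required length is reached, present the best story.
    return max(beam, key=score)
```

With `beam_size=1` this degenerates to purely greedy node selection; a wider beam keeps several high-scoring partial stories alive, matching the pruning condition the evaluators preferred.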
Dependency parsing for machine translation | When there is no need to ensure projectivity, one can independently select the highest scoring edge (i, j) for each modifier xj; however, we generally still want to ensure that the resulting structure is a tree, i.e., that it does not contain any cycles. |
Dependency parsing for machine translation | The main idea behind the CLE algorithm is to first greedily select for each word xj the incoming edge (i, j) with the highest score, then to successively repeat the following two steps: (a) identify a loop in the graph, and if there is none, halt; (b) contract the loop into a single vertex, and update scores for edges coming in and out of the loop. |
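The greedy-select / contract-loop procedure described above can be sketched recursively. This is a minimal implementation assuming a dense score matrix where `scores[i][j]` is the score of the edge from head `i` to modifier `j` and node 0 is the artificial root; all names are illustrative, not the paper's:

```python
def _find_cycle(head, n):
    """Return one cycle among the greedily chosen edges, or None."""
    color = [0] * n          # 0 = unvisited, 1 = on current path, 2 = done
    color[0] = 2             # the root can never be part of a cycle
    for start in range(1, n):
        if color[start]:
            continue
        path, v = [], start
        while color[v] == 0:
            color[v] = 1
            path.append(v)
            v = head[v]
        cycle = path[path.index(v):] if color[v] == 1 else None
        for u in path:
            color[u] = 2
        if cycle:
            return cycle
    return None

def chu_liu_edmonds(scores):
    """Highest-scoring (possibly non-projective) dependency tree.

    Returns head[j] for every node; head[0] is 0 by convention.
    """
    n = len(scores)
    # Step 1: greedily pick the best incoming edge for each modifier.
    head = [0] * n
    for j in range(1, n):
        head[j] = max((i for i in range(n) if i != j),
                      key=lambda i: scores[i][j])
    cycle = _find_cycle(head, n)
    if cycle is None:
        return head
    # Step 2: contract the cycle into one vertex and update edge scores.
    in_cycle = set(cycle)
    rest = [v for v in range(n) if v not in in_cycle]   # node 0 stays first
    c = len(rest)                                       # index of the supernode
    new_scores = [[float("-inf")] * (c + 1) for _ in range(c + 1)]
    enter, leave = {}, {}
    for a, u in enumerate(rest):
        for b, v in enumerate(rest):
            if u != v:
                new_scores[a][b] = scores[u][v]
        # Entering the cycle where swapping in the outside head costs least.
        best_in = max(cycle, key=lambda j: scores[u][j] - scores[head[j]][j])
        new_scores[a][c] = scores[u][best_in] - scores[head[best_in]][best_in]
        enter[u] = best_in
        # Leaving the cycle from its best internal head for u.
        best_out = max(cycle, key=lambda i: scores[i][u])
        new_scores[c][a] = scores[best_out][u]
        leave[u] = best_out
    sub_head = chu_liu_edmonds(new_scores)   # recurse on the smaller graph
    # Expand: keep cycle edges, then break the cycle at the entry point.
    result = {j: head[j] for j in cycle}
    for b, v in enumerate(rest):
        h = sub_head[b]
        result[v] = rest[h] if h < c else leave[v]
    entry_head = rest[sub_head[c]]
    result[enter[entry_head]] = entry_head
    return [result.get(v, 0) for v in range(n)]
```

Each contraction removes at least one vertex, so the recursion terminates; when the greedy selection is already acyclic, the algorithm halts immediately, which is the case the left-to-right decoding snippet below exploits.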
Dependency parsing for machine translation | The greedy approach of selecting the highest scoring edge (i, j) for each modifier xj can easily be applied left-to-right during phrase-based decoding, which proceeds in the same order. |