Experimental Setup | Instead, it decides deterministically how to generate a story on the basis of the most likely predicate-argument and predicate-predicate counts in the knowledge base. |
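Deterministic selection from co-occurrence counts can be sketched as a simple argmax over a count table. This is an illustrative toy, not the paper's implementation; the predicate names, counts, and the `most_likely` helper are all hypothetical.

```python
from collections import Counter

# Hypothetical predicate-argument counts; names and numbers are
# illustrative stand-ins, not data from the paper.
predicate_argument_counts = Counter({
    ("chase", "duck"): 42,
    ("chase", "cat"): 17,
    ("bark", "dog"): 90,
})

def most_likely(predicate):
    """Deterministically pick the argument seen most often with `predicate`."""
    candidates = {arg: n for (p, arg), n in predicate_argument_counts.items()
                  if p == predicate}
    return max(candidates, key=candidates.get)

print(most_likely("chase"))  # the highest-count argument for "chase"
```

Because the choice is an argmax over fixed counts, the generator's behaviour is fully reproducible: the same knowledge base always yields the same story.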
The Story Generator | The generator next constructs several possible stories involving these entities by consulting a knowledge base containing information about dogs and ducks (e.g., dogs bark, ducks swim) and their interactions (e.g., dogs chase ducks, ducks love dogs). |
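A knowledge base of this kind can be pictured as a store of unary facts (what each entity does) and binary facts (how entities interact), from which candidate sentences are enumerated. The sketch below is a hypothetical minimal rendering of that idea; the data structures and the `candidate_sentences` function are assumptions, not the paper's representation.

```python
# Toy knowledge base: unary predicates per entity, and binary
# interactions between entity pairs (illustrative facts from the text).
knowledge_base = {
    "dog": ["bark"],
    "duck": ["swim"],
}
interactions = {
    ("dog", "duck"): ["chase"],
    ("duck", "dog"): ["love"],
}

def candidate_sentences(entities):
    """Enumerate the simple sentences the knowledge base licenses."""
    sentences = []
    for e in entities:
        for action in knowledge_base.get(e, []):
            sentences.append(f"The {e}s {action}.")
    for (subj, obj), relations in interactions.items():
        if subj in entities and obj in entities:
            for rel in relations:
                sentences.append(f"The {subj}s {rel} the {obj}s.")
    return sentences

print(candidate_sentences(["dog", "duck"]))
```

Each candidate sentence is grounded in a stored fact, so the generator never produces a statement the knowledge base does not support.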
The Story Generator | Although we are ultimately searching for the best overall story at the document level, we must also find the most suitable sentences that can be generated from the knowledge base (see Figure 4). |
The Story Generator | The space of possible stories can grow dramatically with the size of the knowledge base, so an exhaustive tree search becomes computationally prohibitive. |
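One standard remedy for this combinatorial blow-up (not necessarily the search strategy used in the paper) is beam search: keep only the top-k partial stories at each step instead of expanding the full tree. The sketch below is a generic illustration; the events, scoring function, and `beam_search` helper are all hypothetical.

```python
import heapq

def beam_search(initial, expand, score, beam_width=2, steps=3):
    """Keep only the best `beam_width` partial stories at each step."""
    beam = [initial]
    for _ in range(steps):
        candidates = [story + [nxt] for story in beam for nxt in expand(story)]
        if not candidates:
            break
        beam = heapq.nlargest(beam_width, candidates, key=score)
    return beam[0] if beam else initial

# Toy event inventory and scorer, purely for illustration.
events = ["dogs bark", "ducks swim", "dogs chase ducks", "ducks love dogs"]

def expand(story):
    # Any event not yet used can continue the story.
    return [e for e in events if e not in story]

def score(story):
    # Toy score: prefer events that mention both entity types.
    return sum(("dog" in e) + ("duck" in e) for e in story)

best = beam_search([], expand, score, beam_width=2, steps=3)
print(best)
```

With a branching factor b and story length n, the exhaustive tree has on the order of b^n leaves, while beam search visits only about k*b candidates per step, which is why pruning of this kind is the usual escape from the cost the sentence describes.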
Extracting Rules from Wikipedia | Our goal is to utilize the broad knowledge of Wikipedia to extract a knowledge base of lexical reference rules. |
Extracting Rules from Wikipedia | We note that the last three extraction methods should not be considered Wikipedia-specific, since many Web-like knowledge bases contain redirects, hyperlinks, and disambiguation means. |
Introduction | To perform such inferences, systems need large-scale knowledge bases of lexical reference (LR) rules. |