Experiments | We extract dependency paths for each pair of named entities in one sentence. |
Experiments | for words on the dependency paths. |
Experiments | Each entity pair and the dependency path which connects them form a tuple. |
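The extraction step described above can be sketched as follows. This is a hypothetical illustration, not the paper's code: the toy dependency parse is hard-coded, and the path lexicalization format (word:label steps joined by arrows) is an assumption.

```python
from collections import deque, defaultdict

# Toy dependency parse of "Obama was born in Hawaii",
# given as (head, dependent, label) triples.
edges = [
    ("born", "Obama", "nsubjpass"),
    ("born", "was", "auxpass"),
    ("born", "in", "prep"),
    ("in", "Hawaii", "pobj"),
]

def dependency_path_tuple(edges, e1, e2):
    """Return (e1, e2, lexicalized dependency path) for one entity pair."""
    # Build an undirected adjacency map: a path between two entities
    # may go up toward a common head and back down.
    adj = defaultdict(list)
    for head, dep, label in edges:
        adj[head].append((dep, label))
        adj[dep].append((head, label))
    # BFS from e1 to e2, recording word:label steps along the way.
    queue = deque([(e1, [])])
    seen = {e1}
    while queue:
        node, path = queue.popleft()
        if node == e2:
            return (e1, e2, " -> ".join(path))
        for nxt, label in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [f"{nxt}:{label}"]))
    return None  # entities not connected in this parse

print(dependency_path_tuple(edges, "Obama", "Hawaii"))
# → ('Obama', 'Hawaii', 'born:nsubjpass -> in:prep -> Hawaii:pobj')
```

In practice the parse would come from a dependency parser rather than a hand-built edge list, and the resulting tuples would be collected over all entity pairs in each sentence.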
Introduction | Such patterns could be sequences of lemmas and Part-of-Speech tags, or lexicalized dependency paths. |
Introduction | Whether we use sequences or dependency paths, we will encounter the problem of polysemy. |
Introduction | We perform experiments on New York Times articles and consider lexicalized dependency paths as patterns in our data. |
Related Work | Both DIRT and our approach represent dependency paths using their arguments. |
Experiments and results | Two ways of extracting patterns have been used: (a) Syntactic, taking the dependency path between the two entities, and (b) Intertext, taking the text between the two. |
Unsupervised relational pattern learning | This context may be a complex structure, such as the dependency path joining the two entities, but it is considered for our purposes as a single term; (e) for each relation r relating e_i with e_j, document D_ij is added to collection C_r. |
Unsupervised relational pattern learning | The words in each document can be, for example, all the dependency paths that have been observed in the input textual corpus between the two related entities. |
Unsupervised relational pattern learning | Generative model Once these collections are built, we use the generative model from Figure 2 to learn the probability that a dependency path is conveying some relation between the entities it connects. |