How Well can We Learn Interpretable Entity Types from Text?
Hovy, Dirk

Article Structure

Abstract

Many NLP applications rely on type systems to represent higher-level classes.

Introduction

Many NLP applications, such as question answering (QA) or information extraction (IE), use type systems to represent relevant semantic classes.

Related Work

In relation extraction, we have to identify the relation elements, and then map the arguments to types.
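
To make this two-step pipeline concrete, here is a minimal Python sketch (an illustration, not the paper's system): a toy regular expression stands in for the relation extractor, and a small hand-filled dictionary stands in for the learned type inventory; all names and entries are hypothetical.

```python
import re

# Hypothetical dictionary mapping entity mentions to candidate types.
# In the paper's setting, such candidates are induced from data.
TYPE_DICT = {
    "Prozac": {"drug"},
    "headaches": {"symptom"},
}

# A toy surface pattern for one relation; a real system would use a
# trained extractor rather than a regex.
CAUSES = re.compile(r"(\w+) may cause (\w+)")

def extract_and_type(sentence):
    """Step 1: identify the relation elements; step 2: map arguments to types."""
    match = CAUSES.search(sentence)
    if match is None:
        return None
    arg1, arg2 = match.groups()
    return {
        "relation": "causes",
        "arg1": (arg1, TYPE_DICT.get(arg1, set())),
        "arg2": (arg2, TYPE_DICT.get(arg2, set())),
    }

print(extract_and_type("Prozac may cause headaches"))
# {'relation': 'causes', 'arg1': ('Prozac', {'drug'}),
#  'arg2': ('headaches', {'symptom'})}
```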

Model

Our goal is to find semantic type candidates in the data, and apply them in relation extraction to see which ones are best suited.
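
A rough sketch of the shape of this model, under strong simplifying assumptions: the "the <noun> <Entity>" extraction pattern, the three-sentence corpus, and all probabilities below are invented for illustration; only the overall design (a co-occurrence dictionary constraining emissions, common nouns as hidden HMM states, Viterbi decoding) follows the paper's description.

```python
from collections import defaultdict

# Step 1: record which common nouns modify which entities, here via a toy
# "the <noun> <Entity>" pattern (the pattern and corpus are assumptions).
corpus = [
    ["the", "drug", "Prozac"],
    ["the", "symptom", "headache"],
    ["the", "drug", "Aspirin"],
]
cooc = defaultdict(set)
for toks in corpus:
    if len(toks) == 3 and toks[0] == "the":
        cooc[toks[2]].add(toks[1])  # entity mention -> candidate type nouns

# Step 2: the candidate nouns become the hidden states of a tiny HMM.
# In the paper the parameters are learned from unlabeled text; these are
# fixed toy values.
states = sorted({t for ts in cooc.values() for t in ts})
start = {s: 1.0 / len(states) for s in states}
trans = {(a, b): 0.7 if a != b else 0.3 for a in states for b in states}

def emit(state, word):
    """Dictionary-constrained emission: a type may only emit entities it
    was seen to modify (a tiny floor stands in for proper smoothing)."""
    return 1.0 if state in cooc.get(word, ()) else 1e-6

def viterbi(obs):
    """Most likely hidden type sequence for a sequence of entity mentions."""
    paths = {s: (start[s] * emit(s, obs[0]), [s]) for s in states}
    for word in obs[1:]:
        new_paths = {}
        for s in states:
            p, path = max(
                ((pp * trans[prev, s] * emit(s, word), pth)
                 for prev, (pp, pth) in paths.items()),
                key=lambda x: x[0],
            )
            new_paths[s] = (p, path + [s])
        paths = new_paths
    return max(paths.values(), key=lambda x: x[0])[1]

print(viterbi(["Prozac", "headache"]))  # -> ['drug', 'symptom']
```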

Extending the Model

The model used by Hovy et al. (2011) is a first-order HMM whose latent variables range over entity type candidates.
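
As described under "graphical model" and "latent variables" below, the extension adds transitions and rearranges the structure, turning the sequential HMM into a general graphical model. The toy sketch below (my construction, not a reproduction of the paper's Figures 3c/d) shows the idea: each argument's latent type gains an extra factor tying it directly to the relation word, so the joint no longer factorizes as a chain; with two latent variables, brute-force enumeration suffices for MAP inference.

```python
from itertools import product

types = ["drug", "symptom"]

# Toy factors; in practice these would be learned. rel_type() is the
# "added transition": an extra edge from the relation word to each type,
# which breaks the chain structure of the original HMM.
def emit(t, word):
    table = {("drug", "Prozac"): 0.9, ("symptom", "headache"): 0.9}
    return table.get((t, word), 0.05)

def type_pair(t1, t2):          # HMM-style transition between the two types
    return 0.7 if t1 != t2 else 0.3

def rel_type(rel, t):           # new edge: relation word -> argument type
    table = {("causes", "drug"): 0.8, ("causes", "symptom"): 0.8}
    return table.get((rel, t), 0.1)

def best_types(arg1, rel, arg2):
    """Brute-force MAP assignment over the two latent argument types."""
    return max(
        product(types, repeat=2),
        key=lambda ts: (emit(ts[0], arg1) * emit(ts[1], arg2)
                        * type_pair(*ts)
                        * rel_type(rel, ts[0]) * rel_type(rel, ts[1])),
    )

print(best_types("Prozac", "causes", "headache"))  # -> ('drug', 'symptom')
```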

Experiments

Since the labels are induced dynamically from the data, traditional precision/recall measures, which require a known ground truth, are difficult to obtain.
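
Instead, evaluation is framed as a prediction task scored by plain accuracy (see Results). A minimal scorer of that general shape, with invented predictions (not the paper's actual protocol or data):

```python
def accuracy(predictions, answers):
    """Fraction of held-out items the model predicted correctly."""
    assert len(predictions) == len(answers) and answers
    return sum(p == a for p, a in zip(predictions, answers)) / len(answers)

# Hypothetical example: predicted types for held-out relation arguments
# versus the types sanctioned by the observed fillers.
preds = ["drug", "symptom", "drug", "drug"]
golds = ["drug", "symptom", "symptom", "drug"]
print(f"accuracy = {accuracy(preds, golds):.2f}")  # accuracy = 0.75
```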

Results

Table 1 shows the accuracy of the different models in the prediction task for the three different domains.

Conclusion

We evaluated an approach to learning domain-specific interpretable entity types from unlabeled data.

Topics

entity types

Appears in 6 sentences as: entity type (2), entity types (4)
In How Well can We Learn Interpretable Entity Types from Text?
  1. We investigate a largely unsupervised approach to learning interpretable, domain-specific entity types from unlabeled text.
    Page 1, “Abstract”
  2. It assumes that any common noun in a domain can function as a potential entity type, and uses those nouns as hidden variables in an HMM.
    Page 1, “Abstract”
  3. The results suggest that it is possible to learn domain-specific entity types from unlabeled data.
    Page 1, “Abstract”
  4. Hovy et al. (2011) proposed an approach that uses co-occurrence patterns to find entity type candidates, and then learns their applicability to relation arguments by using them as latent variables in a first-order HMM.
    Page 1, “Introduction”
  5. • the learned entity types can be used to predict selectional restrictions with high accuracy
    Page 2, “Introduction”
  6. We evaluated an approach to learning domain-specific interpretable entity types from unlabeled data.
    Page 5, “Conclusion”


co-occurrence

Appears in 3 sentences as: co-occurrence (3)
In How Well can We Learn Interpretable Entity Types from Text?
  1. To constrain training, it extracts co-occurrence dictionaries of entities and common nouns from the data.
    Page 1, “Abstract”
  2. Hovy et al. (2011) proposed an approach that uses co-occurrence patterns to find entity type candidates, and then learns their applicability to relation arguments by using them as latent variables in a first-order HMM.
    Page 1, “Introduction”
  3. To restrict the search space and improve learning, we first have to learn which types modify entities and record their co-occurrence, and use this as a dictionary.
    Page 2, “Model”


graphical model

Appears in 3 sentences as: graphical model (2), graphical models (1)
In How Well can We Learn Interpretable Entity Types from Text?
  1. We can thus move from a sequential model to a general graphical model by adding transitions and rearranging the structure.
    Page 3, “Extending the Model”
  2. Moving from the HMMs to a general graphical model structure (Figures 3c and d) creates a sparser distribution and significantly improves accuracy across the board.
    Page 5, “Results”
  3. Type candidates are collected from patterns and modeled as hidden variables in graphical models.
    Page 5, “Conclusion”


latent variables

Appears in 3 sentences as: latent variables (3)
In How Well can We Learn Interpretable Entity Types from Text?
  1. Hovy et al. (2011) proposed an approach that uses co-occurrence patterns to find entity type candidates, and then learns their applicability to relation arguments by using them as latent variables in a first-order HMM.
    Page 1, “Introduction”
  2. Thus all common nouns are possible types, and can be used as latent variables in an HMM.
    Page 2, “Model”
  3. By adding additional transitions, we can constrain the latent variables further.
    Page 3, “Extending the Model”


relation extraction

Appears in 3 sentences as: relation extraction (3)
In How Well can We Learn Interpretable Entity Types from Text?
  1. The results indicate that the learned types can be used in relation extraction tasks.
    Page 1, “Introduction”
  2. In relation extraction, we have to identify the relation elements, and then map the arguments to types.
    Page 2, “Related Work”
  3. Our goal is to find semantic type candidates in the data, and apply them in relation extraction to see which ones are best suited.
    Page 2, “Model”


unlabeled data

Appears in 3 sentences as: unlabeled data (3)
In How Well can We Learn Interpretable Entity Types from Text?
  1. The results suggest that it is possible to learn domain-specific entity types from unlabeled data.
    Page 1, “Abstract”
  2. • we empirically evaluate an approach to learning types from unlabeled data
    Page 2, “Introduction”
  3. We evaluated an approach to learning domain-specific interpretable entity types from unlabeled data.
    Page 5, “Conclusion”
