Index of papers in Proc. ACL 2012 that mention
  • joint model
Hatori, Jun and Matsuzaki, Takuya and Miyao, Yusuke and Tsujii, Jun'ichi
Abstract
We propose the first joint model for word segmentation, POS tagging, and dependency parsing for Chinese.
Abstract
Based on an extension of the incremental joint model for POS tagging and dependency parsing (Hatori et al., 2011), we propose an efficient character-based decoding method that can combine features from state-of-the-art segmentation, POS tagging, and dependency parsing models.
Abstract
We also perform comparison experiments with the partially joint models.
Introduction
Based on these observations, we aim at building a joint model that simultaneously processes word segmentation, POS tagging, and dependency parsing, trying to capture global interaction among
Introduction
We also perform comparison experiments with partially joint models, and investigate the tradeoff between the running speed and the model performance.
Model
(2011), we build our joint model to solve word segmentation, POS tagging, and dependency parsing within a single framework.
Model
In our joint model, the early update is invoked by mistakes in any of word segmentation, POS tagging, or dependency parsing.
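The "early update" the sentence above refers to is the beam-search training strategy of Collins and Roark (2005): when the gold-standard partial analysis falls out of the beam, the perceptron updates against the current best partial hypothesis and abandons the rest of the sentence. The toy sketch below illustrates the mechanism on a hypothetical two-tag labeling task; the tag set, features, and helper names are invented for the sketch and are not the paper's actual model.

```python
# Toy sketch of beam-search perceptron training with early update.
# The task (B/I tagging of a character string), the feature templates,
# and all names here are hypothetical illustrations only.
from collections import defaultdict

TAGS = ["B", "I"]  # toy tag set: begin / inside a word

def feats(chars, i, prev_tag, tag):
    """Unigram (char, tag) and bigram (prev_tag, tag) features."""
    return [("uni", chars[i], tag), ("bi", prev_tag, tag)]

def perceptron_update(w, chars, gold, pred):
    """Standard perceptron update: +1 for gold features, -1 for predicted."""
    for seq, sign in ((gold, 1.0), (pred, -1.0)):
        prev = "<s>"
        for i, t in enumerate(seq):
            for f in feats(chars, i, prev, t):
                w[f] += sign
            prev = t

def train_sentence(w, chars, gold, beam_size=2):
    """One pass over one sentence; returns True if an early update fired."""
    beam = [((), 0.0)]  # (partial tag sequence, score)
    for i in range(len(chars)):
        cands = []
        for tags, s in beam:
            prev = tags[-1] if tags else "<s>"
            for t in TAGS:
                sc = s + sum(w[f] for f in feats(chars, i, prev, t))
                cands.append((tags + (t,), sc))
        cands.sort(key=lambda x: -x[1])
        beam = cands[:beam_size]
        gold_prefix = tuple(gold[:i + 1])
        if all(tags != gold_prefix for tags, _ in beam):
            # Early update: gold prefix fell out of the beam -> update
            # against the best partial hypothesis and stop decoding.
            perceptron_update(w, chars, gold_prefix, beam[0][0])
            return True
    if beam[0][0] != tuple(gold):
        perceptron_update(w, chars, tuple(gold), beam[0][0])
    return False

w = defaultdict(float)
for _ in range(5):
    train_sentence(w, "abab", ["B", "I", "B", "I"])
```

In the joint model quoted above, the same trigger fires on a mistake at any of the three levels (segmentation, tagging, parsing), since all three are decoded in one beam.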
Model
The list of the features used in our joint model is presented in Table 1, where S01–S05, W01–W21, and T01–T05 are taken from Zhang and Clark (2010), and P01–P28 are taken from Huang and Sagae (2010).
Related Works
In contrast, we built a joint model based on a dependency-based framework, with a rich set of structural features.
Related Works
Because we found that even an incremental approach with beam search is intractable if we perform the word-based decoding, we take a character-based approach to produce our joint model.
joint model is mentioned in 21 sentences in this paper.
Topics mentioned in this paper:
Minkov, Einat and Zettlemoyer, Luke
Abstract
This paper presents a joint model for template filling, where the goal is to automatically specify the fields of target relations such as seminar announcements or corporate acquisition events.
Abstract
The approach models mention detection, unification and field extraction in a flexible, feature-rich model that allows for joint modeling of interdependencies at all levels and across fields.
Corporate Acquisitions
Unfortunately, we cannot directly compare against a generative joint model evaluated on this dataset (Haghighi and Klein, 2010). The best results per attribute are shown in boldface.
Introduction
In this paper, we present a joint modeling and learning approach for the combined tasks of mention detection, unification, and template filling, as described above.
Introduction
We also demonstrate, through ablation studies on the feature set, the need for joint modeling and the relative importance of the different types of joint constraints.
Seminar Extraction Task
An important question to be addressed in evaluation is to what extent the joint modeling approach contributes to performance.
Seminar Extraction Task
This is largely due to erroneous assignments of named entities of other types (mainly, person) as titles; such errors are avoided in the full joint model, where tuple validity is enforced.
Seminar Extraction Task
As argued before, joint modeling is especially important for irregular fields, such as title; we provide first results on this field.
Summary and Future Work
This approach allows for joint modeling of interdependencies at all levels and across fields.
Summary and Future Work
Finally, it is worth exploring scaling the approach to unrestricted event extraction, and jointly modeling the extraction of more than one relation per document.
joint model is mentioned in 11 sentences in this paper.
Topics mentioned in this paper:
Celikyilmaz, Asli and Hakkani-Tur, Dilek
Abstract
We describe a joint model for understanding user actions in natural language utterances.
Background
Only recent research has focused on the joint modeling of SLU (Jeong and Lee, 2008; Wang, 2010), taking into account the dependencies at learning time.
Background
Our joint model can discover domain D, and user’s act A as higher-layer latent concepts of utterances in relation to lower-layer latent semantic topics (slots) S, such as named entities (“New York”) or context-bearing non-named entities (“vegan”).
Data and Approach Overview
Here we define several abstractions of our joint model as depicted in Fig.
Experiments
* Tri-CRF: We used the Triangular Chain CRF (Jeong and Lee, 2008) as our supervised joint model baseline.
Experiments
We evaluate the performance of our joint model on two experiments using two metrics.
Experiments
The results show that our joint modeling approach has an advantage over the other joint models (i.e., Tri-CRF) in that it can leverage unlabeled NL utterances.
Introduction
Recent work on SLU (Jeong and Lee, 2008; Wang, 2010) presents joint modeling of two components, i.e., the domain and slot or dialog act and slot components together.
joint model is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Sun, Xu and Wang, Houfeng and Li, Wenjie
Abstract
We present a joint model for Chinese word segmentation and new word detection.
Abstract
We present high dimensional new features, including word-based features and enriched edge (label-transition) features, for the joint modeling.
Introduction
In this paper, we present high dimensional new features, including word-based features and enriched edge (label-transition) features, for the joint modeling of Chinese word segmentation (CWS) and new word detection (NWD).
Introduction
While most of the state-of-the-art CWS systems used semi-Markov conditional random fields or latent-variable conditional random fields, we simply use a single first-order conditional random field (CRF) for the joint modeling.
Introduction
• We propose a joint model for Chinese word segmentation and new word detection.
System Architecture
3.1 A Joint Model Based on CRFs
System Architecture
In this paper, we presented a joint model for Chinese word segmentation and new word detection.
System Architecture
We presented new features, including word-based features and enriched edge features, for the joint modeling.
joint model is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Pantel, Patrick and Lin, Thomas and Gamon, Michael
Abstract
We jointly model the interplay between latent user intents that govern queries and unobserved entity types, leveraging observed signals from query formulations and document clicks.
Conclusion
Jointly modeling the interplay between the underlying user intents and entity types in web search queries shows significant improvements over the current state of the art on the task of resolving entity types in head queries.
Evaluation Methodology
In order to learn type distributions by jointly modeling user intents and a large number of types, we require a large set of training examples containing tagged entities and their potential types.
Introduction
We show that jointly modeling user intent and entity type significantly outperforms the current state of the art on the task of entity type resolution in queries.
Related Work
Our models also expand upon theirs by jointly modeling
joint model is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Chambers, Nathanael
Experiments and Results
Finally, the Joint model is the combination of the document and year mention classifiers as described in Section 4.3.
Experiments and Results
Table 4 shows the F1 scores of the Joint model by year.
Experiments and Results
Table 4: Yearly results for the Joint model.
Learning Time Constraints
Finally, given the document classifiers of Section 3 and the constraint classifier just defined in Section 4, we create a joint model combining the two with the following linear interpolation:
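The paper's actual interpolation equation is not reproduced in this index, but the combination the sentence describes can be sketched as a weighted mixture of the two classifiers' per-label scores. The weight `alpha`, the label set, and the function names below are hypothetical stand-ins, not the paper's tuned values.

```python
# Minimal sketch of linearly interpolating two classifiers' per-label
# probabilities. `alpha` and all names are illustrative assumptions.

def joint_score(doc_probs, constraint_probs, alpha=0.5):
    """Return alpha * P_doc(y) + (1 - alpha) * P_constraint(y) per label y."""
    labels = set(doc_probs) | set(constraint_probs)
    return {y: alpha * doc_probs.get(y, 0.0)
               + (1 - alpha) * constraint_probs.get(y, 0.0)
            for y in labels}

def predict(doc_probs, constraint_probs, alpha=0.5):
    """Pick the label with the highest interpolated score."""
    scores = joint_score(doc_probs, constraint_probs, alpha)
    return max(scores, key=scores.get)

# Hypothetical example: the document classifier prefers 1998, the
# constraint classifier prefers 1999; interpolation favors 1999 here.
print(predict({"1998": 0.6, "1999": 0.4}, {"1998": 0.2, "1999": 0.8}))
```

Tuning `alpha` on held-out data is the usual way to set such an interpolation weight.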
joint model is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Mukherjee, Arjun and Liu, Bing
Introduction
Our models also jointly model both aspects and aspect specific sentiments.
Introduction
Our models are related to topic models in general (Blei et al., 2003) and to joint models of aspects and sentiments in sentiment analysis specifically (e.g., Zhao et al., 2010).
Introduction
First of all, we jointly model aspect and sentiment, while DF-LDA is only for topics/aspects.
joint model is mentioned in 4 sentences in this paper.
Topics mentioned in this paper: