Model | Here, we describe the generative process our model uses to generate the observed utterances and present the corresponding graphical model. |
Model | For clarity, we assume that the values of the boundary variables b_i are given in the generative process. |
Model | The generative process indicates that our model ignores utterance boundaries and treats the entire dataset as a sequence of concatenated spoken segments. |
Problem Formulation | In the next section, we present the generative process our model uses to generate the observed data. |
Abstract | We inject information extracted from unstructured web search query logs as priors to enhance the generative process of the natural language utterance understanding model. |
Experiments | This is because we use domain priors obtained from web sources as supervision during the generative process, as well as unlabeled utterances that help handle language variability. |
MultiLayer Context Model - MCM | The generative process of our multilayer context model (MCM) (Fig. |
Method | We assume the following generation process for all the posts in the stream. |
Method | Figure 2: The generation process for all posts. |
Method | Formally, the generation process is summarized in Figure 2. |
Abstract | We cast the generation process as constraint optimization problems, collectively incorporating multiple interconnected aspects of language composition for content planning, surface realization, and discourse structure. |
Introduction | Because the generation process sticks relatively closely to the recognized content, the resulting descriptions often lack the kind of coverage, creativity, and complexity typically found in human-written text. |
Introduction | Our ILP formulation encodes a rich set of linguistically motivated constraints and weights that incorporate multiple aspects of the generation process. |
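Several rows above use "generative process" in the probabilistic-modeling sense: a sampling story that is assumed to have produced the observed data (utterances, or posts in a stream). As a generic, minimal sketch of such a story, the toy Python below draws a latent topic for each post and then draws words conditioned on that topic. The topic names, vocabularies, and probabilities are invented for illustration; this is not the actual model from any of the excerpted papers.

```python
import random

# Toy generative story: for each post, sample a latent topic,
# then sample words conditioned on that topic. All distributions
# below are invented for the demo.

random.seed(0)

TOPICS = {
    "sports": ["game", "team", "score", "win"],
    "weather": ["rain", "sun", "cold", "storm"],
}
TOPIC_PRIOR = {"sports": 0.6, "weather": 0.4}  # assumed prior P(topic)

def generate_post(length=5):
    """Sample one post: draw a topic, then draw words i.i.d. given it."""
    topic = random.choices(list(TOPIC_PRIOR),
                           weights=TOPIC_PRIOR.values())[0]
    words = random.choices(TOPICS[topic], k=length)  # uniform P(word | topic)
    return topic, " ".join(words)

for _ in range(3):
    topic, post = generate_post()
    print(f"[{topic}] {post}")
```

In models like those excerpted, inference runs this story in reverse: given the observed text, it recovers the latent variables (topics, or boundary variables such as b_i) that best explain it.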