Abstract | The generative process assumes that each entity mention arises from copying and optionally mutating an earlier name from a similar context.
Introduction | Our model is an evolutionary generative process based on the name variation model of Andrews et al. |
Introduction | This can also relate seemingly dissimilar names via multiple steps in the generative process: |
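The copy-and-mutate step described in the abstract and introduction can be sketched as follows. This is a minimal illustration, not the actual name variation model of Andrews et al.; the single-character substitution and the `mutate_prob` parameter are simplifying assumptions. Note how chaining several copy-and-mutate steps can relate names that no longer look similar to each other directly:

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def generate_mention(existing_names, mutate_prob=0.2):
    """One generative step: copy an earlier name, then optionally
    mutate it (here, a single-character substitution as a stand-in
    for a richer string-edit model)."""
    name = random.choice(existing_names)          # copy an earlier name
    if name and random.random() < mutate_prob:
        i = random.randrange(len(name))           # mutate one position
        name = name[:i] + random.choice(ALPHABET) + name[i + 1:]
    return name

random.seed(0)
mentions = ["obama"]
for _ in range(5):
    mentions.append(generate_mention(mentions))   # multi-step chain of copies
```

Because each new mention copies from the growing pool, two dissimilar surface forms may still share a common ancestor via intermediate mutated copies.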
The IBPOT Model | The IBPOT model defines a generative process for mappings between input and output forms based on three latent variables: the constraint violation matrices F (faithfulness) and M (markedness), and the weight vector w. The cells of the violation matrices correspond to the number of violations of a constraint by a given input-output mapping. |
The IBPOT Model | Represented constraint sampling. We begin by resampling M_{jl} for all represented constraints M_{·l}, conditioned on the rest of the violations (M_{−(jl)}, F) and the weights w. This is the sampling counterpart of drawing existing features in the IBP generative process.
The IBPOT Model | This is the sampling counterpart to the Poisson draw for new features in the IBP generative process.
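A generic Gibbs-style resampling step for one cell of the violation matrix might look like the sketch below. The `conditional_logprob` callback is a hypothetical stand-in for the IBPOT conditional p(M_{jl} = v | M_{−(jl)}, F, w); this is an illustrative skeleton, not the paper's implementation:

```python
import numpy as np

def resample_cell(M, j, l, conditional_logprob):
    """Resample the binary cell M[j, l] conditioned on all other
    violations: score both candidate values under the model's
    conditional, then draw from the normalized probabilities."""
    logps = []
    for v in (0, 1):
        M[j, l] = v
        logps.append(conditional_logprob(M))      # log p(M[j,l]=v | rest)
    logps = np.array(logps)
    probs = np.exp(logps - logps.max())           # stable normalization
    probs /= probs.sum()
    M[j, l] = np.random.choice([0, 1], p=probs)
    return M[j, l]

# Toy usage: a conditional that strongly favors a violation in cell (1, 2).
np.random.seed(0)
M = np.zeros((3, 4), dtype=int)
value = resample_cell(M, 1, 2, lambda M: 50.0 * M[1, 2])
```

The same skeleton applies to both moves in the sampler: resampling existing (represented) constraint cells, and proposing violations for new constraints.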
Empirical Evaluation | The times reported are from the start of the generation process, eliminating variations due to interpreter startup, input parsing, etc.
Empirical Evaluation | Note that, as STRUCT is an anytime algorithm, valid sentences are available very early in the generation process, despite the size of the set of adjoining trees.
Sentence Tree Realization with UCT | If so, we store it, and continue the generation process.
Algorithm | The generative process of word distributions for non-emotion topics follows the standard LDA definition with a scalar hyperparameter β.
Algorithm | We summarize the generative process of the EaLDA model as follows:
Algorithm | As an alternative representation, the graphical model of the generative process is shown in Figure 1.
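The standard-LDA portion of the generative process can be sketched as follows. This is a toy illustration with assumed vocabulary size and hyperparameters; the emotion-topic seeding that distinguishes EaLDA from plain LDA is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)
V = 1000          # assumed vocabulary size
K_general = 5     # assumed number of non-emotion topics
beta = 0.01       # scalar hyperparameter for topic-word distributions
alpha = 0.1       # scalar hyperparameter for document-topic mixtures

# Non-emotion topics: word distributions drawn from a symmetric Dirichlet(beta),
# exactly as in the standard LDA generative process.
phi_general = rng.dirichlet(np.full(V, beta), size=K_general)

def generate_document(phi, n_words=50):
    """Draw one document: sample a topic mixture theta ~ Dirichlet(alpha),
    then for each position sample a topic z and a word w ~ phi[z]."""
    theta = rng.dirichlet(np.full(len(phi), alpha))
    words = []
    for _ in range(n_words):
        z = rng.choice(len(phi), p=theta)
        words.append(rng.choice(V, p=phi[z]))
    return words

doc = generate_document(phi_general)
```

In the full EaLDA model, emotion topics would receive additional lexicon-based guidance on their word distributions, while the non-emotion topics follow this standard recipe.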