Nonparametric Learning of Phonological Constraints in Optimality Theory
Doyle, Gabriel and Bicknell, Klinton and Levy, Roger

Article Structure

Abstract

We present a method to jointly learn features and weights directly from distributional data in a log-linear framework.

Introduction

Many aspects of human cognition involve the interaction of constraints that push a decision-maker toward different options, whether in something so trivial as choosing a movie or so important as a fight-or-flight response.

Phonology and Optimality Theory

2.1 OT structure

Optimality Theory has been used for constraint-based analysis of many areas of language, but we focus on its most successful application: phonology.

The IBPOT Model

3.1 Structure

Experiment

4.1 Wolof vowel harmony

Discussion and Future Work

5.1 Relation to phonotactic learning

Conclusion

A central assumption of Optimality Theory has been the existence of a fixed inventory of universal markedness constraints innately available to the learner, an assumption motivated in part by arguments regarding the computational complexity of constraint identification.

Topics

log-linear

Appears in 6 sentences as: log-linear (6)
In Nonparametric Learning of Phonological Constraints in Optimality Theory
  1. We present a method to jointly learn features and weights directly from distributional data in a log-linear framework.
    Page 1, “Abstract”
  2. The model uses an Indian Buffet Process prior to learn the feature values used in the log-linear method, and is the first algorithm for learning phonological constraints without presupposing constraint structure.
    Page 1, “Abstract”
  3. These constraint-driven decisions can be modeled with a log-linear system.
    Page 1, “Introduction”
  4. We consider this question by examining the dominant framework in modern phonology, Optimality Theory (Prince and Smolensky, 1993, OT), implemented in a log-linear framework, MaxEnt OT (Goldwater and Johnson, 2003), with output forms’ probabilities based on a weighted sum of constraint violations.
    Page 1, “Introduction”
  5. In IBPOT, we use the log-linear EVAL developed by Goldwater and Johnson (2003) in their MaxEnt OT system.
    Page 2, “Phonology and Optimality Theory 2.1 OT structure”
  6. The weight vector w provides weight for both F and M. Probabilities of output forms are given by a log-linear function:
    Page 4, “The IBPOT Model”
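
Sentence 6 above introduces the log-linear function over output forms, but the excerpt cuts off before the equation itself. A sketch of the standard MaxEnt OT form consistent with that description, with f_{xy,j} and m_{xy,j} as assumed notation for the faithfulness and markedness violation counts of constraint j on the mapping from input x to output y, would be

  P(y \mid x; F, M, w) = \frac{\exp\big(\sum_j w_j\,(f_{xy,j} + m_{xy,j})\big)}{\sum_{y' \in \mathcal{Y}(x)} \exp\big(\sum_j w_j\,(f_{xy',j} + m_{xy',j})\big)}

where \mathcal{Y}(x) is the candidate set for input x. Under this parameterization the constraint weights are typically negative, so each additional violation lowers a candidate's probability.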

MaxEnt

Appears in 5 sentences as: MaxEnt (5)
In Nonparametric Learning of Phonological Constraints in Optimality Theory
  1. We consider this question by examining the dominant framework in modern phonology, Optimality Theory (Prince and Smolensky, 1993, OT), implemented in a log-linear framework, MaxEnt OT (Goldwater and Johnson, 2003), with output forms’ probabilities based on a weighted sum of constraint violations.
    Page 1, “Introduction”
  2. In IBPOT, we use the log-linear EVAL developed by Goldwater and Johnson (2003) in their MaxEnt OT system.
    Page 2, “Phonology and Optimality Theory 2.1 OT structure”
  3. MEOT also is motivated by the general MaxEnt framework, whereas most other OT formulations are ad hoc constructions specific to phonology.
    Page 2, “Phonology and Optimality Theory 2.1 OT structure”
  4. In MaxEnt OT, each constraint has a weight, and the candidates’ scores are the sums of the weights of violated constraints.
    Page 3, “Phonology and Optimality Theory 2.1 OT structure”
  5. To establish performance for the phonological standard, we use the IBPOT learner to find constraint weights but do not update M. The resultant learner is essentially MaxEnt OT with the weights estimated through Metropolis sampling instead of gradient ascent.
    Page 6, “Experiment”
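
To make sentence 4 above concrete (a candidate's score is the sum of the weights of its violated constraints, with probabilities given by log-linear normalization), here is a minimal Python sketch; the function name, variable names, and toy numbers are illustrative assumptions, not the paper's implementation:

    import math

    def maxent_ot_probs(violations, weights):
        """Score each candidate as the weighted sum of its constraint
        violations, then normalize with a softmax to get probabilities."""
        scores = [sum(w * v for w, v in zip(weights, cand)) for cand in violations]
        z = sum(math.exp(s) for s in scores)
        return [math.exp(s) / z for s in scores]

    # Hypothetical example: two output candidates, three constraints,
    # with negative weights so that violations lower probability.
    violations = [[1, 0, 2],   # candidate A's violation counts
                  [0, 1, 0]]   # candidate B's violation counts
    weights = [-1.5, -0.5, -2.0]
    print(maxent_ot_probs(violations, weights))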

weight vector

Appears in 5 sentences as: weight vector (5)
In Nonparametric Learning of Phonological Constraints in Optimality Theory
  1. The IBPOT model defines a generative process for mappings between input and output forms based on three latent variables: the constraint violation matrices F (faithfulness) and M (markedness), and the weight vector w. The cells of the violation matrices correspond to the number of violations of a constraint by a given input-output mapping.
    Page 4, “The IBPOT Model”
  2. The weight vector w provides weight for both F and M. Probabilities of output forms are given by a log-linear function:
    Page 4, “The IBPOT Model”
  3. We initialize the model with a randomly-drawn markedness violation matrix M and weight vector w.
    Page 4, “The IBPOT Model”
  4. After each iteration through M, we use Metropolis-Hastings to update the weight vector w.
    Page 4, “The IBPOT Model”
  5. Table 1: Data, markedness matrix, weight vector, and joint log-probabilities for the IBPOT and the phonological standard constraints.
    Page 7, “Experiment”
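
Sentence 4 above says the weight vector w is updated with Metropolis-Hastings after each sweep through M, but the excerpt gives no proposal or acceptance details. The Python sketch below therefore assumes a Gaussian random-walk proposal and a user-supplied joint log-probability; it illustrates a generic MH step, not the paper's exact sampler:

    import math
    import random

    def mh_update_weights(w, log_joint, step=0.1):
        """One Metropolis-Hastings sweep over the weight vector w.
        log_joint(w) returns the joint log-probability of data and weights;
        the Gaussian random-walk width `step` is an assumption."""
        w = list(w)
        current = log_joint(w)
        for j in range(len(w)):
            proposal = list(w)
            proposal[j] += random.gauss(0.0, step)
            proposed = log_joint(proposal)
            # Accept with probability min(1, exp(proposed - current))
            if random.random() < math.exp(min(0.0, proposed - current)):
                w, current = proposal, proposed
        return w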

generative process

Appears in 3 sentences as: generative process (3)
In Nonparametric Learning of Phonological Constraints in Optimality Theory
  1. The IBPOT model defines a generative process for mappings between input and output forms based on three latent variables: the constraint violation matrices F (faithfulness) and M (markedness), and the weight vector w. The cells of the violation matrices correspond to the number of violations of a constraint by a given input-output mapping.
    Page 4, “The IBPOT Model”
  2. Represented constraint sampling. We begin by resampling M_{jl} for all represented constraints M_{·l}, conditioned on the rest of the violations (M_{−(jl)}, F) and the weights w. This is the sampling counterpart of drawing existing features in the IBP generative process.
    Page 4, “The IBPOT Model”
  3. This is the sampling counterpart to the Poisson draw for new features in the IBP generative process.
    Page 5, “The IBPOT Model”
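
Sentences 2 and 3 above describe the two halves of the IBP-based sampler: Gibbs resampling of cells in represented (already-used) constraint columns, and a Poisson draw for the number of new constraints. The Python sketch below shows only that control flow; it simplifies violation cells to binary indicators, leaves the likelihood abstract, and all names and prior details are assumptions rather than the paper's algorithm:

    import math
    import random

    def resample_represented_column(M, j, log_lik):
        """Gibbs-resample the binary cells in column j of the violation
        matrix M (a represented constraint), conditioning on the rest of M
        through log_lik(M) and on the IBP prior for existing features."""
        n = len(M)
        for i in range(n):
            # IBP prior for an existing feature: fraction of other rows using it
            m_others = sum(M[k][j] for k in range(n) if k != i)
            prior_on = m_others / n
            logp = []
            for v in (0, 1):
                M[i][j] = v
                prior = prior_on if v == 1 else 1.0 - prior_on
                logp.append(math.log(prior + 1e-12) + log_lik(M))
            diff = max(min(logp[0] - logp[1], 700.0), -700.0)
            p_on = 1.0 / (1.0 + math.exp(diff))
            M[i][j] = 1 if random.random() < p_on else 0
        return M

    def sample_new_constraints(n_rows, alpha):
        """Counterpart of the IBP's new-feature step: a Poisson(alpha / n_rows)
        draw for the number of previously unrepresented constraints."""
        lam = alpha / n_rows
        k, p, threshold = 0, 1.0, math.exp(-lam)
        while p > threshold:
            p *= random.random()
            k += 1
        return max(k - 1, 0)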
