Index of papers in Proc. ACL 2013 that mention
  • Amazon’s Mechanical Turk
Endriss, Ulle and Fernández, Raquel
A Case Study
(2008) includes 10 non-expert annotations for each of the 800 items in the RTE-1 testset, collected with Amazon’s Mechanical Turk.
Introduction
In recent years, the possibility of undertaking large-scale annotation projects with hundreds or thousands of annotators has become a reality thanks to online crowdsourcing methods such as Amazon’s Mechanical Turk and Games with a Purpose.
Related Work
Similarly, crowdsourcing via microworking sites like Amazon’s Mechanical Turk has been used in several annotation experiments related to tasks such as affect analysis, event annotation, sense definition and word sense disambiguation (Snow et al., 2008; Rumshisky, 2011; Rumshisky et al., 2012), amongst others.12
Related Work
12 See also the papers presented at the NAACL 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk (tinyurl.
Amazon’s Mechanical Turk is mentioned in 4 sentences in this paper.