Generating Recommendation Dialogs by Extracting Information from User Reviews
Kevin Reschke, Adam Vogel, and Dan Jurafsky

Article Structure

Abstract

Recommendation dialog systems help users navigate e-commerce listings by asking questions about users’ preferences toward relevant domain attributes.

Introduction

Recommendation dialog systems have been developed for a number of tasks ranging from product search to restaurant recommendation (Chai et al., 2002; Thompson et al., 2004; Bridge et al., 2005; Young et al., 2010).

Generating Questions from Reviews

2.1 Subcategory Questions

Question Selection for Dialog

To utilize the questions generated from reviews in recommendation dialogs, we first formalize the dialog optimization task and then offer a solution.
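The evaluation section refers to an "information gain agent" that chooses which attribute to ask about next. As an illustrative sketch only (the function names, the yes/no-question format, and the uniform belief over remaining candidates are my assumptions, not the paper's formalization), greedy selection by expected entropy reduction can be written as:

```python
import math

def entropy(n):
    # Entropy of a uniform distribution over n remaining candidate businesses.
    return math.log2(n) if n > 0 else 0.0

def info_gain(candidates, attribute):
    """Expected entropy reduction from a yes/no question about `attribute`."""
    yes = [c for c in candidates if attribute in c["attrs"]]
    no = [c for c in candidates if attribute not in c["attrs"]]
    n = len(candidates)
    if not yes or not no:
        return 0.0  # question cannot split the candidate set
    expected = (len(yes) / n) * entropy(len(yes)) + (len(no) / n) * entropy(len(no))
    return entropy(n) - expected

def best_question(candidates, attributes):
    # Greedily pick the attribute whose answer is expected to prune the most.
    return max(attributes, key=lambda a: info_gain(candidates, a))
```

An attribute that splits the candidate set roughly in half scores highest, which is why the broad top-level and subcategory questions dominate early turns and the fine-grained aspects only help in longer dialogs.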

Evaluation

4.1 Experimental Setup

Conclusion

We presented a system for extracting large sets of attributes from user reviews and selecting relevant attributes to ask questions about.

Acknowledgments

Thanks to the anonymous reviewers and the Stanford NLP group for helpful suggestions.

Topics

sentiment lexicon

Appears in 6 sentences as: Sentiment Lexicon (1) sentiment lexicon (5)
In Generating Recommendation Dialogs by Extracting Information from User Reviews
  1. We demonstrate our approach on a new dataset just released by Yelp, and release a new sentiment lexicon with 1329 adjectives for the restaurant domain.
    Page 1, “Abstract”
  2. First, we develop a domain-specific sentiment lexicon.
    Page 2, “Generating Questions from Reviews”
  3. 2.2.1 Sentiment Lexicon
    Page 2, “Generating Questions from Reviews”
  4. To identify noun-phrases which are targeted by predicates in our sentiment lexicon, we develop handcrafted extraction patterns defined over syntactic dependency parses (Blair-Goldensohn et al., 2008; Somasundaran and Wiebe, 2009) generated by the Stanford parser (Klein and Manning, 2003).
    Page 3, “Generating Questions from Reviews”
  5. Using topic models to discover subtypes of businesses, a domain-specific sentiment lexicon , and a number of new techniques for increasing precision in sentiment aspect extraction yields attributes that give a rich representation of the restaurant domain.
    Page 5, “Conclusion”
  6. We have made this 1329-term sentiment lexicon for the restaurant domain available as a useful resource to the community.
    Page 5, “Conclusion”
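The paper extracts aspects via handcrafted patterns over Stanford dependency parses. As a rough stand-in that needs no parser (the tiny lexicon, the copula/adjacency patterns, and all names here are my simplifications, not the released 1329-adjective resource or the paper's dependency patterns), the core idea of pairing lexicon adjectives with the noun phrases they target looks like:

```python
import re

# Toy stand-in for the domain-specific sentiment lexicon ("+" / "-" polarity).
LEXICON = {"delicious": "+", "friendly": "+", "slow": "-", "bland": "-"}
COPULAS = {"is", "are", "was", "were"}

def extract_aspects(sentence):
    """Pair each lexicon adjective with the noun it plausibly targets,
    using two surface patterns in place of dependency-parse patterns."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    aspects = []
    for i, tok in enumerate(tokens):
        if tok not in LEXICON:
            continue
        if i >= 2 and tokens[i - 1] in COPULAS:
            aspects.append((tokens[i - 2], LEXICON[tok]))  # "the service was slow"
        elif i + 1 < len(tokens):
            aspects.append((tokens[i + 1], LEXICON[tok]))  # "delicious tacos"
    return aspects
```

The paper's dependency-based patterns are far more precise than this window heuristic (which is exactly the precision gain the conclusion claims), but the input/output contract is the same: sentences in, (aspect, polarity) pairs out.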


topic modeling

Appears in 5 sentences as: Topic Modeling (1) topic modeling (2) topic models (2)
In Generating Recommendation Dialogs by Extracting Information from User Reviews
  1. The framework makes use of techniques from topic modeling and sentiment-based aspect extraction to identify fine-grained attributes for each business.
    Page 1, “Introduction”
  2. Using these topic models , we assign a business
    Page 1, “Generating Questions from Reviews”
  3. We use the Topic Modeling Toolkit implementation: http://nlp.stanford.edu/software/tmt
    Page 1, “Generating Questions from Reviews”
  4. ‘Top-level’ repeatedly queries the user’s top-level category preferences, ‘Subtopic’ additionally uses our topic modeling subcategories, and ‘All’ uses these plus the aspects extracted from reviews.
    Page 4, “Evaluation”
  5. Using topic models to discover subtypes of businesses, a domain-specific sentiment lexicon, and a number of new techniques for increasing precision in sentiment aspect extraction yields attributes that give a rich representation of the restaurant domain.
    Page 5, “Conclusion”
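The excerpts above describe running LDA over each business's concatenated reviews (via the Stanford Topic Modeling Toolkit). Purely to make the mechanics concrete, here is a minimal collapsed Gibbs sampler for LDA in plain Python; the toolkit's actual implementation, hyperparameters, and scale differ, and all names here are my own:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA over tokenized documents.

    Each doc would be one business's concatenated reviews, as in the paper.
    Returns doc-topic counts and topic-word counts."""
    rng = random.Random(seed)
    vocab_size = len({w for d in docs for w in d})
    ndk = [[0] * n_topics for _ in docs]                # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                                 # tokens per topic
    z = []                                              # topic of each token
    for d, doc in enumerate(docs):                      # random initialization
        zd = [rng.randrange(n_topics) for _ in doc]
        for w, k in zip(doc, zd):
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(n_iter):                             # resample each token's topic
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta)
                           / (nk[t] + vocab_size * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk, nkw
```

Concatenating a business's reviews into one document, as the excerpt notes, pushes the topics toward business subtypes rather than the sentence-level aspect topics of Jo and Oh (2011) or Brody and Elhadad (2010).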


fine-grained

Appears in 4 sentences as: Fine-Grained (1) fine-grained (3)
In Generating Recommendation Dialogs by Extracting Information from User Reviews
  1. The framework makes use of techniques from topic modeling and sentiment-based aspect extraction to identify fine-grained attributes for each business.
    Page 1, “Introduction”
  2. To identify these subcategories, we run Latent Dirichlet Allocation (LDA) (Blei et al., 2003) on the reviews of each set of businesses in the twenty most common top-level categories, using 10 topics and concatenating all of a business’s reviews into one document. Several researchers have used sentence-level documents to model topics in reviews, but these tend to generate topics about fine-grained aspects of the sort we discuss in Section 2.2 (Jo and Oh, 2011; Brody and Elhadad, 2010).
    Page 1, “Generating Questions from Reviews”
  3. 2.2 Questions from Fine-Grained Aspects
    Page 2, “Generating Questions from Reviews”
  4. Note that the information gain agent starts dialogs with the top-level and appropriate subcategory questions, so it is only for longer dialogs that the fine-grained aspects boost performance.
    Page 5, “Evaluation”
