Index of papers in Proc. ACL that mention
  • deep learning
Cui, Lei and Zhang, Dongdong and Liu, Shujie and Chen, Qiming and Li, Mu and Zhou, Ming and Yang, Muyun
Background: Deep Learning
Deep learning has been an active research topic in recent years and has triumphed in many areas of machine learning.
Background: Deep Learning
Followed by fine-tuning in this parameter region, deep learning is able to achieve state-of-the-art performance in various research areas, including breakthrough results on the ImageNet dataset for object recognition (Krizhevsky et al., 2012), significant error reduction in speech recognition (Dahl et al., 2012), etc.
Background: Deep Learning
Deep learning has also been successfully applied in a variety of NLP tasks such as part-of-speech tagging, chunking, named entity recognition, semantic role labeling (Collobert et al., 2011), parsing (Socher et al., 2011a), sentiment analysis (Socher et al., 2011b), etc.
Experiments
In deep learning, this parameter is often tuned empirically with human effort.
Related Work
We directly optimized bilingual topic similarity in the deep learning framework with the help of sentence-level parallel data, so that the learned representation could be easily used in the SMT decoding procedure.
Topic Similarity Model with Neural Network
The auto-encoder (Bengio et al., 2006) is one of the basic building blocks of deep learning.
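The auto-encoder the excerpt refers to can be sketched in a few lines: a network trained to reconstruct its own input through a narrower hidden layer, so that the hidden layer is forced to learn a compressed representation. The sketch below is a minimal tied-weight variant on toy data, not the model from the indexed paper; sizes, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruct(X, W, b_h, b_o):
    # Encode to the hidden layer, then decode back to input space.
    return sigmoid(sigmoid(X @ W + b_h) @ W.T + b_o)

# Toy data standing in for real feature vectors: 20 samples, 8 dimensions.
X = rng.random((20, 8))

# Tied-weight auto-encoder: 8 visible units, 3 hidden units.
W = rng.normal(0.0, 0.1, (8, 3))
b_h, b_o = np.zeros(3), np.zeros(8)

mse_before = float(np.mean((reconstruct(X, W, b_h, b_o) - X) ** 2))

lr = 0.5
for _ in range(2000):
    H = sigmoid(X @ W + b_h)             # encoder
    R = sigmoid(H @ W.T + b_o)           # decoder (tied weights)
    dR = (R - X) * R * (1 - R)           # backprop through output sigmoid
    dH = (dR @ W) * H * (1 - H)          # backprop through hidden sigmoid
    W -= lr * (X.T @ dH + dR.T @ H) / len(X)
    b_o -= lr * dR.mean(axis=0)
    b_h -= lr * dH.mean(axis=0)

mse_after = float(np.mean((reconstruct(X, W, b_h, b_o) - X) ** 2))
```

Training drives the reconstruction error down; the trained hidden activations `sigmoid(X @ W + b_h)` are the learned representation that deeper models stack on.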
deep learning is mentioned in 6 sentences in this paper.
Silberer, Carina and Lapata, Mirella
Related Work
The use of stacked autoencoders to extract a shared lexical meaning representation is, to our knowledge, new, although, as we explain below, it is related to a large body of work on deep learning.
Related Work
Multimodal Deep Learning Our work employs deep learning (a.k.a. deep networks) to project linguistic and visual information onto a unified representation that fuses the two modalities together.
Related Work
The goal of deep learning is to learn multiple levels of representations through a hierarchy of network architectures, where higher-level representations are expected to help define higher-level concepts.
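The two excerpts above describe a multimodal hierarchy: a modality-specific representation at the first level, then a fused representation at the second. A minimal forward-pass sketch of that idea, with all dimensions and weights being illustrative assumptions rather than the indexed paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions for one word's features in each modality.
d_text, d_vis, d_hid, d_shared = 10, 12, 6, 4
W_text = rng.normal(0.0, 0.1, (d_text, d_hid))
W_vis = rng.normal(0.0, 0.1, (d_vis, d_hid))
W_shared = rng.normal(0.0, 0.1, (2 * d_hid, d_shared))

def fuse(text_vec, vis_vec):
    # Level 1: a separate intermediate representation per modality.
    h_text = sigmoid(text_vec @ W_text)
    h_vis = sigmoid(vis_vec @ W_vis)
    # Level 2: concatenate both and map them to one shared representation.
    return sigmoid(np.concatenate([h_text, h_vis]) @ W_shared)

shared = fuse(rng.random(d_text), rng.random(d_vis))  # one fused vector
```

Each level re-represents the output of the level below it, which is the "multiple levels of representations" the excerpt describes.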
deep learning is mentioned in 4 sentences in this paper.
Socher, Richard and Bauer, John and Manning, Christopher D. and Ng, Andrew Y.
Introduction
… sets of discrete states and recursive deep learning models that jointly learn classifiers and continuous feature representations for variable-sized inputs.
Introduction
Deep Learning and Recursive Deep Learning Early attempts at using neural networks to describe phrases include Elman (1991), who used recurrent neural networks to create representations of sentences from a simple toy grammar and to analyze the linguistic expressiveness of the resulting representations.
Introduction
The idea of untying has also been successfully used in deep learning applied to vision (Le et al., 2010).
deep learning is mentioned in 3 sentences in this paper.
Ji, Yangfeng and Eisenstein, Jacob
Conclusion
Deep learning approaches typically apply a nonlinear transformation such as the sigmoid function (Bengio et al., 2013).
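The sigmoid nonlinearity the excerpt mentions is the logistic function σ(z) = 1 / (1 + e⁻ᶻ). A small sketch, written in the numerically stable two-branch form (an implementation detail assumed here, not something the excerpt specifies):

```python
import math

def sigmoid(z):
    # Numerically stable logistic function: maps any real z into (0, 1).
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)       # safe: exp of a negative number cannot overflow
    return e / (1.0 + e)

vals = [sigmoid(z) for z in (-1000.0, 0.0, 1000.0)]  # saturates at 0 and 1
```

The two branches avoid computing `exp` of a large positive argument, which would overflow a float.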
Large-Margin Learning Framework
This idea is similar to mini-batch learning, which has been used in large-scale SVM problems (Nelakanti et al., 2013) and deep learning models (Le et al., 2011).
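Mini-batch learning, as referenced above, updates the parameters from a small random subset of examples per step rather than one example or the full dataset. A sketch on a toy least-squares problem, with all data and hyperparameters as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy least-squares problem: y = X @ w_true + noise.
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = X @ w_true + 0.01 * rng.normal(size=200)

# Mini-batch learning: each update uses a small random batch, trading a
# little gradient noise for far cheaper updates than full-batch training.
w = np.zeros(5)
batch_size, lr = 20, 0.1
for epoch in range(50):
    order = rng.permutation(len(X))          # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w -= lr * grad

max_err = float(np.max(np.abs(w - w_true)))  # recovered weights vs. truth
```

The batch size controls the trade-off: larger batches give smoother gradients, smaller ones give more updates per pass over the data.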
Related Work
Also called Deep Learning, such approaches have recently been applied in a number of NLP tasks (Collobert et al., 2011; Socher et al., 2012).
deep learning is mentioned in 3 sentences in this paper.
Lu, Shixiang and Chen, Zhenbiao and Xu, Bo
Conclusions
Compared with the original features, DNN (DAE and HCDAE) features are learned from nonlinear combinations of the original features; they strongly capture high-order correlations between the activities of the original features, and we believe this deep learning paradigm induces the original features to further reach their potential for SMT.
Introduction
In this paper, we strive to effectively address the above two shortcomings, and systematically explore the possibility of learning new features using deep (multilayer) neural networks (DNN, usually referred to as Deep Learning) for SMT.
Introduction
DNN features are learned from nonlinear combinations of the input original features; they strongly capture high-order correlations between the activities of the original features, and we believe this deep learning paradigm induces the original features to further reach their potential for SMT.
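The DAE features the excerpts mention come from a denoising auto-encoder: the input features are randomly corrupted and the network learns to reconstruct the clean version, so the hidden activations become a new, nonlinear feature set. A minimal sketch on stand-in data, not the indexed paper's model; sizes, corruption rate, and learning rate are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical stand-in for the original features: 50 samples, 6 features.
F = rng.random((50, 6))

# Denoising auto-encoder (DAE): corrupt the input, learn to reconstruct the
# clean features; the hidden activations become the new learned features.
n_in, n_hid = 6, 4
W = rng.normal(0.0, 0.1, (n_in, n_hid))
V = rng.normal(0.0, 0.1, (n_hid, n_in))
b_h, b_o = np.zeros(n_hid), np.zeros(n_in)

lr = 0.5
for _ in range(3000):
    Fc = F * (rng.random(F.shape) > 0.2)  # randomly zero ~20% of the inputs
    H = sigmoid(Fc @ W + b_h)             # encode the corrupted features
    R = sigmoid(H @ V + b_o)              # reconstruct the clean features
    dR = (R - F) * R * (1 - R)
    dH = (dR @ V.T) * H * (1 - H)
    V -= lr * H.T @ dR / len(F)
    W -= lr * Fc.T @ dH / len(F)
    b_o -= lr * dR.mean(axis=0)
    b_h -= lr * dH.mean(axis=0)

dae_features = sigmoid(F @ W + b_h)       # nonlinear combinations of F
```

Each row of `dae_features` is a learned feature vector that could be appended to the original features, which is the role the excerpts describe for DNN features in SMT.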
deep learning is mentioned in 3 sentences in this paper.