Index of papers in Proc. ACL 2013 that mention
  • neural network
Boros, Tiberiu and Ion, Radu and Tufis, Dan
Abstract
In this paper we present an alternative method to Tiered Tagging, based on local optimizations with Neural Networks, and we show how, by properly encoding the input sequence in a general Neural Network architecture, we achieve results similar to the Tiered Tagging methodology, significantly faster and without requiring the extensive linguistic knowledge implied by the previously mentioned method.
Abstract
In this article, we propose an alternative solution based on local optimizations with feed-forward neural networks.
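As a rough illustration of what such local optimization looks like, here is a minimal feed-forward tagger sketch: one encoded context window in, a distribution over a large tagset out, with the highest-scoring tag chosen per position. The encoding scheme, layer sizes, and tagset size are assumptions for illustration, not the paper's actual architecture.

```python
# Illustrative feed-forward tagger: scores the tag of one word from a
# fixed-size encoding of its context (sizes are assumed, not the paper's).
import numpy as np

rng = np.random.default_rng(0)

n_features = 50   # size of the encoded input window (assumed)
n_hidden = 30     # hidden layer size (assumed)
n_tags = 600      # large morphosyntactic tagset (assumed)

W1 = rng.normal(scale=0.1, size=(n_hidden, n_features))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_tags, n_hidden))
b2 = np.zeros(n_tags)

def tag_scores(x):
    """One forward pass: encoded context in, distribution over tags out."""
    h = np.tanh(W1 @ x + b1)
    z = W2 @ h + b2
    e = np.exp(z - z.max())           # numerically stable softmax
    return e / e.sum()

x = rng.normal(size=n_features)       # stand-in for an encoded input window
print(int(np.argmax(tag_scores(x))))  # locally optimal tag for this position
```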
Abstract
Large tagset part-of-speech tagging with feed-forward neural networks
neural network is mentioned in 18 sentences in this paper.
Liu, Lemao and Watanabe, Taro and Sumita, Eiichiro and Zhao, Tiejun
Abstract
A neural network is a reasonable method to address these pitfalls.
Abstract
However, modeling SMT with a neural network is not trivial, especially when taking decoding efficiency into consideration.
Abstract
In this paper, we propose a variant of a neural network, i.e.
Introduction
A neural network (Bishop, 1995) is a reasonable method to overcome the above shortcomings.
Introduction
In the search procedure, frequent computation of the model score is needed for the search heuristic function, which challenges decoding efficiency for the neural network based translation model.
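To make the efficiency concern concrete, the sketch below uses a generic feed-forward scorer (not the authors' model): a beam decoder repeats the full forward pass for every candidate hypothesis extension, which is what makes naive neural scoring expensive inside search.

```python
# Illustrative sketch of per-hypothesis neural scoring cost in decoding.
# A generic scorer, not the paper's model; sizes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid = 100, 50
W1 = rng.normal(scale=0.1, size=(d_hid, d_in))
w2 = rng.normal(scale=0.1, size=d_hid)

def model_score(x):
    """One full forward pass per hypothesis extension."""
    return float(w2 @ np.tanh(W1 @ x))

# A beam decoder would call model_score once per candidate extension,
# typically millions of times over a test set:
candidates = [rng.normal(size=d_in) for _ in range(5)]
best = max(candidates, key=model_score)
```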
Introduction
In this paper, we propose a variant of neural networks, i.e.
neural network is mentioned in 34 sentences in this paper.
Socher, Richard and Bauer, John and Manning, Christopher D. and Ng, Andrew Y.
Abstract
Instead, we introduce a Compositional Vector Grammar (CVG), which combines PCFGs with a syntactically untied recursive neural network that learns syntactico-semantic, compositional vector representations.
Introduction
The vectors for nonterminals are computed via a new type of recursive neural network which is conditioned on syntactic categories from a PCFG.
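A minimal sketch of that idea, with a toy vector dimension and two illustrative category pairs as assumptions: the matrix used to compose two child vectors is selected by the children's syntactic categories, which is what makes the recursive network "syntactically untied".

```python
# Sketch of a syntactically untied recursive composition: the weight
# matrix combining two child vectors is chosen by their categories.
# Dimension and the category pairs shown are illustrative.
import numpy as np

rng = np.random.default_rng(2)
d = 4  # vector dimension per node (illustrative)

# One composition matrix per (left-category, right-category) pair.
W = {("DT", "NN"): rng.normal(scale=0.1, size=(d, 2 * d)),
     ("VBD", "NP"): rng.normal(scale=0.1, size=(d, 2 * d))}

def compose(left_cat, left_vec, right_cat, right_vec):
    """Parent vector from two children, untied by syntactic category."""
    child = np.concatenate([left_vec, right_vec])
    return np.tanh(W[(left_cat, right_cat)] @ child)

dt, nn = rng.normal(size=d), rng.normal(size=d)
np_vec = compose("DT", dt, "NN", nn)  # vector for the resulting NP node
```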
Introduction
1. CVGs combine the advantages of standard probabilistic context free grammars (PCFG) with those of recursive neural networks (RNNs).
Introduction
This requires the composition function to be extremely powerful, since it has to combine phrases with different syntactic head words, and it is hard to optimize since the parameters form a very deep neural network.
neural network is mentioned in 18 sentences in this paper.
Yang, Nan and Liu, Shujie and Li, Mu and Zhou, Ming and Yu, Nenghai
Abstract
In this paper, we explore a novel bilingual word alignment approach based on DNN (Deep Neural Network), which has been proven to be very effective in various machine learning tasks (Collobert et al., 2011).
DNN structures for NLP
The lookup process is called a lookup layer LT, which is usually the first layer after the input layer in a neural network.
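A minimal sketch of such a lookup layer, with illustrative vocabulary and embedding sizes as assumptions: each word index selects a column of an embedding matrix, and a window of words is concatenated into the input of the next layer.

```python
# Sketch of a lookup layer LT: word indices select columns of an
# embedding table; a window of words becomes one concatenated vector.
# Vocabulary and embedding sizes are illustrative.
import numpy as np

rng = np.random.default_rng(3)
vocab_size, emb_dim = 1000, 8
M = rng.normal(scale=0.1, size=(emb_dim, vocab_size))  # embedding table

def lookup(word_ids):
    """LT: map a window of word indices to one concatenated vector."""
    return M[:, word_ids].T.reshape(-1)  # (len(window) * emb_dim,)

window = [4, 17, 256]   # word indices for a 3-word window (illustrative)
x = lookup(window)      # input to the next layer
print(x.shape)          # (24,)
```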
DNN structures for NLP
Multilayer neural networks are trained with the standard backpropagation algorithm (LeCun, 1985).
DNN structures for NLP
Techniques such as layerwise pre-training (Bengio et al., 2007) and many tricks (LeCun et al., 1998) have been developed to train better neural networks.
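For concreteness, here is a self-contained backpropagation sketch for a two-layer network, matching the standard algorithm cited above; the XOR task, layer sizes, and learning rate are illustrative choices, not drawn from the paper.

```python
# Minimal backpropagation for a two-layer network on XOR (illustrative).
import numpy as np

rng = np.random.default_rng(4)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0., 1., 1., 0.])

W1 = rng.normal(scale=0.5, size=(4, 2)); b1 = np.zeros(4)
w2 = rng.normal(scale=0.5, size=4);      b2 = 0.0
lr = 0.5

for _ in range(5000):
    # forward pass
    h = np.tanh(X @ W1.T + b1)             # (4 examples, 4 hidden)
    p = 1 / (1 + np.exp(-(h @ w2 + b2)))   # sigmoid output
    # backward pass: squared-error gradients, layer by layer
    dz = (p - y) * p * (1 - p)             # output pre-activation grad
    dh = np.outer(dz, w2) * (1 - h ** 2)   # hidden pre-activation grad
    w2 -= lr * dz @ h;     b2 -= lr * dz.sum()
    W1 -= lr * dh.T @ X;   b1 -= lr * dh.sum(0)

print(np.round(p))  # typically converges toward [0, 1, 1, 0]
```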
Introduction
In recent years, research communities have seen a strong resurgence of interest in modeling with deep (multilayer) neural networks.
Introduction
For speech recognition, (Dahl et al., 2012) proposed a context-dependent neural network with a large vocabulary, which achieved a 16.0% relative error reduction.
Introduction
(Collobert et al., 2011) and (Socher et al., 2011) further apply Recursive Neural Networks to address the structural prediction tasks such as tagging and parsing, and (Socher et al., 2012) explores the compositional aspect of word representations.
Related Work
(Seide et al., 2011) and (Dahl et al., 2012) apply a Context-Dependent Deep Neural Network with HMM (CD-DNN-HMM) to the speech recognition task, which significantly outperforms traditional models.
Related Work
(Bengio et al., 2006) proposed to use a multilayer neural network for the language modeling task.
neural network is mentioned in 33 sentences in this paper.
He, Zhengyan and Liu, Shujie and Li, Mu and Zhou, Ming and Zhang, Longkai and Wang, Houfeng
Abstract
We propose a novel entity disambiguation model, based on Deep Neural Network (DNN).
Introduction
Deep neural networks (Hinton et al., 2006; Bengio et al., 2007) are built in a hierarchical manner, and allow us to compare context and entity at some higher level abstraction; while at lower levels, general concepts are shared across entities, resulting in compact models.
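A hedged sketch of that hierarchical comparison, with a generic two-layer encoder standing in for the paper's actual model as an assumption: context and entity share the lower layers and are compared at the higher-level representation.

```python
# Sketch of hierarchical context/entity comparison for disambiguation.
# Generic stand-in architecture, not the paper's exact model.
import numpy as np

rng = np.random.default_rng(5)
d_in, d_hid, d_top = 100, 50, 20
W1 = rng.normal(scale=0.1, size=(d_hid, d_in))   # shared lower layer
W2 = rng.normal(scale=0.1, size=(d_top, d_hid))  # higher abstraction

def encode(v):
    """Two-layer encoder shared by contexts and entities."""
    return np.tanh(W2 @ np.tanh(W1 @ v))

def score(context_vec, entity_vec):
    """Cosine similarity at the top layer."""
    c, e = encode(context_vec), encode(entity_vec)
    return float(c @ e / (np.linalg.norm(c) * np.linalg.norm(e)))

context = rng.normal(size=d_in)
entities = [rng.normal(size=d_in) for _ in range(3)]  # candidate entities
best = max(range(3), key=lambda i: score(context, entities[i]))
```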
Learning Representation for Contextual Document
BTS is a variant of the general backpropagation algorithm for structured neural networks.
neural network is mentioned in 3 sentences in this paper.