Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
Asli Celikyilmaz, Dilek Hakkani-Tur, Gokhan Tur, and Ruhi Sarikaya

Article Structure

Abstract

Finding concepts in natural language utterances is a challenging task, especially given the scarcity of labeled data for learning semantic ambiguity.

Introduction

Semantic tagging is used in natural language understanding (NLU) to recognize words of semantic importance in an utterance, such as entities.

Related Work and Motivation

(I) Semi-Supervised Tagging.

Markov Topic Regression - MTR

3.1 Model and Abstractions

Semi-Supervised Semantic Labeling


Experiments

5.1 Datasets and Tagsets
5.1.1 Semantic Tagging Datasets

Conclusions

We have presented a novel semi-supervised learning approach using a probabilistic clustering method.

Topics

CRF

Appears in 32 sentences as: CRF (32)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. It starts by training supervised Conditional Random Fields (CRF) (Lafferty et al., 2001) on the source training data, which has been semantically tagged.
    Page 1, “Introduction”
  2. Our SSL uses MTR to smooth the semantic tag posteriors on the unlabeled target data (decoded using the CRF model) and later obtains the best tag sequences.
    Page 1, “Introduction”
  3. While retrospective learning iteratively trains CRF models with the automatically annotated target data (explained above), it keeps track of the errors of the previous iterations so as to carry the properties of both the source and target domains.
    Page 2, “Introduction”
  4. We present a retrospective SSL for CRF , in that, the iterative learner keeps track of the errors of the previous iterations so as to carry the properties of both the source and target domains.
    Page 3, “Related Work and Motivation”
  5. We use the word-tag posterior probabilities obtained from a CRF sequence model trained on labeled utterances as features.
    Page 4, “Markov Topic Regression - MTR”
  6. 4.1 Semi Supervised Learning (SSL) with CRF
    Page 5, “Semi-Supervised Semantic Labeling”
  7. They decode unlabeled queries from target domain (t) using a CRF model trained on the POS-labeled newswire data (source domain (0)).
    Page 5, “Semi-Supervised Semantic Labeling”
  8. Since the CRF tagger uses only local features of the input to score tag pairs, they try to capture all of the context with the graph, using additional context features on types.
    Page 5, “Semi-Supervised Semantic Labeling”
  9. Graph-based SSL defines a new CRF objective function:
    Page 5, “Semi-Supervised Semantic Labeling”
  10. (5) is the loss on the labeled data and the ℓ2 regularization on the parameters, Λ^(n), from the nth iteration, the same as in standard CRF.
    Page 5, “Semi-Supervised Semantic Labeling”
  11. 4.2 Retrospective Semi-Supervised CRF
    Page 5, “Semi-Supervised Semantic Labeling”
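
Taken together, excerpts 1-5 and 10-11 describe an iterative self-training loop: train a CRF on the labeled source data, decode the unlabeled target utterances, smooth the word-tag posteriors with MTR, and retrain while keeping track of earlier iterations. Below is a minimal sketch of that loop; train_crf, decode_with_posteriors, and mtr_smooth are hypothetical placeholders standing in for the paper's components, not real APIs.

```python
# Sketch of the retrospective semi-supervised CRF loop described in the
# excerpts above. All helper functions are hypothetical placeholders.

def retrospective_ssl(labeled_src, unlabeled_tgt, n_iters=5, threshold=0.9):
    model = train_crf(labeled_src)                      # supervised CRF on tagged source data
    for n in range(n_iters):
        decoded = decode_with_posteriors(model, unlabeled_tgt)  # per-word tag posteriors
        smoothed = mtr_smooth(decoded)                  # MTR re-estimates ambiguous tags
        # keep only utterances whose 1-best tag sequence is confident enough
        auto_labeled = [u for u in smoothed if u.min_posterior >= threshold]
        model = train_crf(labeled_src + auto_labeled)   # retrain on source + auto-labeled target
    return model
```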


unlabeled data

Appears in 10 sentences as: unlabeled data (10)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. To the best of our knowledge, our work is the first to explore the unlabeled data to iteratively adapt the semantic tagging models for target domains, preserving information from the previous iterations.
    Page 2, “Introduction”
  2. Adapting the source domain using unlabeled data is the key to achieving good performance across domains.
    Page 2, “Related Work and Motivation”
  3. The last term is the loss on unlabeled data from the target domain with a hyper-parameter τ.
    Page 5, “Semi-Supervised Semantic Labeling”
  4. After we decode the unlabeled data , we retrain a new CRF model at each iteration.
    Page 6, “Semi-Supervised Semantic Labeling”
  5. Each iteration makes predictions on the semantic tags of unlabeled data with varying posterior probabilities.
    Page 6, “Semi-Supervised Semantic Labeling”
  6. The last term ensures that the predictions of the current model have the same sign as the predictions of the previous models (using labeled and unlabeled data), denoted by a maximum-margin hinge weight, h_n(u_j), computed from the previous iterations' posteriors p_n(s*_j | u_j; Λ^(n)).
    Page 6, “Semi-Supervised Semantic Labeling”
  7. As for unlabeled data, we crawled the web and collected around 100,000 questions that are similar in style and length to the ones in QuestionBank, e.g.
    Page 7, “Experiments”
  8. A CRF model is used to decode the unlabeled data to generate more labeled examples for retraining.
    Page 7, “Experiments”
  9. smooth the semantic tag posteriors of the unlabeled data decoded by the CRF model using Eq.
    Page 8, “Experiments”
  10. Our Bayesian MTR efficiently extracts information from the unlabeled data for the target domain.
    Page 8, “Experiments”
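
Excerpt 6 weights each automatically labeled utterance by how consistently earlier iterations tagged it. The exact formula is garbled in the excerpt, so the sketch below is one plausible reading, assuming the hinge weight is the geometric mean of the previous iterations' 1-best tag-sequence posteriors; the function name and interface are illustrative.

```python
import math

def hinge_weight(posterior_history):
    """posterior_history[i] is iteration i's posterior probability for its
    1-best tag sequence on this utterance. Assumed reading: geometric mean
    of the previous iterations' posteriors, giving a weight in (0, 1]."""
    if not posterior_history:
        return 1.0
    log_sum = sum(math.log(p) for p in posterior_history)
    return math.exp(log_sum / len(posterior_history))

# An utterance tagged confidently and consistently across iterations 1-3
print(hinge_weight([0.95, 0.90, 0.97]))  # ~0.94
```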


topic model

Appears in 10 sentences as: Topic Model (1) topic model (5) topic models (4)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. It can efficiently handle semantic ambiguity by extending standard topic models with two new features.
    Page 1, “Abstract”
  2. Our first contribution is a new probabilistic topic model , Markov Topic Regression (MTR), which uses rich features to capture the degree of association between words and semantic tags.
    Page 1, “Introduction”
  3. Standard topic models , such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003), use a bag-of-words approach, which disregards word order and clusters words together that appear in a similar global context.
    Page 3, “Related Work and Motivation”
  4. Recent topic models consider word sequence information in documents (Griffiths et al., 2005; Moon et al., 2010).
    Page 3, “Related Work and Motivation”
  5. Thus, we build a semantically rich topic model , MTR, using word context features as side information.
    Page 3, “Related Work and Motivation”
  6. Thus, to the best of our knowledge, MTR is the first topic model to incorporate word features while considering the sequence of words.
    Page 3, “Related Work and Motivation”
  7. Since MTR provides a mixture of properties adapted from earlier models, we present performance benchmarks on tag clustering using: (i) LDA; (ii) the Hidden Markov Topic Model, HMTM (Gruber et al., 2005); and (iii) w-LDA (Petterson et al., 2010), which uses word features as priors in LDA.
    Page 7, “Experiments”
  8. Each topic model uses Gibbs sampling for inference and parameter learning.
    Page 7, “Experiments”
  9. For a fair comparison, each benchmark topic model is provided with prior information on word-semantic tag distributions based on the labeled training data; hence, each of the K latent topics is assigned to one of the K semantic tags at the beginning of Gibbs sampling.
    Page 8, “Experiments”
  10. The performance of the four topic models is reported in Figure 2.
    Page 8, “Experiments”
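
Excerpt 9 seeds each benchmark topic model so that latent topic k corresponds to semantic tag k, using word-tag statistics from the labeled data. Below is a minimal sketch of that initialization, assuming simple count-based seeding; the function and variable names are illustrative, not from the paper.

```python
from collections import defaultdict

def seed_topics_from_tags(labeled_utterances, tag_index):
    """Tie latent topic k to semantic tag k and initialize word-topic counts
    from the labeled word/tag sequences."""
    word_topic_counts = defaultdict(lambda: defaultdict(int))
    for words, tags in labeled_utterances:           # parallel word and tag sequences
        for w, t in zip(words, tags):
            word_topic_counts[w][tag_index[t]] += 1  # topic index = tag index
    return word_topic_counts

# Usage with a toy tag inventory
tag_index = {"O": 0, "movie-name": 1, "actor-name": 2}
counts = seed_topics_from_tags([(["play", "argo"], ["O", "movie-name"])], tag_index)
```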


LDA

Appears in 9 sentences as: LDA (10)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. Standard topic models, such as Latent Dirichlet Allocation ( LDA ) (Blei et al., 2003), use a bag-of-words approach, which disregards word order and clusters words together that appear in a similar global context.
    Page 3, “Related Work and Motivation”
  2. In LDA , common words tend to dominate all topics causing related words to end up in different topics.
    Page 3, “Related Work and Motivation”
  3. In (Petterson et al., 2010), the vector-based features of words are used as prior information in LDA so that words that are synonyms end up in the same topic.
    Page 3, “Related Work and Motivation”
  4. LDA assumes that the latent topics of documents are sampled independently from one of K topics.
    Page 3, “Markov Topic Regression - MTR”
  5. Since MTR provides a mixture of properties adapted from earlier models, we present performance benchmarks on tag clustering using: (i) LDA; (ii) the Hidden Markov Topic Model, HMTM (Gruber et al., 2005); and (iii) w-LDA (Petterson et al., 2010), which uses word features as priors in LDA.
    Page 7, “Experiments”
  6. (Figure 2 legend and axis residue) LDA, w-LDA, HMTM, MTR
    Page 8, “Experiments”
  7. models: LDA, HMTM, w-LDA.
    Page 8, “Experiments”
  8. LDA shows the worst performance, even though some supervision is provided by way of labeled semantic tags.
    Page 8, “Experiments”
  9. Although w-LDA improves semantic clustering performance over LDA, the fact that it does not have Markov properties makes it fall behind MTR.
    Page 8, “Experiments”


graph-based

Appears in 8 sentences as: Graph-based (1) graph-based (7)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. Recent adaptation methods for SSL use expectation minimization (Daumé III, 2010), graph-based learning (Chapelle et al., 2006; Zhu, 2005), etc.
    Page 2, “Related Work and Motivation”
  2. In (Subramanya et al., 2010) an efficient iterative SSL method is described for syntactic tagging, using graph-based learning to smooth POS tag posteriors.
    Page 2, “Related Work and Motivation”
  3. The unlabeled POS tag posteriors are then smoothed using a graph-based learning algorithm.
    Page 5, “Semi-Supervised Semantic Labeling”
  4. Graph-based SSL defines a new CRF objective function:
    Page 5, “Semi-Supervised Semantic Labeling”
  5. smoothing model, instead of a graph-based model, as follows:
    Page 6, “Semi-Supervised Semantic Labeling”
  6. It should also be noted that with MTR, the R-SSL learns the word-tag relations by using features that describe the words in context, eliminating the need for the additional type representation of the graph-based model.
    Page 6, “Semi-Supervised Semantic Labeling”
  7. * SSL-Graph: An SSL model presented in (Subramanya et al., 2010) that uses graph-based learning as a posterior tag smoother for the CRF model using Eq.
    Page 7, “Experiments”
  8. For graph-based learning, we implemented the algorithm presented in (Subramanya et al., 2010) and used the same hyper-parameters and features.
    Page 8, “Experiments”
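
Excerpts 2-4, 7, and 8 refer to the graph-based smoother of Subramanya et al. (2010), where CRF tag posteriors are propagated over a similarity graph of n-gram types. The sketch below is a simplified label-propagation update under that reading; the graph construction, weights, and mixing coefficient are assumptions, not the exact published algorithm.

```python
import numpy as np

def propagate_posteriors(W, Q, n_steps=10, mu=0.5):
    """W: (V, V) symmetric similarity matrix over n-gram types.
    Q: (V, K) initial tag posteriors (e.g., averaged CRF marginals per type).
    Each step mixes a type's own initial posterior with its neighbors'."""
    P = Q.copy()
    deg = W.sum(axis=1, keepdims=True) + 1e-12
    for _ in range(n_steps):
        neighbor_avg = (W @ P) / deg           # weighted average over graph neighbors
        P = (1 - mu) * Q + mu * neighbor_avg   # stay close to the initial posteriors
        P /= P.sum(axis=1, keepdims=True)      # renormalize rows to distributions
    return P
```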


labeled data

Appears in 8 sentences as: labeled data (8)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. Finding concepts in natural language utterances is a challenging task, especially given the scarcity of labeled data for learning semantic ambiguity.
    Page 1, “Abstract”
  2. Thus, each latent semantic class corresponds to one of the semantic tags found in labeled data .
    Page 1, “Introduction”
  3. We assume a fixed K topics corresponding to semantic tags of labeled data .
    Page 3, “Markov Topic Regression - MTR”
  4. K latent topics to the K semantic tags of our labeled data .
    Page 5, “Markov Topic Regression - MTR”
  5. labeled data, based on the log-linear model in Eq.
    Page 5, “Markov Topic Regression - MTR”
  6. (5) is the loss on the labeled data and the ℓ2 regularization on the parameters, Λ^(n), from the nth iteration, the same as in standard CRF.
    Page 5, “Semi-Supervised Semantic Labeling”
  7. The labeled rows m^l of the vocabulary matrix, m = {m^l, m^u}, contain only {0,1} values, indicating the word's observed semantic tags in the labeled data.
    Page 6, “Semi-Supervised Semantic Labeling”
  8. First a supervised learning algorithm is used to build a CRF model based on the labeled data .
    Page 7, “Experiments”
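
Excerpt 7 describes a vocabulary-by-tag matrix whose labeled rows are binary indicators of word-tag pairs observed in the labeled data (the unlabeled rows are filled from decoded posteriors). A minimal sketch of building the labeled block; the index structures and names are illustrative.

```python
import numpy as np

def build_labeled_rows(labeled_utterances, vocab_index, tag_index):
    """m_l[w, k] = 1 iff word w was observed with semantic tag k in the
    labeled data; all other entries remain 0."""
    m_l = np.zeros((len(vocab_index), len(tag_index)), dtype=np.int8)
    for words, tags in labeled_utterances:
        for w, t in zip(words, tags):
            m_l[vocab_index[w], tag_index[t]] = 1
    return m_l
```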


Gibbs sampling

Appears in 7 sentences as: Gibbs sampler (1) Gibbs sampling (7)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. We use blocked Gibbs sampling, in which the topic assignments, s, and the hyper-parameters, β, are alternately sampled at each Gibbs sampling lag period g, given all other variables.
    Page 4, “Markov Topic Regression - MTR”
  2. At each lag period g of the Gibbs sampling, K
    Page 4, “Markov Topic Regression - MTR”
  3. At the start of the Gibbs sampling , we designate the
    Page 4, “Markov Topic Regression - MTR”
  4. We use collapsed Gibbs sampling to reduce random components and model the posterior distribution by obtaining samples of the latent assignments drawn from this distribution.
    Page 5, “Markov Topic Regression - MTR”
  5. Each topic model uses Gibbs sampling for inference and parameter learning.
    Page 7, “Experiments”
  6. For testing we iterated the Gibbs sampler using the trained model for 10 iterations on the testing data.
    Page 7, “Experiments”
  7. For a fair comparison, each benchmark topic model is provided with prior information on word-semantic tag distributions based on the labeled training data; hence, each of the K latent topics is assigned to one of the K semantic tags at the beginning of Gibbs sampling.
    Page 8, “Experiments”
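
The excerpts above alternate between resampling topic (tag) assignments and re-estimating hyper-parameters at fixed lag periods. Below is a minimal collapsed-Gibbs sketch of that control flow for a plain LDA-style model; MTR's Markov dependencies and feature-based priors are omitted, and all constants are illustrative.

```python
import numpy as np

def collapsed_gibbs(docs, K, V, n_sweeps=200, lag=20, alpha=0.1, beta=0.01):
    """docs: list of word-id lists. Resample each topic assignment from its
    conditional given all other assignments (plain LDA conditional)."""
    z = [np.random.randint(K, size=len(d)) for d in docs]
    ndk = np.zeros((len(docs), K)); nkw = np.zeros((K, V)); nk = np.zeros(K)
    for j, d in enumerate(docs):
        for i, w in enumerate(d):
            ndk[j, z[j][i]] += 1; nkw[z[j][i], w] += 1; nk[z[j][i]] += 1
    for sweep in range(n_sweeps):
        for j, d in enumerate(docs):
            for i, w in enumerate(d):
                k = z[j][i]
                ndk[j, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[j] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = np.random.choice(K, p=p / p.sum())
                z[j][i] = k
                ndk[j, k] += 1; nkw[k, w] += 1; nk[k] += 1
        if (sweep + 1) % lag == 0:
            pass  # lag period: hyper-parameters would be re-estimated here
    return z, ndk, nkw
```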


semi-supervised

Appears in 7 sentences as: Semi-Supervised (3) semi-supervised (4)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. To deal with these issues, we describe an efficient semi-supervised learning (SSL) approach which has two components: (i) Markov Topic Regression is a new probabilistic model to cluster words into semantic tags (concepts).
    Page 1, “Abstract”
  2. Our new SSL approach improves semantic tagging performance by 3% absolute over the baseline models, and also compares favorably on semi-supervised syntactic tagging.
    Page 1, “Abstract”
  3. To deal with these issues, we present a new semi-supervised learning (SSL) approach, which mainly has two components.
    Page 1, “Introduction”
  4. (I) Semi-Supervised Tagging.
    Page 2, “Related Work and Motivation”
  5. (Wang et al., 2009; Li et al., 2009; Li, 2010; Liu et al., 2011) investigate web query tagging using semi-supervised sequence models.
    Page 2, “Related Work and Motivation”
  6. 4.2 Retrospective Semi-Supervised CRF
    Page 5, “Semi-Supervised Semantic Labeling”
  7. Algorithm 2 Retrospective Semi-Supervised CRF. Input: labeled L^l and unlabeled L^u data.
    Page 6, “Semi-Supervised Semantic Labeling”


F-measure

Appears in 5 sentences as: F-Measure (1) F-measure (4)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. (Figure 2 axis label) F-Measure
    Page 8, “Experiments”
  2. Figure 2: F-measure for semantic clustering performance.
    Page 8, “Experiments”
  3. As expected, we see a drop in F-measure on all models on descriptive tags.
    Page 8, “Experiments”
  4. Table 2: Domain Adaptation performance in F-measure on Semantic Tagging on the Movie target domain and POS tagging on QBank (QuestionBank).
    Page 9, “Experiments”
  5. Table 3: Classification performance in F-measure for semantically ambiguous words on the most frequently confused descriptive tags in the movie domain.
    Page 9, “Conclusions”
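
For reference, the F-measure reported in these excerpts is the harmonic mean of precision and recall over predicted tags or clusters; a minimal sketch follows (the counts in the usage line are made up).

```python
def f_measure(tp, fp, fn, beta=1.0):
    """Harmonic mean of precision and recall (F1 when beta=1)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_measure(tp=80, fp=20, fn=10))  # ~0.842
```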


POS tag

Appears in 5 sentences as: POS tag (3) POS tagging (2)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. In (Subramanya et al., 2010) an efficient iterative SSL method is described for syntactic tagging, using graph-based learning to smooth POS tag posteriors.
    Page 2, “Related Work and Motivation”
  2. In (Subramanya et al., 2010), a new SSL method is described for adapting syntactic POS tagging of sentences in newswire articles along with search queries to a target domain of natural language (NL) questions.
    Page 5, “Semi-Supervised Semantic Labeling”
  3. The unlabeled POS tag posteriors are then smoothed using a graph-based learning algorithm.
    Page 5, “Semi-Supervised Semantic Labeling”
  4. Later, using Viterbi decoding, they select the 1-best POS tag sequence.
    Page 5, “Semi-Supervised Semantic Labeling”
  5. Table 2: Domain Adaptation performance in F-measure on Semantic Tagging on the Movie target domain and POS tagging on QBank (QuestionBank).
    Page 9, “Experiments”


log-linear

Appears in 4 sentences as: log-linear (4)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. log-linear models with parameters, Λ_k ∈ R^M, is
    Page 4, “Markov Topic Regression - MTR”
  2. trained to predict β^(l)_k, for each word w_l of a tag s_k: β^(l)_k = exp(f(w_l; Λ_k)) (2), where the log-linear function f is: f(w_l; Λ_k) = x_l^T Λ_k (3)
    Page 4, “Markov Topic Regression - MTR”
  3. labeled data, based on the log-linear model in Eq.
    Page 5, “Markov Topic Regression - MTR”
  4. The x is used as the input matrix of the kth log-linear model (corresponding to the kth semantic tag (topic)) to infer the β hyper-parameter of MTR in Eq.
    Page 6, “Semi-Supervised Semantic Labeling”
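
Excerpts 1, 2, and 4 describe predicting the word-topic Dirichlet prior β from word features through a log-linear function, in the spirit of Eqs. (2)-(3) as reconstructed above. A minimal sketch under that reading; the shapes and names are illustrative assumptions.

```python
import numpy as np

def beta_prior(X, Lam):
    """X: (V, M) word-feature matrix, one row of M features per vocabulary word.
    Lam: (M, K) log-linear parameters, one column per semantic tag (topic).
    Returns a (V, K) matrix of positive Dirichlet priors beta[l, k] = exp(x_l . Lam_k)."""
    return np.exp(X @ Lam)

# Example: 5 words, 3 features, 2 tags; zero parameters give a uniform prior of 1.0
X = np.random.rand(5, 3)
Lam = np.zeros((3, 2))
print(beta_prior(X, Lam))
```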


natural language

Appears in 4 sentences as: natural language (4)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. Finding concepts in natural language utterances is a challenging task, especially given the scarcity of labeled data for learning semantic ambiguity.
    Page 1, “Abstract”
  2. Semantic tagging is used in natural language understanding (NLU) to recognize words of semantic importance in an utterance, such as entities.
    Page 1, “Introduction”
  3. Our SSL approach uses a probabilistic clustering method tailored for tagging natural language utterances.
    Page 2, “Introduction”
  4. In (Subramanya et al., 2010), a new SSL method is described for adapting syntactic POS tagging of sentences in newswire articles along with search queries to a target domain of natural language (NL) questions.
    Page 5, “Semi-Supervised Semantic Labeling”


Domain Adaptation

Appears in 4 sentences as: Domain Adaptation (2) domain adaptation (2)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. This causes data-mismatch issues and hence provides a perfect testbed for a domain adaptation task.
    Page 7, “Experiments”
  2. To evaluate the domain adaptation (DA) approach and to compare with results reported by (Subramanya et al., 2010), we use the first and second half of QuestionBank (Judge et al., 2006) as our development and test sets (target).
    Page 7, “Experiments”
  3. 5.3.2 Experiment 2: Domain Adaptation Task.
    Page 8, “Experiments”
  4. Table 2: Domain Adaptation performance in F-measure on Semantic Tagging on the Movie target domain and POS tagging on QBank (QuestionBank).
    Page 9, “Experiments”


context information

Appears in 4 sentences as: context information (3) contextual information (1)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. To include contextual information , we add binary features for all possible tags.
    Page 8, “Experiments”
  2. The results indicate that incorporating context information with MTR is an effective option for identifying semantic ambiguity.
    Page 9, “Experiments”
  3. Our results show that encoding priors on words and context information contributes significantly to the performance of semantic clustering.
    Page 9, “Conclusions”
  4. Rather than using single-turn utterances, we hope to utilize the context information, e.g., information from previous turns, for improving the performance of the semantic tagging of the current turns.
    Page 9, “Conclusions”
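
Excerpt 1 adds binary features for all possible tags in a word's context. A minimal sketch of that kind of feature extraction; the tag inventory, window size, and feature-naming scheme are illustrative assumptions.

```python
def context_tag_features(pred_tags, i, tag_set, window=1):
    """Binary indicators of which tags were predicted for the neighbors of
    word i within the given window."""
    feats = {}
    for offset in range(-window, window + 1):
        if offset == 0 or not 0 <= i + offset < len(pred_tags):
            continue
        for tag in sorted(tag_set):
            feats[f"ctx[{offset}]={tag}"] = int(pred_tags[i + offset] == tag)
    return feats

# Example: features for the middle word of a three-word utterance
print(context_tag_features(["O", "movie-name", "O"], 1, {"O", "movie-name"}))
```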


language model

Appears in 3 sentences as: Language Model (1) language model (2)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. Language Model Prior (η_w): Probabilities on word transitions denoted as η_w = p(w_i = v | w_{i-1}).
    Page 4, “Markov Topic Regression - MTR”
  2. We built a language model using SRILM (Stolcke, 2002) on domain-specific sources such as top wiki pages and blogs on online movie reviews, etc., to obtain the probabilities of domain-specific n-grams, up to 3-grams.
    Page 4, “Markov Topic Regression - MTR”
  3. (1), we assume that the prior on the semantic tags, η_s, is more indicative of the decision for sampling a w_i from a new tag compared to the language model posteriors on word sequences, η_w.
    Page 4, “Markov Topic Regression - MTR”
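
Excerpt 1 defines the language-model prior as a word-transition probability η_w = p(w_i = v | w_{i-1}), which the paper estimates with SRILM on in-domain text. The count-based bigram below, with add-one smoothing as a stand-in for SRILM's discounting, is only a minimal illustration of the quantity being estimated.

```python
from collections import Counter

def bigram_prior(sentences):
    """Estimate p(w_i = v | w_{i-1}) from tokenized sentences with add-one
    smoothing (a simplified stand-in for SRILM's smoothing)."""
    unigrams, bigrams = Counter(), Counter()
    vocab = set()
    for toks in sentences:
        vocab.update(toks)
        toks = ["<s>"] + toks
        unigrams.update(toks[:-1])
        bigrams.update(zip(toks[:-1], toks[1:]))
    V = len(vocab) + 1  # include the <s> symbol
    return lambda prev, v: (bigrams[(prev, v)] + 1) / (unigrams[prev] + V)

p = bigram_prior([["play", "the", "movie"], ["play", "the", "trailer"]])
print(p("the", "movie"))  # higher than an unseen transition such as p("the", "play")
```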


latent semantic

Appears in 3 sentences as: latent semantic (3)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. Thus, each latent semantic class corresponds to one of the semantic tags found in labeled data.
    Page 1, “Introduction”
  2. (I) Semantic Tags (s_i): Each word w_i of a given utterance with N_j words, u_j = {w_i}_{i=1..N_j} ∈ U, j = 1,...,|U|, from a set of utterances U, is associated with a latent semantic tag (state) variable s_i ∈ S, where S is the set of semantic tags.
    Page 3, “Markov Topic Regression - MTR”
  3. latent semantic tag
    Page 4, “Markov Topic Regression - MTR”


bag-of-words

Appears in 3 sentences as: bag-of-words (3)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. Second, by going beyond a bag-of-words approach, it takes into account the inherent sequential nature of utterances to learn semantic classes based on context.
    Page 1, “Abstract”
  2. Standard topic models, such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003), use a bag-of-words approach, which disregards word order and clusters words together that appear in a similar global context.
    Page 3, “Related Work and Motivation”
  3. Similarly, if no Markov properties are used (bag-of-words), MTR reduces to w-LDA.
    Page 7, “Experiments”


log-linear model

Appears in 3 sentences as: log-linear model (2) log-linear models (1)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. log-linear models with parameters, Λ_k ∈ R^M, is
    Page 4, “Markov Topic Regression - MTR”
  2. labeled data, based on the log-linear model in Eq.
    Page 5, “Markov Topic Regression - MTR”
  3. The x is used as the input matrix of the kth log-linear model (corresponding to the kth semantic tag (topic)) to infer the β hyper-parameter of MTR in Eq.
    Page 6, “Semi-Supervised Semantic Labeling”


model trained

Appears in 3 sentences as: model trained (3)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. We use the word-tag posterior probabilities obtained from a CRF sequence model trained on labeled utterances as features.
    Page 4, “Markov Topic Regression - MTR”
  2. They decode unlabeled queries from target domain (t) using a CRF model trained on the POS-labeled newswire data (source domain (0)).
    Page 5, “Semi-Supervised Semantic Labeling”
  3. They use a small value for τ to enable the new model to be as close as possible to the initial model trained on source data.
    Page 5, “Semi-Supervised Semantic Labeling”


n-grams

Appears in 3 sentences as: n-grams (3)
In Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression
  1. We built a language model using SRILM (Stolcke, 2002) on domain-specific sources such as top wiki pages and blogs on online movie reviews, etc., to obtain the probabilities of domain-specific n-grams, up to 3-grams.
    Page 4, “Markov Topic Regression - MTR”
  2. In addition, we extracted web n-grams and entity lists (see §3) from movie related web sites, and online blogs and reviews.
    Page 7, “Experiments”
  3. We extract prior distributions for entities and n-grams to calculate the entity list η and word-tag β priors (see §3.1).
    Page 7, “Experiments”
