BRAINSUP: Brainstorming Support for Creative Sentence Generation
Özbal, Gözde and Pighin, Daniele and Strapparava, Carlo

Article Structure

Abstract

We present BRAINSUP, an extensible framework for the generation of creative sentences in which users are able to force several words to appear in the sentences and to control the generation process across several semantic dimensions, namely emotions, colors, domain relatedness and phonetic properties.

Introduction

A variety of real-world scenarios involve talented and knowledgeable people in a time-consuming process to write creative, original sentences generated according to well-defined requisites.

Related work

Research in creative language generation has bloomed in recent years.

Architecture of BRAINSUP

To effectively support the creative process with useful suggestions, we must be able to generate sentences conforming to the user's needs.

Evaluation

We evaluated our model on a creative sentence generation task.

Conclusion

We have presented BRAINSUP, a novel system for creative sentence generation that allows users to control many aspects of the creativity process, from the presence of specific target words in the output, to the selection of a target domain, and to the injection of phonetic and semantic properties in the generated sentences.

Topics

generation process

Appears in 9 sentences as: generation process (8) generative process (1)
In BRAINSUP: Brainstorming Support for Creative Sentence Generation
  1. We present BRAINSUP, an extensible framework for the generation of creative sentences in which users are able to force several words to appear in the sentences and to control the generation process across several semantic dimensions, namely emotions, colors, domain relatedness and phonetic properties.
    Page 1, “Abstract”
  2. More formally, the user input is a tuple U = ⟨t, d, c, e, p, w⟩, where t is the set of target words, d is a set of words defining the target domain, c and e are, respectively, the color and the emotion towards which the user wants to slant the sentence, p represents the desired phonetic features, and w is a set of weights that control the influence of each dimension on the generative process, as detailed in Section 3.3.
    Page 3, “Architecture of BRAINSUP”
  3. The sentence generation process is based on morpho-syntactic patterns which we automatically discover from a corpus of dependency parsed sentences P.
    Page 3, “Architecture of BRAINSUP”
  4. Algorithm 1 provides a high-level description of the creative sentence generation process .
    Page 3, “Architecture of BRAINSUP”
  5. The choice of the corpus from which the patterns are extracted constitutes the first element of the creative sentence generation process , as different choices will generate sentences with different styles.
    Page 3, “Architecture of BRAINSUP”
  6. We have devised a set of feature functions that account for different aspects of the creative sentence generation process .
    Page 5, “Architecture of BRAINSUP”
  7. By changing the weight w of the feature functions in U, users can control the extent to which each creativity component will affect the sentence generation process , and tune the output of the system to better match their needs.
    Page 5, “Architecture of BRAINSUP”
  8. We did so to simulate the brainstorming phase behind the slogan generation process , where copywriters start with a set of relevant keywords to come up with a catchy slogan.
    Page 7, “Evaluation”
  9. Nonetheless, it is very encouraging to observe that the generation process does not deteriorate the positive impact of the input keywords.
    Page 8, “Evaluation”
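The user-specification tuple U = ⟨t, d, c, e, p, w⟩ quoted above can be pictured as a simple record. This is an illustrative sketch only; all field names and the example values are invented, not taken from the BRAINSUP implementation.

```python
from dataclasses import dataclass

# Hypothetical container mirroring the user specification
# U = (t, d, c, e, p, w); field names are illustrative.
@dataclass
class UserSpec:
    target_words: set   # t: words that must appear in the output sentence
    domain_words: set   # d: words defining the target domain
    color: str          # c: color to slant the sentence towards
    emotion: str        # e: emotion to slant the sentence towards
    phonetic: str       # p: desired phonetic features (e.g. alliteration)
    weights: dict       # w: per-dimension weights on the scoring functions

spec = UserSpec({"sunshine", "juice"}, {"breakfast"}, "yellow", "joy",
                "alliteration", {"domain": 1.0, "emotion": 0.5})
```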

See all papers in Proc. ACL 2013 that mention generation process.

See all papers in Proc. ACL that mention generation process.

Back to top.

semantic relations

Appears in 6 sentences as: semantic relations (4) semantically related (2)
In BRAINSUP: Brainstorming Support for Creative Sentence Generation
  1. (2011) slant existing textual expressions to obtain more positively or negatively valenced versions using WordNet (Miller, 1995) semantic relations and SentiWordNet (Esuli and Sebastiani, 2006) annotations.
    Page 2, “Related work”
  2. Stock and Strapparava (2006) generate acronyms based on lexical substitution via semantic field opposition, rhyme, rhythm and semantic relations .
    Page 2, “Related work”
  3. (2012) attempt to generate novel poems by replacing words in existing poetry with morphologically compatible words that are semantically related to a target domain.
    Page 2, “Related work”
  4. [Yes/No]; 3) Relatedness: is the sentence semantically related to the target domain?
    Page 6, “Evaluation”
  5. In other cases, such as “A sixth calorie may taste an own good” or “A same sunshine is fewer than a juice of day”, more sophisticated reasoning about syntactic and semantic relations in the output might be necessary in order to enforce the generation of sound and grammatical sentences.
    Page 8, “Evaluation”
  6. Concerning the extension of the capabilities of BRAINSUP, we want to include commonsense knowledge and reasoning to profit from more sophisticated semantic relations and to inject humor on demand.
    Page 9, “Conclusion”

lexicalized

Appears in 5 sentences as: lexicalizations (2) lexicalized (3)
In BRAINSUP: Brainstorming Support for Creative Sentence Generation
  1. A beam search in the space of all possible lexicalizations of a syntactic pattern promotes the words with the highest likelihood of satisfying the user specification.
    Page 3, “Architecture of BRAINSUP”
  2. With the compatible patterns selected, we can initiate a beam search in the space of all possible lexicalizations of the patterns, i.e., the space of all sentences that can be generated by respecting the syntactic constraints encoded by each pattern.
    Page 4, “Architecture of BRAINSUP”
  3. Figure 2: A partially lexicalized sentence with a highlighted empty slot marked with X.
    Page 4, “Architecture of BRAINSUP”
  4. Each partially lexicalized solution is scored by a battery of scoring functions that compete to generate creative sentences respecting the user specification U, as explained in Section 3.3.
    Page 4, “Architecture of BRAINSUP”
  5. The most promising solutions are extended by filling another slot, until completely lexicalized sentences, i.e., sentences without empty slots, are generated.
    Page 4, “Architecture of BRAINSUP”
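The slot-filling procedure described above can be sketched as a generic beam search: each state is a partially lexicalized pattern, successors fill one more empty slot, and only the top-scoring partial solutions survive each step. This is a minimal illustration under invented data, not the actual BRAINSUP implementation; the candidate generator and scorer are stand-ins for the constraint-based candidate selection and the battery of scoring functions.

```python
import heapq

def beam_fill(pattern, candidates, score, beam_width=3):
    """Fill the empty slots (None) of a syntactic pattern by beam search.

    pattern    : list of words, with None marking empty slots
    candidates : function slot_index -> iterable of candidate fillers
    score      : function partial_solution -> float (higher is better)
    """
    beam = [pattern]
    while any(None in p for p in beam):
        expanded = []
        for p in beam:
            slot = p.index(None)                 # next empty slot to fill
            for word in candidates(slot):
                expanded.append(p[:slot] + [word] + p[slot + 1:])
        # keep only the most promising partial lexicalizations
        beam = heapq.nlargest(beam_width, expanded, key=score)
    return beam

# Toy usage: two empty slots; the stand-in scorer prefers longer fillers.
filled = beam_fill([None, "tastes", None],
                   lambda i: ["sunshine", "sun"] if i == 0 else ["sweet", "good"],
                   score=lambda p: sum(len(w) for w in p if w))
```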

treebanks

Appears in 5 sentences as: treebank (2) treebanks (3)
In BRAINSUP: Brainstorming Support for Creative Sentence Generation
  1. These constraints are learned from relation-head-modifier co-occurrence counts estimated from a dependency treebank L.
    Page 3, “Architecture of BRAINSUP”
  2. Algorithm 1 SentenceGeneration(U, Θ, P, L): U is the user specification, Θ is a set of meta-parameters; P and L are two dependency treebanks.
    Page 3, “Architecture of BRAINSUP”
  3. We estimate the probability of a modifier word m and its head h to be in the relation r as Pr(h, m) = c_r(h, m) / Σ_{h_i, m_i} c_r(h_i, m_i), where c_r(·) is the number of times that m depends on h in the dependency treebank L and h_i, m_i are all the head/modifier pairs observed in L.
    Page 6, “Architecture of BRAINSUP”
  4. As discussed in Section 3 we use two different treebanks to learn the syntactic patterns (P) and the dependency operators (L).
    Page 7, “Evaluation”
  5. BRAINSUP makes heavy use of dependency parsed data and statistics collected from dependency treebanks to ensure the grammaticality of the generated sentences, and to trim the search space while seeking the sentences that maximize the user satisfaction.
    Page 9, “Conclusion”
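The relative-frequency estimate described above (the number of times m depends on h under relation r, normalized by all head/modifier counts for that relation) can be sketched as follows. The (head, relation, modifier) counts here are invented toy data standing in for counts collected from a dependency treebank.

```python
from collections import Counter

# Toy (head, relation, modifier) counts; the triples are invented.
counts = Counter({
    ("taste", "dobj", "juice"): 6,
    ("taste", "dobj", "sunshine"): 2,
    ("drink", "dobj", "juice"): 12,
})

def prob(h, r, m):
    """Relative-frequency estimate of a head/modifier pair under relation r:
    Pr(h, m) = c_r(h, m) / sum over all (h_i, m_i) of c_r(h_i, m_i)."""
    total = sum(c for (hi, ri, mi), c in counts.items() if ri == r)
    return counts[(h, r, m)] / total if total else 0.0
```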

beam search

Appears in 4 sentences as: beam search (4)
In BRAINSUP: Brainstorming Support for Creative Sentence Generation
  1. A beam search in the space of all possible lexicalizations of a syntactic pattern promotes the words with the highest likelihood of satisfying the user specification.
    Page 3, “Architecture of BRAINSUP”
  2. CompatiblePatterns(-) finds the most frequent syntactic patterns in P that are compatible with the user specification, as explained in Section 3.1; FillInPattern(-) carries out the beam search , and returns the best solutions generated for each pattern p given U.
    Page 3, “Architecture of BRAINSUP”
  3. With the compatible patterns selected, we can initiate a beam search in the space of all possible lexicalizations of the patterns, i.e., the space of all sentences that can be generated by respecting the syntactic constraints encoded by each pattern.
    Page 4, “Architecture of BRAINSUP”
  4. Letting beam search find the best placement for the target words comes at no extra cost and results in a simple and elegant model.
    Page 6, “Architecture of BRAINSUP”

dependency relations

Appears in 4 sentences as: dependency relation (1) dependency relations (3)
In BRAINSUP: Brainstorming Support for Creative Sentence Generation
  1. Candidate fillers for each empty position (slot) in the patterns are chosen according to the lexical and syntactic constraints enforced by the dependency relations in the patterns.
    Page 3, “Architecture of BRAINSUP”
  2. To achieve that, we analyze a large corpus of parsed sentences L and store counts of observed head-relation-modifier (h, r, m) dependency relations.
    Page 4, “Architecture of BRAINSUP”
  3. a dependency relation .
    Page 5, “Architecture of BRAINSUP”
  4. The dependency-likelihood of a sentence s can then be calculated as f(s, U) = exp( Σ_{(h,r,m) ∈ r(s)} log Pr(h, m) ), r(s) being the set of dependency relations in s.
    Page 6, “Architecture of BRAINSUP”
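The dependency-likelihood feature, i.e. the product of the estimated head/modifier probabilities over every dependency relation in a sentence, computed as the exponential of summed log-probabilities, can be sketched as below. The probability table is invented toy data standing in for the treebank estimates.

```python
import math

def dependency_likelihood(relations, prob):
    """f(s, U) = exp( sum over (h, r, m) in r(s) of log Pr(h, m) ):
    the product of the head/modifier probabilities of all dependency
    relations in the sentence, via a numerically stable log-sum."""
    return math.exp(sum(math.log(prob(h, r, m)) for (h, r, m) in relations))

# Toy probability table standing in for treebank-derived estimates.
table = {("taste", "dobj", "juice"): 0.3, ("juice", "amod", "sweet"): 0.5}
score = dependency_likelihood(table.keys(), lambda h, r, m: table[(h, r, m)])
```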

N-gram

Appears in 4 sentences as: N-gram (2) n-gram (2)
In BRAINSUP: Brainstorming Support for Creative Sentence Generation
  1. N-gram likelihood.
    Page 6, “Architecture of BRAINSUP”
  2. This is simply the likelihood of a sentence estimated by an n-gram language model, to enforce the generation of well-formed word sequences.
    Page 6, “Architecture of BRAINSUP”
  3. When a solution is not complete, in the computation we include only the sequences of contiguous words (i.e., not interrupted by empty slots) having length greater than or equal to the order of the n-gram model.
    Page 6, “Architecture of BRAINSUP”
  4. The four combinations of features are: base: Target-word scorer + N-gram likelihood + Dependency likelihood + Variety scorer + Unusual-words scorer + Semantic cohesion; base+D: all the scorers in base + Domain relatedness; base+D+C: all the scorers in base+D + Chromatic connotation; base+D+E: all the scorers in base+D + Emotional connotation; base+D+P: all the scorers in base+D + Phonetic features.
    Page 7, “Evaluation”
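The restriction of n-gram scoring to contiguous word sequences not interrupted by empty slots, with length at least the order of the model, can be sketched with a small helper. This is an illustration only, not BRAINSUP's code; how the surviving runs are then scored by the language model is left out.

```python
def contiguous_runs(slots, n):
    """Yield maximal runs of filled slots (None = empty slot) whose
    length is at least the order n of the n-gram language model."""
    run = []
    for w in slots + [None]:        # trailing sentinel flushes the last run
        if w is None:
            if len(run) >= n:
                yield run
            run = []
        else:
            run.append(w)

# Only runs long enough for a bigram model (n = 2) are kept;
# the single-slot gap splits the partial sentence into two runs.
runs = list(contiguous_runs(["a", "glass", None, "of", "fresh", "juice"], 2))
```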

content words

Appears in 3 sentences as: content words (3)
In BRAINSUP: Brainstorming Support for Creative Sentence Generation
  1. ning, 2003) and produce the patterns by stripping away all content words from the parses.
    Page 4, “Architecture of BRAINSUP”
  2. Two or three content words appearing in each slogan were randomly selected as the target words t.
    Page 7, “Evaluation”
  3. Furthermore, we only considered sentences in which all the content words are listed in WordNet (Miller, 1995) with the observed part of speech. The LSA space used for the semantic feature functions was also learned on BNC data, but in this case no filtering was applied.
    Page 7, “Evaluation”

dependency parsed

Appears in 3 sentences as: dependency parsed (2) dependency parsing (1)
In BRAINSUP: Brainstorming Support for Creative Sentence Generation
  1. The sentence generation process is based on morpho-syntactic patterns which we automatically discover from a corpus of dependency parsed sentences P.
    Page 3, “Architecture of BRAINSUP”
  2. Dependency operators were learned by dependency parsing the British National Corpus.
    Page 7, “Evaluation”
  3. BRAINSUP makes heavy use of dependency parsed data and statistics collected from dependency treebanks to ensure the grammaticality of the generated sentences, and to trim the search space while seeking the sentences that maximize the user satisfaction.
    Page 9, “Conclusion”

scoring functions

Appears in 3 sentences as: scoring functions (3)
In BRAINSUP: Brainstorming Support for Creative Sentence Generation
  1. Each partially lexicalized solution is scored by a battery of scoring functions that compete to generate creative sentences respecting the user specification U, as explained in Section 3.3.
    Page 4, “Architecture of BRAINSUP”
  2. Concerning the scoring of partial solutions and complete sentences, we adopt a simple linear combination of scoring functions .
    Page 5, “Architecture of BRAINSUP”
  3. ..., fk] be the vector of scoring functions and w = [w0, ...
    Page 5, “Architecture of BRAINSUP”
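The simple linear combination of scoring functions described above amounts to a dot product between the weight vector w and the vector of scoring-function values f evaluated on a (partial) solution. A minimal sketch, with invented toy scorers standing in for the paper's feature functions:

```python
def combined_score(solution, scorers, weights):
    """Score a (partial) solution as the weighted sum w . f(solution),
    where f is the vector of scoring-function values."""
    return sum(w * f(solution) for f, w in zip(scorers, weights))

# Toy scorers standing in for e.g. the n-gram and dependency likelihoods.
scorers = [lambda s: len(s.split()),               # word count
           lambda s: 1.0 if "juice" in s else 0.0]  # target-word presence
total = combined_score("a glass of juice", scorers, [0.25, 2.0])
```

Raising the weight of one scorer relative to the others is exactly how, per the paper, users tune how much each creativity component affects the output.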
