Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.818096
Title: Learning meaning representations for text generation with deep generative models
Author: Cao, Kris
ORCID: 0000-0003-2760-7358
ISNI: 0000 0004 9359 4927
Awarding Body: University of Cambridge
Current Institution: University of Cambridge
Date of Award: 2019
Availability of Full Text:
Abstract:
This thesis explores conditioning a language generation model on auxiliary variables. By doing so, we hope to gain better control over the output of the language generator. We explore several kinds of auxiliary variables in this thesis, ranging from unstructured continuous variables, through discrete variables, to structured discrete variables, and evaluate their advantages and disadvantages. We consider three primary axes of variation: how interpretable the auxiliary variables are, how much control they provide over the generated text, and whether the variables can be induced from unlabelled data. The last consideration is particularly interesting: if we can show that induced latent variables correspond to the semantics of the generated utterance, then by manipulating the variables we gain fine-grained control over the meaning of the generated utterance, thereby learning simple meaning representations for text generation. We investigate three language generation tasks: open-domain conversational response generation, sentence generation from a semantic topic, and generating surface form realisations of meaning representations. We use a different type of auxiliary variable for each task, describe the reasons for choosing that type of variable, and critically discuss how much the task benefited from an auxiliary variable decomposition. All of the models that we use combine a high-level graphical model with a neural language model text generator. The graphical model lets us specify the structure of the text generating process, while the neural text generator can learn to generate fluent text from a large corpus of examples. In the following work, we aim to show the utility of such 'deep generative models' of text for text generation.
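The abstract describes models that condition a neural language model text generator on auxiliary latent variables. As a purely illustrative sketch (not taken from the thesis), the PyTorch snippet below shows one common way such conditioning can be implemented: a continuous auxiliary variable z initialises the hidden state of an LSTM decoder, so that manipulating z changes the generated text. All names and dimensions here (AuxConditionedLM, z_dim, vocab_size) are hypothetical assumptions; the thesis's actual models, training objectives, and graphical-model structure are not reproduced.

```python
# Illustrative sketch only, not the thesis's model: an LSTM language model
# whose generation is conditioned on a continuous auxiliary variable z.
import torch
import torch.nn as nn

class AuxConditionedLM(nn.Module):  # hypothetical name
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, z_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # The auxiliary variable z initialises the decoder state,
        # so changing z changes what the model generates.
        self.z_to_h = nn.Linear(z_dim, hid_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens, z):
        # tokens: (batch, seq_len) token ids; z: (batch, z_dim)
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)  # (1, batch, hid_dim)
        c0 = torch.zeros_like(h0)
        hidden, _ = self.lstm(self.embed(tokens), (h0, c0))
        return self.out(hidden)  # per-step vocabulary logits

# Usage: score a toy batch of token ids under a sampled auxiliary variable.
model = AuxConditionedLM(vocab_size=1000)
tokens = torch.randint(0, 1000, (4, 12))
z = torch.randn(4, 32)
logits = model(tokens, z)  # shape (4, 12, 1000)
```

A discrete or structured auxiliary variable, as also discussed in the abstract, could be accommodated in this kind of sketch by replacing z with an embedding of the sampled discrete value.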
Supervisor: Clark, Stephen
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.818096
DOI:
Keywords: computational linguistics ; natural language processing ; machine learning