Disentangling Representations in Pre-Trained Language Models
Dave, Aniruddha Mahesh, Computer Science - School of Engineering and Applied Science, University of Virginia
Ji, Yangfeng, Department of Computer Science, University of Virginia
Pre-trained language models dominate modern natural language processing.
They rely on self-supervision to learn general-purpose representations.
Because these representations encode redundant information, it is unclear which encoded information drives superior performance on downstream tasks, and whether performance could improve further if that information were encoded in an interpretable way.
In this work, using stylistic datasets, we explore whether style and content can be disentangled from sentence representations learned by pre-trained language models.
We devise a novel approach leveraging multi-task and adversarial objectives to learn disentangled representations.
The latent space is partitioned into separate subspaces, which are fine-tuned so that each encodes different information.
We demonstrate our approach on parallel datasets spanning different styles and domains.
We show that style and content spaces can be disentangled from the sentence representations through this simple yet effective approach.
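The combination of multi-task and adversarial objectives described above can be sketched as follows. This is a minimal illustrative sketch, not the thesis's exact model: it assumes the sentence embedding is split in half into a style part and a content part, a style classifier provides the multi-task signal on the style half, and an adversarial classifier on the content half is trained to predict style while the encoder's objective penalizes its success (approximating gradient reversal by negating the adversary's loss). All module and variable names here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledHeads(nn.Module):
    """Sketch of multi-task + adversarial heads over a sentence embedding.

    Assumed setup (not from the thesis): the embedding's first half is the
    style space, the second half is the content space.
    """

    def __init__(self, hidden_size=768, num_styles=2):
        super().__init__()
        self.style_clf = nn.Linear(hidden_size // 2, num_styles)  # multi-task head
        self.adv_clf = nn.Linear(hidden_size // 2, num_styles)    # adversary

    def forward(self, sent_emb, style_labels):
        # Split the latent space into style and content subspaces.
        style_z, content_z = sent_emb.chunk(2, dim=-1)

        # Multi-task objective: the style subspace should predict style.
        mtl_loss = F.cross_entropy(self.style_clf(style_z), style_labels)

        # Adversary is trained (on detached features) to recover style
        # from the content subspace ...
        adv_loss = F.cross_entropy(self.adv_clf(content_z.detach()), style_labels)

        # ... while the encoder is rewarded when the adversary fails,
        # pushing style information out of the content subspace.
        enc_loss = mtl_loss - F.cross_entropy(self.adv_clf(content_z), style_labels)
        return enc_loss, adv_loss
```

In training, `enc_loss` would update the encoder (e.g. a fine-tuned BERT) and the style head, while `adv_loss` updates only the adversary, alternating between the two.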
MS (Master of Science)
Pre-trained language model, Natural Language Processing, NLP, BERT, Representation Learning, Disentangled