Unsupervised Domain Adaptation and Contrastive Learning for Insufficiently Labeled Data

Author:
Moradinasab, Nazanin (ORCID: orcid.org/0000-0003-3881-8599), Systems Engineering - School of Engineering and Applied Science, University of Virginia
Advisor:
Brown, Donald, School of Data Science, University of Virginia
Abstract:

In recent years, deep learning-based approaches have overtaken traditional methods in many real-world applications. However, the success of these approaches relies on two factors: (1) access to a massive amount of labeled data for training and (2) the assumption that training and test datasets are independent and identically distributed (i.i.d.). In many applications, collecting a large amount of high-quality labeled data is expensive and labor-intensive, especially for tasks such as semantic segmentation and multivariate time series classification. Most practical datasets are only partially labeled or contain few labeled instances.
The main goal of this dissertation is to develop robust deep learning models for situations where the target dataset is insufficiently labeled. To achieve this goal, we developed four approaches. The first two are the Universal Representation Learning and Label-Efficient Contrastive Learning models, designed for time series classification and semantic segmentation on insufficiently labeled datasets. A distinctive feature of our methods is the introduction of cluster-level Supervised Contrastive (SupCon) learning in addition to instance-level SupCon. This addition mitigates the negative impact of intra-class variance and inter-class similarity during training. By incorporating both instance- and cluster-level contrastive learning, our approach enhances the model's ability to discern meaningful patterns and representations, particularly when labeled data is scarce, as sketched below.
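A minimal sketch of how instance-level and cluster-level SupCon losses can be combined on a labeled batch follows; the function names, the in-batch centroid prototypes, the temperature, and the 0.5 weighting are illustrative assumptions, not the dissertation's exact formulation.

# Sketch: instance-level + cluster-level supervised contrastive losses.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def instance_supcon(feats, labels, tau=0.1):
    """Instance-level SupCon: pull together embeddings that share a label."""
    feats = F.normalize(feats, dim=1)                      # (N, D), unit norm
    n = feats.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=feats.device)
    sim = (feats @ feats.T / tau).masked_fill(eye, float('-inf'))
    pos = (labels[:, None] == labels[None, :]) & ~eye      # positives: same label
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    loss = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return loss[pos.any(1)].mean()                         # skip anchors without positives

def cluster_supcon(feats, labels, tau=0.1):
    """Cluster-level SupCon: contrast each embedding against class centroids."""
    feats = F.normalize(feats, dim=1)
    classes = labels.unique()                              # sorted class ids
    protos = F.normalize(
        torch.stack([feats[labels == c].mean(0) for c in classes]), dim=1)
    logits = feats @ protos.T / tau                        # (N, num_classes)
    return F.cross_entropy(logits, torch.searchsorted(classes, labels))

feats, labels = torch.randn(32, 128), torch.randint(0, 4, (32,))
loss = instance_supcon(feats, labels) + 0.5 * cluster_supcon(feats, labels)

The cluster-level term contrasts samples against class centroids rather than individual instances, which is what dampens the effect of large intra-class variance: an outlying sample is still pulled toward its class prototype rather than toward an arbitrary same-class instance.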
The third approach focuses on self-training Domain Adaptation (DA) techniques that improve the generalization of deep models on unlabeled or label-scarce target tasks by training on both label-scarce target data and label-rich source data. The prevalent self-training approach retrains the dense discriminative classifier of $p(\text{class} \mid \text{pixel feature})$ using pseudo-labels from the target domain. While many methods focus on mitigating noisy pseudo-labels, they often overlook the underlying data distribution $p(\text{pixel feature} \mid \text{class})$ in both the source and target domains. To address this limitation, we designed the multi-prototype Gaussian-Mixture-based (ProtoGMM) model, which incorporates a Gaussian mixture model (GMM) into the contrastive losses to perform guided contrastive learning. The approach estimates the underlying multi-prototype source distribution by fitting a GMM on the feature space of the source samples. The GMM components act as representative prototypes, adapting to the multimodal data density and capturing within-class variation. To achieve increased intra-class semantic similarity, decreased inter-class similarity, and domain alignment between the source and target domains, we employ multi-prototype contrastive learning between the source distribution and target samples.
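The following sketch illustrates this idea: per-class GMMs are fitted on source features so that their components serve as multi-modal prototypes, and target features are then contrasted against those prototypes under their pseudo-labels. The function names, the choice of three components per class, and the diagonal covariance are assumptions for illustration, not the dissertation's exact design.

# Sketch: multi-prototype contrastive learning against per-class GMM
# components (illustrative assumptions throughout).
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

def fit_class_gmms(src_feats, src_labels, n_components=3):
    """Estimate p(feature | class) per class; component means become prototypes."""
    protos, proto_cls = [], []
    for c in np.unique(src_labels):
        gmm = GaussianMixture(n_components=n_components, covariance_type='diag')
        gmm.fit(src_feats[src_labels == c])
        protos.append(gmm.means_)                          # (K, D) prototypes
        proto_cls.append(np.full(n_components, c))
    return np.concatenate(protos), np.concatenate(proto_cls)

def multi_proto_contrast(tgt_feats, tgt_pseudo, protos, proto_cls, tau=0.1):
    """Pull each target feature toward the prototypes of its pseudo-label class."""
    f = F.normalize(tgt_feats, dim=1)
    p = F.normalize(torch.as_tensor(protos, dtype=f.dtype), dim=1)
    logits = f @ p.T / tau                                 # (N, total prototypes)
    pos = torch.as_tensor(proto_cls)[None, :] == tgt_pseudo[:, None]
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    return -(log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)).mean()

Using several GMM components per class, rather than a single centroid, lets the prototypes follow multimodal class densities, so a target feature only needs to align with the nearest mode of its class.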
The fourth approach is the Generalized Gaussian-Mixture-based (GenGMM) Domain Adaptation model, designed for the Generalized Domain Adaptation (GDA) task. While significant effort has been devoted to improving unsupervised domain adaptation, many promising models rely on a strong assumption: the source data is entirely and accurately labeled, while the target data is unlabeled. In real-world scenarios, however, both domains often contain partially or noisily labeled data, a setting referred to as GDA. In such cases, we leverage weak or unlabeled data from both domains to narrow the gap between them, leading to more effective adaptation. To facilitate this, the GenGMM model harnesses the underlying data distribution in both domains to refine noisy weak and pseudo labels.
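A hedged sketch of such a refinement step appears below: a GMM with one component per class, initialized from the noisy labels, estimates $p(\text{class} \mid \text{feature})$, and a weak or pseudo label is replaced only when the distributional evidence confidently disagrees with it. The confidence threshold and the agreement rule are illustrative assumptions, not the dissertation's exact procedure.

# Sketch: distribution-guided refinement of noisy/weak labels via a GMM.
# Assumes every class appears at least once in noisy_labels.
import numpy as np
from sklearn.mixture import GaussianMixture

def refine_labels(feats, noisy_labels, n_classes, threshold=0.8):
    # Initialize component k at the mean of features currently labeled k, so
    # GMM components stay aligned with class identities during fitting.
    means = np.stack([feats[noisy_labels == k].mean(0) for k in range(n_classes)])
    gmm = GaussianMixture(n_components=n_classes, covariance_type='diag',
                          means_init=means).fit(feats)
    post = gmm.predict_proba(feats)                        # (N, n_classes)
    gmm_label, conf = post.argmax(1), post.max(1)
    refined = noisy_labels.copy()
    # Relabel only where the GMM is confident and disagrees with the given label.
    relabel = (conf > threshold) & (gmm_label != noisy_labels)
    refined[relabel] = gmm_label[relabel]
    return refined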
All developed approaches were compared to current state-of-the-art (SOTA) approaches across well-known benchmarks, including 1) the UEA multivariate time series classification archive, 2) the cardiopulmonary exercise testing (CPET) dataset, 3) immunofluorescent images, and 4) urban-scene benchmarks including GTA5 to Cityscapes, SYNTHIA to Cityscapes, and Cityscapes to Dark Zurich. The results demonstrate that our frameworks yield substantial improvements over existing approaches.

Degree:
PhD (Doctor of Philosophy)
Keywords:
Domain Adaptation, Segmentation, Contrastive Learning, Deep Learning
Language:
English
Issued Date:
2024/04/23