Transfer learning aims to reduce the number of labeled training samples required by leveraging existing knowledge from one domain, called the source domain, to construct models for another domain, called the target domain. In particular, unsupervised domain adaptation (UDA) transfers knowledge from a labeled source domain to an unlabeled target domain. Most existing UDA methods rely on deep learning, primarily pre-trained models, adversarial networks, and transformers.
We propose an interpretable and lightweight transfer learning (ILTL) method consisting of two modules. The first module performs image-level alignment: it applies image processing to minimize structural differences between the source and target images so that images are visually similar across domains. The second module performs feature-level alignment: it identifies a discriminant feature subspace, uses feature distances to transfer source labels to target samples, and then conducts class-wise alignment in that subspace. ILTL can be run for multiple rounds to progressively strengthen the alignment of source and target features. We benchmark ILTL against deep-learning-based methods on two transfer learning datasets, comparing classification accuracy, model size, and computational complexity. Experiments show that ILTL achieves comparable accuracy with smaller models and lower computational complexity, while its interpretability provides a deeper understanding of the transfer learning mechanism.
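The two-module pipeline can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: histogram matching is assumed as one plausible image-processing operation for the image-level step, and nearest-neighbor pseudo-labeling followed by class-mean shifting is assumed for the feature-level step.

```python
import numpy as np

def match_histogram(target_img, source_img):
    """Image-level alignment (assumed operation): remap target pixel
    intensities so their distribution matches the source image's."""
    t_vals, t_idx, t_counts = np.unique(
        target_img.ravel(), return_inverse=True, return_counts=True)
    s_vals, s_counts = np.unique(source_img.ravel(), return_counts=True)
    t_cdf = np.cumsum(t_counts) / target_img.size
    s_cdf = np.cumsum(s_counts) / source_img.size
    # Map each target quantile to the source intensity at the same quantile.
    mapped = np.interp(t_cdf, s_cdf, s_vals)
    return mapped[t_idx].reshape(target_img.shape)

def feature_level_round(Xs, ys, Xt):
    """One round of feature-level alignment (assumed form):
    1) transfer source labels to target samples by nearest feature distance;
    2) class-wise alignment by shifting each pseudo-labeled target class
       so its mean matches the corresponding source class mean."""
    # Pairwise squared Euclidean distances: target rows vs. source rows.
    d = ((Xt[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)
    yt = ys[d.argmin(axis=1)]  # pseudo-labels via feature distance
    Xt_aligned = Xt.copy()
    for c in np.unique(ys):
        mask = yt == c
        if mask.any():
            Xt_aligned[mask] += Xs[ys == c].mean(0) - Xt[mask].mean(0)
    return yt, Xt_aligned
```

Running `feature_level_round` repeatedly on its own output corresponds to the multi-round refinement described above; the discriminant-subspace projection is omitted here for brevity.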