JOURNAL ARTICLE

Efficient Fine-Tuning with Domain Adaptation for Privacy-Preserving Vision Transformer

Teru Nagamori, Sayaka Shiota, Hitoshi Kiya

Year: 2024
Journal: APSIPA Transactions on Signal and Information Processing
Vol: 13 (1), Pages: 1-15
Publisher: Cambridge University Press

Abstract

We propose a novel method for privacy-preserving deep neural networks (DNNs) with the Vision Transformer (ViT). The method allows us not only to train and test models with visually protected images but also to avoid the performance degradation caused by the use of encrypted images, whereas conventional methods cannot avoid the influence of image encryption. A domain adaptation method is used to efficiently fine-tune ViT with encrypted images. In experiments, the method is demonstrated to outperform conventional methods in an image classification task on the CIFAR-10 and ImageNet datasets in terms of classification accuracy.
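The abstract does not specify the encryption scheme, but work in this line of research commonly uses key-based block-wise image scrambling whose block size matches the ViT patch size, so that each encrypted block still maps to one patch embedding. As a minimal, hypothetical sketch (function names and the key-to-permutation mapping are illustrative assumptions, not the paper's method), block scrambling and its inversion can be written in NumPy as:

```python
import numpy as np

def to_blocks(img, block):
    """Split an H x W x C image into a flat array of non-overlapping blocks."""
    h, w, c = img.shape
    nh, nw = h // block, w // block
    return (img.reshape(nh, block, nw, block, c)
               .swapaxes(1, 2)
               .reshape(nh * nw, block, block, c))

def from_blocks(blocks, h, w, block):
    """Reassemble a flat array of blocks back into an H x W x C image."""
    c = blocks.shape[-1]
    nh, nw = h // block, w // block
    return (blocks.reshape(nh, nw, block, block, c)
                  .swapaxes(1, 2)
                  .reshape(h, w, c))

def block_scramble(img, block=16, key=0):
    """Visually protect an image by permuting its blocks with a secret key.

    Hypothetical illustration: the key seeds a PRNG that defines the
    block permutation; block=16 matches a typical ViT patch size.
    """
    h, w, _ = img.shape
    blocks = to_blocks(img, block)
    perm = np.random.default_rng(key).permutation(len(blocks))
    return from_blocks(blocks[perm], h, w, block)

def block_unscramble(enc, block=16, key=0):
    """Invert the scrambling by regenerating and inverting the permutation."""
    h, w, _ = enc.shape
    blocks = to_blocks(enc, block)
    perm = np.random.default_rng(key).permutation(len(blocks))
    return from_blocks(blocks[np.argsort(perm)], h, w, block)
```

A model would then be fine-tuned on images passed through `block_scramble`; only holders of the key can produce inputs the model classifies correctly. The paper's contribution, per the abstract, is the domain adaptation step that closes the accuracy gap this encryption would otherwise cause.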

Keywords:
Computer science, Domain adaptation, Computer vision, Adaptation (eye), Transformer, Artificial intelligence, Computer security, Psychology, Engineering, Electrical engineering, Voltage

Metrics

Cited By: 1
FWCI (Field-Weighted Citation Impact): 0.37
Refs: 0
Citation Normalized Percentile: 0.50

Topics

Advanced Memory and Neural Computing (Physical Sciences → Engineering → Electrical and Electronic Engineering)
CCD and CMOS Imaging Sensors (Physical Sciences → Engineering → Electrical and Electronic Engineering)
Advanced Optical Imaging Technologies (Physical Sciences → Engineering → Media Technology)