JOURNAL ARTICLE

Interactive Generative Adversarial Networks With High-Frequency Compensation for Facial Attribute Editing

Wenmin Huang, Weiqi Luo, Xiaochun Cao, Jiwu Huang

Year: 2024 Journal: IEEE Transactions on Circuits and Systems for Video Technology Vol: 34 (9) Pages: 8215-8229 Publisher: Institute of Electrical and Electronics Engineers

Abstract

Recently, facial attribute editing has drawn increasing attention and achieved significant progress thanks to Generative Adversarial Networks (GANs). Since paired images before and after editing are not available, existing methods typically perform the editing and reconstruction tasks simultaneously and transfer facial details learned from reconstruction to editing by sharing the latent representation space and weights. As a result, they cannot preserve non-targeted regions well during editing. In addition, they usually introduce skip connections between the encoder and decoder to improve image quality, at the cost of attribute editing ability. In this paper, we propose a novel method called InterGAN with high-frequency compensation to alleviate the above problems. Specifically, we first propose cross-task interaction (CTI) to fully explore the relationships between the editing and reconstruction tasks. The CTI includes two translations: style translation adjusts the mean and variance of feature maps according to style features, while conditional translation uses the attribute vector as a condition to guide the feature map transformation. Together they provide effective information interaction that keeps attribute-irrelevant regions unchanged. Furthermore, without using skip connections between the encoder and decoder, we propose a high-frequency compensation module (HFCM) to improve image quality. The HFCM collects potentially lost information from the input images and each down-sampling layer of the encoder, and re-injects it into subsequent layers to alleviate the information loss. Ablation analysis shows the effectiveness of the proposed CTI and HFCM. Extensive qualitative and quantitative experiments on CelebA-HQ demonstrate that the proposed method outperforms state-of-the-art methods in both attribute editing accuracy and image quality.
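The style translation described in the abstract adjusts the mean and variance of feature maps according to style statistics, in the spirit of adaptive instance normalization, and the HFCM re-injects detail lost during down-sampling. A minimal NumPy sketch of both ideas, purely illustrative: the function names, the channel-first shapes, and the 2x box-filter used to isolate the high-frequency residual are assumptions, not the authors' implementation.

```python
import numpy as np

def style_translation(content, style_mean, style_std, eps=1e-5):
    # Normalize each channel of the content feature map to zero mean,
    # unit variance, then rescale with the target style statistics
    # (AdaIN-like; an illustrative stand-in for the paper's operation).
    mu = content.mean(axis=(1, 2), keepdims=True)        # (C, 1, 1)
    sigma = content.std(axis=(1, 2), keepdims=True)      # (C, 1, 1)
    normalized = (content - mu) / (sigma + eps)
    return style_std[:, None, None] * normalized + style_mean[:, None, None]

def high_frequency(x):
    # Crude high-frequency extraction: subtract a 2x average-pooled and
    # nearest-upsampled copy, leaving the detail a down-sampling layer
    # would discard (hypothetical; the paper's HFCM is not public here).
    c, h, w = x.shape
    low = x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    up = np.repeat(np.repeat(low, 2, axis=1), 2, axis=2)
    return x - up

# Toy usage: re-style a 2-channel 4x4 feature map to target statistics.
feat = np.random.randn(2, 4, 4)
out = style_translation(feat,
                        style_mean=np.array([1.0, -1.0]),
                        style_std=np.array([0.5, 2.0]))
print(out.shape)  # (2, 4, 4)
```

After translation, each channel's mean matches the requested style mean, which is the sense in which style features steer the feature maps; the high-frequency residual of a constant map is zero, so only genuine detail gets re-injected.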

Keywords:
Computer science, Adversarial system, Generative grammar, Artificial intelligence, Compensation (psychology), Speech recognition, Image editing, Image (mathematics)

Metrics

Cited By: 3
FWCI (Field-Weighted Citation Impact): 1.59
Refs: 56
Citation Normalized Percentile: 0.73

Topics

Face recognition and analysis
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition
Generative Adversarial Networks and Image Synthesis
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition
Speech and Audio Processing
Physical Sciences → Computer Science → Signal Processing

Related Documents

JOURNAL ARTICLE

Facial Attribute Editing Using Generative Adversarial Network

Mukunda Upadhyay, Badri Raj Lamichhane, Bal Krishna Nyaupane

Journal: Journal of Engineering and Sciences Year: 2023 Vol: 2 (1) Pages: 57-63
JOURNAL ARTICLE

Progressive editing with stacked Generative Adversarial Network for multiple facial attribute editing

Patrick P. K. Chan, Xiaotian Wang, Zhe Lin, Daniel Yeung

Journal: Computer Vision and Image Understanding Year: 2021 Vol: 217 Pages: 103347-103347
JOURNAL ARTICLE

Face attribute editing based on generative adversarial networks

Xiaoxia Song, Mingwen Shao, Wangmeng Zuo, Cunhe Li

Journal: Signal Image and Video Processing Year: 2020 Vol: 14 (6) Pages: 1217-1225
JOURNAL ARTICLE

Semi-supervised image attribute editing using generative adversarial networks

Yahya Doğan, Hacer Yalım Keleş

Journal: Neurocomputing Year: 2020 Vol: 401 Pages: 338-352
JOURNAL ARTICLE

Facial attribute editing via a Balanced Simple Attention Generative Adversarial Network

Fujian Ren, W. M. Liu, Fasheng Wang, Bo Wang, Fuming Sun

Journal: Expert Systems with Applications Year: 2025 Vol: 277 Pages: 127245-127245