CeFET: Contrast-Enhanced Guided Facial Expression Translation

Linfeng Han, Zhong Jin, Yi Chang Li*, Zhiyang Jia

*Corresponding author of this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-reviewed

Abstract

Advancements in deep learning have stimulated the creation of multiple techniques for facial expression translation. However, these methods frequently rely on detailed annotations of action units (AUs) or on 3D modelling techniques. In this paper, we introduce a novel Contrast-enhanced Guided Facial Expression Translation (CeFET) method. The model uses only facial images as input and extracts facial features from them with an encoder built on the StyleGAN prior. We propose a contrast-enhanced guidance technique that minimizes both the distance between the generated face and the input face and the distance between the generated expression and the reference expression. This ensures that the generated face maintains identity consistency with the source face and expression consistency with the reference face. Extensive experimental results support the effectiveness of our method.
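The abstract describes two consistency objectives: an identity term between the generated and source faces, and an expression term between the generated and reference expressions. The paper's own implementation is not given here; as a rough illustrative sketch, assuming identity and expression are each represented as fixed-length embedding vectors, the combined guidance objective might look like the following (the names `cosine_distance`, `guidance_loss`, and the weight `lam` are hypothetical, not from the paper):

```python
import math

def cosine_distance(a, b):
    """1 minus the cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

def guidance_loss(gen_id, src_id, gen_expr, ref_expr, lam=1.0):
    """Illustrative combined objective (assumed form, not the paper's exact loss):
    the identity term pulls the generated face toward the source identity,
    while the expression term pulls it toward the reference expression."""
    return cosine_distance(gen_id, src_id) + lam * cosine_distance(gen_expr, ref_expr)
```

Minimizing such a sum jointly enforces the two consistencies the abstract names: the loss is zero only when the generated face's identity embedding aligns with the source and its expression embedding aligns with the reference.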

Original language: English
Title of host publication: Proceedings of the 14th International Conference on Computer Engineering and Networks - Volume IV
Editors: Guangqiang Yin, Xiaodong Liu, Jian Su, Yangzhao Yang
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 177-188
Number of pages: 12
ISBN (Print): 9789819640157
DOI
Publication status: Published - 2025
Externally published: Yes
Event: 14th International Conference on Computer Engineering and Networks, CENet 2024 - Kashi, China
Duration: 18 Oct 2024 - 21 Oct 2024

Publication series

Name: Lecture Notes in Electrical Engineering
Volume: 1383 LNEE
ISSN (Print): 1876-1100
ISSN (Electronic): 1876-1119

Conference

Conference: 14th International Conference on Computer Engineering and Networks, CENet 2024
Country/Territory: China
City: Kashi
Period: 18/10/24 - 21/10/24
