Seung Hyun Lee, Wonseok Roh, Wonmin Byeon, Sang Ho Yoon, Chan Young Kim, Jinkyu Kim, Sangpil Kim
The recent success of generative models shows that leveraging a multi-modal embedding space makes it possible to manipulate an image using text information. However, manipulating an image with sources other than text, such as sound, is not straightforward due to the dynamic characteristics of those sources. In particular, sound can convey vivid emotions and the dynamic expressions of the real world. Here, we propose a framework that directly encodes sound into the multi-modal (image-text) embedding space and manipulates an image from that space. Our audio encoder is trained to produce a latent representation from an audio input, which is forced to be aligned with the image and text representations in the multi-modal embedding space. We use a direct latent optimization method based on the aligned embeddings for sound-guided image manipulation. We also show that our method can mix different modalities, i.e., text and audio, which enriches the variety of image modifications. Experiments on zero-shot audio classification and semantic-level image classification show that our proposed model outperforms other text- and sound-guided state-of-the-art methods.
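The two steps the abstract describes — contrastively aligning an audio encoder with a shared image-text embedding space, then directly optimizing a latent code toward the sound embedding — can be sketched as below. This is a minimal PyTorch illustration, not the authors' implementation: the encoder architecture, the 512-dim embedding size, the InfoNCE-style loss, and the identity stand-in for the generator plus CLIP image encoder are all assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-in for the audio encoder: a small MLP mapping audio
# features (128-dim, purely illustrative) into a shared 512-dim image-text
# embedding space such as CLIP's.
audio_encoder = torch.nn.Sequential(
    torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 512)
)

def alignment_loss(audio_emb, clip_emb, temperature=0.07):
    """InfoNCE-style contrastive loss pulling each audio embedding toward
    its paired image/text embedding (matched pairs lie on the diagonal)."""
    a = F.normalize(audio_emb, dim=-1)
    c = F.normalize(clip_emb, dim=-1)
    logits = a @ c.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(a.size(0))       # diagonal entries are positives
    return F.cross_entropy(logits, targets)

# Toy batch: 4 audio clips with placeholder CLIP embeddings for their pairs.
audio = torch.randn(4, 128)
clip_targets = torch.randn(4, 512)
contrastive = alignment_loss(audio_encoder(audio), clip_targets)

# Direct latent optimization (sketch): nudge a latent code so that the
# "generated image's" embedding moves toward the sound embedding. The
# generator and CLIP image encoder are replaced by the identity map here.
sound_emb = F.normalize(audio_encoder(audio[:1]), dim=-1).detach()
latent = torch.randn(1, 512, requires_grad=True)
sim_before = F.cosine_similarity(latent, sound_emb).item()
opt = torch.optim.Adam([latent], lr=0.1)
for _ in range(20):
    opt.zero_grad()
    loss = 1.0 - F.cosine_similarity(latent, sound_emb).sum()
    loss.backward()
    opt.step()
sim_after = F.cosine_similarity(latent, sound_emb).item()
```

With the real method, the identity map would be replaced by a StyleGAN-style generator followed by the CLIP image encoder, so the optimized latent produces an image whose embedding matches the sound.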
Seung Hyun Lee, Hyung-gun Chi, Gyeongrok Oh, Wonmin Byeon, Sang Ho Yoon, Hyunje Park, Wonjun Cho, Jinkyu Kim, Sangpil Kim
Seung Hyun Lee, Gyeongrok Oh, Wonmin Byeon, Chanyoung Kim, Won Jeong Ryoo, Sang Ho Yoon, Hyunjun Cho, Jihyun Bae, Jinkyu Kim, Sangpil Kim
Helisa Dhamo, Azade Farshad, Iro Laina, Nassir Navab, Gregory D. Hager, Federico Tombari, Christian Rupprecht
Jianan Wang, Guansong Lu, Hang Xu, Zhenguo Li, Chunjing Xu, Yanwei Fu