Learning to Synthesize Images with Multimodal and Hierarchical Inputs
Yu Zeng, JHU
April 17, 2024 4:00 – 5:15 PM
ENGR 231, UMBC or Webex
In recent years, image synthesis and manipulation have experienced remarkable advancements driven by deep learning algorithms and web-scale data, yet there persists a notable disconnect between the intricate nature of human ideas and the simplistic input structures employed by existing models. In this talk, I will present our research toward a more natural approach to controllable image synthesis, inspired by the coarse-to-fine workflow of human artists and the inherently multimodal nature of human thought processes. We consider inputs of the semantic and visual modalities at varying levels of hierarchy. For the semantic modality, we introduce a general framework for modeling semantic inputs at different levels, which includes image-level text prompts and pixel-level label maps as two extremes and covers a series of mid-level regional descriptions of varying precision. For the visual modality, we explore the use of low-level and high-level visual inputs aligned with the natural hierarchy of visual processing. Additionally, as the misuse of generated images becomes a societal threat, the second part of this talk will present our findings on the trustworthiness of deep generative models, along with potential future research directions.
Yu Zeng is a Ph.D. candidate at Johns Hopkins University advised by Vishal M. Patel. Her research interests lie in computer vision and deep learning. She has focused on two main areas: (1) deep generative models for image synthesis and editing and (2) label-efficient deep learning. By combining these research areas, she aims to bridge human creativity and machine intelligence through user-friendly and socially responsible models while minimizing the need for intensive human supervision. Yu has collaborated with researchers at NVIDIA and Adobe through internships. Prior to her Ph.D., she worked as a researcher at Tencent Games. Yu’s research has been recognized by the KAUST Rising Stars in AI program, and her Ph.D. study has been supported by a JHU Kewei Yang and Grace Xin Fellowship.