MagicTailor: Component-Controllable Personalization in Text-to-Image Diffusion Models

Donghao Zhou, Jiancheng Huang, Jinbin Bai, Jiaze Wang, Hao Chen, Guangyong Chen, Xiaowei Hu, Pheng-Ann Heng

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence (IJCAI-25)
AI, Arts & Creativity track. Pages 10225–10233. https://doi.org/10.24963/ijcai.2025/1136

Text-to-image diffusion models can generate high-quality images but lack fine-grained control over visual concepts, limiting their creative use. We therefore introduce component-controllable personalization, a new task that enables users to customize and reconfigure individual components within concepts. This task faces two challenges: semantic pollution, where undesired elements disrupt the target concept, and semantic imbalance, which causes disproportionate learning of the target concept and component. To address these, we design MagicTailor, a framework that uses Dynamic Masked Degradation to adaptively perturb unwanted visual semantics and Dual-Stream Balancing to learn the desired visual semantics in a more balanced way. Experimental results show that MagicTailor achieves superior performance on this task and enables more personalized and creative image generation.
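The abstract only names Dynamic Masked Degradation at a high level and gives no implementation details here. As a rough conceptual illustration only (not the authors' actual method, whose masking and degradation schedule are defined in the paper), the general idea of perturbing unwanted regions can be sketched as adding noise inside a binary mask while leaving the rest of the image untouched; the function name, arguments, and noise model below are all illustrative assumptions:

```python
import numpy as np

def masked_degradation(image, mask, strength=0.5, seed=0):
    """Illustrative sketch: perturb only the masked (unwanted) region.

    image:    float array in [0, 1], shape (H, W, C)
    mask:     binary array, shape (H, W); 1 marks unwanted pixels
    strength: blend weight of the noise in the masked region (assumed
              here to be a fixed scalar; the paper's scheme is dynamic)
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, size=image.shape)
    m = mask[..., None].astype(image.dtype)  # broadcast mask over channels
    # Blend toward noise inside the mask; identity outside it.
    degraded = image * (1.0 - m * strength) + noise * (m * strength)
    return np.clip(degraded, 0.0, 1.0)

# Toy example: degrade the left half of a small uniform gray image.
img = np.full((4, 4, 3), 0.5)
msk = np.zeros((4, 4))
msk[:, :2] = 1
out = masked_degradation(img, msk)
```

In this sketch the unmasked pixels pass through exactly, while masked pixels are pushed toward random noise, which loosely mirrors the goal of suppressing unwanted visual semantics during learning.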
Keywords:
Application domains: Images, movies and visual arts
Methods and resources: AI systems for ideation
Methods and resources: Machine learning, deep learning, neural models, reinforcement learning
Theory and philosophy of arts and creativity in AI systems: Autonomous creative or artistic AI
Theory and philosophy of arts and creativity in AI systems: Computational paradigms, architectures and models for creativity