Learn and Sample Together: Collaborative Generation for Graphic Design Layout

Haohan Weng, Danqing Huang, Tong Zhang, Chin-Yew Lin

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
AI and Arts. Pages 5851-5859. https://doi.org/10.24963/ijcai.2023/649

In graphic layout generation, user specifications including element attributes and their relationships are commonly used to constrain the layouts (e.g., "put the image above the button"). It is natural to encode such spatial constraints between elements as a graph. This paper presents a two-stage generation framework: a spatial graph generator and a subsequent layout decoder conditioned on the generated graph. When the two highly dependent networks are trained separately, as in previous work, we observe that the graph generator frequently produces out-of-distribution graphs, which are unseen by the layout decoder during training and thus lead to a large performance drop at inference. To coordinate the two networks more effectively, we propose a novel collaborative generation strategy that performs round-way knowledge transfer between the networks in both training and inference. Experimental results on three public datasets show that our model greatly benefits from collaborative generation and achieves state-of-the-art performance. Furthermore, we conduct an in-depth analysis to better understand the effectiveness of graph condition modeling.
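To illustrate the idea of encoding spatial constraints as a graph, the following is a minimal, hypothetical sketch: layout elements become nodes and pairwise relations (such as "above") become labeled directed edges. The element types, relation labels, and data structure are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical sketch: user-specified spatial constraints between layout
# elements encoded as a labeled directed graph. Names ("image", "button",
# "above") are illustrative, not the paper's actual vocabulary.
from dataclasses import dataclass, field


@dataclass
class SpatialGraph:
    nodes: list = field(default_factory=list)   # element types, e.g. "image"
    edges: list = field(default_factory=list)   # (src_idx, relation, dst_idx)

    def add_element(self, element_type: str) -> int:
        """Add an element node and return its index."""
        self.nodes.append(element_type)
        return len(self.nodes) - 1

    def add_relation(self, src: int, relation: str, dst: int) -> None:
        """Add a labeled edge encoding a spatial constraint."""
        self.edges.append((src, relation, dst))


# Encode the constraint "put the image above the button":
g = SpatialGraph()
img = g.add_element("image")
btn = g.add_element("button")
g.add_relation(img, "above", btn)

print(g.nodes)   # ['image', 'button']
print(g.edges)   # [(0, 'above', 1)]
```

A graph like this could then condition a downstream layout decoder; the paper's contribution is in how the graph generator and decoder are trained and sampled together rather than separately.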
Keywords:
Application domains: Images and visual arts
Application domains: Text, literature and creative language
Methods and resources: Machine learning, deep learning, neural models, reinforcement learning
Theory and philosophy of arts and creativity in AI systems: Autonomous creative or artistic AI