Exploring the Frontiers of Animation Video Generation in the Sora Era: Method, Dataset and Benchmark

Yudong Jiang, Baohan Xu, Siqian Yang, Mingyu Ying, Jing Liu, Chao Xu, Siqi Wang, Yidi Wu, Bingwen Zhu, Yue Zhang, Jinlong Hou, Huyang Sun

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 1260-1268. https://doi.org/10.24963/ijcai.2025/141

Animation has attracted significant interest in the recent film and TV industry. Despite the success of advanced video generation models such as Sora, Kling, and CogVideoX on natural videos, they are far less effective on animation videos. Evaluating animation video generation is also a great challenge due to animation's unique artistic styles, motions that violate the laws of physics, and exaggerated movements. In this paper, we present AniSora, a comprehensive system for animation video generation that includes a data processing pipeline, a controllable generation model, and an evaluation benchmark. Supported by the data processing pipeline, which yields over 10M high-quality clips, the generation model incorporates a spatiotemporal mask module that enables key animation production functions such as image-to-video generation, frame interpolation, and localized image-guided animation. We also collect an evaluation benchmark of 948 diverse animation videos, with metrics developed specifically for animation video generation. Our entire project is publicly available at https://github.com/bilibili/Index-anisora/tree/main
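To make the role of the spatiotemporal mask concrete, the sketch below shows one plausible way such a mask could be laid out: a binary tensor over (frame, height, width) where 1 marks pixels supplied as guidance and 0 marks pixels the model must generate. The function name, mask semantics, and shapes are illustrative assumptions for exposition, not AniSora's actual implementation.

```python
import numpy as np

def build_spatiotemporal_mask(num_frames, height, width,
                              guide_frames=(0,), guide_region=None):
    """Illustrative spatiotemporal guidance mask (an assumption, not
    the paper's code): 1.0 marks conditioned pixels, 0.0 marks pixels
    to be generated. guide_region is (y0, y1, x0, x1) or None for the
    whole frame."""
    mask = np.zeros((num_frames, height, width), dtype=np.float32)
    for t in guide_frames:
        if guide_region is None:
            mask[t] = 1.0                      # whole frame is guidance
        else:
            y0, y1, x0, x1 = guide_region
            mask[t, y0:y1, x0:x1] = 1.0        # only a local region
    return mask

# Image-to-video: condition on the first frame only
i2v = build_spatiotemporal_mask(16, 64, 64, guide_frames=(0,))
# Frame interpolation: condition on the first and last frames
interp = build_spatiotemporal_mask(16, 64, 64, guide_frames=(0, 15))
# Localized image-guided animation: fix only a region of frame 0
local = build_spatiotemporal_mask(16, 64, 64, guide_frames=(0,),
                                  guide_region=(16, 48, 16, 48))
```

Under this reading, a single mask layout unifies the three production functions named in the abstract: the tasks differ only in which frames (or sub-regions) are marked as guidance.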
Keywords:
Computer Vision: CV: Image and video synthesis and generation 
Computer Vision: CV: Video analysis and understanding