Demonstration of PerformanceNet: A Convolutional Neural Network Model for Score-to-Audio Music Generation
Yu-Hua Chen, Bryan Wang, Yi-Hsuan Yang
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Demos. Pages 6506-6508.
https://doi.org/10.24963/ijcai.2019/938
We present in this paper PerformanceNet, a neural network model we proposed recently for score-to-audio music generation. The model learns to convert a piece of music from the symbolic domain to the audio domain: it automatically assigns performance-level attributes, such as changes in velocity, to the music and then synthesizes the audio. The model is therefore not just a neural audio synthesizer, but an AI performer that learns to interpret a musical score in its own way. The code and sample outputs of the model can be found online at https://github.com/bwang514/PerformanceNet.
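As a rough illustration of the score-to-audio idea described above, the sketch below shows a toy convolutional mapping from a piano-roll score to a spectrogram-like output. This is a minimal sketch under stated assumptions: the class name, layer sizes, and single encoder-decoder stack are illustrative only and do not reproduce the actual PerformanceNet architecture, which is documented in the linked repository.

```python
# Minimal, illustrative sketch (NOT the actual PerformanceNet architecture):
# a 1-D convolutional encoder-decoder that maps a piano-roll score
# (pitch x time) to a spectrogram-like output (frequency x time),
# from which audio could then be synthesized (e.g. via Griffin-Lim).
import torch
import torch.nn as nn


class ScoreToSpectrogram(nn.Module):
    def __init__(self, n_pitches=128, n_freq_bins=1025, hidden=256):
        super().__init__()
        # Treat the pitch axis as input channels and convolve over time.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_pitches, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Decode to frequency bins; performance attributes such as velocity
        # changes are left to the learned mapping rather than given as input.
        self.decoder = nn.Conv1d(hidden, n_freq_bins, kernel_size=5, padding=2)

    def forward(self, pianoroll):
        # pianoroll: (batch, n_pitches, n_frames)
        # returns:   (batch, n_freq_bins, n_frames)
        return self.decoder(self.encoder(pianoroll))


# Example: a single short clip with one held note.
model = ScoreToSpectrogram()
score = torch.zeros(1, 128, 344)    # binary piano roll
score[0, 60, 50:150] = 1.0          # a held middle C
spec = model(score)                 # predicted magnitude spectrogram
print(spec.shape)                   # torch.Size([1, 1025, 344])
```

In this toy setup the pitch axis serves as the convolutional channel dimension so that the network convolves over time, which is one simple way to frame score-to-audio translation; the released PerformanceNet code should be consulted for the model actually demonstrated in the paper.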
Keywords:
AI: Human-Computer Interactive Systems
AI: Machine Learning