There has been tremendous interest in deep learning across many fields of study. Recently, these techniques have gained popularity in the field of music as well. Projects such as Magenta (Google Brain's music generation project) and Jukedeck testify to their potential.
While humans can rely on their intuitive understanding of musical patterns and the relationships between them, capturing and quantifying musical structure remains a challenging task for computers. Recently, researchers have attempted to use deep learning models to learn features and relationships that enable tasks such as music transcription, audio feature extraction, emotion recognition, music recommendation, and automated music generation.
With this workshop we aim to advance the state of the art in machine intelligence for music by bringing together researchers working at the intersection of music and deep learning. This will enable us to critically review and discuss cutting-edge research so as to identify grand challenges, effective methodologies, and potential new applications.
Papers and abstracts on the application of deep learning techniques to music are welcome, including but not limited to:
Deep learning applications for computational music research
Modeling hierarchical and long-term music structures using deep learning
Modeling ambiguity and preference in music
Software frameworks and tools for deep learning in music
Dates: May 18 – May 19, 2017