Deep Learning for Style Transfer and Experimentation with Audio Effects and Music Creation
DOI:
https://doi.org/10.1609/aaai.v38i21.30558
Keywords:
Deep Learning, Natural Language Processing, Music And AI, Generative AI, Digital Signal Processing
Abstract
Recent advancements in deep learning have the potential to transform the process of writing and creating music. Models capable of capturing and analyzing higher-level representations of music and audio stand to change the field of digital signal processing. In this statement, I propose a set of Music+AI methods that serve to assist with the writing of melodies, the modelling and transferring of timbres, the application of a wide variety of audio effects (including research into experimental audio effects), and the production of audio samples using style transfer. Writing and producing music is a tedious task that is notably difficult to become proficient in, as many tools for creating music both cost significant sums of money and require long-term commitments to study. An all-encompassing framework for music processing would make the process far more accessible and simple, and would allow human art to advance alongside technology.
Downloads
Published
2024-03-24
How to Cite
Tur, A. (2024). Deep Learning for Style Transfer and Experimentation with Audio Effects and Music Creation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23766-23767. https://doi.org/10.1609/aaai.v38i21.30558
Issue
Section
AAAI Undergraduate Consortium