Translating Melody to Chord: Structured and Flexible Harmonization of Melody With Transformer
Blog Article
Recent deep learning approaches for melody harmonization have achieved remarkable performance by overcoming the uneven chord distributions of music data. However, most of these approaches have not attempted to capture the original melodic structure or to generate structured chord sequences with appropriate rhythms. Hence, we use a Transformer-based architecture that directly maps lower-level melody notes into a semantically higher-level chord sequence. In particular, we encode the binary piano roll of a melody into a note-based representation.
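To make the piano-roll-to-note encoding concrete, here is a minimal sketch of how a binary piano roll (time frames by 128 MIDI pitches) can be collapsed into note events of pitch, onset, and duration. The function name, frame resolution, and tuple layout are illustrative assumptions, not the paper's exact representation.

```python
import numpy as np

def piano_roll_to_notes(roll):
    """Convert a binary piano roll (time_steps x 128 pitches) into a list of
    (pitch, onset_frame, duration_frames) note events.
    Illustrative helper; the paper's actual encoding may differ."""
    notes = []
    time_steps, num_pitches = roll.shape
    for pitch in range(num_pitches):
        t = 0
        while t < time_steps:
            if roll[t, pitch]:                      # note onset found
                start = t
                while t < time_steps and roll[t, pitch]:
                    t += 1                          # extend while the note stays active
                notes.append((pitch, start, t - start))
            else:
                t += 1
    # Sort by onset so the sequence reads in musical (left-to-right) order
    return sorted(notes, key=lambda n: (n[1], n[0]))

# Example: a 16-frame monophonic melody with two notes (C4 then E4)
roll = np.zeros((16, 128), dtype=np.int8)
roll[0:4, 60] = 1   # C4 for four frames
roll[4:8, 64] = 1   # E4 for four frames
print(piano_roll_to_notes(roll))   # [(60, 0, 4), (64, 4, 4)]
```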
Furthermore, we address the flexible generation of various chords with a Transformer expanded into a VAE framework. We propose three Transformer-based melody harmonization models: 1) the standard Transformer-based model for the neural translation of a melody to chords (STHarm); 2) the variational Transformer-based model for learning the global representation of complete music (VTHarm); and 3) the regularized variational Transformer-based model for the controllable generation of chords (rVTHarm). Experimental results demonstrate that the proposed models generate more structured and diverse chord sequences than LSTM-based models.
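As a rough illustration of combining a melody-to-chord Transformer with a VAE-style global latent (the idea behind VTHarm), the sketch below builds an encoder-decoder Transformer whose pooled melody encoding parameterizes a Gaussian latent that conditions the chord decoder. Layer sizes, the pooling scheme, and how the latent is injected are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class VariationalTransformerHarmonizer(nn.Module):
    """Minimal sketch of a melody-to-chord Transformer with a VAE-style
    global latent. Hyperparameters and latent injection are assumptions."""
    def __init__(self, n_melody_tokens, n_chord_tokens, d_model=256, z_dim=64):
        super().__init__()
        self.melody_emb = nn.Embedding(n_melody_tokens, d_model)
        self.chord_emb = nn.Embedding(n_chord_tokens, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=3, num_decoder_layers=3,
            batch_first=True)
        # Pooled encoder output -> Gaussian latent (mean, log-variance)
        self.to_mu = nn.Linear(d_model, z_dim)
        self.to_logvar = nn.Linear(d_model, z_dim)
        self.z_proj = nn.Linear(z_dim, d_model)
        self.out = nn.Linear(d_model, n_chord_tokens)

    def forward(self, melody, chords_in):
        # Encode the note-based melody sequence
        memory = self.transformer.encoder(self.melody_emb(melody))
        pooled = memory.mean(dim=1)                          # global summary of the melody
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        # Condition the chord decoder by adding the latent to the encoder memory
        memory = memory + self.z_proj(z).unsqueeze(1)
        dec = self.transformer.decoder(self.chord_emb(chords_in), memory)
        # Return chord logits plus (mu, logvar) so a KL term can be added to the loss;
        # causal masking of the decoder is omitted here for brevity.
        return self.out(dec), mu, logvar
```

Sampling different latents z at inference time is what allows such a model to produce diverse harmonizations of the same melody, and a regularized variant (in the spirit of rVTHarm) would additionally tie latent dimensions to interpretable chord attributes for controllable generation.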