Code and media samples for the paper *Automated Music Generation for Visual Art through Emotion*.
| Input | ![]() | ![]() | ![]() | ![]() |
| --- | --- | --- | --- | --- |
| Output | midi | midi | midi | midi |

| Input | ![]() | ![]() | ![]() | ![]() |
| --- | --- | --- | --- | --- |
| Output | midi | midi | midi | midi |

| Input | ![]() | ![]() | ![]() | ![]() |
| --- | --- | --- | --- | --- |
| Output | midi | midi | midi | midi |

| Input | ![]() | ![]() |
| --- | --- |
| Output | midi | midi |
Sources of the images: https://www.imageemotion.org/
Dependencies: Python 3.7, CUDA 10, and MuseScore.

The remaining Python dependencies can be installed with:

```shell
pip install -r requirements.txt
```
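Before installing, it can help to confirm the interpreter matches the Python 3.7 target above. The check below is a minimal sketch, not part of the repository:

```python
import sys

def python_version_ok(required=(3, 7)):
    """Return True when the running interpreter is at least `required`.

    Note: CUDA 10 and MuseScore availability must be checked separately.
    """
    return sys.version_info[:2] >= required

print("Python check:", "OK" if python_version_ok() else "upgrade needed")
```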
The full dataset can be downloaded from https://www.cs.rochester.edu/u/qyou/deepemotion/.
```shell
cd rnn
python preprocess.py ../dataset/midi/train ../dataset/midi/rnn  # preprocess data
python train.py -s ../model/rnn_example.sess -d ../dataset/midi/rnn -i 10  # train model
python generate.py -i ../dataset/image/test/1.jpg  # generate music from a given image
```

The implementation of Performance RNN is modified from https://github.com/djosix/Performance-RNN-PyTorch.
```shell
cd transformer
python preprocess.py  # preprocess data
python train.py  # train model
python generate.py -i ../dataset/image/test/1.jpg  # generate music from a given image
```

The implementation of the transformer is modified from https://github.com/bearpelican/musicautobot.
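Both pipelines follow the same high-level flow the paper describes: predict an emotion from the input image, then condition music generation on that emotion. The sketch below is purely illustrative; `classify`, `compose`, and the label set are hypothetical stand-ins, not the repository's API:

```python
# Hypothetical label set and pipeline glue, for illustration only.
EMOTIONS = ["amusement", "anger", "awe", "contentment",
            "disgust", "excitement", "fear", "sadness"]

def generate_music(image_path, classify, compose):
    """classify: image path -> emotion label; compose: label -> MIDI bytes."""
    emotion = classify(image_path)
    if emotion not in EMOTIONS:
        raise ValueError(f"unexpected emotion label: {emotion}")
    return compose(emotion)
```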