I'm trying to convert a MIDI file into a PyTorch Tensor, but muspy seems to be truncating the file.
The last note ends at timestep 759 (0-indexed). So, when I convert this Music object to a piano roll representation using muspy.to_representation(test_music, kind='piano-roll'), I get an array of length 760:
array([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]], shape=(760, 128), dtype=uint8)
But the original MIDI clip was 768 timesteps long (96 timesteps/beat * 8 beats = 768 timesteps).
For reference, I read the file in with test_music = muspy.read_midi('...').
How can I get muspy to give me a Tensor that is 768 timesteps long? This is important to my application, as the notes need to be placed in the context of the full beat lengths.
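As a workaround, I'm considering simply zero-padding the time axis up to the expected length myself. Here's a sketch using numpy only; the (760, 128) zeros array is a stand-in for muspy's actual output, and the resolution/beat values are the ones from my clip:

```python
import numpy as np

# Expected clip length: 96 timesteps/beat * 8 beats = 768 timesteps.
resolution = 96          # timesteps per beat
n_beats = 8
target_len = resolution * n_beats  # 768

# Stand-in for the (760, 128) piano roll muspy returns.
pianoroll = np.zeros((760, 128), dtype=np.uint8)

# Zero-pad along the time axis (axis 0) out to the full clip length.
pad = target_len - pianoroll.shape[0]
padded = np.pad(pianoroll, ((0, pad), (0, 0)))

print(padded.shape)  # (768, 128)
```

From there, torch.from_numpy(padded) would give me the Tensor. But this feels like I'm working around the library, so I'd still prefer a way to have muspy produce the full-length array directly.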