Revision as of 04:09, 5 May 2016
Musical Machine Learning
[[File:LoopBuddy1.0.PNG|200px]]
Overview
TODO List
- Large additions
    - Cluster an entire song in 1-second segments, then use a Gaussian KDE to smooth out the classification. This can then be used to mark actual "segments" of a song
    - Train a large deep neural network to automatically distinguish between parts
    - Composition
        - Create an LSTM recurrent neural network to learn from MIDI input
        - Combine with the Verse/Chorus algorithm to give songs more structure
        - Create a website where people can upload a MIDI file, then listen to the RNN improvise over it
- Small improvements to the Verse/Chorus system:
    - Use an SVM with a linear kernel (instead of RBF) as a better approximation of the "true" clustering
    - Find additional features beyond spectral centroid and zero-crossing rate
    - IPython notebook demo (https://ipython.org/notebook.html)
        - Experiment with how the features are generated and averaged
    - Include outlier detection in the data preprocessing stage
    - Optimize the different classifiers
    - Optimize song loading times (store in a database? an alternative format?)
    - Add support for more than two section types (e.g. a bridge)
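The KDE-smoothing idea in the list above could look roughly like the following sketch. The segment labels, class weighting, and bandwidth here are illustrative assumptions (and it presumes SciPy is available); real labels would come from clustering per-segment audio features.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical per-segment labels for a song cut into 1-second segments,
# e.g. raw k-means output over audio features (0 = verse, 1 = chorus).
# The short [1, 1] blip is the kind of noise smoothing should remove.
labels = np.array([0] * 10 + [1] * 2 + [0] * 3 + [1] * 10 + [0] * 10)
times = np.arange(len(labels), dtype=float)

def kde_smooth(labels, times, bw=0.3):
    """Re-label each segment with the class whose Gaussian KDE,
    fit over that class's segment times, is densest there."""
    classes = np.unique(labels)
    density = np.vstack([
        gaussian_kde(times[labels == c], bw_method=bw)(times)
        * np.mean(labels == c)              # weight by class frequency
        for c in classes
    ])
    return classes[np.argmax(density, axis=0)]

smoothed = kde_smooth(labels, times)
```

Contiguous runs in `smoothed` would then mark the candidate song segments.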
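The linear-vs-RBF kernel comparison could be prototyped as below. The toy Gaussian "verse"/"chorus" features and the scikit-learn usage are assumptions for illustration, not the project's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy 2-D "features" for two song sections; the real inputs would be
# per-segment audio features such as spectral centroid and ZCR.
verse = rng.normal(loc=(0.0, 0.0), scale=0.5, size=(50, 2))
chorus = rng.normal(loc=(3.0, 3.0), scale=0.5, size=(50, 2))
X = np.vstack([verse, chorus])
y = np.array([0] * 50 + [1] * 50)

linear = SVC(kernel="linear").fit(X, y)   # kernel proposed in the TODO
rbf = SVC(kernel="rbf").fit(X, y)         # current choice

linear_acc = linear.score(X, y)
rbf_acc = rbf.score(X, y)
```

On real data the two scores would be compared with held-out segments rather than training accuracy.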
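For reference, the two existing features named above, spectral centroid and zero-crossing rate, can be computed with plain NumPy; the sample rate and test tone below are arbitrary choices for illustration.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    signs = np.sign(frame)
    return float(np.mean(signs[:-1] != signs[1:]))

def spectral_centroid(frame, sr):
    """Magnitude-weighted mean frequency (Hz) of the frame's spectrum."""
    mags = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float(np.sum(freqs * mags) / np.sum(mags))

sr = 8000
t = np.arange(sr) / sr                  # one second of audio
tone = np.sin(2 * np.pi * 440.0 * t)    # pure 440 Hz sine

zcr = zero_crossing_rate(tone)          # ~2 * 440 / 8000 per sample
centroid = spectral_centroid(tone, sr)  # ~440 Hz for a pure tone
```

Additional features (e.g. spectral rolloff or MFCCs) would slot into the same per-frame pattern.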
Help Connecting to Repository
All files for this project are stored in the secured repository below. Contact Manzo for access once you've made an account on the Git server (see Help).
Main project repository address:
http://solar-10.wpi.edu/ModalObjectLibrary/MachineLearning [git@solar-10.wpi.edu:ModalObjectLibrary/MachineLearning.git]
WPI Student Contributors
2016
Nicholas S. Bradford