=Musical Machine Learning=
==Overview==

==TODO List==

*Use an SVM with a linear kernel (instead of RBF) as a better approximation of "true" clustering
*Cluster an entire song in 1-second segments, then use a Gaussian KDE to smooth the classifications; this can then be used to mark "segments" of a song
*Create a website where people can upload a MIDI file and then listen to an RNN improvise over it
*IPython notebook demo
*Verse/Chorus system:
**Find additional features beyond spectral centroid and zero-crossing rate
***Experiment with how the features are generated and averaged
**Include outlier detection in the data preprocessing stage
***(http://scikit-learn.org/stable/modules/outlier_detection.html)
**Optimize the different classifiers
**Optimize song loading times (store in a database? in an alternative format?)
**Add the option of multiple section types (bridge?)
*Expansions:
**Train a large deep neural network to automatically distinguish between song parts
*Composition:
**Create an LSTM recurrent neural network to learn from MIDI input
**Combine with the Verse/Chorus work to give generated songs more structure
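The first two items above can be sketched together. This is an illustrative Python sketch only: the two-feature data is synthetic (the project's real features come from audio), and a Gaussian filter over the per-second predictions stands in for the Gaussian-KDE smoothing step.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic 2-D features (stand-ins for e.g. spectral centroid and
# zero-crossing rate) for 60 one-second training segments:
# verse-like (label 0) and chorus-like (label 1).
verse = rng.normal(loc=[0.3, 0.2], scale=0.1, size=(30, 2))
chorus = rng.normal(loc=[0.7, 0.6], scale=0.1, size=(30, 2))
X = np.vstack([verse, chorus])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel="linear")  # linear kernel instead of RBF
clf.fit(X, y)

# A new "song": 20 s verse, 20 s chorus, 20 s verse.
song = np.vstack([
    rng.normal([0.3, 0.2], 0.1, (20, 2)),
    rng.normal([0.7, 0.6], 0.1, (20, 2)),
    rng.normal([0.3, 0.2], 0.1, (20, 2)),
])
raw = clf.predict(song).astype(float)

# Gaussian smoothing over time stands in for the KDE step: isolated
# misclassified seconds are absorbed into the surrounding section.
smooth = (gaussian_filter1d(raw, sigma=2.0) > 0.5).astype(int)

# Mark a section boundary wherever the smoothed label changes.
boundaries = np.flatnonzero(np.diff(smooth)) + 1
print(boundaries)  # two boundaries, near seconds 20 and 40
```

The smoothing width (<code>sigma</code>, in seconds) trades responsiveness against stability: larger values suppress more spurious label flips but blur short sections.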
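The two features the Verse/Chorus system already uses can be computed per frame with NumPy alone, and the linked scikit-learn module offers several outlier detectors for the preprocessing step. The sketch below is a sketch under stated assumptions: the sample rate, sine-wave test frames, and function names are illustrative (not from the project repository), and IsolationForest is just one option from that module.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

SR = 22050  # assumed sample rate, not the project's setting

def spectral_centroid(frame, sr=SR):
    """Magnitude-weighted mean frequency of the frame's spectrum."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose sign differs."""
    signs = np.sign(frame)
    return float(np.mean(signs[:-1] != signs[1:]))

# Two synthetic 1-second frames: a low tone and a much brighter one.
t = np.arange(SR) / SR
low = np.sin(2 * np.pi * 220 * t)
high = np.sin(2 * np.pi * 1760 * t)
feats = np.array([[spectral_centroid(f), zero_crossing_rate(f)]
                  for f in (low, high)])

# Outlier detection during preprocessing: flag corrupted segments
# before they reach the classifier.
rng = np.random.default_rng(1)
segments = rng.normal([440.0, 0.02], [30.0, 0.005], size=(100, 2))
segments[0] = [4000.0, 0.5]  # deliberately corrupted segment
mask = IsolationForest(random_state=0).fit_predict(segments)  # -1 = outlier
print(mask[0])
```

Both features rise with "brightness", which is why averaging them over a section can separate a sparse verse from a dense chorus; the extra features the TODO calls for would extend the per-frame vector the same way.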
==Help Connecting to Repository==
All files for this project are stored in the secured repository below. Contact Manzo for access once you've made an account on the Git server (see Help).
<br>
Main project repository address:
<br>
http://solar-10.wpi.edu/ModalObjectLibrary/MachineLearning [git@solar-10.wpi.edu:ModalObjectLibrary/MachineLearning.git]

==WPI Student Contributors==
===2016===
Nicholas S. Bradford
<br>
[[Category: Advisor:Manzo]][[Category:Interactive Systems]]
<!--[[Category:Featured]]-->
Revision as of 04:03, 5 May 2016