=== Sight to Sound ===

This is a sight-to-sound application: it takes a camera input and outputs a spectrum of audio frequencies. The creative task is to choose a mapping from 2D pixel-space to 1D frequency-space in a way that could be meaningful to the listener. Of course, it would take someone a long time to relearn their sight through sound, but the purpose of this project is just to ''implement'' the software.

Used here, the mapping from pixels to frequencies is the Hilbert Curve. This mapping is desirable for two reasons. First, as the camera dimensions increase, each point on the curve tends toward a fixed location, so raising the dimension level gives a better approximation of the camera data, which becomes "higher resolution sound" in terms of audio-sight. Second, the Hilbert Curve guarantees that pixels near each other in pixel-space are assigned frequencies near each other in frequency-space. Because it preserves both of these intuitions of sight, the Hilbert Curve is an excellent choice of mapping for this software.
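
The conversion itself lives in the patch's JavaScript files, which are not reproduced on this page. As a reference, here is a minimal sketch of the standard pixel-to-curve conversion, assuming a square n-by-n grid where n is a power of two; the names ''xy2d'' and ''rot'' follow the textbook presentation of the algorithm rather than the patch's own code.

<pre>
// Rotate/flip a quadrant so the curve's orientation stays consistent
// between recursion levels (standard helper for the Hilbert Curve).
function rot(n, p, rx, ry) {
    if (ry === 0) {
        if (rx === 1) {
            p.x = n - 1 - p.x;
            p.y = n - 1 - p.y;
        }
        var t = p.x; // swap x and y
        p.x = p.y;
        p.y = t;
    }
}

// Map a pixel (x, y) on an n-by-n grid (n a power of two) to its
// distance d along the Hilbert Curve, from first to last when unraveled.
function xy2d(n, x, y) {
    var d = 0;
    var p = { x: x, y: y };
    for (var s = n / 2; s >= 1; s /= 2) {
        var rx = (p.x & s) > 0 ? 1 : 0;
        var ry = (p.y & s) > 0 ? 1 : 0;
        d += s * s * ((3 * rx) ^ ry);
        rot(n, p, rx, ry);
    }
    return d;
}

// Nearby pixels get nearby curve positions, e.g. on a 4x4 grid:
// xy2d(4, 0, 0) === 0, xy2d(4, 1, 0) === 1,
// xy2d(4, 1, 1) === 2, xy2d(4, 0, 1) === 3.
</pre>
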
The video below demonstrates the concept in Max. For a better understanding, check out this video by YouTube animator ''3Blue1Brown'': [https://www.youtube.com/watch?v=3s7h2MHQtxc '''link''']
  
  
<htmltag tagname="iframe" id="ensembleEmbeddedContent_aELPpo0eJEiNwcKrpzACnQ" src="https://video.wpi.edu/hapi/v1/contents/a6cf4268-1e8d-4824-8dc1-c2aba730029d/plugin?embedAsThumbnail=false&displayTitle=false&startTime=0&autoPlay=false&hideControls=true&showCaptions=false&width=460&height=360&displaySharing=false&displayAnnotations=false&displayAttachments=false&displayLinks=false&displayEmbedCode=false&displayDownloadIcon=false&displayMetaData=false&displayCredits=false&displayCaptionSearch=false&audioPreviewImage=false&displayViewersReport=false" title="Ben Gobler Final Project" frameborder="0" height="360" width="460" allowfullscreen></htmltag>
+
<htmltag tagname="iframe" id="ensembleEmbeddedContent_aELPpo0eJEiNwcKrpzACnQ" src="https://video.wpi.edu/hapi/v1/contents/a6cf4268-1e8d-4824-8dc1-c2aba730029d/plugin?embedAsThumbnail=false&displayTitle=false&startTime=0&autoPlay=false&hideControls=true&showCaptions=false&width=276&height=216&displaySharing=false&displayAnnotations=false&displayAttachments=false&displayLinks=false&displayEmbedCode=false&displayDownloadIcon=false&displayMetaData=false&displayCredits=false&displayCaptionSearch=false&audioPreviewImage=false&displayViewersReport=false" title="Ben Gobler Final Project" frameborder="0" height="216" width="276" allowfullscreen></htmltag>
== Here is a walkthrough of the code: ==
  
 
{|style="margin: 0 auto;"
 
{|style="margin: 0 auto;"
| [[File:Main.png|300px|thumb|This is the main patcher. It looks complicated, but there are only two primary paths running.]]
+
| [[File:img_spacer.png|100px|frameless|]]
 +
| [[File:Main.png|350px|thumb|This is the main patcher. It looks complicated, but there are only two primary paths running.]]
 
|}
 
|}
 
{|style="margin: 0 auto;"
| [[File:Mode.png|350px|thumb|This path controls the current mode. The left branch runs the demo mode, and the right branch runs the camera mode. ''p inv_toggle'' acts as a router between 0 and 1, but with toggles (a sketch of this behavior follows below this table).]]
| [[File:img_spacer.png|100px|frameless|]]
| [[File:Hilbert.png|350px|thumb|This path controls the Hilbert Curves and the sliders for each dimension level.]]
|}
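
The contents of ''p inv_toggle'' are not shown here, so the following is only a guess at its behavior in plain JavaScript: a single 0/1 mode value drives two mutually exclusive toggles, so exactly one branch runs at a time. ''invToggle'' is a hypothetical name, not an object from the patch.

<pre>
// A guess at the routing behavior of p inv_toggle: one 0/1 mode value
// drives two mutually exclusive toggles, so flipping the mode turns
// one branch on and the other off.
function invToggle(mode) {
    return {
        demo: mode === 0 ? 1 : 0,   // demo branch toggle
        camera: mode === 1 ? 1 : 0  // camera branch toggle
    };
}

// invToggle(0) -> { demo: 1, camera: 0 }
// invToggle(1) -> { demo: 0, camera: 1 }
</pre>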
 
{|style="margin: 0 auto;"
| [[File:Demo.png|300px|thumb|This subpatcher ''p demo'' runs the demo mode. The left half outputs the selected frequency, and the right half generates the matrix output for the video screen.]]
| [[File:img_spacer.png|50px|frameless|]]
| [[File:Data_sort.png|100px|thumb|This subpatcher ''p data_sort'' begins running the camera mode.]]
| [[File:img_spacer.png|100px|frameless|]]
| [[File:Hilbert_setup.png|300px|thumb|This subpatcher ''p hilbert'' loads the proper Hilbert Curve and slider images from image files and sends them on.]]
| [[File:img_spacer.png|100px|frameless|]]
|}
 
{|style="margin: 0 auto;"
| [[File:Demo_audio.png|200px|thumb|This half of ''p demo'' uses the place along the Hilbert Curve (also the slider value) to calculate and output that point's frequency (a hypothetical version of this mapping is sketched below this table).]]
| [[File:img_spacer.png|50px|frameless|]]
| [[File:Hilbert_to_Point.png|200px|thumb|This half of ''p demo'' uses JavaScript to take the place along the Hilbert Curve (also the slider value) and output the xy-coordinates of the corresponding pixel. Then, it generates a matrix of black pixels with a white pixel at that point (x, y). The curve-to-pixel conversion is sketched below this table.]]
| [[File:img_spacer.png|50px|frameless|]]
| [[File:Point_to_Hilbert.png|100px|thumb|''jit.iter'' splits the camera data into pixel coordinates and their brightness values. The JavaScript file takes each pixel's xy-coordinates and outputs that pixel's place on the Hilbert Curve (from first to last, when unraveled), as in the ''xy2d'' sketch earlier on this page.]]
| [[File:img_spacer.png|50px|frameless|]]
| [[File:Image_selection.png|300px|thumb|The dimension level comes in as input and is routed to select the corresponding Hilbert Curve and slider image (a hypothetical sketch follows below this table).]]
|}
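
The exact frequency formula inside ''p demo'' is not legible from the screenshots, so the following is only a hypothetical version of the mapping: it spreads the curve positions logarithmically across an assumed range. The function name ''indexToFrequency'' and the bounds ''fLow'' and ''fHigh'' are made up for illustration and are not values from the patch.

<pre>
// Hypothetical mapping from a place on the Hilbert Curve to a frequency.
// Positions are spread logarithmically (equal musical steps) between
// the assumed bounds fLow and fHigh; the patch's actual formula may differ.
function indexToFrequency(d, total, fLow, fHigh) {
    var t = d / (total - 1);                 // normalize place to [0, 1]
    return fLow * Math.pow(fHigh / fLow, t); // log-spaced interpolation
}

// Example on a 16x16 grid (256 curve positions), assuming 100 Hz..8 kHz:
// indexToFrequency(0, 256, 100, 8000)   -> 100 Hz (first point on the curve)
// indexToFrequency(255, 256, 100, 8000) -> 8000 Hz (last point on the curve)
</pre>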
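
''Hilbert_to_Point'' runs the conversion in the opposite direction. Here is a sketch of the standard curve-to-pixel algorithm (it reuses ''rot'' from the ''xy2d'' sketch earlier on this page), together with a plain-array stand-in for the black frame with one white pixel; the actual patch builds that frame with ''jit.matrix'' objects rather than nested arrays, and ''frameForPlace'' is a hypothetical name.

<pre>
// Map a distance d along the Hilbert Curve back to the pixel (x, y)
// on an n-by-n grid (n a power of two). Reuses rot() from the xy2d sketch.
function d2xy(n, d) {
    var p = { x: 0, y: 0 };
    var t = d;
    for (var s = 1; s < n; s *= 2) {
        var rx = 1 & Math.floor(t / 2);
        var ry = 1 & (t ^ rx);
        rot(s, p, rx, ry);
        p.x += s * rx;
        p.y += s * ry;
        t = Math.floor(t / 4);
    }
    return p;
}

// Plain-array stand-in for the video output: a black n-by-n frame with
// a single white pixel at the point the curve place selects.
function frameForPlace(n, d) {
    var p = d2xy(n, d);
    var frame = [];
    for (var y = 0; y < n; y++) {
        var row = [];
        for (var x = 0; x < n; x++) row.push(0); // black pixel
        frame.push(row);
    }
    frame[p.y][p.x] = 255; // white pixel at (x, y)
    return frame;
}
</pre>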
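
For the image selection, here is a guess at the routing described in the last caption: the dimension level picks out a pair of image files. The function and file names are made up for illustration; the patch's actual file names are not shown.

<pre>
// Hypothetical version of the image selection: route a dimension level
// to the Hilbert Curve and slider images for that level.
function imagesForLevel(level) {
    return {
        curve: "hilbert_" + level + ".png",  // assumed file naming
        slider: "slider_" + level + ".png"   // assumed file naming
    };
}

// imagesForLevel(3) -> { curve: "hilbert_3.png", slider: "slider_3.png" }
</pre>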
