0:03
So welcome to module five of the course.
Here we're gonna take all the basics that we learned about Python programming and
the EarSketch API in module four and
start looking at how we can use musical algorithms to
make music from fundamentally different perspectives.
And the first perspective I wanna look at that from is data sonification.
This is actually gonna take everything we learned with setEffect
at the end of the last module and take it to a new level:
using data from another source, in this case weather data,
to drive those effects envelopes and ramps that we want to create.
0:41
The field of data sonification, very simply put,
is about using sound to express some kind of non-musical data.
So what that data is, it could be anything.
It could be the structure of molecules,
it could be the amount of radiation that's present in a room.
It could be the amount of network traffic that a server is processing.
It could be the weather.
It could be stock market values,
it could be the pixel data in an image.
It could be just about anything you imagine.
And the reason that sonification has become an important research field within
music technology in the last couple of decades really comes down to two things.
One is that it can be a very powerful creative tool,
particularly in the context of algorithmic composition:
being able to take data from one domain and
use it to form the structural basis of a piece of music.
And so a lot of musicians and
artists have been drawn to this because they want to take data
that is meaningful to them in some way and use it to structure their work.
Use it as the foundation of their music.
The other is that there are a lot of
practical applications of data sonification.
For instance, there might be some types of data,
like very high-dimensional data, that can't really be expressed
visually very well with a two-dimensional or even a three-dimensional representation,
or maybe even a four-dimensional representation,
that is, 3D video changing over time.
There also might be cases where it's not practical to
look at a visualization of data, but it is practical to hear it.
People who have various kinds of vision impairments
2:33
can't just look at a chart, but they might be able to hear that data.
There are also people in situations where they can't be looking at
something all the time.
They wanna monitor something in the background while they're working,
or there might be a fighter pilot trying to monitor a lot of different
gauges and displays in the cockpit all at the same time,
all while also keeping an eye on where they're going.
That can get very tricky, so if some of those things can be monitored through
sound, that might help in situations where
we can't devote our full visual attention to monitoring something.
So there's a large research community that ponders these questions.
I wanted to just look at a simple application of data sonification here, and
I wanted to work with weather data.
So what I did was I looked at some historical weather data
from San Francisco.
3:25
Looking at the maximum temperature each day,
in Celsius, over a two-year period.
So there are about 700 data points here.
I couldn't put them all on the slide, so I just put the first 10 or
20 on the slide here.
Those 700 or so data points represent
the daily maximum temperatures over two years.
So you can kind of see how this changes day to day and also with the seasons, and
try to represent it somehow in sound.
So I put this all into a Python list.
I called it weatherData.
You can see the opening bracket and the closing bracket,
and each of these floating point numbers represents one daily high, and
they're separated by commas.
This list, in reality, goes on and on for 700 items;
I just put the short version here.
And what I want to do is map that onto the amount of pitch shifting that I'm doing.
So I want to be able to have each of these points in my envelope
mapped to a successive one of these high temperatures.
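As a rough sketch, the start of that declaration might look like the lines below; the values shown are just the first few daily highs mentioned in this lesson, and the real list continues for roughly 700 entries.

    # Shortened, illustrative version of the list: each entry is one day's
    # maximum temperature in Celsius. The real list has roughly 700 entries.
    weatherData = [13.3, 9.4, 10.0, 7.2, 8.9]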
And so this is fairly straightforward to do, but
we do need to look at a few new functions that we haven't seen before.
These are Python functions for working with lists.
4:41
max() of a list will return the maximum value in that list,
min() will return the minimum value,
and len() will tell us how many items are in the list.
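As a quick illustration of those three functions, using a small made-up list rather than the real weather data:

    sample = [13.3, 9.4, 10.0, 7.2, 8.9]
    print(max(sample))  # 13.3 -- the largest value in the list
    print(min(sample))  # 7.2  -- the smallest value in the list
    print(len(sample))  # 5    -- the number of items in the list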
So if we go ahead and look at this code, it looks something like this.
I'm importing EarSketch like always,
and then I'm declaring this weatherData list.
Then I call init to create a new project,
and I'm setting my tempo to 140 BPM.
I'm finding a sound here that I want to process this way, and
I'm putting it on track one for eight measures of music,
starting at measure one and ending at the beginning of measure nine.
Then I need to figure out, because I don't know offhand,
what the lowest temperature in San Francisco was across these two years,
and what the highest temperature was.
So what I know is that I want to map this data so that
the absolute lowest temperature is going to be no pitch shift, so
a value of zero, and the highest temperature across that whole time
is going to go up plus 12 semitones, that is, transposed up a full octave.
But before I can do that mapping mathematically,
I need to understand what were the minimum and maximum temperatures in my list.
And that's what these two lines calculate: they use the max and
min functions to figure out what those max and min values are.
And I also need to figure something else out. I know that I placed
my sound to last eight measures here, from measure one to measure nine.
6:05
But I don't know how many points that means that I need in my envelope.
So what I do then is I take the 8.0 measures that I need to fill and
the number of items I have in my weatherData list,
and I just do simple division to figure out how much space sits between each of
them, how close together the points in my envelope need to be.
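A minimal sketch of those two steps, assuming the weatherData list declared earlier and the eight-measure region from measure one to measure nine (the variable names are just for illustration):

    minTemp = min(weatherData)         # lowest daily high across the whole period
    maxTemp = max(weatherData)         # highest daily high across the whole period
    stepSize = 8.0 / len(weatherData)  # measures between successive envelope points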
And then I do a for loop.
It goes around once for each item in my list: for i in range(len(weatherData)).
So what this is doing is: the first time, i will equal zero,
the next time it will be one, the next time it will be two,
the next time it will be three, and so on and so forth,
until it's gotten to the length of that entire list and it's gone through every item.
So then I load weatherData[i],
so that gets me 13.3, then it's going to get me 9.4, then 10,
then 7.2, then 8.9, and so forth.
Each time through the loop, temperature takes on the next item in my list.
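In code, that looping-and-indexing pattern looks roughly like this, again assuming the weatherData list from earlier:

    for i in range(len(weatherData)):
        temperature = weatherData[i]  # i counts 0, 1, 2, ... so this visits every daily high in order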
And then I calculate how much I need to shift the pitch by.
So I know I want a number somewhere between 0 and 12.
So I'm gonna get a number between 0 and 1 to start,
and then I'm gonna multiply it by 12 to get it into that range.
And to get that number between zero and
one, I take my temperature minus the minimum temp,
so how far we are above the minimum, and divide that by the entire range.
So if my minimum were 5 and
my maximum were 10, and
my current temperature were 9,
then we would get (9 - 5) / (10 - 5),
which is 4 / 5, or 0.8.
So that's telling us we're about 80% of the way up
from our min to our max.
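Written out as code inside that loop, the mapping might look like this, with minTemp and maxTemp as computed above:

    # Scale the temperature into the range 0..1, then stretch it to 0..12 semitones.
    pitchShiftAmount = (temperature - minTemp) / (maxTemp - minTemp) * 12.0
    # Example from the lesson: min 5, max 10, current 9 -> (9 - 5) / (10 - 5) = 0.8,
    # which then becomes 0.8 * 12 = 9.6 semitones of upward pitch shift.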
And then finally I just call setEffect.
I know I want it on track one.
The effect I want is PITCHSHIFT,
the parameter I want to control is PITCHSHIFT_SHIFT,
and the amount I want to apply is this pitchShiftAmount I just calculated.
And then, where do I actually want to put this thing?
Well, I know I'm starting at measure one, so it's gonna be one plus something.
And that plus something is the step size,
how far apart each of these data points is, times i.
So the first time through, this is gonna be at one.
The next time through it's gonna be at one plus the step size,
then one plus two times the step size,
then plus three times the step size, and so on and so forth,
until that point gets all the way to measure nine.
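Putting all of those pieces together, the whole script looks roughly like the sketch below. The sound constant is a placeholder (any pad-like sound from the EarSketch sound browser would do), and the weatherData list is shortened for readability; otherwise the structure follows what was just described.

    from earsketch import *

    # Shortened, illustrative data; the real script uses the full list of ~700 daily highs.
    weatherData = [13.3, 9.4, 10.0, 7.2, 8.9]

    init()
    setTempo(140)

    # Placeholder sound constant -- substitute a real pad sound from the sound browser.
    padSound = YG_NEW_FUNK_ELECTRIC_PIANO_4
    fitMedia(padSound, 1, 1, 9)  # track 1, from measure 1 to the start of measure 9

    minTemp = min(weatherData)         # lowest daily high in the data
    maxTemp = max(weatherData)         # highest daily high in the data
    stepSize = 8.0 / len(weatherData)  # spacing of envelope points across the 8 measures

    for i in range(len(weatherData)):
        temperature = weatherData[i]
        # Map this day's high onto 0..12 semitones of upward pitch shift.
        pitchShiftAmount = (temperature - minTemp) / (maxTemp - minTemp) * 12.0
        setEffect(1, PITCHSHIFT, PITCHSHIFT_SHIFT, pitchShiftAmount, 1 + i * stepSize)

    finish()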
So here we are in EarSketch.
We have the exact same script that I just reviewed on the slide;
there's just one small difference, which is that it has the whole set of
weather data in its list here instead of just a few sample points.
So you can see how many data points we have to plot there.
A lot of data points.
And when I run that, it shows up like this in EarSketch, and
you can see those data points literally getting plotted
as the effect envelope here for the pitch shifting.
10:06
And something I really like about this example is that you really can hear
the data in those pitch shifts.
You can hear the seasonal changes as the general pitch is getting higher or lower.
But you can also hear these micro-changes from day to day.
Because temperature doesn't just change upwards or downwards
for weeks and weeks at a time.
It always varies a little bit from day to day, within seasons or
within months or within weeks.
So we're hearing both those micro-variations and
those large-scale variations and
they're really coming through clearly through these changes in the pitch-shift.
I also like how it really radically transforms the original sound here.
It really sounds, to me, almost like Flight of the Bumblebee or
something, which is very different from the original,
kind of smooth pad sounds that we started with.