The exercises build upon the Preparation Course Python (PCP) and the Fundamentals of Music Processing (FMP) notebooks. You are required to run the PCP notebook environment and the FMP notebook environment locally on your computer. Follow the links to learn how to set them up and run Jupyter notebooks in your browser.
Some exercise tasks will link directly to the PCP and FMP notebooks. In any case, these notebooks are one of the best learning resources for this course! You are always encouraged to play around with different parameters, other input files, etc.
Proficiency in Python and NumPy is required for the entire exercise class. You should be confident in applying the concepts presented in Units 1 to 5 of PCP.
Before starting this exercise, you should work through the Lecture on Music Representations. In particular, you should be able to answer the following questions:
To refresh your knowledge of Python functions and basic use of NumPy, complete Exercises 1, 2, and 3 from PCP Unit 4.
import numpy as np
############
# Ex. 1
# <your solution goes here>
# test case
print(give_me_a_number('random'))
############
# Ex. 2
# <your solution goes here>
# test case
A_test = np.array([
    [1, 2, 6],
    [5, 5, 2]
])
print(row_mean(A_test))
############
# Ex. 3
# <your solution goes here>
# test case
x_1 = np.arange(10)
print(vector_odd_index(x_1))
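If you want to sanity-check your solutions, here is one possible sketch. The authoritative task descriptions are in PCP Unit 4; the behavior below (especially for `give_me_a_number()`) is only inferred from the test calls above, so treat it as an assumption, not the official solution.

```python
import numpy as np

def give_me_a_number(kind):
    # Assumed behavior, inferred only from the test call above;
    # the actual specification is in PCP Unit 4.
    if kind == 'random':
        return float(np.random.rand())
    return 0.0

def row_mean(A):
    # Mean of each row of a 2D array, i.e., averaging across the columns.
    return np.mean(A, axis=1)

def vector_odd_index(x):
    # Elements of x located at the odd indices 1, 3, 5, ...
    return x[1::2]
```

With the test inputs above, `row_mean` yields `[3. 4.]` and `vector_odd_index` yields `[1 3 5 7 9]`.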
In this task, we will implement a class that bundles different representations of pitch. We want to be able to freely convert between a string representation (scientific pitch notation), the MIDI pitch, and the frequency in Hz.
Your task is to complete the functions _string2midi(), get_freq(), get_midi(), and get_string() in the Note class below. You can use the FMP Notebooks on Frequency and Pitch and Musical Notes and Pitches for inspiration. For _string2midi() and get_string(), you can make use of the self._names list. For example, self._names.index("E") returns the position of "E" in the list (4).

class Note:
"""Representation of a musical note with a single pitch
"""
_names = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
def __init__(self, note_string, ref_A4=440.):
"""Initialize a new note
Arguments
=========
note_string: str
name of the tone in format "X[#/b]N" where X is a letter from A to G and N is the octave number (e.g. A3 or C#5)
ref_A4: float
reference frequency for A4 in Hz (default: 440 Hz)
"""
self.midi_pitch = self._string2midi(note_string)
self.ref_A4 = ref_A4
def _string2midi(self, note_string):
"""Helper function to convert a note string into a MIDI pitch (A4 = 69)
Arguments
=========
note_string: str
name of the tone in format "X[#/b]N" where X is a letter from A to G and N is the octave number (e.g. A3 or C#5)
Returns
=======
midi_pitch: float
"""
# <your solution goes here>
def get_freq(self):
"""Returns the frequency of the tone in Hz
"""
# <your solution goes here>
def get_midi(self):
"""Returns the MIDI pitch of the tone (A4 = 69)
"""
# <your solution goes here>
def get_string(self):
"""Returns the note string representation
in format "X[#/b]N" where X is a letter from A to G and N is the octave number (e.g. A3 or C#5)
"""
# <your solution goes here>
def __str__(self):
return self.get_string()
You can test your solution with the code cell below. Where do you need self.ref_A4?

# test cases
note1 = Note("A4")
print("Note %s:\tPitch %i (should be 69), Frequency %.1f Hz (should be 440.0 Hz)" % (note1.get_string(), note1.get_midi(), note1.get_freq()))
note2 = Note("G#2")
print("Note %s:\tPitch %i (should be 44), Frequency %.1f Hz (should be 103.8 Hz)" % (note2.get_string(), note2.get_midi(), note2.get_freq()))
note3 = Note("Eb6")
print("Note %s:\tPitch %i (should be 87), Frequency %.1f Hz (should be 1244.5 Hz)" % (note3.get_string(), note3.get_midi(), note3.get_freq()))
note4 = Note("Bbb4")
print("Note %s:\tPitch %i (should be 69), Frequency %.1f Hz (should be 440.0 Hz)" % (note4.get_string(), note4.get_midi(), note4.get_freq()))
The twelve-tone equal temperament scale creates a fixed grid of pitches from which much Western music is composed. But what happens between the grid points?
Complete the function detune() of the new derived class NoteDetunable below according to the function documentation string. Do you have to change something in the parent class Note to make the representation work with possible detuning?
class NoteDetunable(Note):
    def __init__(self, note_string, ref_A4=440.):
        super().__init__(note_string, ref_A4)

    def detune(self, delta_cents):
        """Detune the note by a certain amount

        Arguments
        =========
        delta_cents: float
            desired detuning of the tone in cents (i.e., 1/100 of a semitone)
        """
        # <your solution goes here>
# test cases
note1 = NoteDetunable("A4")
note1.detune(42)
print("Note %s:\tPitch %.2f (should be 69.42), Frequency %.1f Hz (should be 450.8 Hz)" % (note1, note1.get_midi(), note1.get_freq()))
note2 = NoteDetunable("A4")
note2.detune(-42)
print("Note %s:\tPitch %.2f (should be 68.58), Frequency %.1f Hz (should be 429.5 Hz)" % (note2, note2.get_midi(), note2.get_freq()))
note3 = NoteDetunable("A4")
note3.detune(-42)
note3.detune(-1200)
print("Note %s:\tPitch %.2f (should be 56.58), Frequency %.1f Hz (should be 214.7 Hz)" % (note3, note3.get_midi(), note3.get_freq()))
Of course it would be nice to listen to our notes as well. As an introduction to sonifying pitch-based representations, you can work through the corresponding part in the FMP notebook on Sonification. The notebook on the Harmonic Series may be useful to understand why we superimpose certain sinusoids in the sonification.
Then complete the sonification functions below according to the documentation strings. You can use harmonic or pure sinusoidal synthesis.
def sonify_note(note, duration, Fs):
    """Create an audio signal for a given Note

    Arguments
    =========
    note: Note
        An instance of `Note` (or `NoteDetunable`)
    duration: float
        Duration of the output audio in seconds
    Fs: float
        sampling rate in Hz

    Returns
    =======
    x: np.ndarray
        A 1D array containing audio samples; the length depends on the duration and the sampling rate.
    """
    # <your solution goes here>

def sonify_melody(note_sequence, Fs):
    """Create an audio signal from a sequence of Notes

    This function internally calls `sonify_note()` for each Note in the sequence.

    Arguments
    =========
    note_sequence: List
        A list of pairs (Note, float) of a Note instance and a respective duration,
        for example `[(Note("G2"), 0.5), (Note("C3"), 1.)]`
    Fs: float
        sampling rate in Hz

    Returns
    =======
    x: np.ndarray
        A 1D array containing audio samples; the length depends on the durations in the Note sequence and the sampling rate.
    """
    # <your solution goes here>
You can test your solution with the code cell below.
# test cases
import IPython.display as ipd
Fs = 16000. # sampling rate in Hz
ipd.display(ipd.Audio(sonify_note(Note("A4"), 1., Fs), rate=Fs))
ipd.display(ipd.Audio(sonify_melody([
(Note("E4"), 0.16), (Note("F#4"), 0.14), (Note("G4"), 0.16), (Note("A4"), 0.14), (Note("F#4"), 0.3), (Note("D4"), 0.17), (Note("E4"), 1.)
], Fs), rate=Fs))
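One way to fill in sonify_note() is additive synthesis: superimpose a few harmonics of the note's frequency with decreasing amplitudes, and apply short fades to avoid clicks at note boundaries. The helper below is one possible design sketch (the function name, the number of harmonics, and the 10 ms fade length are all assumptions, not requirements):

```python
import numpy as np

def sonify_freq(freq, duration, Fs, num_harmonics=4):
    # sum sinusoids at integer multiples of freq, with 1/h amplitude decay
    t = np.arange(int(duration * Fs)) / Fs
    x = np.zeros_like(t)
    for h in range(1, num_harmonics + 1):
        x += np.sin(2 * np.pi * h * freq * t) / h
    x /= np.max(np.abs(x))  # normalize to avoid clipping
    # linear 10 ms fade-in/out against clicks when notes are concatenated
    n_fade = int(0.01 * Fs)
    ramp = np.linspace(0, 1, n_fade)
    x[:n_fade] *= ramp
    x[-n_fade:] *= ramp[::-1]
    return x
```

With such a helper, sonify_note() reduces to looking up the note's frequency, and sonify_melody() can concatenate the per-note signals with np.concatenate.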
You can learn about the concept of Shepard tones and the Shepard-Risset glissando in the FMP Notebook on Chroma and Shepard Tones. Answer the following questions:

Why are there lower and upper frequency limits in the generate_shepard_tone() function? How is the upper limit related to the sampling rate? What happens if you remove one or both of these limits?
Sonify the output of the function melody_1(). Do you recognize the song?
Can you write such a function for another melody with as few lines of code as possible?
What does a random melody sound like? What constitutes a "good" melody from a computational perspective?
def melody_1():
    s = ["C", "D", "E", "F", "G", "A"]
    m = []
    for i in range(0, 20):
        i7 = i % 7
        b = 2 * (int(i/7) - int(i/14))
        x = b + 3 - abs(3-i7) - 2*int(i/14) if (i7 > 2) else b
        if (i == 1): x = 1
        m.append(s[x] + "4")
    return m

print(melody_1())
# <your solution goes here>
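As a starting point for the last two questions, a uniformly random melody over the same pitch set could be drawn like this (the note-string format follows the melody_1() output; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed, only for reproducibility
s = ["C", "D", "E", "F", "G", "A"]
# draw 20 random pitches from the same set used by melody_1()
random_melody = [s[i] + "4" for i in rng.integers(0, len(s), size=20)]
print(random_melody)
```

Listening to such a sequence usually makes the contrast audible: pure randomness lacks the stepwise motion, repetition, and phrase structure of melody_1(), which is one computational angle on what makes a melody "good".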