sounds

Classes to play sounds.

Each sound inherits a base type depending on prefs.AUDIOSERVER

To avoid unnecessary dependencies, Jack_Sound is not defined if AUDIOSERVER is 'pyo' and vice versa.

Todo

Implement sound level and filter calibration

class autopilot.stim.sound.sounds.Pyo_Sound[source]

Bases: object

Metaclass for pyo sound objects.

Note

Use of pyo is generally discouraged due to dropout issues and the general opacity of the module. As such this object is intentionally left undocumented.

play()[source]
table_wrap(audio, duration=None)[source]

Records a pyo audio generator into a sound table and returns a TableRead object that can play the audio with .out()

Parameters:
  • audio
  • duration
set_trigger(trig_fn)[source]

Parameters:
  • trig_fn
class autopilot.stim.sound.sounds.Jack_Sound[source]

Bases: object

Base class for sounds that use the JackClient audio server.

Variables:
PARAMS = []

list – list of strings of parameters to be defined

type = None

str – human-readable name of the sound

server_type = 'jack'

str – type of server; always 'jack' for Jack_Sound subclasses.

chunk()[source]

Split our table up into a list of Jack_Sound.blocksize chunks.
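The chunking step can be sketched with numpy; the blocksize and table length below are hypothetical (the real blocksize comes from the Jack client), and the zero-padding of the final chunk is an assumption:

```python
import numpy as np

blocksize = 1024  # Jack blocksize (assumed value)
table = np.zeros(2500, dtype=np.float32)  # example sound table

# pad the table to a multiple of blocksize, then split into equal chunks
n_chunks = int(np.ceil(table.size / float(blocksize)))
padded = np.pad(table, (0, n_chunks * blocksize - table.size))
chunks = np.split(padded, n_chunks)
print(len(chunks), chunks[0].size)  # 3 1024
```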

set_trigger(trig_fn)[source]

Set a trigger function to be called when the stop_evt is set.

Parameters:
  • trig_fn (callable) – Some callable
wait_trigger()[source]

Wait for the stop_evt trigger to be set, allowing at least one second after the sound should have ended.

Call the trigger when the event is set.

get_nsamples()[source]

Given our fs and duration, how many samples do we need?

Literally:

np.ceil((self.duration/1000.)*self.fs).astype(np.int)
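As a worked example (the fs and duration values are illustrative; plain `int` is used in place of `np.int`, which is deprecated in recent NumPy):

```python
import numpy as np

fs = 44100        # sampling rate in Hz (assumed)
duration = 500.0  # duration in ms

# samples needed: ceil((duration / 1000) * fs)
nsamples = int(np.ceil((duration / 1000.) * fs))
print(nsamples)  # 22050
```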
buffer()[source]

Dump chunks into the sound queue.

play()[source]

Play ourselves.

If we’re not buffered, be buffered.

Otherwise, set the play event and clear the stop event.

If we have a trigger, set a Thread to wait on it.

end()[source]

Release any resources held by this sound.

class autopilot.stim.sound.sounds.Tone(frequency, duration, amplitude=0.01, **kwargs)[source]

Bases: object

The Humble Sine Wave

Parameters:
  • frequency (float) – frequency of the sine in Hz
  • duration (float) – duration of the sine in ms
  • amplitude (float) – amplitude of the sound as a proportion of 1.
  • **kwargs – extraneous parameters that might come along with instantiating us
PARAMS = ['frequency', 'duration', 'amplitude']
type = 'Tone'
init_sound()[source]

Create a sine wave table using pyo or numpy, depending on the server type.
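A minimal numpy sketch of the sine-table branch; the parameter values and the float32 dtype here are assumptions, not taken from the source:

```python
import numpy as np

# hypothetical parameters mirroring Tone(frequency, duration, amplitude)
fs = 44100          # sampling rate in Hz (assumed)
frequency = 440.0   # Hz
duration = 100.0    # ms
amplitude = 0.01    # proportion of full scale

nsamples = int(np.ceil((duration / 1000.) * fs))
t = np.arange(nsamples) / fs
# sine table scaled by amplitude, stored as 32-bit float
table = (amplitude * np.sin(2 * np.pi * frequency * t)).astype(np.float32)
```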

class autopilot.stim.sound.sounds.Noise(duration, amplitude=0.01, **kwargs)[source]

Bases: object

White Noise

Parameters:
  • duration (float) – duration of the noise
  • amplitude (float) – amplitude of the sound as a proportion of 1.
  • **kwargs – extraneous parameters that might come along with instantiating us
PARAMS = ['duration', 'amplitude']
type = 'Noise'
init_sound()[source]

Create a table of white noise using pyo or numpy, depending on the server_type.
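The numpy branch might look like the following sketch; the uniform noise distribution and the parameter values are assumptions:

```python
import numpy as np

fs = 44100        # sampling rate in Hz (assumed)
duration = 200.0  # ms
amplitude = 0.01  # proportion of full scale

nsamples = int(np.ceil((duration / 1000.) * fs))
rng = np.random.default_rng(0)
# uniform white noise in [-1, 1), scaled by amplitude, as 32-bit float
table = (amplitude * rng.uniform(-1, 1, nsamples)).astype(np.float32)
```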

class autopilot.stim.sound.sounds.File(path, amplitude=0.01, **kwargs)[source]

Bases: object

A .wav file.

Todo

Generalize this to other audio types if needed.

Parameters:
  • path (str) – Path to a .wav file relative to the prefs.SOUNDDIR
  • amplitude (float) – amplitude of the sound as a proportion of 1.
  • **kwargs – extraneous parameters that might come along with instantiating us
PARAMS = ['path', 'amplitude']
type = 'File'
init_sound()[source]

Load the wavfile with scipy.io.wavfile, converting int to float as needed.

Create a sound table, resampling sound if needed.
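The docs name scipy.io.wavfile, but the same load-and-rescale idea can be sketched with only the standard library's wave module; the file path and sample values below are made up:

```python
import os
import struct
import tempfile
import wave

# write a tiny 16-bit mono wav file, then read it back
path = os.path.join(tempfile.gettempdir(), "example_tone.wav")
with wave.open(path, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)      # 16-bit PCM
    w.setframerate(44100)
    w.writeframes(struct.pack("<3h", 0, 16384, -32768))

with wave.open(path, "rb") as w:
    frames = w.readframes(w.getnframes())

# convert int16 samples to floats in [-1, 1)
samples = struct.unpack("<3h", frames)
floats = [s / 32768.0 for s in samples]
print(floats)  # [0.0, 0.5, -1.0]
```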

class autopilot.stim.sound.sounds.Speech(path, speaker, consonant, vowel, token, amplitude=0.05, **kwargs)[source]

Bases: autopilot.stim.sound.sounds.File

Speech subclass of File sound.

Example of custom sound class - PARAMS are changed, but nothing else.

Parameters:
  • speaker (str) – Which Speaker recorded this speech token?
  • consonant (str) – Which consonant is in this speech token?
  • vowel (str) – Which vowel is in this speech token?
  • token (int) – Which token is this for a given combination of speaker, consonant, and vowel?
type = 'Speech'
PARAMS = ['path', 'amplitude', 'speaker', 'consonant', 'vowel', 'token']
autopilot.stim.sound.sounds.SOUND_LIST = {'File': <class 'autopilot.stim.sound.sounds.File'>, 'Noise': <class 'autopilot.stim.sound.sounds.Noise'>, 'Speech': <class 'autopilot.stim.sound.sounds.Speech'>, 'Tone': <class 'autopilot.stim.sound.sounds.Tone'>, 'speech': <class 'autopilot.stim.sound.sounds.Speech'>}

Sounds must be added to this SOUND_LIST so they can be indexed by the string keys used elsewhere.

autopilot.stim.sound.sounds.STRING_PARAMS = ['path', 'speaker', 'consonant', 'vowel', 'type']

These parameters should be given string columns rather than float columns.

Bother Jonny to do this better.

v0.3 will be all about doing parameters better.

autopilot.stim.sound.sounds.int_to_float(audio)[source]

Convert 16- or 32-bit integer audio to 32-bit float.

Parameters: audio (numpy.ndarray) – a numpy array of audio
Returns: Audio that has been rescaled and converted to a 32-bit float.
Return type: numpy.ndarray
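A sketch of such a conversion; the scaling constants follow common PCM conventions, and the actual autopilot implementation may differ:

```python
import numpy as np

def int_to_float(audio):
    """Rescale integer PCM audio into float32 in [-1, 1) (a sketch)."""
    if audio.dtype == np.int16:
        return (audio / 32768.0).astype(np.float32)
    elif audio.dtype == np.int32:
        return (audio / 2147483648.0).astype(np.float32)
    # already float: just ensure 32-bit
    return audio.astype(np.float32)

samples = np.array([0, 16384, -32768], dtype=np.int16)
floats = int_to_float(samples)
```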