nafc

class autopilot.tasks.nafc.Nafc(stage_block=None, stim=None, reward=50, req_reward=False, punish_stim=False, punish_dur=100, correction=False, correction_pct=50.0, bias_mode=False, bias_threshold=20, current_trial=0, **kwargs)[source]

Bases: autopilot.tasks.task.Task

A Two-alternative forced choice task.

(The class is named Nafc rather than 2AFC because a Python class name can't begin with a number.)

Stages

  • request - compute stimulus, set request trigger in center port.
  • discrim - respond to input, set reward/punishment triggers on target/distractor ports
  • reinforcement - deliver reward/punishment, end trial.
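The three stages above run as a cycle, with stage_block gating advancement until each stage's triggers have fired. A minimal sketch of that pattern, using stand-in stage functions (the real stages live on the Nafc class and set stage_block via hardware triggers):

```python
import itertools
import threading

stage_block = threading.Event()

# Stand-in stages: each returns a data dict and signals completion.
# In the real task, a hardware trigger sets stage_block, not the stage itself.
def request():
    stage_block.set()
    return {"trial_num": 0}

def discrim():
    stage_block.set()
    return {"RQ_timestamp": "..."}

def reinforcement():
    stage_block.set()
    return {"TRIAL_END": True}

stages = itertools.cycle([request, discrim, reinforcement])

data = []
for _ in range(3):  # one full trial = three stages
    stage_block.clear()
    stage = next(stages)
    data.append(stage())
    stage_block.wait()  # block until the stage signals completion
```

The trial ends when a stage returns a dict containing 'TRIAL_END': True.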
Variables:
  • target ("L", "R") – Correct response
  • distractor ("L", "R") – Incorrect response
  • stim – Current stimulus
  • response ("L", "R") – Response to discriminand
  • correct (0, 1) – Current trial was correct/incorrect
  • correction_trial (bool) – If using correction trials, last trial was a correction trial
  • trial_counter (itertools.count) – Which trial are we on?
  • discrim_playing (bool) – Is the stimulus playing?
  • bailed (0, 1) – Subject answered before stimulus was finished playing.
  • current_stage (int) – As each stage is reached, update for asynchronous event reference
Parameters:
  • stage_block (threading.Event) – Signal when task stages complete.
  • stim (dict) –

    Stimuli like:

    "sounds": {
        "L": [{"type": "Tone", ...}],
        "R": [{"type": "Tone", ...}]
    }
    
  • reward (float) – duration of solenoid open in ms
  • req_reward (bool) – Whether to give a water reward in the center port for requesting trials
  • punish_stim (bool) – Do a white noise punishment stimulus
  • punish_dur (float) – Duration of white noise in ms
  • correction (bool) – Should we do correction trials?
  • correction_pct (float) – (0-100) Percentage of trials that should randomly be correction trials
  • bias_mode (False, "thresholded_linear") – False, or a bias-correction mode (see managers.Bias_Correction)
  • bias_threshold (float) – If using a bias correction mode, what threshold should bias be corrected for?
  • current_trial (int) – If starting at nonzero trial number, which?
  • **kwargs
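A hedged sketch of the parameters as plain dicts. The "Tone" fields shown (frequency, duration, amplitude) are illustrative assumptions; consult the sound classes for the exact keys they accept:

```python
# Stimulus specification: one list of sound dicts per side.
# The Tone parameters here are hypothetical placeholders.
stim = {
    "sounds": {
        "L": [{"type": "Tone", "frequency": 5000, "duration": 100, "amplitude": 0.1}],
        "R": [{"type": "Tone", "frequency": 10000, "duration": 100, "amplitude": 0.1}],
    }
}

# Example keyword arguments matching the parameter list above.
task_params = {
    "stim": stim,
    "reward": 20,          # ms of solenoid open time
    "req_reward": True,    # reward the center poke that starts a trial
    "correction": True,
    "correction_pct": 50,  # percent of trials that are correction trials
}
```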
STAGE_NAMES = ['request', 'discrim', 'reinforcement']
PARAMS = {'bias_mode': {'values': {'Proportional': 1, 'None': 0, 'Thresholded Proportional': 2}, 'tag': 'Bias Correction Mode', 'type': 'list'}, 'bias_threshold': {'depends': {'bias_mode': 2}, 'tag': 'Bias Correction Threshold (%)', 'type': 'int'}, 'correction': {'tag': 'Correction Trials', 'type': 'bool'}, 'correction_pct': {'depends': {'correction': True}, 'tag': '% Correction Trials', 'type': 'int'}, 'punish_dur': {'tag': 'Punishment Duration (ms)', 'type': 'int'}, 'punish_stim': {'tag': 'White Noise Punishment', 'type': 'bool'}, 'req_reward': {'tag': 'Request Rewards', 'type': 'bool'}, 'reward': {'tag': 'Reward Duration (ms)', 'type': 'int'}, 'stim': {'tag': 'Sounds', 'type': 'sounds'}}
PLOT = {'chance_bar': True, 'data': {'correct': 'rollmean', 'target': 'point', 'response': 'segment'}, 'roll_window': 50}
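The 'rollmean' entry in PLOT plots a rolling mean of the correct column over the last roll_window (50) trials. A self-contained sketch of that computation:

```python
from collections import deque

def rolling_mean(values, window=50):
    """Mean of the last `window` values at each step."""
    buf = deque(maxlen=window)
    out = []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / len(buf))
    return out

corrects = [1, 0, 1, 1]
rolling_mean(corrects, window=2)  # [1.0, 0.5, 0.5, 1.0]
```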
class TrialData

Bases: tables.description.IsDescription

columns = {'DC_timestamp': StringCol(itemsize=26, shape=(), dflt='', pos=None), 'RQ_timestamp': StringCol(itemsize=26, shape=(), dflt='', pos=None), 'bailed': Int32Col(shape=(), dflt=0, pos=None), 'correct': Int32Col(shape=(), dflt=0, pos=None), 'correction': Int32Col(shape=(), dflt=0, pos=None), 'response': StringCol(itemsize=1, shape=(), dflt='', pos=None), 'target': StringCol(itemsize=1, shape=(), dflt='', pos=None), 'trial_num': Int32Col(shape=(), dflt=0, pos=None)}
HARDWARE = {'LEDS': {'C': <class 'autopilot.core.hardware.LED_RGB'>, 'R': <class 'autopilot.core.hardware.LED_RGB'>, 'L': <class 'autopilot.core.hardware.LED_RGB'>}, 'POKES': {'C': <class 'autopilot.core.hardware.Beambreak'>, 'R': <class 'autopilot.core.hardware.Beambreak'>, 'L': <class 'autopilot.core.hardware.Beambreak'>}, 'PORTS': {'C': <class 'autopilot.core.hardware.Solenoid'>, 'R': <class 'autopilot.core.hardware.Solenoid'>, 'L': <class 'autopilot.core.hardware.Solenoid'>}}
request(*args, **kwargs)[source]

Stage 0: compute stimulus, set request trigger in center port.

Returns: Dict with fields:

{
    'target': self.target,
    'trial_num': self.current_trial,
    'correction': self.correction_trial,
    'type': stimulus type,
    **stim.PARAMS
}

Return type: data (dict)
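One piece of the stimulus computation is target selection: on a correction trial the previous target repeats, otherwise it is drawn at random. A hypothetical sketch (choose_target is not part of the class; it just illustrates the logic):

```python
import random

def choose_target(last_target, correction_trial, correction=True):
    """Repeat the last target on correction trials, else draw randomly."""
    if correction and correction_trial:
        return last_target
    return random.choice(["L", "R"])
```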
discrim(*args, **kwargs)[source]

Stage 1: respond to input, set reward/punishment triggers on target/distractor ports

Returns: Dict with fields:

{
    'RQ_timestamp': datetime.datetime.now().isoformat(),
    'trial_num': self.current_trial
}

Return type: data (dict)
reinforcement(*args, **kwargs)[source]

Stage 2 - deliver reward/punishment, end trial.

Returns: Dict with fields:

{
    'DC_timestamp': datetime.datetime.now().isoformat(),
    'response': self.response,
    'correct': self.correct,
    'bailed': self.bailed,
    'trial_num': self.current_trial,
    'TRIAL_END': True
}

Return type: data (dict)
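The scoring step behind the correct field is a comparison of the response port against the target. A hypothetical sketch (score is not a method of the class):

```python
def score(response, target):
    """Compare response to target and pick the outcome."""
    correct = int(response == target)
    outcome = "reward" if correct else "punish"
    return {"response": response, "correct": correct, "outcome": outcome}

score("L", "L")  # {'response': 'L', 'correct': 1, 'outcome': 'reward'}
```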
punish()[source]

Flash the lights, and play the white-noise punishment stimulus if punish_stim is set.

respond(pin)[source]

Set self.response

Parameters: pin – Pin to set the response to
stim_start()[source]

Mark discrim_playing = True.

stim_end()[source]

Called by the stimulus callback when the stimulus finishes playing.

Sets the outside lights blue.

flash_leds()[source]

Flash the LEDs for punish_dur.