Montreal Affective Voices (MAV)
Audio Collection
The "Montreal Affective Voices" (MAV) are designed to be an auditory counterpart to the set of affective faces developed by Ekman and Friesen (1976). They consist of ninety nonverbal affect bursts corresponding to the emotions of anger, disgust, fear, pain, sadness, surprise, happiness and pleasure (plus a neutral expression), recorded by ten different actors (five male, five female). Ratings of valence, arousal and intensity along the eight emotions were collected for each vocalization from thirty participants. Analyses reveal high recognition accuracy for most emotional categories (mean 68%). They also reveal significant effects of both the actor's and the participant's gender: the highest hit rates (75%) were obtained for female participants rating female vocalizations, and the lowest (60%) for male participants rating male vocalizations. Interestingly, the "mixed" situations (male participants rating female vocalizations, and vice versa) yielded similar, intermediate hit rates.
Probabilistic maps of the TVA
Talairach Space Maps
This file contains probabilistic maps of the TVA in Talairach space, provided at two matrix sizes: 79 × 95 × 69 and 91 × 109 × 91. Each voxel value indicates the percentage of subjects in whom that voxel is included in the TVA, i.e., in whom it shows a significantly greater response to vocal vs. nonvocal sounds. The maps are based on a sample of 19 subjects. Ref: Pernet, C., Charest, I., Bélizaire, G., Zatorre, R.J. & Belin, P. (2007). The Temporal Voice Areas: spatial characterization and variability. Human Brain Mapping conference, 2007.
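As a sketch of how these maps can be used, the percentage-of-subjects values can be thresholded into a group-level mask. This assumes the map has already been loaded into a NumPy array (e.g. with a neuroimaging I/O package such as nibabel); the array values and the 50% threshold below are purely illustrative, not part of the distributed files.

```python
import numpy as np

def group_mask(prob_map, min_pct=50.0):
    """Binarize a probabilistic map: keep voxels included in the TVA
    in at least min_pct percent of subjects."""
    return prob_map >= min_pct

# Toy example: a few overlap percentages (hypothetical values, in percent
# of the 19 subjects), in place of a real 79 x 95 x 69 volume.
prob = np.array([0.0, 10.0, 52.6, 100.0])
mask = group_mask(prob, min_pct=50.0)
print(mask.tolist())  # [False, False, True, True]
```

The same boolean mask can then be applied to individual statistical maps to restrict analyses to the group-level TVA.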
Download the file:
Functional localizer of the TVA
Stimuli for the Temporal Voice Area
This file contains a set of stimuli for a functional localizer of the temporal voice areas (TVA) with fMRI. The localizer lasts 10 minutes and is based on the contrast of vocal vs. nonvocal sounds (cf. Belin et al. (2000) Nature).
The localizer contains 40 8-sec blocks of sounds (16-bit, mono, 22050 Hz sampling rate): 20 blocks (vocal_01 -> vocal_20) consist only of vocal sounds (speech as well as nonspeech), and 20 consist only of nonvocal sounds (industrial sounds, environmental sounds, as well as some animal vocalizations). All sounds have been normalized for RMS energy; a 1 kHz tone of similar energy is provided for calibration.
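The RMS normalization mentioned above can be sketched as follows. This is a minimal illustration of the general technique, not the exact procedure used to prepare the set; the 0.1 target RMS is an arbitrary choice.

```python
import numpy as np

def normalize_rms(x, target_rms=0.1):
    """Scale a waveform so its root-mean-square amplitude equals target_rms."""
    rms = np.sqrt(np.mean(x ** 2))
    return x * (target_rms / rms)

# A 1-second, 1 kHz tone at 22050 Hz, analogous to the calibration tone
# provided with the stimulus set.
sr = 22050
t = np.arange(sr) / sr
tone = normalize_rms(np.sin(2 * np.pi * 1000 * t), target_rms=0.1)
```

Applying the same target RMS to every sound file equates the average energy across vocal and nonvocal blocks.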
The file TVA_loc.txt provides a proposed order of the sound blocks, optimized for the contrast of vocal vs. nonvocal. Numbers 1->20 refer to the 20 vocal blocks; numbers 21->40 refer to the 20 nonvocal blocks; 99 refers to an 8-sec silence block.
The localizer was planned for a TR of 10 sec (sparse sampling), with a dummy scan at the beginning (starting the sound stimulation) and each block beginning 2 sec after the start of image acquisition. In this case, following the block order suggested in TVA_loc.txt, 61 volumes should be acquired, and the onset vectors for the two conditions VOCAL and NONVOCAL are (in seconds):
VOCAL = [22 62 82 112 132 162 202 222 242 262 312 352 372 402 432 462 482 512 542 572];
NONVOCAL= [12 32 52 102 122 142 182 232 282 302 322 342 382 422 442 472 502 522 552 592];
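These onset vectors can be checked for consistency with the design described above (10-sec TR, each 8-sec block starting 2 sec into an acquisition, 61 volumes). A small sketch:

```python
# Sanity checks on the published onset vectors: with a 10-s TR and each
# block starting 2 s into an acquisition, every onset should fall 2 s
# after a multiple of 10, and every block should end within the
# 61-volume (610 s) run.
VOCAL = [22, 62, 82, 112, 132, 162, 202, 222, 242, 262,
         312, 352, 372, 402, 432, 462, 482, 512, 542, 572]
NONVOCAL = [12, 32, 52, 102, 122, 142, 182, 232, 282, 302,
            322, 342, 382, 422, 442, 472, 502, 522, 552, 592]

assert len(VOCAL) == len(NONVOCAL) == 20           # 20 blocks per condition
assert not set(VOCAL) & set(NONVOCAL)              # no overlapping onsets
assert all(t % 10 == 2 for t in VOCAL + NONVOCAL)  # 2 s into each volume
assert max(VOCAL + NONVOCAL) + 8 <= 61 * 10        # last block fits the run
print("onset vectors consistent")
```

The same vectors can be passed directly as condition onsets when building the design matrix in your fMRI analysis package.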
Note: it is also possible to use the localizer with a TR of 2 sec, with a continuous scanning noise as background.
Download the file:
Animal, Artificial, Natural, Speech and Vocal Non-Speech sounds
Sounds from Capilla et al (2012) Cerebral Cortex
The following links allow you to download and use the sounds from the Capilla et al. (2012) Cerebral Cortex paper, "The Early Spatio-Temporal Correlates and Task Independence of Cerebral Voice Processing Studied with MEG".
Details regarding stimulus production and selection, which relied on a pilot behavioral study and careful acoustic matching, can be found in Capilla et al. (2012).
The sets of stimuli are those used in the actual experiment and consist of: 27 speech sounds; 43 non-speech sounds; 18 animal sounds; 24 natural sounds; and 28 artificial sounds.
Download the files: