Dataset Summary

The AMIGOS dataset consists of the participants' profiles (anonymized participant data, personality profiles and mood (PANAS) profiles), participant ratings, external annotations, neurophysiological recordings (EEG, ECG and GSR signals), and video recordings (frontal HD, full-body and depth videos) of two experiments:

  1. Short videos experiment: In this experiment, 40 volunteers watched a set of 16 short affective video extracts from movies. Each participant took part in an individual setting, rated each video in valence, arousal, dominance, familiarity and liking, and selected the basic emotions (Neutral, Happiness, Sadness, Surprise, Fear, Anger, and Disgust) that they felt during the videos.
  2. Long videos experiment: In this experiment, 37 of the participants of the previous experiment watched a set of 4 long affective video extracts from movies. 17 of the participants performed the experiment in an individual setting, while the other 20 did it in a group setting, in 5 groups of 4 people. Each participant rated each video in valence, arousal, dominance, familiarity and liking, and selected the basic emotions (Neutral, Happiness, Sadness, Surprise, Fear, Anger, and Disgust) that they felt during the videos.

Videos of both experiments have been externally annotated on the scales of valence and arousal by 3 annotators.

For a more thorough explanation of the dataset collection and its contents, refer to [1].

File Listing

The following files are available (each explained in more detail below):

File name | Format | Contents
Participant_questionnaire | xls, ods spreadsheet | The answers participants gave to the questionnaire before the experiment.
Experiment_data | xls, ods spreadsheet | Order of the videos for both the short and long videos experiments.
Participant_Personality | xls, ods spreadsheet | The answers participants gave to the personality traits questionnaire, and the estimated personality traits.
Participant_PANAS | xls, ods spreadsheet | The answers participants gave to the mood (PANAS) questionnaire, and the calculated Positive Affect (PA) and Negative Affect (NA).
Video_List | xls, ods spreadsheet | Information on all the videos used in both experiments.
Face_video | Zip file | The frontal face video recordings from both experiments, captured with an HD camera.
RGB_kinect | Zip file | The frontal full-body RGB video recordings from both experiments, captured with the Kinect RGB camera.
Depth_kinect | Zip file | The frontal full-body depth video recordings from both experiments, captured with the Kinect depth sensor.
Frame_timestamps | Zip file for Matlab | Timestamps for the frames obtained through the Kinect sensors.
Data_original | Zip file for Matlab | The original unprocessed physiological data recordings from both experiments in Matlab .mat format.
Data_preprocessed | Zip file for Matlab | The preprocessed (downsampling, EOG removal, filtering, segmenting, etc.) physiological data recordings from both experiments in Matlab .mat format.
Self_Assessment | xls, ods spreadsheet | Self-assessments of the 20 videos of both experiments.
External_Annotations | xls, ods spreadsheet | External annotations of valence and arousal for the 20-second segments of the frontal videos of both experiments, by three annotators.

File details

Participant_questionnaire

This file contains the participants' responses to the questions on the consent forms. The file is available in OpenOffice Calc (participant_questionnaire.ods) and Microsoft Excel (participant_questionnaire.xls) formats.

This file also contains information about the experiments and the configuration in which each participant took part.

The table in the file has one row per participant and the following columns:

Column name | Column contents
UserID | The unique id of the participant (1-40).
Publication_Consent | Whether the participant has given consent for his/her imagery to appear in publications such as papers and posters (Y=Yes, N=No).
Exp1_ID | The id of experiment 1 (short videos experiment) assigned to the participant; it coincides with the UserID.
Exp2_ID | The id of experiment 2 (long videos experiment) assigned to the participant. Participants with the same Exp2_ID participated in the same session in the group configuration.
Session_Type_Exp_2 | Social context of the participant for the recording session of experiment 2 (long videos experiment). Alone: the participant performed the experiment in an individual setting. Group: the participant performed the experiment in a group setting.
Group_Number | Number of the group to which a participant belonged (1-5).
Exp2_Participant_Index | Index of the participant in experiment 2 (long videos experiment) (1-4). It indicates the participant's position, in a front view, from left to right in the recording session of a group setting. In the case of an individual setting, the participant was assigned the first position.
Gender | Gender of the given participant.
Age | Age of the given participant when participating in the experiment.

Experiment_data

This file summarizes the information about the two experiments of the dataset. The file is available in OpenOffice Calc (Experiment_data.ods) and Microsoft Excel (Experiment_data.xls) formats.

It contains two tables. The first (Short_Videos_Order) summarizes, for each session and trial of experiment 1 (short videos experiment), the participant, the order of the videos and the IDs of the videos. The second (Long_Videos_Order) summarizes, for each recording session and trial of experiment 2 (long videos experiment), the session type (alone vs. group), the participants, the group number in the case of group sessions, the order of the videos and the IDs of the videos.

The table Short_Videos_Order has one row per recording session of the short videos experiment with the following columns:

Column name | Column contents
Exp1_ID | The id of the recording session in experiment 1 (short videos experiment) assigned to the participant; it coincides with the UserID.
UserID | The unique id of the participant (1-40).
Trial #N (Video_Number) | The video number of the #N trial, corresponding to the Video_Number column in the video_list file. There are 16 columns of this type, one for each of the 16 trials of the given recording session.
Trial #N (VideoID) | The unique id of the video of the #N trial, corresponding to the VideoID column in the video_list file. There are 16 columns of this type, one for each of the 16 trials of the given recording session.

The table Long_Videos_Order has one row per recording session of the long videos experiment with the following columns:

Column name | Column contents
Exp2_ID | The unique id of the recording session in experiment 2 (long videos experiment).
Session_Type | Social context of the participant(s) for the given recording session. Alone: individual setting. Group: group setting.
UserID(s) | Id(s) of the participant(s) that took part in the given recording session. For the group setting, the participant ids are listed in the order the participants were seated, in a front view, from left to right.
Group_Number | The unique number (1-5) assigned to each group for the recording sessions in group settings.
Video_Order | Order in which the videos were presented in the different trials of the given recording session (1: Video N1, 2: Video P1, 3: Video B1, and 4: Video U1).
Trial #N (Video_Number) | The video number of the #N trial, corresponding to the Video_Number column in the video_list file. There are 4 columns of this type, one for each of the 4 trials of the given recording session.
Trial #N (VideoID) | The unique id of the video of the #N trial, corresponding to the VideoID column in the video_list file. There are 4 columns of this type, one for each of the 4 trials of the given recording session.

Participant_Personality

This file contains the information obtained from the online form of the Big-Five Marker Scale (BFMS) questionnaire [2]. The file is available in OpenOffice Calc (Participant_Personality.ods) and Microsoft Excel (Participant_Personality.xls) formats.

The file contains 6 tables. The first table (ReadMe) gives an overview of the content of the file. The second table (Results) summarizes the scores for the personality traits (Extroversion, Agreeableness, Conscientiousness, Emotional Stability, and Creativity or Openness [2]) for the different participants. The third table (Personality raw) presents the raw data as it was entered in the online form. The fourth table (personality changed) presents the pre-processed personality data. The fifth table (traits calculated) presents the different items sorted according to traits, and the calculated scores for the different traits. The sixth table (Results) presents the results of the calculations of the different traits for all participants. Personality ratings are unfortunately missing for participants 8 and 28.

Participant_PANAS

This file contains the information obtained from the online form of the general PANAS (Positive Affect and Negative Affect Schedule) questionnaire [3]. The file is available in OpenOffice Calc (Participant_PANAS.ods) and Microsoft Excel (Participant_PANAS.xls) formats.

The file contains 5 tables. The first table (ReadMe) gives an overview of the content of the file. The second table (Panas_results) summarizes the scores for the affect schedules (mood) (Positive Affect (PA) and Negative Affect (NA) [3]) for the different participants. The third table (Panas_raw) presents the raw data as it was entered in the online form. The fourth table (Panas_sorted) presents the pre-processed mood (PANAS) data. The fifth table (Panas_calculation) presents the different items sorted according to the PA/NA schedules, and the calculated scores for the different schedules. PANAS ratings are unfortunately missing for participant 28.

Video_list

This file lists, in a table, all the videos used in the short and long videos experiments. The file is available in OpenOffice Calc (video_list.ods) and Microsoft Excel (video_list.xls) formats.

The table has one row per video and the following columns:

Column name | Description
Video_Number | The unique video number used in the experiment (1-20). Videos 1-16 are sorted according to the alphabetical order of the names of the files.
VideoID | The unique id used in both experiments. Videos 1-16 preserve the ID used in their original dataset.
FileName | Name of the file used in the experiment.
Experiment_Type | Recording session in which the given video was used. Short_Videos for the short videos experiment; Long_Videos for the long videos experiment.
Category | Quadrant of the Valence/Arousal space (HVHA, HVLA, LVHA, and LVLA; H: High, L: Low, V: Valence, A: Arousal) to which the given video was assigned.
Source_Dataset | Name of the dataset from which each video has been extracted (DECAF [4] or MAHNOB-HCI [5]).
Source_Movie | Movie from which each video has been extracted.
Video_Duration | Duration of the extracted video clip.

Face_video

Face_video contains the frontal face videos recorded in both experiments. Videos of the short videos experiment have been sorted into 40 .zip files, one for each of the recording sessions. File Exp1_PXX_face.zip corresponds to the trials of participant XX. In the zip file, PXX_VideoID_face.mov corresponds to the face video for the stimulus video VideoID of participant XX. Videos of the long videos experiment have been sorted into 22 .zip files, one for each of the recording sessions. File Exp2_LXX_TYY_NZZ_face.zip corresponds to the trials of recording session XX, type YY (Ind, Group), and group/individual ZZ. In the zip files of individual recordings, PXX_VideoID_face.mov corresponds to the face video for the stimulus video VideoID of participant XX. In the zip files of group recordings, P(XX1,XX2,XX3,XX4)_VideoID_face.mov corresponds to the face video for the stimulus video VideoID of participants XX1, XX2, XX3, and XX4. UserIDs are listed in the order the participants were seated during the recording session, in a front view, from left to right.
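The naming scheme described above is regular enough to parse mechanically. As a hedged illustration (in Python rather than Matlab, with a hypothetical helper name that is not part of the dataset tools), the long-videos face zip names could be parsed like this:

```python
import re

# Pattern for the long-videos zip naming scheme described above:
# Exp2_LXX_TYY_NZZ_face.zip, where XX is the recording-session id,
# YY the session type (Ind or Group), and ZZ the group/individual number.
PATTERN = re.compile(
    r"Exp2_L(?P<session>\d+)_T(?P<type>Ind|Group)_N(?P<number>\d+)_face\.zip"
)

def parse_face_zip(name):
    """Return (session id, session type, group/individual number) or None."""
    m = PATTERN.match(name)
    if m is None:
        return None
    return int(m.group("session")), m.group("type"), int(m.group("number"))
```

The same pattern, with `rgb`, `depth` or `timestamps` in place of `face`, covers the other long-videos archives.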

For groups 4 and 5, some trials were missing due to technical issues; these videos have been substituted with the videos recorded by the Kinect RGB camera. Please note that these videos are not in the order of presentation. The mapping between trial numbers and VideoIDs can be found in the Experiment_data file.

Videos were recorded in .mov format in HD quality using a JVC GY-HM150E camera at 25 fps, deinterlaced, using the h264 codec.

The synchronisation of the video is accurate to approximately 1/25 second (barring human error). Synchronisation was achieved by reproducing a beep at the beginning of the experiment and of each trial. Time markers of each beep were recorded on the PC in the session recording. The onset frame of this beep was then manually marked in the video recording. Individual trial starting times were then calculated from the trial starting markers in the session recording. The final segmented video of each trial consists of the recording during the 5 seconds prior to the start of each video plus the recording during the whole duration of the video.
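The segmentation above amounts to simple frame arithmetic at 25 fps. A hypothetical Python sketch (the helper name and the assumption that onset times are given in seconds are ours, not the dataset's):

```python
FPS = 25          # face videos were recorded at 25 fps
PRE_ROLL_S = 5.0  # each trial clip includes the 5 s before stimulus onset

def trial_frame_span(onset_s, video_duration_s):
    """First and last frame indices (in the session recording) of a trial
    clip, given the marked stimulus-onset time and the stimulus duration."""
    start = int(round((onset_s - PRE_ROLL_S) * FPS))
    end = int(round((onset_s + video_duration_s) * FPS))
    return start, end
```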

RGB_kinect

RGB_kinect contains the full-body RGB videos recorded in both experiments using the Kinect V1 sensor. Videos of the short videos experiment have been sorted into 40 .zip files, one for each of the recording sessions. File Exp1_PXX_rgb.zip corresponds to the trials of participant XX. In the zip file, PXX_VideoID_FaceVideo.avi corresponds to the full-body RGB video for the stimulus video VideoID of participant XX. Videos of the long videos experiment have been sorted into 22 .zip files, one for each of the recording sessions. File Exp2_LXX_TYY_NZZ_rgb.zip corresponds to the trials of recording session XX, type YY (Ind, Group), and group/individual ZZ. In the zip files of individual recordings, PXX_VideoID_rgb.avi corresponds to the full-body RGB video for the stimulus video VideoID of participant XX. In the zip files of group recordings, P(XX1,XX2,XX3,XX4)_VideoID_rgb.avi corresponds to the full-body RGB video for the stimulus video VideoID of participants XX1, XX2, XX3, and XX4. UserIDs are listed in the order the participants were seated during the recording, in a front view, from left to right. NOTE: given the way Kinect works, the videos are mirrored with respect to the frontal face videos from Face_video.

Please note that these videos are not in the order of presentation. The mapping between trial numbers and VideoIDs can be found in the Experiment_data file.

The Kinect V1 sensor was placed on top of the screen. Frames of each video were obtained from the Kinect V1 sensor, at the maximum available resolution (1280x960), as they became available. Given the performance of the sensor, the rate at which frames were returned was not constant. Therefore, we have recorded the timestamp, measured from the beginning of each video, of each of the frames (see Frame_timestamps). Frames were encoded into a video in .avi format for each of the trials.
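Because the frame rate was not constant, a nominal rate has to be estimated from the recorded timestamps rather than assumed. A hypothetical Python sketch (assuming the per-frame timestamps have already been loaded as a list of seconds from the start of the video):

```python
def mean_frame_rate(timestamps_s):
    """Average frame rate implied by a list of per-frame timestamps (seconds
    from the start of the video); the Kinect did not deliver frames at a
    constant rate, so this is only a summary figure."""
    if len(timestamps_s) < 2:
        raise ValueError("need at least two frames")
    span = timestamps_s[-1] - timestamps_s[0]
    # number of inter-frame intervals divided by the total time they span
    return (len(timestamps_s) - 1) / span
```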

Depth_kinect

Depth_kinect contains the full-body depth videos recorded in both experiments using the Kinect V1 sensor. Videos of the short videos experiment have been sorted into 40 .zip files, one for each of the recording sessions. File Exp1_PXX_depth.zip corresponds to the trials of participant XX. In the zip file, PXX_VideoID_DepthVideo.avi corresponds to the full-body depth video for the stimulus video VideoID of participant XX. Videos of the long videos experiment have been sorted into 22 .zip files, one for each of the recording sessions. File Exp2_LXX_TYY_NZZ_depth.zip corresponds to the trials of recording session XX, type YY (Ind, Group), and group/individual ZZ. In the zip files of individual recordings, PXX_VideoID_depth.avi corresponds to the full-body depth video for the stimulus video VideoID of participant XX. In the zip files of group recordings, P(XX1,XX2,XX3,XX4)_VideoID_depth.avi corresponds to the full-body depth video for the stimulus video VideoID of participants XX1, XX2, XX3, and XX4. UserIDs are listed in the order the participants appear in the video, from left to right. NOTE: given the way Kinect works, the videos are mirrored with respect to the frontal face videos from Face_video.

Please note that these videos are not in the order of presentation. The mapping between trial numbers and VideoIDs can be found in the Experiment_data file.

The Kinect V1 sensor was placed on top of the screen. Frames of each video were obtained from the Kinect V1 sensor, at the maximum available resolution (640x480), as they became available. Given the performance of the sensor, the rate at which frames were returned was not constant. Therefore, we have recorded the timestamp, measured from the beginning of each video, of each of the frames (see Frame_timestamps). Frames were encoded into a video in .avi format for each of the trials.

Frame_timestamps

Frame_Timestamps contains the timestamps of each frame of the depth and RGB videos obtained with the Kinect V1 sensor in both experiments. Frames of each video were obtained from the Kinect V1 sensor, at the maximum available resolution, as they became available. Given the performance of the sensor, the rate at which frames were returned was not constant. Therefore, we have recorded the timestamp, measured from the beginning of each video, of each of the frames.

Timestamps of the videos of the short videos experiment have been sorted into 40 .zip files, one for each of the recording sessions. File Exp1_PXX_timestamps.zip corresponds to the trials of participant XX. In the zip file, Exp1_PXX_timestamps.mat contains the timestamps, in seconds from the start of the video, of the frames of the 16 RGB and depth videos. It contains a list with 2 rows and 16 columns, one column for each video. The first row corresponds to the VideoID. The second row corresponds to the timestamps of the frames of the video VideoID. NOTE: different participants can have a different number of frames for the same video. Timestamps of the videos of the long videos experiment have been sorted into 22 .zip files, one for each of the recording sessions. File Exp2_LXX_TYY_NZZ_timestamps.zip corresponds to the trials of recording session XX, type YY (Ind, Group), and group/individual ZZ. In the zip files of individual recordings, PXX_timestamps.mat contains the timestamps of the frames of the RGB and depth video for the stimulus video VideoID of participant XX. In the zip files of group recordings, P(XX1,XX2,XX3,XX4)_timestamps.mat contains the timestamps of the frames of the RGB and depth video for the stimulus video VideoID of participants XX1, XX2, XX3, and XX4. UserIDs are listed in the order the participants appear in the videos, from left to right.
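A typical use of these timestamps is to map a time offset within a trial to the nearest recorded frame, since the frame rate is variable. A hypothetical Python sketch (assuming the timestamp vector for one trial has already been loaded, e.g. from the .mat file, as a sorted list of seconds):

```python
from bisect import bisect_left

def frame_at(timestamps_s, t):
    """Index of the frame whose timestamp is closest to time t (seconds from
    the start of the video), for the variable-rate Kinect streams."""
    i = bisect_left(timestamps_s, t)
    if i == 0:
        return 0
    if i == len(timestamps_s):
        return len(timestamps_s) - 1
    # choose the nearer of the two neighbouring frames
    return i if timestamps_s[i] - t < t - timestamps_s[i - 1] else i - 1
```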

Data_original.zip

These are the original data recordings. There are 40 .zip files, one for each of the participants. File Data_Original_PXX.zip includes the recordings of the different modalities, for participant XX, in response to the stimuli of the short and long videos. The three modalities (EEG, ECG and GSR) are stored in separate variables. The structure of the variables is as follows:

EEG_DATA: EEG data was recorded using the EMOTIV Epoc, with a sampling frequency of 128 Hz. EEG recordings are stored in the variable EEG_DATA as a list of 20 matrices of 25 columns, one for each of the videos. To access the element corresponding to video number 3 (see Video_List):

data=EEG_DATA{1,3};

Each element of the list EEG_DATA is a 25-column matrix corresponding to one of the video stimuli. Elements 1-16 correspond to the short videos experiment, and elements 17-20 correspond to the long videos experiment. Each matrix has dimensions XXx25, where each row corresponds to a sample; the number of samples depends on the length of the video. The columns are as follows:

Column no. | Ch. name | Units | Description
1 | Counter | | Packet counter, used as a timebase. The counter runs from 0 to 128.
2 | Interpolated | | Shows whether a packet was dropped and the value was interpolated from surrounding values.
3 | Raw | | A multiplexed conductivity measurement used to derive the contact quality indicator lights.
4 | AF3 | uV | EEG channel, 10-20 system.
5 | F7 | uV | EEG channel, 10-20 system.
6 | F3 | uV | EEG channel, 10-20 system.
7 | FC5 | uV | EEG channel, 10-20 system.
8 | T7 | uV | EEG channel, 10-20 system.
9 | P7 | uV | EEG channel, 10-20 system.
10 | O1 | uV | EEG channel, 10-20 system.
11 | O2 | uV | EEG channel, 10-20 system.
12 | P8 | uV | EEG channel, 10-20 system.
13 | T8 | uV | EEG channel, 10-20 system.
14 | FC6 | uV | EEG channel, 10-20 system.
15 | F4 | uV | EEG channel, 10-20 system.
16 | F8 | uV | EEG channel, 10-20 system.
17 | AF4 | uV | EEG channel, 10-20 system.
18 | GYROX | Undocumented | Signal of the horizontal axis gyroscope.
19 | GYROY | Undocumented | Signal of the vertical axis gyroscope.
20 | Timestamp | s | System timestamp.
21 | Es_Timestamp | s | EmoState timestamp.
22 | Func_ID | | Reserved function id.
23 | Func_Value | | Reserved function value.
24 | Marker | | Marker value from hardware.
25 | Sync_Signal | | Emotiv synchronisation signal.

For more information please consult the given manual: EMOTIV Epoc API
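For convenience, the column layout above can be written down as a lookup table. The following Python sketch (a hypothetical helper, not part of the dataset tools) extracts the 14 EEG electrode values from one 25-value sample row:

```python
# Column layout of each EEG_DATA matrix, as listed in the table above
# (1-based column numbers, matching the table).
EEG_COLUMNS = {
    1: "Counter", 2: "Interpolated", 3: "Raw",
    4: "AF3", 5: "F7", 6: "F3", 7: "FC5", 8: "T7", 9: "P7", 10: "O1",
    11: "O2", 12: "P8", 13: "T8", 14: "FC6", 15: "F4", 16: "F8", 17: "AF4",
    18: "GYROX", 19: "GYROY", 20: "Timestamp", 21: "Es_Timestamp",
    22: "Func_ID", 23: "Func_Value", 24: "Marker", 25: "Sync_Signal",
}

def eeg_channels(row):
    """Map one 25-value sample row to its 14 EEG electrode values
    (columns 4-17 of the table, i.e. 0-based indices 3-16)."""
    return {EEG_COLUMNS[c]: row[c - 1] for c in range(4, 18)}
```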

ECG_DATA: ECG data was recorded using the Shimmer platform, with a sampling frequency of 256 Hz. ECG recordings are stored in the variable ECG_DATA as a list of 20 matrices of 6 columns, one for each of the videos. To access the element corresponding to video number 3 (see Video_List):

data=ECG_DATA{1,3};

Each element of the list ECG_DATA is a 6-column matrix corresponding to one of the video stimuli. Elements 1-16 correspond to the short videos experiment, and elements 17-20 correspond to the long videos experiment. Each matrix has dimensions XXx6, where each row corresponds to a sample. The columns are as follows:

Column no. | Ch. name | Units | Description
1 | Timestamps | ms | Shimmer timestamp.
2 | ECG_RA | mV | ECG vector signal measured from the RA (right arm) position to the LL (left leg) position.
3 | ECG_LA | mV | ECG vector signal measured from the LA (left arm) position to the LL (left leg) position.
4 | X_ACCEL | | X-axis accelerometer signal. Refer to the Shimmer platform manual.
5 | Y_ACCEL | | Y-axis accelerometer signal. Refer to the Shimmer platform manual.
6 | Z_ACCEL | | Z-axis accelerometer signal. Refer to the Shimmer platform manual.

For more information about the Shimmer3 ECG Unit please refer to the manual: ECG manual

For more information about the Shimmer platform please refer to the manual: Shimmer platform manual

GSR_DATA: GSR data was recorded using the Shimmer platform, with a sampling frequency of 128 Hz. GSR recordings are stored in the variable GSR_DATA as a list of 20 matrices of 5 columns, one for each of the videos. To access the element corresponding to video number 3 (see Video_List):

data=GSR_DATA{1,3};

Each element of the list GSR_DATA is a 5-column matrix corresponding to one of the video stimuli. Elements 1-16 correspond to the short videos experiment, and elements 17-20 correspond to the long videos experiment. Each matrix has dimensions XXx5, where each row corresponds to a sample. The columns are as follows:

Column no. | Ch. name | Units | Description
1 | Timestamps | ms | Shimmer timestamp.
2 | GSR_RAW | | Shimmer GSR output encoded in 16-bit integers. The encoding depends on the auto-range of the Shimmer GSR+ Unit; see the manual, sec. 3.2.
3 | X_ACCEL | | X-axis accelerometer signal. Refer to the Shimmer platform manual.
4 | Y_ACCEL | | Y-axis accelerometer signal. Refer to the Shimmer platform manual.
5 | Z_ACCEL | | Z-axis accelerometer signal. Refer to the Shimmer platform manual.

For more information about the Shimmer GSR+ Unit please refer to the manual: GSR+ manual

For more information about the Shimmer platform please refer to the manual: Shimmer platform manual

Self_Assessment.zip

This file contains all the participant self-assessment ratings collected during the experiments. The file is available in OpenOffice Calc (participant_ratings.ods) and Microsoft Excel (participant_ratings.xls) formats.

The start_time values were logged by the presentation software. Valence, arousal, dominance, liking and familiarity were rated directly after each trial on a continuous 9-point scale using a standard mouse. SAM manikins were used to visualize the ratings for valence, arousal and dominance. For liking (i.e. how much did you like the video?), thumbs-up and thumbs-down icons were used. The familiarity scale rated the videos from "never seen it before" (1) to "know the video very well" (9).

The file contains two tables, one for each of the short and long videos experiments.

Table Experiment_1 corresponds to the short videos experiment self-assessment. The table has one row per trial per participant and the following columns:

Column name | Column contents
UserID | The unique id of the participant (1-40).
VideoID | The ID of the video.
Rep_Index | Number of the video in the recording session (i.e. presentation order).
12 Initial Self-assessment Columns | Self-assessment of the initial affective levels of: 1. arousal (float between 1 and 9), 2. valence (float between 1 and 9), 3. dominance (float between 1 and 9), 4. liking (float between 1 and 9), 5. familiarity (float between 1 and 9), and selection of basic emotions: 6. neutral (binary, 1 if selected), 7. disgust (binary, 1 if selected), 8. happiness (binary, 1 if selected), 9. surprise (binary, 1 if selected), 10. anger (binary, 1 if selected), 11. fear (binary, 1 if selected), and 12. sadness (binary, 1 if selected). They correspond to the participant's affective levels before the video was reproduced.
12 Final Self-assessment Columns | Self-assessment of the final affective levels, with the same 12 columns as the initial self-assessment. They correspond to the participant's affective levels after the video was reproduced.

Table Experiment_2 corresponds to the long videos experiment self-assessment. The table has one row per participant and the following columns:

Column name | Column contents
UserID | The unique id of the participant (1-40).
Exp2_ID | The ID of the recording session.
10 Initial_Selfassessment_1 columns | Self-assessment of the initial affective levels of: 1. arousal (float between 1 and 9), 2. valence (float between 1 and 9), 3. dominance (float between 1 and 9), and selection of basic emotions: 4. neutral (binary, 1 if selected), 5. disgust (binary, 1 if selected), 6. happiness (binary, 1 if selected), 7. surprise (binary, 1 if selected), 8. anger (binary, 1 if selected), 9. fear (binary, 1 if selected), and 10. sadness (binary, 1 if selected). They correspond to the participant's affective levels before the first video was reproduced.
Trial 1 (VideoID) | The ID of the video reproduced in trial 1.
12 Selfassessment_1 columns | Self-assessment of the final affective levels of: 1. arousal (float between 1 and 9), 2. valence (float between 1 and 9), 3. dominance (float between 1 and 9), 4. liking (float between 1 and 9), 5. familiarity (float between 1 and 9), and selection of basic emotions: 6. neutral (binary, 1 if selected), 7. disgust (binary, 1 if selected), 8. happiness (binary, 1 if selected), 9. surprise (binary, 1 if selected), 10. anger (binary, 1 if selected), 11. fear (binary, 1 if selected), and 12. sadness (binary, 1 if selected). They correspond to the participant's affective levels after the first video was reproduced, and can be considered the initial affective levels before the reproduction of video 2.
Trial 2 (VideoID) | The ID of the video reproduced in trial 2.
12 Selfassessment_2 columns | Self-assessment of the final affective levels, with the same 12 columns as Selfassessment_1. They correspond to the participant's affective levels after video 2 was reproduced.
10 Initial_Selfassessment_2 columns | Self-assessment of the initial affective levels, with the same 10 columns as Initial_Selfassessment_1. They correspond to the participant's affective levels before video 3 was reproduced, just after a 15-minute break.
Trial 3 (VideoID) | The ID of the video reproduced in trial 3.
12 Selfassessment_3 columns | Self-assessment of the final affective levels, with the same 12 columns as Selfassessment_1. They correspond to the participant's affective levels after video 3 was reproduced, and can be considered the initial affective levels before the reproduction of video 4.
Trial 4 (VideoID) | The ID of the video reproduced in trial 4.
12 Selfassessment_4 columns | Self-assessment of the final affective levels, with the same 12 columns as Selfassessment_1. They correspond to the participant's affective levels after video 4 was reproduced.
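Each 12-column self-assessment group above follows the same fixed order, so a row fragment can be mapped to named fields. A hypothetical Python sketch (the helper is ours, not part of the dataset):

```python
# Order of the 12 self-assessment columns, as described in the tables above:
# five 1-9 ratings followed by seven binary basic-emotion selections.
SELF_ASSESSMENT_FIELDS = [
    "arousal", "valence", "dominance", "liking", "familiarity",  # floats 1-9
    "neutral", "disgust", "happiness", "surprise", "anger",      # binary
    "fear", "sadness",                                           # binary
]

def parse_self_assessment(values):
    """Turn one 12-value self-assessment column group (in the order listed
    above) into a dict of named ratings and selected basic emotions."""
    if len(values) != len(SELF_ASSESSMENT_FIELDS):
        raise ValueError("expected 12 values")
    return dict(zip(SELF_ASSESSMENT_FIELDS, values))
```

For the 10-column initial self-assessments of the long videos experiment, drop "liking" and "familiarity" from the field list.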

External_Annotations.zip

This file contains all the external annotations of valence and arousal. The file is available in OpenOffice Calc (External_Annotations.ods), Microsoft Excel (External_Annotations.xls), and comma-separated values (External_Annotations.csv) formats.

Valence and arousal were externally rated by three annotators on a continuous 9-point scale using a standard mouse. For each participant, the face videos were split into 340 20-second segments. SAM manikins were used to visualize the ratings for valence and arousal. Videos of participants P8, P28 and P33 were not annotated since these participants did not take part in the long videos experiment.

The file contains one row per participant and video segment and the following columns:

Column name | Column contents
UserID | The unique id of the participant (1-40).
Video_Number | The unique video number used in the experiment (1-20).
VideoID | The video id corresponding to the same column in the video_list file.
Segment_Number | Number of the segment from the start of the video VideoID.
Valence_Annotator_1 | The valence rating (float between 1 and 9) from annotator 1.
Arousal_Annotator_1 | The arousal rating (float between 1 and 9) from annotator 1.
Valence_Annotator_2 | The valence rating (float between 1 and 9) from annotator 2.
Arousal_Annotator_2 | The arousal rating (float between 1 and 9) from annotator 2.
Valence_Annotator_3 | The valence rating (float between 1 and 9) from annotator 3.
Arousal_Annotator_3 | The arousal rating (float between 1 and 9) from annotator 3.
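A common first step with these annotations is to average the three annotators per segment. A hypothetical Python sketch, assuming one row has been read (e.g. from the .csv file) into a dict keyed by the column names above:

```python
def mean_annotation(row):
    """Average the three annotators' valence and arousal ratings for one
    20-second segment row, given as a dict keyed by the column names."""
    valence = (row["Valence_Annotator_1"] + row["Valence_Annotator_2"]
               + row["Valence_Annotator_3"]) / 3.0
    arousal = (row["Arousal_Annotator_1"] + row["Arousal_Annotator_2"]
               + row["Arousal_Annotator_3"]) / 3.0
    return valence, arousal
```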

Data_preprocessed_matlab.zip

These files contain a downsampled (to 128 Hz), preprocessed and segmented version of the data in Matlab format. This version of the data is well suited for those wishing to quickly test a classification or regression technique without the hassle of processing all the data first. The data is split into 40 .zip files, one per participant. Data_Preprocessed_PXX.zip corresponds to the preprocessed data of participant XX, and contains the file Data_Preprocessed_PXX.mat (Matlab file).

Each participant file contains three lists of 20 matrices, one matrix corresponding to each of the videos. The elements of each list are accessed as follows:

List name | Resulting matrix shape | Matrix contents
joined_data{1,YY} | XX x 17 | samples (depends on the duration of the video) x channels for trial YY
labels_selfassessment{1,YY} | 1 x 12 | 1 x label (arousal, valence, dominance, liking, familiarity, neutral, disgust, happiness, surprise, anger, fear, and sadness) for trial YY
labels_ext_anotation{1,YY} | ZZ x 3 | segments (20-second clips) x channels (segment_index, valence and arousal) for trial YY

The videos are in the order of Video_Number (see Video_List), not in the order of presentation. This means the first video is the same for each participant. The following table shows the channel layout and the preprocessing performed:

Channel no. | Channel content
1 | AF3
2 | F7
3 | F3
4 | FC5
5 | T7
6 | P7
7 | O1
8 | O2
9 | P8
10 | T8
11 | FC6
12 | F4
13 | F8
14 | AF4
15 | ECG Right
16 | ECG Left
17 | GSR

Preprocessing of the EEG channels (1-14):

  1. The data was processed at 128 Hz.
  2. The data was averaged to the common reference.
  3. A bandpass frequency filter from 4.0-45.0 Hz was applied.
  4. The trials were reordered from presentation order to video number order (see video_list).

Preprocessing of the ECG and GSR channels (15-17):

  1. The data was downsampled to 128 Hz.
  2. ECG was low-pass filtered with a 60 Hz cut-off frequency.
  3. GSR was re-encoded to obtain skin conductance, which was then low-pass filtered with a 60 Hz cut-off frequency.
  4. The trials were reordered from presentation order to video number order (see video_list).
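Given the 17-channel layout above, one joined_data sample can be split into its modalities by column index. A hypothetical Python sketch (the slice names are ours, not part of the dataset):

```python
# 0-based slices for the 17-channel layout listed above
EEG_SLICE = slice(0, 14)   # channels 1-14: AF3 ... AF4
ECG_SLICE = slice(14, 16)  # channels 15-16: ECG Right, ECG Left
GSR_SLICE = slice(16, 17)  # channel 17: GSR

def split_modalities(sample):
    """Split one 17-value joined_data sample row into its EEG/ECG/GSR parts."""
    return {
        "eeg": sample[EEG_SLICE],
        "ecg": sample[ECG_SLICE],
        "gsr": sample[GSR_SLICE],
    }
```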

References

  1. "AMIGOS: A Dataset for Affect, Personality and Mood Research on Individuals and Groups", J.A. Miranda-Correa, M.K. Abadi, N. Sebe, and I. Patras, ArXiv e-prints, Feb. 2017.
  2. "Analyzing personality-related adjectives from an etic-emic perspective: The Big Five Marker Scales (BFMS) and the Italian AB5C taxonomy", M. Perugini, and L.D. Blas, Big Five Assessment, pp. 281–304, 2002.
  3. "The PANAS-X: Manual for the positive and negative affect schedule-expanded form", D. Watson, and L. Clark, The University of Iowa, Tech. Rep., 1999.
  4. "DECAF: MEG-Based Multimodal Database for Decoding Affective Physiological Responses", M. K. Abadi, R. Subramanian, S. M. Kia, P. Avesani, I. Patras, and N. Sebe, IEEE Transactions on Affective Computing, vol. 6, no. 3, pp. 209–222, July 2015.
  5. "A Multimodal Database for Affect Recognition and Implicit Tagging.", M. Soleymani, J. Lichtenauer, T. Pun, and M. Pantic, IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 42–55, 2012.