FERPlus

This is the FER+ new label annotations for the Emotion FER dataset.

  • Owner: microsoft/FERPlus
  • Platform: Linux, Mac, Windows
  • License: Other

FER+

The FER+ annotations provide a set of new labels for the standard Emotion FER dataset. In FER+, each image has been labeled by 10 crowd-sourced taggers, which provides better-quality ground truth for still-image emotion than the original FER labels. Having 10 taggers per image lets researchers estimate an emotion probability distribution for each face. This allows building algorithms that produce statistical distributions or multi-label outputs, instead of the conventional single-label output, as described in https://arxiv.org/abs/1608.01041

Here are some examples of the FER vs FER+ labels extracted from the above-mentioned paper (FER top, FER+ bottom):

FER vs FER+ example

The new label file is named fer2013new.csv and contains the same number of rows, in the same order, as the original fer2013.csv label file, so that you can infer which emotion tag belongs to which image. Since we can't host the actual image content, please find the original FER dataset here: https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data

The format of the CSV file is as follows: usage, neutral, happiness, surprise, sadness, anger, disgust, fear, contempt, unknown, NF. The "usage" column is the same as in the original FER labels and distinguishes the training, public test, and private test sets. The other columns are the vote counts for each emotion, plus unknown and NF (Not a Face).
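Since each row carries raw vote counts, turning them into a per-face probability distribution is a one-line normalization. Below is a minimal sketch (not code from the repository) using a made-up sample row; it drops the unknown and NF columns before normalizing, which is one reasonable choice, not the only one:

```python
import csv
import io

# Hypothetical stand-in for a fer2013new.csv header and one row; the real
# file ships with the FER+ repository.
SAMPLE = """Usage,neutral,happiness,surprise,sadness,anger,disgust,fear,contempt,unknown,NF
Training,4,0,0,1,3,2,0,0,0,0
"""

EMOTIONS = ["neutral", "happiness", "surprise", "sadness",
            "anger", "disgust", "fear", "contempt"]

def vote_distribution(row):
    """Normalize the 8 emotion vote counts into a probability
    distribution, ignoring the 'unknown' and 'NF' columns."""
    votes = [int(row[e]) for e in EMOTIONS]
    total = sum(votes)
    return {e: v / total for e, v in zip(EMOTIONS, votes)}

reader = csv.DictReader(io.StringIO(SAMPLE))
dist = vote_distribution(next(reader))
print(dist["neutral"])  # 4 of 10 emotion votes -> 0.4
```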

Training

We also provide training code implementing all the training modes (majority, probability, cross-entropy, and multi-label) described in https://arxiv.org/abs/1608.01041. The training code uses the MS Cognitive Toolkit (formerly CNTK), available at https://github.com/Microsoft/CNTK.

After installing Cognitive Toolkit and downloading the dataset (we will discuss the dataset layout next), you can simply run the following to start the training:

For majority voting mode

python train.py -d <dataset base folder> -m majority

For probability mode

python train.py -d <dataset base folder> -m probability

For cross entropy mode

python train.py -d <dataset base folder> -m crossentropy

For multi-target mode

python train.py -d <dataset base folder> -m multi_target
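As a rough illustration of how these modes differ, here is a hedged sketch (not the repository's actual code) of how the crowd votes for one face might be turned into training targets; the vote-share threshold used for multi-target mode is an assumed value for illustration only:

```python
# Votes per emotion class for one face (neutral, happiness, surprise,
# sadness, anger, disgust, fear, contempt) -- a made-up example.
votes = [4, 0, 0, 1, 3, 2, 0, 0]
total = sum(votes)

# Majority mode: a single one-hot label at the most-voted class.
majority = [1 if i == votes.index(max(votes)) else 0
            for i in range(len(votes))]

# Probability / cross-entropy modes: the full vote distribution
# serves as a soft target.
soft = [v / total for v in votes]

# Multi-target mode: every class whose vote share exceeds a threshold
# counts as a positive label (threshold value assumed here).
THRESHOLD = 0.25
multi = [1 if v / total > THRESHOLD else 0 for v in votes]

print(majority)  # [1, 0, 0, 0, 0, 0, 0, 0]
print(multi)     # [1, 0, 0, 0, 1, 0, 0, 0]
```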

FER+ layout for Training

There is a folder named data that has the following layout:

/data
  /FER2013Test
    label.csv
  /FER2013Train
    label.csv
  /FER2013Valid
    label.csv

label.csv in each folder contains the actual label for each image; the image names are in the format ferXXXXXXXX.png, where XXXXXXXX is the row index in the original FER CSV file. So here are the names of the first few images:

fer0000000.png
fer0000001.png
fer0000002.png
fer0000003.png
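The example names above use a 7-digit zero-padded row index; a small helper matching that pattern (an illustration, not a function from the repository) could be:

```python
def fer_image_name(row_index: int) -> str:
    """Build an image file name from a FER CSV row index,
    zero-padded to match the examples above."""
    return f"fer{row_index:07d}.png"

print(fer_image_name(0))  # fer0000000.png
print(fer_image_name(3))  # fer0000003.png
```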

The folders don't contain the actual images; you will need to download them from https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data, then extract the images from the FER CSV file so that all images corresponding to "Training" go to the FER2013Train folder, all images corresponding to "PublicTest" go to the FER2013Valid folder, and all images corresponding to "PrivateTest" go to the FER2013Test folder. Alternatively, you can use the generate_training_data.py script to do all of the above for you, as described in the next section.
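The Usage-to-folder mapping just described can be sketched as follows. This is a simplified illustration over a tiny stand-in CSV, not the repository's generate_training_data.py, which also decodes the pixel strings into actual PNG images:

```python
import csv
import io
import os

# Mapping from the original FER "Usage" values to the FER+ folders.
USAGE_TO_FOLDER = {
    "Training": "FER2013Train",
    "PublicTest": "FER2013Valid",
    "PrivateTest": "FER2013Test",
}

# Tiny stand-in for fer2013.csv; the real file holds 48x48 pixel strings.
SAMPLE = """emotion,pixels,Usage
0,0 0 0,Training
2,0 0 0,PublicTest
4,0 0 0,PrivateTest
"""

paths = []
for i, row in enumerate(csv.DictReader(io.StringIO(SAMPLE))):
    folder = USAGE_TO_FOLDER[row["Usage"]]
    # Row index i becomes the zero-padded image name.
    paths.append(os.path.join("data", folder, f"fer{i:07d}.png"))

print(paths)
```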

Training data

We provide a simple Python script, generate_training_data.py, that takes fer2013.csv and fer2013new.csv as inputs, merges the two CSV files, and exports all the images as PNG files for the trainer to process.

python generate_training_data.py -d <dataset base folder> -fer <fer2013.csv path> -ferplus <fer2013new.csv path>
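Because the two files share the same row order, the merge amounts to pairing rows by position. Here is a hedged sketch of that idea over tiny stand-in file contents (the actual script additionally decodes the pixel strings into PNG images):

```python
import csv
import io

# Stand-ins for the first rows of fer2013.csv and fer2013new.csv.
FER = """emotion,pixels,Usage
0,10 20 30 40,Training
"""
FERPLUS = """Usage,neutral,happiness,surprise,sadness,anger,disgust,fear,contempt,unknown,NF
Training,7,2,0,0,1,0,0,0,0,0
"""

merged = []
# The files are row-aligned, so zip() pairs each image's pixels with
# its FER+ vote counts.
for img, ferplus_votes in zip(csv.DictReader(io.StringIO(FER)),
                              csv.DictReader(io.StringIO(FERPLUS))):
    pixels = [int(p) for p in img["pixels"].split()]
    merged.append((img["Usage"], pixels, int(ferplus_votes["neutral"])))

print(merged[0])  # ('Training', [10, 20, 30, 40], 7)
```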

Citation

If you use the new FER+ label or the sample code or part of it in your research, please cite the following:

@inproceedings{BarsoumICMI2016,
    title={Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution},
    author={Barsoum, Emad and Zhang, Cha and Canton Ferrer, Cristian and Zhang, Zhengyou},
    booktitle={ACM International Conference on Multimodal Interaction (ICMI)},
    year={2016}
}