FRLL-Morphs
DISCLAIMER
The `preprocess.py` script included in this dataset is no longer necessary and should NOT be run.
Database Description
FRLL-Morphs is a dataset of morphed faces based on images selected from the publicly available Face Research London Lab dataset.
We created four types of morphs for each pre-selected pair of images using the following morphing tools: FaceMorpher, OpenCV, StyleGAN, and WebMorph. Morphs from the Advanced Multimedia Security Lab's (AMSL) Face Morph Image dataset are also included.
Instructions
This dataset is designed for vulnerability analysis experiments in the context of face recognition.
Therefore, it is intended to be used in conjunction with the original Face Research London Lab dataset.
To prepare this folder's file structure so it may easily be used for such experiments:
- Download and extract only the `neutral_front` and `smiling_front` sets from the Face Research London Lab dataset.
- Place them in a new `facelab_london/raw` folder.
- Rename them simply as `neutral` and `smiling` respectively.
- Remove the `.tem` files from the `neutral` folder unless they are specifically required for your experiments, as they could clash with other operations.
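The steps above can be sketched in Python. This is a minimal sketch: the source paths and the `keep_tem` flag are assumptions, so adjust them to wherever you extracted the downloaded archives.

```python
from pathlib import Path


def prepare_raw(root, neutral_src, smiling_src, keep_tem=False):
    """Arrange the extracted FRLL folders under <root>/facelab_london/raw.

    neutral_src / smiling_src: paths to the extracted `neutral_front`
    and `smiling_front` folders. Target names follow this README.
    """
    raw = Path(root) / "facelab_london" / "raw"
    raw.mkdir(parents=True, exist_ok=True)
    # Rename the two sets to `neutral` and `smiling` respectively.
    Path(neutral_src).rename(raw / "neutral")
    Path(smiling_src).rename(raw / "smiling")
    if not keep_tem:
        # Remove `.tem` files that could clash with other operations.
        for tem in (raw / "neutral").glob("*.tem"):
            tem.unlink()
    return raw
```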
Once completed, the directory structure should be as given below:
+-- facelab_london
| +-- morph_amsl
| +-- morph_facemorpher
| +-- morph_opencv
| +-- morph_stylegan
| +-- morph_webmorph
| +-- raw
| +-- protocols
| +-- preprocess.py
| +-- README.txt
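As a quick sanity check, the layout can be verified with a short script. The folder names are taken from the tree above; nothing else is assumed.

```python
from pathlib import Path

# Sub-folders expected under facelab_london/, per the tree in this README.
EXPECTED = [
    "morph_amsl", "morph_facemorpher", "morph_opencv",
    "morph_stylegan", "morph_webmorph", "raw", "protocols",
]


def check_layout(root):
    """Return the names of expected sub-folders missing under <root>."""
    root = Path(root)
    return [name for name in EXPECTED if not (root / name).is_dir()]
```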
Protocols
The vulnerability analysis can be conducted in two ways, using:
- morphed images as references (`reverse-protocol`)
- morphed images as probes (`scores-protocol`)
The protocols for both types of experiments are provided in the `protocols` folder, each of which contains file lists detailing the exact images used as references (`for_models.lst`) and as probes (`for_probes.lst`) for each morphing tool.
The data is split into two sets, development (`dev`) and evaluation (`eval`), so it can easily be used with a toolkit such as bob. The split is made such that no original identities used to create the morphed images overlap between the sets, making the two sets completely independent of one another.
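A minimal sketch of loading one protocol's file lists follows. It assumes each `.lst` file contains one image entry per line, and the `protocols/<tool>/<split>/` layout is a hypothetical placeholder; check both against the files actually shipped in the `protocols` folder.

```python
from pathlib import Path


def read_list(lst_path):
    """Read one protocol file list: one entry per non-empty line."""
    with open(lst_path) as f:
        return [line.strip() for line in f if line.strip()]


def load_protocol(protocols_dir, tool, split="dev"):
    """Load reference and probe lists for one morphing tool and split.

    The directory layout used here is an assumption; adapt it to the
    actual structure of the shipped `protocols` folder.
    """
    base = Path(protocols_dir) / tool / split
    return {
        "references": read_list(base / "for_models.lst"),
        "probes": read_list(base / "for_probes.lst"),
    }
```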
References
Any publication (e.g., conference paper, journal article, technical report, book chapter) resulting from the use of FRLL-Morphs must cite the following papers:
@INPROCEEDINGS{9746477,
author={Sarkar, Eklavya and Korshunov, Pavel and Colbois, Laurent and Marcel, Sébastien},
booktitle={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Are GAN-based morphs threatening face recognition?},
year={2022},
pages={2959-2963},
url={https://doi.org/10.1109/ICASSP43922.2022.9746477},
doi={10.1109/ICASSP43922.2022.9746477}
}
@article{Sarkar2020,
title={Vulnerability Analysis of Face Morphing Attacks from Landmarks and Generative Adversarial Networks},
author={Eklavya Sarkar and Pavel Korshunov and Laurent Colbois and S\'{e}bastien Marcel},
year={2020},
month=oct,
journal={arXiv preprint},
url={https://arxiv.org/abs/2012.05344}
}
Any publication (e.g., conference paper, journal article, technical report, book chapter) resulting from the use of the Face Research London Lab dataset must cite the following source:
@misc{debruine_jones_2017,
title={Face Research Lab London Set},
url={https://figshare.com/articles/dataset/Face_Research_Lab_London_Set/5047666/3},
DOI={10.6084/m9.figshare.5047666.v3},
publisher={figshare},
author={DeBruine, Lisa and Jones, Benedict},
year={2017},
month={May}
}
Any publication (e.g., conference paper, journal article, technical report, book chapter) resulting from the use of the Advanced Multimedia Security Lab's (AMSL) Face Morph Image dataset must cite the following source:
@article{https://doi.org/10.1049/iet-bmt.2017.0147,
author={Neubert, Tom and Makrushin, Andrey and Hildebrandt, Mario and Kraetzer, Christian and Dittmann, Jana},
title={Extended StirTrace benchmarking of biometric and forensic qualities of morphed face images},
journal={IET Biometrics},
volume={7},
number={4},
pages={325-332},
doi={https://doi.org/10.1049/iet-bmt.2017.0147},
url={https://ietresearch.onlinelibrary.wiley.com/doi/abs/10.1049/iet-bmt.2017.0147},
eprint={https://ietresearch.onlinelibrary.wiley.com/doi/pdf/10.1049/iet-bmt.2017.0147},
year={2018}
}