VRBiom


Get Data


Description

The VRBiom (Virtual Reality Dataset for Biometric Applications) dataset has been acquired using a head-mounted display (HMD) to benchmark and develop various biometric use-cases such as iris and periocular recognition, along with associated sub-tasks such as detection and semantic segmentation. The VRBiom dataset consists of 900 short videos acquired from 25 individuals, recorded in the NIR spectrum. To encompass real-world variations, the dataset includes recordings under three gaze conditions: steady, moving, and partially closed eyes. It also maintains an equal split of recordings without and with glasses to facilitate the analysis of eyewear. These videos are characterized by non-frontal views of the eye and a relatively low spatial resolution (400 × 400). The dataset additionally includes 1104 presentation attacks (PAs) constructed from 92 PA instruments (PAIs). These PAIs fall into six categories constructed through combinations of print attacks (real and synthetic identities), fake 3D eyeballs, plastic eyes, and various types of masks and mannequins.

 

Reference

If you use this dataset, please cite the following publication(s), depending on the use:

@article{vrbiom_dataset_arxiv2024,
  author  = {Kotwal, Ketan and Ulucan, Ibrahim and \"{O}zbulak, G\"{o}khan and Selliah, Janani and Marcel, S\'{e}bastien},
  title   = {VRBiom: A New Periocular Dataset for Biometric Applications of HMD},
  journal = {arXiv preprint arXiv:2407.02150},
  month   = {Jul},
  year    = {2024},
  doi     = {10.48550/arXiv.2407.02150}
}

 

@inproceedings{vrbiom_pad_ijcb2024,
  author    = {Kotwal, Ketan and \"{O}zbulak, G\"{o}khan and Marcel, S\'{e}bastien},
  title     = {Assessing the Reliability of Biometric Authentication on Virtual Reality Devices},
  booktitle = {Proceedings of IEEE International Joint Conference on Biometrics (IJCB2024)},
  month     = {Sep},
  year      = {2024}
}

 

Data collection

The VRBiom dataset consists of nearly 2000 iris/periocular videos captured using cameras integrated into the Meta Quest Pro headset. This dataset includes 900 bona-fide videos from 25 subjects and 1104 PA videos. The acquisition processes for the bona-fide and PA recordings are described below:

 

1. Bona-fide Recordings:

A total of 25 subjects, aged between 18 and 50 and representing a diverse range of skin tones and eye colors, participated in the data collection process. The recordings were divided into two sub-sessions: the first without the subject wearing glasses and the second with glasses. For each sub-session, three recordings were captured for each of the following gaze variations:

  • Steady Gaze: The subject maintains a nearly fixed gaze position by fixating their eyes on a specific (virtual) object.
  • Moving Gaze: The subject’s gaze moves freely across the scene.
  • Partially Closed Eyes: The subject keeps their eyes partially closed without focusing on any particular gaze.

Figure 1. Samples of bona-fide recordings from the VRBiom dataset. Each row presents a sample of steady gaze, moving gaze, and partially closed eyes (from left to right). Top and bottom rows refer to recordings without and with glasses, respectively.
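The protocol above fixes the per-subject structure of the bona-fide set. A minimal sketch of the implied counts, assuming each capture yields one video per eye (an assumption; the description does not state the per-eye split explicitly, but it is consistent with the stated total of 900):

```python
# Bona-fide recording protocol counts for VRBiom (sketch).
# The per-eye factor is an assumption inferred from the stated total of 900.
SUBJECTS = 25    # participants
EYEWEAR = 2      # sub-sessions: without glasses, with glasses
GAZES = 3        # steady, moving, partially closed eyes
RECORDINGS = 3   # recordings per gaze variation
EYES = 2         # assumed: one video per eye per capture

per_subject = EYEWEAR * GAZES * RECORDINGS * EYES
total = SUBJECTS * per_subject
print(per_subject, total)  # 36 900
```

Under this assumption, each subject contributes 36 videos, matching the 900-video total stated in the description.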

 

2. PA Recordings:

Most of the PAs were constructed by combining attack instruments targeting two regions:

  • the eye region
  • the periocular region

For eyes, a variety of instruments including fake 3D eyes (eyeballs), printouts from synthetic and real identities, and plastic-made synthetic eyes were used to construct attacks. For the periocular region, mannequins and 3D masks made of different materials were employed.
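A quick consistency check on the stated PA totals (the even per-instrument breakdown below is inferred from the numbers alone, not stated in the description):

```python
# PA counts for VRBiom: 1104 attack videos from 92 PA instruments.
PA_VIDEOS = 1104
PA_INSTRUMENTS = 92

videos_per_instrument, remainder = divmod(PA_VIDEOS, PA_INSTRUMENTS)
print(videos_per_instrument, remainder)  # 12 0
```

The totals divide evenly, suggesting 12 recordings per PA instrument.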

Figure 2. PA instruments used to construct attacks in the VRBiom: (a) Rigid masks with own eyes, (b) rigid masks with fake eyeballs, (c) flex masks with print attacks, (d) flex masks with print attacks, (e) flex masks with fake eyeballs, and (f) mannequins.