Biometrics Evaluation and Testing (BEAT)
Competition on face recognition in mobile environment using the MOBIO database
In the context of the BEAT project, the Biometric group at the Idiap Research Institute is organizing the second competition on face recognition for the 2013 International Conference on Biometrics (ICB-2013), to be held in Madrid, Spain, on June 4-7, 2013. Researchers in biometrics are cordially invited to participate in this competition, which will help the community evaluate the progress made over the last couple of years.
The competition will be carried out on the MOBIO database. MOBIO is a challenging bimodal (face/speaker) database recorded from 152 people. It has a female-male ratio of nearly 1:2 (52 females and 100 males) and was collected between August 2008 and July 2010 at six different sites in five different countries. The images provided by the database were captured with mobile phone cameras and a laptop computer.
More technical details about the MOBIO database can be found on its official webpage: https://www.idiap.ch/dataset/mobio
Particularity of the MOBIO database
The MOBIO database is designed to provide data that was recorded in a natural environment, i.e., using mobile devices. Hence, algorithms that perform well on this database are likely to be suitable for other real-world applications that do not require a predefined image recording setup.
In contrast to many of the well-known facial image databases, the MOBIO database provides unbiased face verification protocols, one for male and one for female clients. These protocols partition the clients of the database into three different groups:
- a Training set: used to train the parameters of your algorithm, e.g., to create the PCA projection matrix for an eigenface-based algorithm
- a Development set: used to optimize the hyper-parameters of your algorithm, e.g., the number of dimensions of the PCA matrix or the choice of distance function
- an Evaluation set: used to evaluate the generalization performance of your algorithm on previously unseen data
The Development and Evaluation sets are further split into images that are used to enroll client models, and probe images that are tested against all client models. In contrast to many other protocols, several images per client are used to enroll a model; a minimal sketch of this procedure follows.
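The sketch below illustrates the enrollment and scoring procedure, assuming every image has already been mapped to a fixed-length feature vector. All names are illustrative, not part of the distributed file lists, and the mean-vector model and distance function are only one possible choice.

```python
# A minimal sketch of the enrollment/probing protocol, assuming every
# image has already been mapped to a fixed-length feature vector.
import numpy as np

def enroll_models(enroll_features):
    """Build one model per client from SEVERAL enrollment images;
    here the model is simply the mean feature vector."""
    return {client_id: np.mean(features, axis=0)
            for client_id, features in enroll_features.items()}

def score_all(models, probes):
    """Each probe is compared against EVERY client model, so the number
    of scores equals (number of clients) x (number of probes)."""
    return {(client_id, probe_id): -np.linalg.norm(model - probe)
            for client_id, model in models.items()
            for probe_id, probe in probes.items()}
```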
The following table gives a detailed overview of the partition of the database into the Training, Development, and Evaluation sets, including the number of clients (identities) in each set, the number of files used for enrollment and as probes, and the number of scores that need to be computed for the experiments:
Protocol | Train: Clients | Train: Files | Dev: Clients | Dev: Enroll. Files | Dev: Probe Files | Dev: Scores | Eval: Clients | Eval: Enroll. Files | Eval: Probe Files | Eval: Scores
---|---|---|---|---|---|---|---|---|---|---
MALE | 37 | 7104 | 24 | 120 | 2520 | 60480 | 38 | 190 | 3990 | 151620
FEMALE | 13 | 2496 | 18 | 90 | 1890 | 34020 | 20 | 100 | 2100 | 42000
(total) | 50 | 9600 | 42 | 210 | 4410 | 94500 | 58 | 290 | 6090 | 193620
Particularity of the evaluation
The particularity of this evaluation is that all participants are restricted to the same predefined evaluation protocol. This allows a more appropriate and objective comparison between the different systems.
Since the evaluation is done on data recorded on mobile devices, the processing time and the memory footprint of the algorithms are given special attention. For this reason, participants are invited to report both quantities at all stages of their system (a form will be sent to participants at evaluation time).
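As a rough illustration, wall-clock time and peak memory can be obtained on Linux/MacOS with the Python standard library alone; `run_my_system` below is a hypothetical placeholder for your own code, and the official reporting format is defined by the form mentioned above.

```python
# A rough way to measure processing time and peak memory on Linux/MacOS
# using only the Python standard library.
import resource
import time

start = time.time()
run_my_system()  # hypothetical entry point of your recognition system
elapsed = time.time() - start

# Peak resident set size of this process: kilobytes on Linux,
# bytes on MacOS (check your platform's getrusage() convention).
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print("wall-clock time: %.1f s, peak memory: %d" % (elapsed, peak))
```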
This competition is focused on evaluating face recognition systems rather than face detection systems. Hence, the database provides hand-labeled eye locations for all images of the database. If your algorithm does not make use of these positions, please add a note to your system description (see below).
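For illustration, the eye locations are commonly used for geometric normalization, i.e., rotating, scaling, and cropping each image so that the eyes land on fixed positions. The following sketch uses scikit-image; the crop size and target eye positions are assumptions for illustration, not values prescribed by the competition.

```python
# A minimal sketch of geometric face normalization from the hand-labeled
# eye positions, using scikit-image. The crop size and target eye
# positions below are illustrative assumptions.
import numpy as np
from skimage.transform import SimilarityTransform, warp

CROP_H, CROP_W = 80, 64          # size of the cropped face (assumption)
TARGET_RIGHT_EYE = (16.0, 16.0)  # desired (x, y) of the right eye
TARGET_LEFT_EYE = (48.0, 16.0)   # desired (x, y) of the left eye

def align_face(image, right_eye, left_eye):
    """Rotate, scale, and crop `image` so that the annotated eye
    positions land on the fixed target locations above."""
    src = np.asarray([right_eye, left_eye], dtype=float)
    dst = np.asarray([TARGET_RIGHT_EYE, TARGET_LEFT_EYE], dtype=float)
    tform = SimilarityTransform()
    tform.estimate(src, dst)  # two point pairs fix a similarity transform
    # warp() expects the inverse mapping (crop coords -> image coords)
    return warp(image, tform.inverse, output_shape=(CROP_H, CROP_W))
```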
Downloading the data
The data for the competition can be downloaded from the download section of the original MOBIO website. This will require an EULA to be signed. After getting access, you should download the file "ICB2013-train_dev_image.tar.gz", which contains both the image data and the file lists defining the evaluation protocol.
For an example of how to use these file lists, we provide a baseline PCA+LDA system, which you can download here. The baseline script is written in Python. Running the baseline requires the open source library Bob to be installed (currently only available for Linux and MacOS based operating systems; we are working on an MS Windows version of Bob). When you put the "baseline.py" file into the same directory where you extracted the MOBIO data, the script should run out of the box; otherwise you might have to specify some command line options (see "baseline.py --help" for details).
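For orientation, here is what the core of such a PCA+LDA pipeline can look like. This is not the distributed baseline.py (which is based on Bob), but a minimal sketch using scikit-learn and numpy, assuming features are already arranged as (n_samples, n_features) arrays.

```python
# Not the distributed baseline.py, but a minimal sketch of a PCA+LDA
# verification pipeline using scikit-learn and numpy.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train(features, labels, n_pca=100):
    """Fit PCA on the training set, then LDA in the PCA subspace."""
    pca = PCA(n_components=n_pca).fit(features)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(features), labels)
    return pca, lda

def project(pca, lda, features):
    return lda.transform(pca.transform(features))

def enroll(pca, lda, enroll_features):
    """The client model is the mean of the projected enrollment images."""
    return project(pca, lda, enroll_features).mean(axis=0)

def score(model, probe):
    """Higher is better: negative Euclidean distance in LDA space."""
    return -np.linalg.norm(model - probe)
```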
If you have trouble with either downloading the data or running the baseline script, please feel free to contact us.
Important Dates
At the beginning of the competition, participants will be provided with the training list, the list of hand-labeled eye positions, and the enrollment and probe lists of the Development set. Additionally, the source code of a baseline face recognition system will be published to give an example of how to use these lists. At evaluation time (March 1, 2013), the equivalent lists of the Evaluation set will be made available to the participants.
At the end of the competition (March 15, 2013), participants are invited to deliver the resulting score files for the Development and Evaluation sets, together with a short description of their system(s). These results will be published in a conference paper at ICB-2013.
Here is a summary of the important dates of the evaluation:
Registration Due | January 14, 2013
---|---
Availability of Training and Development sets | January 14, 2013
Availability of Evaluation set | March 1, 2013
Submission of the Results + System description | March 15, 2013
Publication of the Results at ICB-2013 | April 8, 2013
Contacts
Manuel Günther | Manuel.Guenther {at} idiap.ch
Sébastien Marcel | Sebastien.Marcel {at} idiap.ch
Competition on speaker recognition in mobile environment using the MOBIO database
In the context of the BEAT project, the Biometric group at the Idiap Research Institute is organizing the second competition on text-independent speaker recognition for the 2013 International Conference on Biometrics (ICB-2013), to be held in Madrid, Spain, on June 4-7, 2013. Researchers in biometrics are cordially invited to participate in this competition, which will help the community evaluate the progress made over the last couple of years.
The competition will be carried out on the MOBIO database. MOBIO is a challenging bimodal (face/speaker) database recorded from 152 people. It has a female-male ratio of nearly 1:2 (52 females and 100 males) and was collected between August 2008 and July 2010 at six different sites in five different countries. This led to a diverse bimodal database with both native and non-native English speakers.
More technical details about the MOBIO database can be found on its official webpage: https://www.idiap.ch/dataset/mobio
The evaluation plan can be found here.
Particularity of the MOBIO database
Compared to other evaluation databases, such as those used in the NIST Speaker Recognition Evaluations (NIST SRE), the MOBIO database is more challenging, especially regarding the following two points:
- The data was acquired on mobile devices, possibly with real environmental noise.
- The speech segments are relatively short (around 10 seconds or less).
The following table gives a detailed overview of the partition of the database into the training set (Background), development set (DEV), and evaluation set (EVAL).
Protocol | Backgr.: Spks | Backgr.: Files | DEV Train: Targets | DEV Train: Files | DEV Test: Spks | DEV Test: Files | DEV Test: Trials | EVAL Train: Targets | EVAL Train: Files | EVAL Test: Spks | EVAL Test: Files | EVAL Test: Trials
---|---|---|---|---|---|---|---|---|---|---|---|---
MALE | 37 | 7104 | 24 | 120 | 24 | 2520 | 60480 | 38 | 190 | 38 | 3990 | 151620
FEMALE | 13 | 2496 | 18 | 90 | 18 | 1890 | 34020 | 20 | 100 | 20 | 2100 | 42000
TOTAL | 50 | 9600 | 42 | 210 | 42 | 4410 | 94500 | 58 | 290 | 58 | 6090 | 193620
Particularity of the evaluation
- The particularity of this evaluation is that all participants are restricted to a single common protocol. This allows a more appropriate and objective comparison between the different systems.
- Since the evaluation is done on data recorded on mobile devices, the processing time and the memory footprint are given special attention. For this reason, participants are invited to report both quantities at all stages of their system (a form will be sent to participants at evaluation time).
Downloading the data
The data for the competition can be downloaded from the download section of the original MOBIO website. This will require an EULA to be signed. After getting access, you should download the file "ICB2013-train_dev_audio.tar.gz", which contains both the audio data and the file lists defining the evaluation protocol.
For an example of how to use these file lists, we provide a baseline UBM-GMM system, which you can download here (last update: 31/01/2013; thanks to Flavio!). The baseline script is written in Python. Running the baseline requires the open source library Bob to be installed (currently only available for Linux and MacOS based operating systems; we are working on an MS Windows version of Bob). When you put the "baseline.py" file into the same directory where you extracted the MOBIO data, the script should run out of the box; otherwise you might have to specify some command line options (see "baseline.py --help" for details).
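For orientation, the following sketch shows the core of a UBM-GMM system: UBM training, MAP adaptation of the means, and log-likelihood-ratio scoring. It is not the distributed baseline.py (which is based on Bob), but an illustration using scikit-learn and numpy; acoustic feature extraction (e.g., MFCCs) is assumed to have happened already.

```python
# Not the distributed baseline.py, but a minimal sketch of the UBM-GMM
# approach using scikit-learn and numpy. Inputs are assumed to be
# per-utterance matrices of acoustic features of shape (n_frames, n_dims).
import copy
import numpy as np
from sklearn.mixture import GaussianMixture

def train_ubm(background_features, n_gaussians=256):
    """Fit the Universal Background Model on pooled background data."""
    return GaussianMixture(n_components=n_gaussians,
                           covariance_type='diag').fit(background_features)

def enroll_speaker(ubm, enroll_features, relevance=4.0):
    """MAP-adapt the UBM means towards the speaker's enrollment data."""
    resp = ubm.predict_proba(enroll_features)       # (n_frames, n_gauss)
    n_k = resp.sum(axis=0) + 1e-10                  # soft occupation counts
    e_k = resp.T @ enroll_features / n_k[:, None]   # per-component data mean
    alpha = (n_k / (n_k + relevance))[:, None]      # adaptation coefficient
    model = copy.deepcopy(ubm)                      # keep weights/covariances
    model.means_ = alpha * e_k + (1.0 - alpha) * ubm.means_
    return model

def score(model, ubm, probe_features):
    """Average log-likelihood ratio between speaker model and UBM."""
    return np.mean(model.score_samples(probe_features)
                   - ubm.score_samples(probe_features))
```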
If you have trouble with either downloading the data or running the baseline script, please feel free to contact us.
Important Dates
At the beginning of the competition, participants will be provided with the training list for the background models, and the training and test lists of the Development set. Additionally, the source code of a baseline speaker recognition system will be published to give an example of how to use these lists. At evaluation time (March 1, 2013), the equivalent lists of the Evaluation set will be made available to the participants.
At the end of the competition (March 15, 2013), participants are invited to deliver the resulting score files for the Development and Evaluation sets, together with a short description of their system(s). These results will be published in a conference paper at ICB-2013.
Registration Due | January 14, 2013
---|---
Availability of Training and Development sets | January 14, 2013
Availability of Evaluation set | March 1, 2013
Submission of the Results + System description | March 15, 2013
Publication of the Results at ICB-2013 | April 8, 2013
Contacts
Elie Khoury | Elie.Khoury {at} idiap.ch
Sébastien Marcel | Sebastien.Marcel {at} idiap.ch