Resources
This section lists all ready-to-use resources you can find in this package.
Databases
These configuration files/resources contain parameters of the available databases.
Each configuration file defines at least the following arguments of the spoof.py script:
- database
- protocol
- groups
Replay-Attack Database
Replay-Attack is a database for face PAD experiments.
The Replay-Attack Database for face spoofing consists of 1300 video clips of photo and video attack attempts on 50 clients, recorded under different lighting conditions. The database was produced at the Idiap Research Institute, in Switzerland. The reference citation is [CAM12].
You can download the raw data of the Replay-Attack database by following the link.
- bob.pad.face.config.replay_attack.ORIGINAL_DIRECTORY = '[YOUR_REPLAY_ATTACK_DIRECTORY]'
  Value of ~/.bob_bio_databases.txt for this database.
- bob.pad.face.config.replay_attack.database = <bob.pad.face.database.ReplayPadDatabase object>
  The bob.pad.base.database.PadDatabase derivative with Replay-Attack database settings.
  Warning: This class only provides a programmatic interface to load data in an orderly manner, respecting usage protocols. It does not contain the raw data files. You should procure those yourself.
  Notice that original_directory is set to [YOUR_REPLAY_ATTACK_DIRECTORY]. You must create the ${HOME}/.bob_bio_databases.txt file and set this value to the place where you actually installed the Replay-Attack Database, as explained in the section Executing Baseline Algorithms.
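The databases file uses a simple NAME = value format, one entry per line. A minimal sketch of adding the entry from the shell (the path on the right-hand side is a placeholder; point it at your own copy of the data):

```shell
# Register the Replay-Attack data location in the bob databases file.
# The right-hand side path is a placeholder; replace it with your own.
echo '[YOUR_REPLAY_ATTACK_DIRECTORY] = /path/to/replayattack' >> "${HOME}/.bob_bio_databases.txt"
```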
- bob.pad.face.config.replay_attack.protocol = 'grandtest'
  The default protocol to use for reproducing the baselines.
  You may modify this at runtime with the --protocol option on the spoof.py command line, or with the keyword protocol in a configuration file loaded after this configuration resource.
- bob.pad.face.config.replay_attack.groups = ['train', 'dev', 'eval']
  The default groups to use for reproducing the baselines.
  You may modify this at runtime with the --groups option on the spoof.py command line, or with the keyword groups in a configuration file loaded after this configuration resource.
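Since configuration files are plain Python, overriding these keywords amounts to reassigning the names in a file loaded after this resource. A minimal sketch (the protocol name and group selection below are hypothetical examples; use values your database actually defines):

```python
# my_config.py -- a hypothetical configuration file, loaded after the
# replay-attack resource, that overrides its default protocol and groups.
# The protocol name "print" is an example; check your database's protocols.
protocol = "print"
groups = ["dev", "eval"]  # e.g. skip the 'train' group when only scoring
```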
Replay-Mobile Database
Replay-Mobile is a database for face PAD experiments.
The Replay-Mobile Database for face spoofing consists of 1030 video clips of photo and video attack attempts on 40 clients, recorded under different lighting conditions. The videos were recorded with current devices from the market: an iPad Mini 2 (running iOS) and an LG-G4 smartphone (running Android). The database was produced at the Idiap Research Institute (Switzerland) in collaboration with the Galician Research and Development Center in Advanced Telecommunications - Gradiant (Spain). The reference citation is [CBVM16].
You can download the raw data of the Replay-Mobile database by following the link.
- bob.pad.face.config.replay_mobile.ORIGINAL_DIRECTORY = '[YOUR_REPLAY_MOBILE_DIRECTORY]'
  Value of ~/.bob_bio_databases.txt for this database.
- bob.pad.face.config.replay_mobile.database = <bob.pad.face.database.ReplayMobilePadDatabase object>
  The bob.pad.base.database.PadDatabase derivative with Replay-Mobile database settings.
  Warning: This class only provides a programmatic interface to load data in an orderly manner, respecting usage protocols. It does not contain the raw data files. You should procure those yourself.
  Notice that original_directory is set to [YOUR_REPLAY_MOBILE_DIRECTORY]. You must create the ${HOME}/.bob_bio_databases.txt file and set this value to the place where you actually installed the Replay-Mobile Database, as explained in the section Executing Baseline Algorithms.
- bob.pad.face.config.replay_mobile.protocol = 'grandtest'
  The default protocol to use for reproducing the baselines.
  You may modify this at runtime with the --protocol option on the spoof.py command line, or with the keyword protocol in a configuration file loaded after this configuration resource.
- bob.pad.face.config.replay_mobile.groups = ['train', 'dev', 'eval']
  The default groups to use for reproducing the baselines.
  You may modify this at runtime with the --groups option on the spoof.py command line, or with the keyword groups in a configuration file loaded after this configuration resource.
MSU MFSD Database
MSU MFSD is a database for face PAD experiments.
The database was created at MSU for face-PAD experiments. The public version contains 280 videos corresponding to 35 clients. The videos are grouped as 'genuine' and 'attack'. The attack videos were constructed from the genuine ones and come in three kinds: print, iPad (video-replay), and iPhone (video-replay). Face locations are also provided for each frame of each video, but for some videos (6 of them) the face locations are not reliable, because the videos are not correctly oriented. The reference citation is [WHJ15].
You can download the raw data of the MSU MFSD database by following the link.
- bob.pad.face.config.msu_mfsd.ORIGINAL_DIRECTORY = '[YOUR_MSU_MFSD_DIRECTORY]'
  Value of ~/.bob_bio_databases.txt for this database.
- bob.pad.face.config.msu_mfsd.database = <bob.pad.face.database.MsuMfsdPadDatabase object>
  The bob.pad.base.database.PadDatabase derivative with MSU MFSD database settings.
  Warning: This class only provides a programmatic interface to load data in an orderly manner, respecting usage protocols. It does not contain the raw data files. You should procure those yourself.
  Notice that original_directory is set to [YOUR_MSU_MFSD_DIRECTORY]. You must create the ${HOME}/.bob_bio_databases.txt file and set this value to the place where you actually installed the MSU MFSD Database, as explained in the section Executing Baseline Algorithms.
- bob.pad.face.config.msu_mfsd.protocol = 'grandtest'
  The default protocol to use for reproducing the baselines.
  You may modify this at runtime with the --protocol option on the spoof.py command line, or with the keyword protocol in a configuration file loaded after this configuration resource.
- bob.pad.face.config.msu_mfsd.groups = ['train', 'dev', 'eval']
  The default groups to use for reproducing the baselines.
  You may modify this at runtime with the --groups option on the spoof.py command line, or with the keyword groups in a configuration file loaded after this configuration resource.
Aggregated Database
Aggregated Db is a database for face PAD experiments. It aggregates the data from three publicly available data sets: Replay-Attack, Replay-Mobile and MSU MFSD. You can download the data for these databases by following the corresponding links.
The reference citation for Replay-Attack is [CAM12], for Replay-Mobile it is [CBVM16], and for MSU MFSD it is [WHJ15].
- bob.pad.face.config.aggregated_db.ORIGINAL_DIRECTORY = '[YOUR_AGGREGATED_DB_DIRECTORIES]'
  Value of ~/.bob_bio_databases.txt for this database.
- bob.pad.face.config.aggregated_db.database = <bob.pad.face.database.AggregatedDbPadDatabase object>
  The bob.pad.base.database.PadDatabase derivative with Aggregated Db database settings.
  Warning: This class only provides a programmatic interface to load data in an orderly manner, respecting usage protocols. It does not contain the raw data files. You should procure those yourself.
  Notice that original_directory is set to [YOUR_AGGREGATED_DB_DIRECTORIES]. You must create the ${HOME}/.bob_bio_databases.txt file and set this value to the places where you actually installed the Replay-Attack, Replay-Mobile and MSU MFSD Databases. The paths pointing to these three databases must be separated by a space. The following note gives an example of the [YOUR_AGGREGATED_DB_DIRECTORIES] entry in the ${HOME}/.bob_bio_databases.txt file.
  Note:
  [YOUR_AGGREGATED_DB_DIRECTORIES] = <PATH_TO_REPLAY_ATTACK> <PATH_TO_REPLAY_MOBILE> <PATH_TO_MSU_MFSD>
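Because the three locations share a single space-separated entry, they can be recovered with a simple string split. An illustrative sketch (not the package's actual parsing code; the example paths are placeholders):

```python
# Illustrative only: unpacking a space-separated aggregated entry into
# the three per-database locations. The paths below are placeholders.
entry = "/data/replayattack /data/replaymobile /data/msu_mfsd"
replay_attack_dir, replay_mobile_dir, msu_mfsd_dir = entry.split(" ")
```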
- bob.pad.face.config.aggregated_db.protocol = 'grandtest'
  The default protocol to use for reproducing the baselines.
  You may modify this at runtime with the --protocol option on the spoof.py command line, or with the keyword protocol in a configuration file loaded after this configuration resource.
- bob.pad.face.config.aggregated_db.groups = ['train', 'dev', 'eval']
  The default groups to use for reproducing the baselines.
  You may modify this at runtime with the --groups option on the spoof.py command line, or with the keyword groups in a configuration file loaded after this configuration resource.
MIFS Database
MIFS is a face-makeup spoofing database adapted for face PAD experiments.
The database was assembled from a dataset of 107 makeup transformations taken from random YouTube makeup video tutorials, and adapted in this package for face-PAD experiments. The public version of the database contains 107 such transformations, each with two images of a subject before makeup, two images of the same subject after makeup, and two images of the target identity. For this package, a subset of 104 makeup transformations is selected, the target-identity images are discarded, and the remaining images are randomly distributed into three sets. More information can be found in the reference [CDSR17].
You can download the raw data of the MIFS database by following the link.
- bob.pad.face.config.mifs.ORIGINAL_DIRECTORY = '[YOUR_MIFS_DATABASE_DIRECTORY]'
  Value of ~/.bob_bio_databases.txt for this database.
- bob.pad.face.config.mifs.database = <bob.pad.face.database.MIFSPadDatabase object>
  The bob.pad.base.database.PadDatabase derivative with MIFS database settings.
  Warning: This class only provides a programmatic interface to load data in an orderly manner, respecting usage protocols. It does not contain the raw data files. You should procure those yourself.
  Notice that original_directory is set to [YOUR_MIFS_DATABASE_DIRECTORY]. You must create the ${HOME}/.bob_bio_databases.txt file and set this value to the place where you actually installed the MIFS Database, as explained in the section Executing Baseline Algorithms.
- bob.pad.face.config.mifs.protocol = 'grandtest'
  The default protocol to use for reproducing the baselines.
  You may modify this at runtime with the --protocol option on the spoof.py command line, or with the keyword protocol in a configuration file loaded after this configuration resource.
- bob.pad.face.config.mifs.groups = ['train', 'dev', 'eval']
  The default groups to use for reproducing the baselines.
  You may modify this at runtime with the --groups option on the spoof.py command line, or with the keyword groups in a configuration file loaded after this configuration resource.
Available face PAD systems
These configuration files/resources contain parameters of the available face PAD systems/algorithms.
Each configuration file defines at least the following arguments of the spoof.py script:
- sub_directory
- preprocessor
- extractor
- algorithm
LBP features of facial region + SVM for REPLAY-ATTACK
This file contains configurations to run an LBP and SVM based face PAD baseline. The settings are tuned for the Replay-Attack database. The idea of the algorithm is introduced in the following paper: [CAM12]. However, some settings differ from the ones introduced in the paper.
- bob.pad.face.config.lbp_svm.sub_directory = 'lbp_svm'
  Sub-directory where results will be placed.
  You may change this setting using the --sub-directory command-line option, or the attribute sub_directory in a configuration file loaded after this resource.
- bob.pad.face.config.lbp_svm.preprocessor = <bob.bio.video.preprocessor.Wrapper object>
  In the preprocessing stage the face is cropped in each frame of the input video given facial annotations. The size of the face is normalized to FACE_SIZE dimensions. Faces smaller than the MIN_FACE_SIZE threshold are discarded. The preprocessor is similar to the one introduced in [CAM12], which is selected by FACE_DETECTION_METHOD = None.
- bob.pad.face.config.lbp_svm.extractor = <bob.bio.video.extractor.Wrapper object>
  In the feature extraction stage LBP histograms are extracted from each frame of the preprocessed video. The parameters are similar to the ones introduced in [CAM12].
- bob.pad.face.config.lbp_svm.algorithm = <bob.pad.base.algorithm.SVM object>
  An SVM with an RBF kernel is used to classify the data into real and attack classes. One score is produced for each frame of the input video (frame_level_scores_flag = True).
  In contrast to [CAM12], a grid search over the SVM parameters is used to select successful settings. The grid search is done on a subset of the training data; the size of this subset is defined by the n_samples parameter. The data is also mean-std normalized (mean_std_norm_flag = True).
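With frame_level_scores_flag = True the classifier emits one score per frame rather than one per video. A common way to reduce these to a single video-level score is averaging; a minimal sketch (the averaging rule here is an illustrative assumption, not necessarily what the evaluation scripts do):

```python
# Hypothetical frame-level scores for one video (one score per frame,
# as produced with frame_level_scores_flag = True; higher = more "real").
frame_scores = [0.8, 1.1, 0.9, 1.2, 0.7]

# Collapse the per-frame scores into one video-level score by averaging.
video_score = sum(frame_scores) / len(frame_scores)
```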
Image Quality Measures as features of facial region + SVM for REPLAY-ATTACK
This file contains configurations to run an Image Quality Measures (IQM) and SVM based face PAD baseline. The settings are tuned for the Replay-Attack database. The IQM features used in this algorithm/resource are introduced in the following papers: [WHJ15] and [CBVM16].
- bob.pad.face.config.qm_svm.sub_directory = 'qm_svm'
  Sub-directory where results will be placed.
  You may change this setting using the --sub-directory command-line option, or the attribute sub_directory in a configuration file loaded after this resource.
- bob.pad.face.config.qm_svm.preprocessor = <bob.bio.video.preprocessor.Wrapper object>
  In the preprocessing stage the face is cropped in each frame of the input video given facial annotations. The size of the face is normalized to FACE_SIZE dimensions. Faces smaller than the MIN_FACE_SIZE threshold are discarded. The preprocessor is similar to the one introduced in [CAM12], which is selected by FACE_DETECTION_METHOD = None. The preprocessed frame is the RGB facial image, which is defined by RGB_OUTPUT_FLAG = True.
- bob.pad.face.config.qm_svm.extractor = <bob.bio.video.extractor.Wrapper object>
  In the feature extraction stage the Image Quality Measures are extracted from each frame of the preprocessed RGB video. The features to be computed are introduced in the following papers: [WHJ15] and [CBVM16].
- bob.pad.face.config.qm_svm.algorithm = <bob.pad.base.algorithm.SVM object>
  An SVM with an RBF kernel is used to classify the data into real and attack classes. One score is produced for each frame of the input video (frame_level_scores_flag = True). A grid search over the SVM parameters is used to select successful settings; the grid search is done on a subset of the training data, whose size is defined by the n_samples parameter. The data is also mean-std normalized (mean_std_norm_flag = True).
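The mean-std normalization enabled by mean_std_norm_flag = True is the usual per-feature standardization: subtract the training-set mean and divide by the training-set standard deviation. A minimal sketch with toy numbers:

```python
# Toy training matrix: 3 samples x 2 features.
train = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]

n = len(train)
# Per-feature mean and (population) standard deviation from the training set.
means = [sum(row[j] for row in train) / n for j in range(2)]
stds = [(sum((row[j] - means[j]) ** 2 for row in train) / n) ** 0.5 for j in range(2)]

# Standardize a new sample using the training-set statistics.
sample = [2.0, 25.0]
normalized = [(sample[j] - means[j]) / stds[j] for j in range(2)]
```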
Frame differences based features (motion analysis) + SVM for REPLAY-ATTACK
This file contains configurations to run a Frame Differences and SVM based face PAD baseline. The settings are tuned for the Replay-Attack database. The idea of the algorithm is inherited from the following paper: [AM11].
- bob.pad.face.config.frame_diff_svm.sub_directory = 'frame_diff_svm'
  Sub-directory where results will be placed.
  You may change this setting using the --sub-directory command-line option, or the attribute sub_directory in a configuration file loaded after this resource.
- bob.pad.face.config.frame_diff_svm.preprocessor = <bob.pad.face.preprocessor.FrameDifference object>
  In the preprocessing stage frame differences are computed for both facial and non-facial/background regions. All frames of the input video are considered, which is defined by number_of_frames = None. Frames containing faces smaller than the min_face_size = 50 threshold are discarded. Both RGB and gray-scale videos are accepted by the preprocessor. The preprocessing idea is introduced in [AM11].
- bob.pad.face.config.frame_diff_svm.extractor = <bob.pad.face.extractor.FrameDiffFeatures object>
  In the feature extraction stage five features are extracted for all non-overlapping windows in the frame-difference input signals: five features are computed for each window of the facial region, and the same is done for the non-facial region. The non-overlapping option is controlled by overlap = 0; the length of the window is defined by the window_size argument. The features are introduced in the following paper: [AM11].
- bob.pad.face.config.frame_diff_svm.algorithm = <bob.pad.base.algorithm.SVM object>
  An SVM with an RBF kernel is used to classify the data into real and attack classes. One score is produced for each frame of the input video (frame_level_scores_flag = True). A grid search over the SVM parameters is used to select successful settings; the grid search is done on a subset of the training data, whose size is defined by the n_samples parameter. The data is also mean-std normalized (mean_std_norm_flag = True).
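The windowing described above can be sketched as follows. The five statistics chosen here (min, max, mean, standard deviation, median) are an assumption for illustration only; see [AM11] and the FrameDiffFeatures code for the actual feature definitions:

```python
def window_features(signal, window_size):
    """Split a 1-D frame-difference signal into non-overlapping windows
    (overlap = 0) and compute five summary statistics per window.
    The statistics are illustrative, not the package's actual features."""
    feats = []
    for start in range(0, len(signal) - window_size + 1, window_size):
        w = sorted(signal[start:start + window_size])
        mean = sum(w) / window_size
        var = sum((x - mean) ** 2 for x in w) / window_size
        median = w[window_size // 2]
        feats.append([min(w), max(w), mean, var ** 0.5, median])
    return feats

# Six frame differences, window_size = 3 -> two windows, five features each.
feats = window_features([0.1, 0.4, 0.2, 0.9, 0.5, 0.3], window_size=3)
```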
LBP features of facial region + SVM for Aggregated Database
This file contains configurations to run an LBP and SVM based face PAD baseline. The settings of the preprocessor and extractor are tuned for the Replay-Attack database. In the SVM algorithm the amount of training data is reduced, speeding up training for large data sets such as the Aggregated PAD database. The idea of the algorithm is introduced in the following paper: [CAM12]. However, some settings differ from the ones introduced in the paper.
- bob.pad.face.config.lbp_svm_aggregated_db.sub_directory = 'lbp_svm_aggregated_db'
  Sub-directory where results will be placed.
  You may change this setting using the --sub-directory command-line option, or the attribute sub_directory in a configuration file loaded after this resource.
- bob.pad.face.config.lbp_svm_aggregated_db.preprocessor = <bob.bio.video.preprocessor.Wrapper object>
  In the preprocessing stage the face is cropped in each frame of the input video given facial annotations. The size of the face is normalized to FACE_SIZE dimensions. Faces smaller than the MIN_FACE_SIZE threshold are discarded. The preprocessor is similar to the one introduced in [CAM12], which is selected by FACE_DETECTION_METHOD = None.
- bob.pad.face.config.lbp_svm_aggregated_db.extractor = <bob.bio.video.extractor.Wrapper object>
  In the feature extraction stage LBP histograms are extracted from each frame of the preprocessed video. The parameters are similar to the ones introduced in [CAM12].
- bob.pad.face.config.lbp_svm_aggregated_db.algorithm = <bob.pad.base.algorithm.SVM object>
  An SVM with an RBF kernel is used to classify the data into real and attack classes. One score is produced for each frame of the input video (frame_level_scores_flag = True). A grid search over the SVM parameters is used to select successful settings; the grid search is done on a subset of the training data, whose size is defined by the n_samples parameter. The final training of the SVM is done on a subset of the training data (reduced_train_data_flag = True); the size of the subset for the final training stage is defined by the n_train_samples argument. The data is also mean-std normalized (mean_std_norm_flag = True).
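The sub-sampling controlled by reduced_train_data_flag and n_train_samples can be pictured as a random draw from the training set. A minimal sketch (the uniform random choice and the fixed seed are assumptions for illustration):

```python
import random

def reduce_train_data(features, n_train_samples, seed=0):
    """Keep a random subset of the training features, speeding up the
    final SVM training on large aggregated data sets (illustrative)."""
    if len(features) <= n_train_samples:
        return features
    rng = random.Random(seed)
    return rng.sample(features, n_train_samples)

# Draw 100 of 10000 toy training samples.
subset = reduce_train_data(list(range(10000)), n_train_samples=100)
```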
Image Quality Measures as features of facial region + SVM for Aggregated Database
This file contains configurations to run an Image Quality Measures (IQM) and SVM based face PAD baseline. The settings of the preprocessor and extractor are tuned for the Replay-Attack database. In the SVM algorithm the amount of training data is reduced, speeding up training for large data sets such as the Aggregated PAD database. The IQM features used in this algorithm/resource are introduced in the following papers: [WHJ15] and [CBVM16].
- bob.pad.face.config.qm_svm_aggregated_db.sub_directory = 'qm_svm_aggregated_db'
  Sub-directory where results will be placed.
  You may change this setting using the --sub-directory command-line option, or the attribute sub_directory in a configuration file loaded after this resource.
- bob.pad.face.config.qm_svm_aggregated_db.preprocessor = <bob.bio.video.preprocessor.Wrapper object>
  In the preprocessing stage the face is cropped in each frame of the input video given facial annotations. The size of the face is normalized to FACE_SIZE dimensions. Faces smaller than the MIN_FACE_SIZE threshold are discarded. The preprocessor is similar to the one introduced in [CAM12], which is selected by FACE_DETECTION_METHOD = None. The preprocessed frame is the RGB facial image, which is defined by RGB_OUTPUT_FLAG = True.
- bob.pad.face.config.qm_svm_aggregated_db.extractor = <bob.bio.video.extractor.Wrapper object>
  In the feature extraction stage the Image Quality Measures are extracted from each frame of the preprocessed RGB video. The features to be computed are introduced in the following papers: [WHJ15] and [CBVM16].
- bob.pad.face.config.qm_svm_aggregated_db.algorithm = <bob.pad.base.algorithm.SVM object>
  An SVM with an RBF kernel is used to classify the data into real and attack classes. One score is produced for each frame of the input video (frame_level_scores_flag = True). A grid search over the SVM parameters is used to select successful settings; the grid search is done on a subset of the training data, whose size is defined by the n_samples parameter. The final training of the SVM is done on a subset of the training data (reduced_train_data_flag = True); the size of the subset for the final training stage is defined by the n_train_samples argument. The data is also mean-std normalized (mean_std_norm_flag = True).
Frame differences based features (motion analysis) + SVM for Aggregated Database
This file contains configurations to run a Frame Differences and SVM based face PAD baseline. The settings of the preprocessor and extractor are tuned for the Replay-Attack database. In the SVM algorithm the amount of training data is reduced, speeding up training for large data sets such as the Aggregated PAD database. The frame-difference features used in this algorithm/resource are introduced in [AM11].
- bob.pad.face.config.frame_diff_svm_aggregated_db.sub_directory = 'frame_diff_svm'
  Sub-directory where results will be placed.
  You may change this setting using the --sub-directory command-line option, or the attribute sub_directory in a configuration file loaded after this resource.
- bob.pad.face.config.frame_diff_svm_aggregated_db.preprocessor = <bob.pad.face.preprocessor.FrameDifference object>
  In the preprocessing stage frame differences are computed for both facial and non-facial/background regions. All frames of the input video are considered, which is defined by number_of_frames = None. Frames containing faces smaller than the min_face_size = 50 threshold are discarded. Both RGB and gray-scale videos are accepted by the preprocessor. The preprocessing idea is introduced in [AM11].
- bob.pad.face.config.frame_diff_svm_aggregated_db.extractor = <bob.pad.face.extractor.FrameDiffFeatures object>
  In the feature extraction stage five features are extracted for all non-overlapping windows in the frame-difference input signals: five features are computed for each window of the facial region, and the same is done for the non-facial region. The non-overlapping option is controlled by overlap = 0; the length of the window is defined by the window_size argument. The features are introduced in the following paper: [AM11].
- bob.pad.face.config.frame_diff_svm_aggregated_db.algorithm = <bob.pad.base.algorithm.SVM object>
  An SVM with an RBF kernel is used to classify the data into real and attack classes. One score is produced for each frame of the input video (frame_level_scores_flag = True). A grid search over the SVM parameters is used to select successful settings; the grid search is done on a subset of the training data, whose size is defined by the n_samples parameter. The final training of the SVM is done on a subset of the training data (reduced_train_data_flag = True); the size of the subset for the final training stage is defined by the n_train_samples argument. The data is also mean-std normalized (mean_std_norm_flag = True).
Image Quality Measures as features of facial region + Logistic Regression
This file contains configurations to run an Image Quality Measures (IQM) and Logistic Regression (LR) based face PAD algorithm. The settings of the preprocessor and extractor are tuned for the Replay-Attack database. The IQM features used in this algorithm/resource are introduced in the following papers: [WHJ15] and [CBVM16].
- bob.pad.face.config.qm_lr.sub_directory = 'qm_lr'
  Sub-directory where results will be placed.
  You may change this setting using the --sub-directory command-line option, or the attribute sub_directory in a configuration file loaded after this resource.
- bob.pad.face.config.qm_lr.preprocessor = <bob.bio.video.preprocessor.Wrapper object>
  In the preprocessing stage the face is cropped in each frame of the input video given facial annotations. The size of the face is normalized to FACE_SIZE dimensions. Faces smaller than the MIN_FACE_SIZE threshold are discarded. The preprocessor is similar to the one introduced in [CAM12], which is selected by FACE_DETECTION_METHOD = None. The preprocessed frame is the RGB facial image, which is defined by RGB_OUTPUT_FLAG = True.
- bob.pad.face.config.qm_lr.extractor = <bob.bio.video.extractor.Wrapper object>
  In the feature extraction stage the Image Quality Measures are extracted from each frame of the preprocessed RGB video. The features to be computed are introduced in the following papers: [WHJ15] and [CBVM16].
- bob.pad.face.config.qm_lr.algorithm = <bob.pad.base.algorithm.LogRegr object>
  Logistic Regression is used to classify the data into real and attack classes. One score is produced for each frame of the input video (frame_level_scores_flag = True). Sub-sampling of the training data is not used here; the sub-sampling flags keep their default False values.
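At test time a trained logistic-regression model maps each frame's feature vector to a class probability through the logistic function. A minimal sketch (the weights, bias and feature values below are hypothetical, purely for illustration):

```python
import math

def lr_frame_score(features, weights, bias):
    """Probability of the 'real' class for one frame's feature vector
    under a logistic-regression model (hypothetical parameters)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical trained parameters and one frame's IQM feature vector.
score = lr_frame_score([0.2, 1.5], weights=[0.8, -0.3], bias=0.1)
```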
Image Quality Measures as features of facial region + GMM-based one-class classifier (anomaly detector)
This file contains configurations to run an Image Quality Measures (IQM) and one-class GMM based face PAD algorithm. The settings of the preprocessor and extractor are tuned for the Replay-Attack database. The IQM features used in this algorithm/resource are introduced in the following papers: [WHJ15] and [CBVM16].
- bob.pad.face.config.qm_one_class_gmm.sub_directory = 'qm_one_class_gmm'
  Sub-directory where results will be placed.
  You may change this setting using the --sub-directory command-line option, or the attribute sub_directory in a configuration file loaded after this resource.
- bob.pad.face.config.qm_one_class_gmm.preprocessor = <bob.bio.video.preprocessor.Wrapper object>
  In the preprocessing stage the face is cropped in each frame of the input video given facial annotations. The size of the face is normalized to FACE_SIZE dimensions. Faces smaller than the MIN_FACE_SIZE threshold are discarded. The preprocessor is similar to the one introduced in [CAM12], which is selected by FACE_DETECTION_METHOD = None. The preprocessed frame is the RGB facial image, which is defined by RGB_OUTPUT_FLAG = True.
- bob.pad.face.config.qm_one_class_gmm.extractor = <bob.bio.video.extractor.Wrapper object>
  In the feature extraction stage the Image Quality Measures are extracted from each frame of the preprocessed RGB video. The features to be computed are introduced in the following papers: [WHJ15] and [CBVM16].
- bob.pad.face.config.qm_one_class_gmm.algorithm = <bob.pad.base.algorithm.OneClassGMM object>
  A GMM with 50 clusters is trained using samples from the real class only. The pre-trained GMM is then used to classify the data into real and attack classes. One score is produced for each frame of the input video (frame_level_scores_flag = True).
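The one-class idea can be sketched with a single Gaussian instead of a 50-cluster GMM: fit the density on real-class samples only, then score test samples by log-likelihood, so a low likelihood suggests an attack. A toy 1-D sketch (the data and the single-Gaussian simplification are assumptions for illustration):

```python
import math

# Toy 1-D "real class" training values (e.g. one IQM feature).
real = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
mu = sum(real) / len(real)
var = sum((x - mu) ** 2 for x in real) / len(real)

def log_likelihood(x):
    """Log-density of x under the Gaussian fitted on real data only."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

# A sample near the real cluster scores higher than a far-away,
# attack-like one.
near, far = log_likelihood(1.0), log_likelihood(3.0)
```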
Image Quality Measures as features of facial region + one-class SVM classifier (anomaly detector) for Aggregated Database
This file contains configurations to run an Image Quality Measures (IQM) and one-class SVM based face PAD algorithm. The settings of the preprocessor and extractor are tuned for the Replay-Attack database. In the SVM algorithm the amount of training data is reduced, speeding up training for large data sets such as the Aggregated PAD database. The IQM features used in this algorithm/resource are introduced in the following papers: [WHJ15] and [CBVM16].
- bob.pad.face.config.qm_one_class_svm_aggregated_db.sub_directory = 'qm_one_class_svm_aggregated_db'
  Sub-directory where results will be placed.
  You may change this setting using the --sub-directory command-line option, or the attribute sub_directory in a configuration file loaded after this resource.
- bob.pad.face.config.qm_one_class_svm_aggregated_db.preprocessor = <bob.bio.video.preprocessor.Wrapper object>
  In the preprocessing stage the face is cropped in each frame of the input video given facial annotations. The size of the face is normalized to FACE_SIZE dimensions. Faces smaller than the MIN_FACE_SIZE threshold are discarded. The preprocessor is similar to the one introduced in [CAM12], which is selected by FACE_DETECTION_METHOD = None. The preprocessed frame is the RGB facial image, which is defined by RGB_OUTPUT_FLAG = True.
- bob.pad.face.config.qm_one_class_svm_aggregated_db.extractor = <bob.bio.video.extractor.Wrapper object>
  In the feature extraction stage the Image Quality Measures are extracted from each frame of the preprocessed RGB video. The features to be computed are introduced in the following papers: [WHJ15] and [CBVM16].
- bob.pad.face.config.qm_one_class_svm_aggregated_db.algorithm = <bob.pad.base.algorithm.SVM object>
  A one-class SVM with an RBF kernel is used to classify the data into real and attack classes. One score is produced for each frame of the input video (frame_level_scores_flag = True). A grid search over the SVM parameters is used to select successful settings; the grid search is done on a subset of the training data, whose size is defined by the n_samples parameter. The final training of the SVM is done on all training data (reduced_train_data_flag = False). The data is also mean-std normalized (mean_std_norm_flag = True).