Tools implemented in bob.bio.face

Summary

Databases

bob.bio.face.database.ARFaceBioDatabase([…])

ARFace database implementation of bob.bio.base.database.BioDatabase interface.

bob.bio.face.database.AtntBioDatabase([…])

ATNT database implementation of bob.bio.base.database.BioDatabase interface.

bob.bio.face.database.CasiaAfricaDatabase(…)

The Casia-Face-Africa dataset is composed of 1133 identities from different ethnic groups in Nigeria.

bob.bio.face.database.MobioDatabase(protocol)

The MOBIO dataset is a video database containing bimodal data (face/speaker).

bob.bio.face.database.ReplayBioDatabase(**kwargs)

Replay attack database implementation of bob.bio.base.database.BioDatabase interface.

bob.bio.face.database.ReplayMobileBioDatabase([…])

ReplayMobile database implementation of bob.bio.base.database.BioDatabase interface.

bob.bio.face.database.GBUBioDatabase([…])

GBU database implementation of bob.bio.base.database.BioDatabase interface.

bob.bio.face.database.LFWBioDatabase([…])

LFW database implementation of bob.bio.base.database.Database interface.

bob.bio.face.database.MultipieDatabase(protocol)

The CMU Multi-PIE face database contains more than 750,000 images of 337 people recorded in up to four sessions over the span of five months.

bob.bio.face.database.FargoBioDatabase([…])

FARGO database implementation of bob.bio.base.database.BioDatabase interface.

bob.bio.face.database.MEDSDatabase(protocol)

The MEDS II database was developed by NIST to support and assist their biometrics evaluation program.

bob.bio.face.database.MorphDatabase(protocol)

The MORPH dataset is relatively old, but has recently been getting some traction, mostly because of its richness with respect to sensitive attributes.

bob.bio.face.database.PolaThermalDatabase(…)

Collected by the U.S. Army, the Polarimetric Thermal Database contains VIS and thermal face images.

bob.bio.face.database.CBSRNirVis2Database(…)

This package contains the access API and descriptions for the CASIA NIR-VIS 2.0 Database <http://www.cbsr.ia.ac.cn/english/NIR-VIS-2.0-Database.html>.

Face Image Annotators

bob.bio.face.annotator.Base()

Base class for all face annotators

bob.bio.face.annotator.BobIpFacedetect([…])

Annotator using bob.ip.facedetect. Provides topleft and bottomright annotations.

bob.bio.face.annotator.BobIpFlandmark(**kwargs)

Annotator using bob.ip.flandmark.

bob.bio.face.annotator.BobIpMTCNN([…])

Annotator using mtcnn in bob.ip.facedetect

Image Preprocessors

bob.bio.face.preprocessor.Base([dtype, …])

Performs color space adaptations and data type corrections for the given image.

bob.bio.face.preprocessor.FaceCrop(…[, …])

Crops the face according to the given annotations.

bob.bio.face.preprocessor.TanTriggs(face_cropper)

Crops the face (if desired) and applies Tan&Triggs algorithm [TT10] to photometrically enhance the image.

bob.bio.face.preprocessor.HistogramEqualization(…)

Crops the face (if desired) and performs histogram equalization to photometrically enhance the image.

bob.bio.face.preprocessor.SelfQuotientImage(…)

Crops the face (if desired) and applies self quotient image algorithm [WLW04] to photometrically enhance the image.

bob.bio.face.preprocessor.INormLBP(face_cropper)

Performs I-Norm LBP on the given image

Image Feature Extractors

bob.bio.face.extractor.DCTBlocks([…])

Extracts Discrete Cosine Transform (DCT) features from (overlapping) image blocks.

bob.bio.face.extractor.GridGraph([…])

Extracts Gabor jets in a grid structure [GHW12] using functionalities from bob.ip.gabor.

bob.bio.face.extractor.LGBPHS(block_size[, …])

Extracts Local Gabor Binary Pattern Histogram Sequences (LGBPHS) [ZSG05] from the images, using functionality from bob.ip.base and bob.ip.gabor.

Face Recognition Algorithms

bob.bio.face.algorithm.GaborJet(…[, …])

Computes a comparison of lists of Gabor jets using a similarity function of bob.ip.gabor.Similarity.

bob.bio.face.algorithm.Histogram([…])

Computes the distance between histogram sequences.

Databases

class bob.bio.face.database.ARFaceBioDatabase(original_directory=None, original_extension='.ppm', **kwargs)

Bases: bob.bio.base.database.BioDatabase

ARFace database implementation of bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface, which directly talks to ARFACE database, for verification experiments (good to use in bob.bio.base framework).

annotations(myfile)[source]

Returns the annotations for the given File object, if available. You need to override this method in your high-level implementation. If your database does not have annotations, it should return None.

Parameters:

file : bob.bio.base.database.BioFile

The file for which annotations should be returned.

Returns:

annots : dict or None

The annotations for the file, if available.

model_ids_with_protocol(groups=None, protocol=None, **kwargs) → ids[source]

Returns a list of model ids for the given groups and given protocol.

Parameters:

groups : one or more of ('world', 'dev', 'eval')

The groups to get the model ids for.

protocol : a protocol name

Returns:

ids : [int] or [str]

The list of (unique) model ids for the given groups.

objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns a list of bob.bio.base.database.BioFile objects or the list of objects which inherit from this class. Returned files fulfill the given restrictions.

Keyword parameters:

groups : str or [str]

The groups for which the clients should be returned. Usually, groups are one or more elements of (‘world’, ‘dev’, ‘eval’).

protocol

The protocol for which the clients should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

purposes : str or [str]

The purposes for which File objects should be retrieved. Usually, purposes are one of (‘enroll’, ‘probe’).

model_ids : [various type]

The model ids for which the File objects should be retrieved. What defines a ‘model id’ is dependent on the database. In cases where there is only one model per client, model ids and client ids are identical; in cases where there is one model per file, model ids and file ids are identical; but there might also be other cases.
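Example usage (a sketch; the directory path is a placeholder, and the same query pattern applies to all BioDatabase subclasses below):

>>> from bob.bio.face.database import ARFaceBioDatabase
>>> database = ARFaceBioDatabase(original_directory="[PATH-TO-ARFACE-DATA]")
>>> enroll_files = database.objects(groups='dev', purposes='enroll')
>>> probe_files = database.objects(groups='dev', purposes='probe')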

property original_directory
class bob.bio.face.database.AtntBioDatabase(original_directory=None, original_extension='.pgm', **kwargs)

Bases: bob.bio.base.database.BioDatabase

ATNT database implementation of bob.bio.base.database.BioDatabase interface. It is an extension of the database interface, which directly talks to ATNT database, for verification experiments (good to use in bob.bio.base framework).

annotations(file)[source]

Returns the annotations for the given File object, if available. You need to override this method in your high-level implementation. If your database does not have annotations, it should return None.

Parameters:

file : bob.bio.base.database.BioFile

The file for which annotations should be returned.

Returns:

annots : dict or None

The annotations for the file, if available.

model_ids_with_protocol(groups=None, protocol=None, **kwargs) → ids[source]

Returns a list of model ids for the given groups and given protocol.

Parameters:

groups : one or more of ('world', 'dev', 'eval')

The groups to get the model ids for.

protocol : a protocol name

Returns:

ids : [int] or [str]

The list of (unique) model ids for the given groups.

objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns a list of bob.bio.base.database.BioFile objects or the list of objects which inherit from this class. Returned files fulfill the given restrictions.

Keyword parameters:

groups : str or [str]

The groups for which the clients should be returned. Usually, groups are one or more elements of (‘world’, ‘dev’, ‘eval’).

protocol

The protocol for which the clients should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

purposes : str or [str]

The purposes for which File objects should be retrieved. Usually, purposes are one of (‘enroll’, ‘probe’).

model_ids : [various type]

The model ids for which the File objects should be retrieved. What defines a ‘model id’ is dependent on the database. In cases where there is only one model per client, model ids and client ids are identical; in cases where there is one model per file, model ids and file ids are identical; but there might also be other cases.

class bob.bio.face.database.CBSRNirVis2Database(protocol)

Bases: bob.bio.base.database.CSVDataset

This package contains the access API and descriptions for the CASIA NIR-VIS 2.0 Database <http://www.cbsr.ia.ac.cn/english/NIR-VIS-2.0-Database.html>. The actual raw data for the database should be downloaded from the original URL. This package only contains the Bob accessor methods to use the DB directly from python, with the original protocol of the database.

The CASIA NIR-VIS 2.0 database offers pairs of mugshot images and their corresponding NIR photos. The images of this database were collected in four recording sessions: 2007 spring, 2009 summer, 2009 fall and 2010 summer, in which the first session is identical to the CASIA HFB database. It consists of 725 subjects in total. There are [1-22] VIS and [5-50] NIR face images per subject. The eye positions are also distributed with the images.

@inproceedings{li2013casia,
title={The casia nir-vis 2.0 face database},
author={Li, Stan Z and Yi, Dong and Lei, Zhen and Liao, Shengcai},
booktitle={Computer Vision and Pattern Recognition Workshops (CVPRW), 2013 IEEE Conference on},
pages={348--353},
year={2013},
organization={IEEE}
}

Warning

Use the command below to set the path of the real data:

$ bob config set bob.db.cbsr-nir-vis-2.directory [PATH-TO-CBSR-DATA]
Parameters

protocol (str) – One of the database protocols.
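Example

A minimal usage sketch; since the protocol names are not listed here, they are queried via protocols():

>>> from bob.bio.face.database import CBSRNirVis2Database
>>> CBSRNirVis2Database.protocols()
>>> database = CBSRNirVis2Database(protocol=CBSRNirVis2Database.protocols()[0])
>>> database.references()
>>> database.probes()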

static protocols()[source]
static urls()[source]
class bob.bio.face.database.CasiaAfricaDatabase(protocol)

Bases: bob.bio.base.database.CSVDataset

The Casia-Face-Africa dataset is composed of 1133 identities from different ethnic groups in Nigeria.

The capturing locations are:
  • Dabai city in Katsina state

  • Hotoro in Kano state

  • Birget in Kano state

  • Gandun Albasa in Kano state

  • Sabon Gari in Kano state

  • Kano State School of Technology

These locations were strategically selected as they are known to have a diverse population of local ethnicities.

Warning

Only 17 subjects had their images captured in two sessions.

Images were captured during daytime and at night using three different cameras:
  • C1: Visual Light Camera

  • C2: Visual Light Camera

  • C3: NIR camera

This dataset interface implements three verification protocols, “ID-V-All-Ep1”, “ID-V-All-Ep2” and “ID-V-All-Ep3”, which are organized as follows:

Dev. Set

protocol name    Cameras (gallery/probe)    Identities    Gallery    Probes
ID-V-All-Ep1     C1/C2                      1133          2455       2426
ID-V-All-Ep2     C1/C3                      1133          2455       1171
ID-V-All-Ep3     C2/C3                      1133          2466       1193

Warning

Use the command below to set the path of the real data:

$ bob config set bob.db.casia-africa.directory [PATH-TO-CASIA-AFRICA-DATA]
@article{jawad2020,
   author = {Jawad, Muhammad and Yunlong, Wang and Caiyong, Wang and Kunbo, Zhang and Zhenan, Sun},
   title = {CASIA-Face-Africa: A Large-scale African Face Image Database},
   journal = {IEEE Transactions on Information Forensics and Security},
   pages = {},
   ISSN = {},
   year = {},
   type = {Journal Article}
}

Example

Fetching biometric references:

>>> from bob.bio.face.database import CasiaAfricaDatabase
>>> database = CasiaAfricaDatabase(protocol="ID-V-All-Ep1")
>>> database.references()

Fetching probes:

>>> from bob.bio.face.database import CasiaAfricaDatabase
>>> database = CasiaAfricaDatabase(protocol="ID-V-All-Ep1")
>>> database.probes()
Parameters

protocol (str) – One of the database protocols. Options are “ID-V-All-Ep1”, “ID-V-All-Ep2” and “ID-V-All-Ep3”

static protocols()[source]
static urls()[source]
class bob.bio.face.database.FaceBioFile(client_id, path, file_id, **kwargs)

Bases: bob.bio.base.database.BioFile

class bob.bio.face.database.FargoBioDatabase(original_directory=None, original_extension='.png', protocol='mc-rgb', **kwargs)

Bases: bob.bio.base.database.BioDatabase

FARGO database implementation of bob.bio.base.database.BioDatabase interface. It is an extension of the database interface, which directly talks to the FARGO database, for verification experiments (good to use in bob.bio.base framework).

annotations(file)[source]

Returns the annotations for the given File object, if available. You need to override this method in your high-level implementation. If your database does not have annotations, it should return None.

Parameters:

file : bob.bio.base.database.BioFile

The file for which annotations should be returned.

Returns:

annots : dict or None

The annotations for the file, if available.

model_ids_with_protocol(groups=None, protocol=None, **kwargs) → ids[source]

Returns a list of model ids for the given groups and given protocol.

Parameters:

groups : one or more of ('world', 'dev', 'eval')

The groups to get the model ids for.

protocol : a protocol name

Returns:

ids : [int] or [str]

The list of (unique) model ids for the given groups.

objects(groups=None, purposes=None, protocol=None, model_ids=None, **kwargs)[source]

This function returns a list of bob.bio.base.database.BioFile objects or the list of objects which inherit from this class. Returned files fulfill the given restrictions.

Keyword parameters:

groups : str or [str]

The groups for which the clients should be returned. Usually, groups are one or more elements of (‘world’, ‘dev’, ‘eval’).

protocol

The protocol for which the clients should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

purposes : str or [str]

The purposes for which File objects should be retrieved. Usually, purposes are one of (‘enroll’, ‘probe’).

model_ids : [various type]

The model ids for which the File objects should be retrieved. What defines a ‘model id’ is dependent on the database. In cases where there is only one model per client, model ids and client ids are identical; in cases where there is one model per file, model ids and file ids are identical; but there might also be other cases.

class bob.bio.face.database.GBUBioDatabase(original_directory=None, original_extension='.jpg', **kwargs)

Bases: bob.bio.base.database.BioDatabase

GBU database implementation of bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface, which directly talks to GBU database, for verification experiments (good to use in bob.bio.base framework).

annotations(myfile)[source]

Returns the annotations for the given File object, if available. You need to override this method in your high-level implementation. If your database does not have annotations, it should return None.

Parameters:

file : bob.bio.base.database.BioFile

The file for which annotations should be returned.

Returns:

annots : dict or None

The annotations for the file, if available.

model_ids_with_protocol(groups=None, protocol=None, **kwargs) → ids[source]

Returns a list of model ids for the given groups and given protocol.

Parameters:

groups : one or more of ('world', 'dev', 'eval')

The groups to get the model ids for.

protocol : a protocol name

Returns:

ids : [int] or [str]

The list of (unique) model ids for the given groups.

objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns a list of bob.bio.base.database.BioFile objects or the list of objects which inherit from this class. Returned files fulfill the given restrictions.

Keyword parameters:

groups : str or [str]

The groups for which the clients should be returned. Usually, groups are one or more elements of (‘world’, ‘dev’, ‘eval’).

protocol

The protocol for which the clients should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

purposes : str or [str]

The purposes for which File objects should be retrieved. Usually, purposes are one of (‘enroll’, ‘probe’).

model_ids : [various type]

The model ids for which the File objects should be retrieved. What defines a ‘model id’ is dependent on the database. In cases where there is only one model per client, model ids and client ids are identical; in cases where there is one model per file, model ids and file ids are identical; but there might also be other cases.

property original_directory
class bob.bio.face.database.IJBCBioDatabase(original_directory=None, annotation_directory=None, original_extension=None, **kwargs)[source]

Bases: bob.bio.base.database.BioDatabase

IJBC database implementation of bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface, which directly talks to IJBC database, for verification experiments (good to use in bob.bio.base framework).

property original_directory
property annotation_directory
uses_probe_file_sets()[source]

Defines if, for the current protocol, the database uses several probe files to generate a score. Returns True if the given protocol specifies file sets for probes, instead of a single probe file. In this default implementation, False is returned, throughout. If you need different behavior, please overload this function in your derived class.

model_ids_with_protocol(groups=None, protocol=None, **kwargs) → ids[source]

Returns a list of model ids for the given groups and given protocol.

Parameters:

groups : one or more of ('world', 'dev', 'eval')

The groups to get the model ids for.

protocol : a protocol name

Returns:

ids : [int] or [str]

The list of (unique) model ids for the given groups.

objects(groups=None, protocol='1:1', purposes=None, model_ids=None, **kwargs)[source]

This function returns a list of bob.bio.base.database.BioFile objects or the list of objects which inherit from this class. Returned files fulfill the given restrictions.

Keyword parameters:

groups : str or [str]

The groups for which the clients should be returned. Usually, groups are one or more elements of (‘world’, ‘dev’, ‘eval’).

protocol

The protocol for which the clients should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

purposes : str or [str]

The purposes for which File objects should be retrieved. Usually, purposes are one of (‘enroll’, ‘probe’).

model_ids : [various type]

The model ids for which the File objects should be retrieved. What defines a ‘model id’ is dependent on the database. In cases where there is only one model per client, model ids and client ids are identical; in cases where there is one model per file, model ids and file ids are identical; but there might also be other cases.

object_sets(groups=None, protocol='1:1', purposes=None, model_ids=None)[source]

This function returns lists of FileSet objects, which fulfill the given restrictions.

Keyword parameters:

groups : str or [str]

The groups for which the clients should be returned. Usually, groups are one or more elements of (‘world’, ‘dev’, ‘eval’).

protocol

The protocol for which the clients should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

purposes : str or [str]

The purposes for which File objects should be retrieved. Usually, purposes are one of (‘enroll’, ‘probe’).

model_ids : [various type]

The model ids for which the File objects should be retrieved. What defines a ‘model id’ is dependent on the database. In cases where there is only one model per client, model ids and client ids are identical; in cases where there is one model per file, model ids and file ids are identical; but there might also be other cases.

annotations(biofile)[source]

Returns the annotations for the given File object, if available. You need to override this method in your high-level implementation. If your database does not have annotations, it should return None.

Parameters:

file : bob.bio.base.database.BioFile

The file for which annotations should be returned.

Returns:

annots : dict or None

The annotations for the file, if available.

client_id_from_model_id(model_id, group='dev')[source]

Return the client id associated with the given model id. In this base class implementation, it is assumed that only one model is enrolled for each client and, thus, client id and model id are identical. All keyword arguments are ignored. Please override this function in derived class implementations to change this behavior.

original_file_names(files)[source]

Returns the full path of the original data of the given File objects.

Parameters

files (list of bob.db.base.File) – The list of file objects to retrieve the original data file names for.

Returns

The paths extracted for the files, in the same order.

Return type

list of str

class bob.bio.face.database.LFWBioDatabase(original_directory=None, original_extension='.jpg', annotation_type=None, **kwargs)

Bases: bob.bio.base.database.BioDatabase

LFW database implementation of bob.bio.base.database.Database interface. It is an extension of an SQL-based database interface, which directly talks to LFW database, for verification experiments (good to use in bob.bio.base framework).

annotations(myfile)[source]

Returns the annotations for the given File object, if available. You need to override this method in your high-level implementation. If your database does not have annotations, it should return None.

Parameters:

file : bob.bio.base.database.BioFile

The file for which annotations should be returned.

Returns:

annots : dict or None

The annotations for the file, if available.

client_id_from_model_id(model_id, group='dev')[source]

Return the client id associated with the given model id. In this base class implementation, it is assumed that only one model is enrolled for each client and, thus, client id and model id are identical. All keyword arguments are ignored. Please override this function in derived class implementations to change this behavior.

model_ids_with_protocol(groups=None, protocol=None, **kwargs) → ids[source]

Returns a list of model ids for the given groups and given protocol.

Parameters:

groups : one or more of ('world', 'dev', 'eval')

The groups to get the model ids for.

protocol : a protocol name

Returns:

ids : [int] or [str]

The list of (unique) model ids for the given groups.

objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns a list of bob.bio.base.database.BioFile objects or the list of objects which inherit from this class. Returned files fulfill the given restrictions.

Keyword parameters:

groups : str or [str]

The groups for which the clients should be returned. Usually, groups are one or more elements of (‘world’, ‘dev’, ‘eval’).

protocol

The protocol for which the clients should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

purposes : str or [str]

The purposes for which File objects should be retrieved. Usually, purposes are one of (‘enroll’, ‘probe’).

model_ids : [various type]

The model ids for which the File objects should be retrieved. What defines a ‘model id’ is dependent on the database. In cases where there is only one model per client, model ids and client ids are identical; in cases where there is one model per file, model ids and file ids are identical; but there might also be other cases.

property original_directory
class bob.bio.face.database.MEDSDatabase(protocol)

Bases: bob.bio.base.database.csv_dataset.CSVDatasetZTNorm

The MEDS II database was developed by NIST to support and assist their biometrics evaluation program. It is composed of 518 identities from both men/women (labeled as M and F) and five different race annotations (Asian, Black, American Indian, Unknown and White, labeled as A, B, I, U and W).

Unfortunately, the distribution of gender and race is extremely unbalanced, as can be observed in their statistics. Furthermore, only 256 subjects have more than one image sample (and it is obviously not possible to do a biometric evaluation with one sample per subject). For this reason, this interface contains a subset of the data, composed of only 383 subjects (White and Black men only).

This dataset contains three verification protocols: verification_fold1, verification_fold2 and verification_fold3. The identity distribution in each set, for each protocol, is as follows:

                     Training set               Dev. Set    Eval. Set
                     T-References    Z-Probes
verification_fold1   80              80         111         112
verification_fold2   80              80         111         112
verification_fold3   80              80         111         112

Example

Fetching biometric references:

>>> from bob.bio.face.database import MEDSDatabase
>>> database = MEDSDatabase(protocol="verification_fold1")
>>> database.references()

Fetching probes:

>>> from bob.bio.face.database import MEDSDatabase
>>> database = MEDSDatabase(protocol="verification_fold1")
>>> database.probes()

Fetching references for T-Norm normalization:

>>> from bob.bio.face.database import MEDSDatabase
>>> database = MEDSDatabase(protocol="verification_fold1")
>>> database.treferences()

Fetching probes for Z-Norm normalization:

>>> from bob.bio.face.database import MEDSDatabase
>>> database = MEDSDatabase(protocol="verification_fold1")
>>> database.zprobes()

Warning

Use the command below to set the path of the real data:

$ bob config set bob.db.meds.directory [PATH-TO-MEDS-DATA]
Parameters

protocol (str) – One of the database protocols. Options are verification_fold1, verification_fold2 and verification_fold3

static urls()[source]
class bob.bio.face.database.MobioDatabase(protocol)

Bases: bob.bio.base.database.csv_dataset.CSVDatasetZTNorm

The MOBIO dataset is a video database containing bimodal data (face/speaker). It is composed of 152 people (split into the two genders, male and female), mostly Europeans, split into 5 sessions (with a few weeks' time lapse between sessions). The database was recorded using two types of mobile devices: mobile phones (NOKIA N93i) and laptop computers (standard 2008 MacBook).

For face recognition, images are used instead of videos. One image was extracted from each video by choosing the video frame after 10 seconds. The eye positions were manually labelled and distributed with the database.

For more information check:

@article{McCool_IET_BMT_2013,
    title = {Session variability modelling for face authentication},
    author = {McCool, Chris and Wallace, Roy and McLaren, Mitchell and El Shafey, Laurent and Marcel, S{\'{e}}bastien},
    month = sep,
    journal = {IET Biometrics},
    volume = {2},
    number = {3},
    year = {2013},
    pages = {117-129},
    issn = {2047-4938},
    doi = {10.1049/iet-bmt.2012.0059},
}
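Example

Fetching biometric references and probes, following the same pattern as the other CSV-based datasets (the protocol name below is illustrative; use protocols() to list the valid options):

>>> from bob.bio.face.database import MobioDatabase
>>> database = MobioDatabase(protocol="mobile0-male")
>>> database.references()
>>> database.probes()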
static protocols()[source]
static urls()[source]
class bob.bio.face.database.MorphDatabase(protocol)

Bases: bob.bio.base.database.csv_dataset.CSVDatasetZTNorm

The MORPH dataset is relatively old, but has recently been getting some traction, mostly because of its richness with respect to sensitive attributes. It is composed of 55,000 samples from 13,000 subjects, spanning men and women and five race clusters (called ancestry): African, European, Asian, Hispanic and Others.

This dataset contains faces from five ethnicities (African, European, Asian, Hispanic, “Other”) and two genders (Male and Female). Furthermore, this interface contains three verification protocols: verification_fold1, verification_fold2 and verification_fold3. The identity distribution in each set, for each protocol, is as follows:

                     Training set               Dev. Set    Eval. Set
                     T-References    Z-Probes
verification_fold1   69              66         6738        6742
verification_fold2   69              67         6734        6737
verification_fold3   70              66         6736        6740

Warning

Use the command below to set the path of the real data:

$ bob config set bob.db.morph.directory [PATH-TO-MORPH-DATA]
Parameters

protocol (str) – One of the database protocols. Options are verification_fold1, verification_fold2 and verification_fold3
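Example

Fetching biometric references and probes (the same usage pattern as MEDSDatabase above):

>>> from bob.bio.face.database import MorphDatabase
>>> database = MorphDatabase(protocol="verification_fold1")
>>> database.references()
>>> database.probes()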

static urls()[source]
class bob.bio.face.database.MultipieDatabase(protocol)

Bases: bob.bio.base.database.CSVDataset

The CMU Multi-PIE face database contains more than 750,000 images of 337 people recorded in up to four sessions over the span of five months. Subjects were imaged under 15 view points and 19 illumination conditions while displaying a range of facial expressions. In addition, high resolution frontal images were acquired as well. In total, the database contains more than 305 GB of face data.

The data has been recorded over 4 sessions. For each session, the subjects were asked to display a few different expressions. For each of those expressions, a complete set of 300 pictures is captured, covering 15 different view points times 20 different illumination conditions (18 with various flashes, plus 2 pictures with no flash at all).

Available expressions:

  • Session 1 : neutral, smile

  • Session 2 : neutral, surprise, squint

  • Session 3 : neutral, smile, disgust

  • Session 4 : neutral, neutral, scream.

Camera and flash positioning:

The different view points are obtained by a set of 13 cameras located at head height, spaced at 15° intervals from -90° to 90°, plus 2 additional cameras located above the subject to simulate a typical surveillance view. A flash coincides with each camera, and 3 additional flashes are positioned above the subject, for a total of 18 different possible flashes.

Protocols:

Expression protocol

Protocol E

  • Only frontal view (camera 05_1); only no-flash (shot 0)

  • Enrolled : 1x neutral expression (session 1; recording 1)

  • Probes : 4x neutral expression + other expressions (session 2, 3, 4; all recordings)

Pose protocol

Protocol P

  • Only neutral expression (recording 1 from each session, + recording 2 from session 4); only no-flash (shot 0)

  • Enrolled : 1x frontal view (session 1; camera 05_1)

  • Probes : all views from cameras at head height (i.e. excluding 08_1 and 19_1), including camera 05_1 from session 2, 3, 4.

Illumination protocols

N.B.: shot 19 is never used in these protocols as it is redundant with shot 0 (both are no-flash).

Protocol M

  • Only frontal view (camera 05_1); only neutral expression (recording 1 from each session, + recording 2 from session 4)

  • Enrolled : no-flash (session 1; shot 0)

  • Probes : no-flash (session 2, 3, 4; shot 0)

Protocol U

  • Only frontal view (camera 05_1); only neutral expression (recording 1 from each session, + recording 2 from session 4)

  • Enrolled : no-flash (session 1; shot 0)

  • Probes : all shots from session 2, 3, 4, including shot 0.

Protocol G

  • Only frontal view (camera 05_1); only neutral expression (recording 1 from each session, + recording 2 from session 4)

  • Enrolled : all shots (session 1; all shots)

  • Probes : all shots from session 2, 3, 4.
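Example

Fetching biometric references and probes for the pose protocol described above (the protocol names follow the protocol letters; the other protocols work the same way):

>>> from bob.bio.face.database import MultipieDatabase
>>> database = MultipieDatabase(protocol="P")
>>> database.references()
>>> database.probes()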

static protocols()[source]
static urls()[source]
class bob.bio.face.database.PolaThermalDatabase(protocol)

Bases: bob.bio.base.database.CSVDataset

Collected by the U.S. Army, the Polarimetric Thermal Database contains VIS and thermal face images.

A description of the imager used to collect this database follows below.

The polarimetric LWIR imager used to collect this database was developed by Polaris Sensor Technologies. The imager is based on the division-of-time spinning achromatic retarder (SAR) design, which uses a spinning phase-retarder mounted in series with a linear wire-grid polarizer. This system, also referred to as a polarimeter, has a spectral response range of 7.5-11.1 μm, using a Stirling-cooled mercury telluride focal plane array with pixel array dimensions of 640×480. A Fourier modulation technique is applied to the pixel readout, followed by a series expansion and inversion to compute the Stokes images. Data were recorded at 60 frames per second (fps) for this database, using a wide FOV of 10.6°×7.9°. Prior to collecting data for each subject, a two-point non-uniformity correction (NUC) was performed using a Mikron blackbody at 20°C and 40°C, which covers the range of typical facial temperatures (30°C-35°C). Data were recorded on a laptop using custom vendor software.

An array of four Basler Scout series cameras was used to collect the corresponding visible spectrum imagery. Two of the cameras are monochrome (model # scA640-70gm), with pixel array dimensions of 659×494. The other two cameras are color (model # scA640-70gc), with pixel array dimensions of 658×494.

The dataset contains 60 subjects in total. For VIS images (considering only those with an 87-pixel interpupillary distance), there are 4 samples per subject with neutral expression (called baseline condition B) and 12 samples per subject with varying facial expression (called expression E). Such variability was introduced by asking the subject to count orally. In total, there are 960 images for this modality. For the thermal images, there are 4 types of thermal imagery based on the Stokes parameters (\(S_0\), \(S_1\), \(S_2\) and \(S_3\)) commonly used to represent the polarization state. The thermal imagery is the following:

  • \(S_0\): The conventional thermal image

  • \(S_1\)

  • \(S_2\)

  • DoLP: The degree-of-linear-polarization (DoLP) describes the portion of an electromagnetic wave that is linearly polarized, defined as \(\frac{\sqrt{S_{1}^{2} + S_{2}^{2}}}{S_{0}}\).

Since \(S_3\) is very small and usually taken to be zero, the authors of the database decided not to provide this part of the data. The same facial expression variability introduced in VIS is introduced for the thermal images. The distance between the subject and the camera is the last source of variability introduced in the thermal images. There are 3 ranges: R1 (2.5 m), R2 (5 m) and R3 (7.5 m). In total there are 11,520 images for this modality, and for each subject they are split as follows:

Imagery/Range    R1 (B/E)    R2 (B/E)    R3 (B/E)
\(S_0\)          16 (8/8)    16 (8/8)    16 (8/8)
\(S_1\)          16 (8/8)    16 (8/8)    16 (8/8)
\(S_2\)          16 (8/8)    16 (8/8)    16 (8/8)
DoLP             16 (8/8)    16 (8/8)    16 (8/8)

Warning

Use the command below to set the path of the real data:

$ bob config set bob.db.pola-thermal.directory [PATH-TO-POLA-THERMAL-DATA]
Parameters

protocol (str) – One of the database protocols.
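Example

A minimal usage sketch; since the protocol names are not listed here, they are queried via protocols():

>>> from bob.bio.face.database import PolaThermalDatabase
>>> PolaThermalDatabase.protocols()
>>> database = PolaThermalDatabase(protocol=PolaThermalDatabase.protocols()[0])
>>> database.references()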

static protocols()[source]
static urls()[source]
class bob.bio.face.database.ReplayBioDatabase(**kwargs)

Bases: bob.bio.base.database.BioDatabase

Replay attack database implementation of bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface, which directly talks to Replay database, for verification experiments (good to use in bob.bio.base framework). It also implements a kind of hack so that you can run vulnerability analysis with it.

annotations(file)[source]

Will return the bounding box annotation of the 10th frame of the video.

arrange_by_client(files) → files_by_client[source]

Arranges the given list of files by client id. This function returns a list of lists of File objects.

Parameters:

files : [bob.bio.base.database.BioFile]

A list of files that should be split up by BioFile.client_id.

Returns:

files_by_client : [[bob.bio.base.database.BioFile]]

The list of lists of files, where each sub-list groups the files with the same BioFile.client_id.

groups()[source]

Returns the names of all registered groups in the database

Keyword parameters:

protocol: str

The protocol for which the groups should be retrieved. If you do not have protocols defined, just ignore this field.

model_ids_with_protocol(groups=None, protocol=None, **kwargs) → ids[source]

Returns a list of model ids for the given groups and given protocol.

Parameters:

groups : one or more of ('world', 'dev', 'eval')

The groups to get the model ids for.

protocol : a protocol name

Returns:

ids : [int] or [str]

The list of (unique) model ids for the given groups.

objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns a list of bob.bio.base.database.BioFile objects or the list of objects which inherit from this class. Returned files fulfill the given restrictions.

Keyword parameters:

groups : str or [str]

The groups for which the clients should be returned. Usually, groups are one or more elements of (‘world’, ‘dev’, ‘eval’).

protocol

The protocol for which the clients should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

purposes : str or [str]

The purposes for which File objects should be retrieved. Usually, purposes are one of (‘enroll’, ‘probe’).

model_ids : [various type]

The model ids for which the File objects should be retrieved. What defines a ‘model id’ is dependent on the database. In cases where there is only one model per client, model ids and client ids are identical; in cases where there is one model per file, model ids and file ids are identical; but there might also be other cases.

property original_directory
protocol_names()[source]

Returns all registered protocol names. As a hack, the number of protocols is doubled with -licit and -spoof variants; this is done for running vulnerability analysis.

class bob.bio.face.database.ReplayMobileBioDatabase(max_number_of_frames=None, annotation_directory=None, annotation_extension='.json', annotation_type='json', original_directory=None, original_extension='.mov', name='replay-mobile', **kwargs)

Bases: bob.bio.base.database.BioDatabase

ReplayMobile database implementation of bob.bio.base.database.BioDatabase interface. It is an extension of an SQL-based database interface, which directly talks to ReplayMobile database, for verification experiments (good to use in bob.bio.base framework).

property annotation_directory
property annotation_extension
property annotation_type
annotations(myfile)[source]

Will return the bounding box annotation of the n-th frame of the video.

arrange_by_client(files) → files_by_client[source]

Arranges the given list of files by client id. This function returns a list of lists of File objects.

Parameters:

files : [bob.bio.base.database.BioFile]

A list of files that should be split up by BioFile.client_id.

Returns:

files_by_client : [[bob.bio.base.database.BioFile]]

The list of lists of files, where each sub-list groups the files with the same BioFile.client_id.

groups()[source]

Returns the names of all registered groups in the database

Keyword parameters:

protocol: str

The protocol for which the groups should be retrieved. If you do not have protocols defined, just ignore this field.

model_ids_with_protocol(groups=None, protocol=None, **kwargs) → ids[source]

Returns a list of model ids for the given groups and given protocol.

Parameters:

groups : one or more of ('world', 'dev', 'eval')

The groups to get the model ids for.

protocol : a protocol name

Returns:

ids : [int] or [str]

The list of (unique) model ids for the given groups.

objects(groups=None, protocol=None, purposes=None, model_ids=None, **kwargs)[source]

This function returns a list of bob.bio.base.database.BioFile objects or the list of objects which inherit from this class. Returned files fulfill the given restrictions.

Keyword parameters:

groups : str or [str]

The groups for which the clients should be returned. Usually, groups are one or more elements of (‘world’, ‘dev’, ‘eval’).

protocol

The protocol for which the clients should be retrieved. The protocol is dependent on your database. If you do not have protocols defined, just ignore this field.

purposes : str or [str]

The purposes for which File objects should be retrieved. Usually, purposes are one of (‘enroll’, ‘probe’).

model_ids : [various type]

The model ids for which the File objects should be retrieved. What defines a ‘model id’ is dependent on the database. In cases where there is only one model per client, model ids and client ids are identical; in cases where there is one model per file, model ids and file ids are identical; but there might also be other cases.

property original_directory
property original_extension
protocol_names()[source]

Annotators

class bob.bio.face.annotator.Base

Bases: bob.bio.base.annotator.Annotator

Base class for all face annotators

annotate(sample, **kwargs)[source]

Annotates an image and returns annotations in a dictionary. All annotators should return at least the topleft and bottomright coordinates. Some currently known annotation points, such as reye and leye, are formalized in bob.bio.face.preprocessor.FaceCrop.

Parameters
  • sample (numpy.ndarray) – The image should be a Bob format (#Channels, Height, Width) RGB image.

  • **kwargs – The extra arguments that may be passed.

transform(samples, **kwargs)[source]

Annotates an image and returns annotations in a dictionary.

All annotators should add at least the topleft and bottomright coordinates. Some currently known annotation points, such as reye and leye, are formalized in bob.bio.face.preprocessor.FaceCrop.

Parameters
  • sample (Sample) – The image in the sample object should be a Bob format (#Channels, Height, Width) RGB image.

  • **kwargs – Extra arguments that may be passed.

class bob.bio.face.annotator.BobIpFacedetect(cascade=None, detection_overlap=0.2, distance=2, scale_base=0.9576032806985737, lowest_scale=0.125, eye_estimate=False, **kwargs)

Bases: bob.bio.face.annotator.Base

Annotator using bob.ip.facedetect. Provides topleft and bottomright annotations.

annotate(image, **kwargs)[source]

Returns topleft, bottomright and expected eye positions.

Parameters
  • image (array) – Image in Bob format RGB image.

  • **kwargs – Ignored.

Returns

The annotations in a dictionary. The keys are topleft, bottomright, quality, leye, reye.

Return type

dict

fit(X=None, y=None, **kwargs)[source]
class bob.bio.face.annotator.BobIpFlandmark(**kwargs)

Bases: bob.bio.face.annotator.Base

Annotator using bob.ip.flandmark. This annotator needs the topleft and bottomright annotations to be provided.

Example usage:

>>> from bob.bio.base.annotator import FailSafe
>>> from bob.bio.face.annotator import (
...     BobIpFacedetect, BobIpFlandmark)
>>> annotator = FailSafe(
...     [BobIpFacedetect(), BobIpFlandmark()],
...     required_keys=('reye', 'leye'))
annotate(image, annotations, **kwargs)[source]

Annotates an image.

Parameters
  • image (array) – Image in Bob format RGB.

  • annotations (dict) – The topleft and bottomright annotations are required.

  • **kwargs – Ignored.

Returns

Annotations with reye and leye keys or None if it fails.

Return type

dict

class bob.bio.face.annotator.BobIpMTCNN(min_size=40, factor=0.709, thresholds=(0.6, 0.7, 0.7), **kwargs)

Bases: bob.bio.face.annotator.Base

Annotator using mtcnn in bob.ip.facedetect

annotate(image, **kwargs)[source]

Annotates an image using mtcnn

Parameters
  • image (numpy.array) – An RGB image in Bob format.

  • **kwargs – Ignored.

Returns

Annotations contain: (topleft, bottomright, leye, reye, nose, mouthleft, mouthright, quality).

Return type

dict
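Example usage (a sketch; the image path is a placeholder):

>>> import bob.io.base
>>> from bob.bio.face.annotator import BobIpMTCNN
>>> annotator = BobIpMTCNN()
>>> image = bob.io.base.load('/path/to/face.png')  # RGB image in Bob format (C, H, W)
>>> annotations = annotator.annotate(image)
>>> annotations['reye'], annotations['leye']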

property factor
property min_size
property thresholds
bob.bio.face.annotator.bounding_box_to_annotations(bbx)[source]

Converts bob.ip.facedetect.BoundingBox to dictionary annotations.

Parameters

bbx (bob.ip.facedetect.BoundingBox) – The given bounding box.

Returns

A dictionary with topleft and bottomright keys.

Return type

dict

bob.bio.face.annotator.min_face_size_validator(annotations, min_face_size=(32, 32))[source]

Validates annotations based on the face’s minimal size.

Parameters
  • annotations (dict) – The annotations in dictionary format.

  • min_face_size ((int, int), optional) – The minimal size of a face.

Returns

True, if the face is large enough.

Return type

bool
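Example usage (a sketch with hypothetical annotations describing a 64×64-pixel face, which should pass the default 32×32 minimum):

>>> from bob.bio.face.annotator import min_face_size_validator
>>> annotations = {'topleft': (0, 0), 'bottomright': (64, 64)}
>>> min_face_size_validator(annotations)
True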

Preprocessors

class bob.bio.face.preprocessor.Base(dtype=None, color_channel='gray', **kwargs)

Bases: sklearn.base.TransformerMixin, sklearn.base.BaseEstimator

Performs color space adaptations and data type corrections for the given image.

Parameters:

dtype : numpy.dtype or convertible or None

The data type that the resulting image will have.

color_channel : one of ('gray', 'red', 'green', 'blue', 'rgb')

The specific color channel, which should be extracted from the image.

change_color_channel(image)[source]

color_channel(image) -> channel

Returns the channel of the given image, which was selected in the constructor. Currently, gray, red, green and blue channels are supported.

Parameters:

image : 2D or 3D numpy.ndarray

The image to get the specified channel from.

Returns:

channel : 2D or 3D numpy.ndarray

The extracted color channel.

property channel
data_type(image) → image[source]

Converts the given image into the data type specified in the constructor of this class. If no data type was specified, or the image is None, no conversion is performed.

Parameters:

image : 2D or 3D numpy.ndarray

The image to convert.

Returns:

image : 2D or 3D numpy.ndarray

The image converted to the desired data type, if any.

fit(X, y=None)[source]
transform(images, annotations=None)[source]

Extracts the desired color channel and converts to the desired data type.

Parameters:

image : 2D or 3D numpy.ndarray

The image to preprocess.

annotations : any

Ignored.

Returns:

image : 2D numpy.ndarray

The image converted to the desired color channel and type.
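Example usage (a sketch; a random array stands in for a real image):

>>> import numpy
>>> from bob.bio.face.preprocessor import Base
>>> preprocessor = Base(dtype=numpy.float64, color_channel='gray')
>>> rgb = numpy.random.rand(3, 80, 64)  # Bob format: (channels, height, width)
>>> gray = preprocessor.transform([rgb])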

class bob.bio.face.preprocessor.FaceCrop(cropped_image_size, cropped_positions, fixed_positions=None, mask_sigma=None, mask_neighbors=5, mask_seed=None, annotator=None, allow_upside_down_normalized_faces=False, **kwargs)

Bases: bob.bio.face.preprocessor.Base

Crops the face according to the given annotations.

This class is designed to perform a geometric normalization of the face based on the eye locations, using bob.ip.base.FaceEyesNorm. Usually, when executing the crop_face() function, the image and the eye locations have to be specified. There, the given image will be transformed such that the eye locations will be placed at specific locations in the resulting image. These locations, as well as the size of the cropped image, need to be specified in the constructor of this class, as cropped_positions and cropped_image_size.

Some image databases do not provide eye locations, but rather bounding boxes. This is not a problem at all. Simply define the coordinates, where you want your cropped_positions to be in the cropped image, by specifying the same keys in the dictionary that will be given as annotations to the crop_face() function.

Note

These locations can even be outside of the cropped image boundary, i.e., when the crop should be smaller than the annotated bounding boxes.

Sometimes, databases provide pre-cropped faces, where the eyes are located at (almost) the same position in all images. Usually, the cropping does not conform with the cropping that you like (i.e., image resolution is wrong, or too much background information). However, the database does not provide eye locations (since they are almost identical for all images). In that case, you can specify the fixed_positions in the constructor, which will be taken instead of the annotations inside the crop_face() function (in which case the annotations are ignored).

Sometimes, the crop of the face is outside of the original image boundaries. Usually, these pixels will simply be left black, resulting in sharp edges in the image. However, some feature extractors do not like these sharp edges. In this case, you can set the mask_sigma to copy pixels from the valid border of the image and add random noise (see bob.ip.base.extrapolate_mask()).

Parameters
  • cropped_image_size ((int, int)) – The resolution of the cropped image, in order (HEIGHT,WIDTH); if not given, no face cropping will be performed

  • cropped_positions (dict) – The coordinates in the cropped image, where the annotated points should be put to. This parameter is a dictionary with usually two elements, e.g., {'reye':(RIGHT_EYE_Y, RIGHT_EYE_X) , 'leye':(LEFT_EYE_Y, LEFT_EYE_X)}. However, other keys, such as {'topleft' : ..., 'bottomright' : ...}, are also supported, as long as the corresponding annotations are passed to the __call__ function.

  • fixed_positions (dict or None) – If specified, ignore the annotations from the database and use these fixed positions throughout.

  • mask_sigma (float or None) – Fill the area outside of image boundaries with random pixels from the border, by adding noise to the pixel values. To disable extrapolation, set this value to None. To disable adding random noise, set it to a negative value or 0.

  • mask_neighbors (int) – The number of neighbors used during mask extrapolation. See bob.ip.base.extrapolate_mask() for details.

  • mask_seed (int or None) –

    The random seed to apply for mask extrapolation.

    Warning

    When run in parallel, the same random seed will be applied to all parallel processes. Hence, results of parallel execution will differ from the results in serial execution.

  • allow_upside_down_normalized_faces (bool, optional) – If False (default), a ValueError is raised when normalized faces are going to be upside down compared to input image. This allows you to catch wrong annotations in your database easily. If you are sure about your input, you can set this flag to True.

  • annotator (bob.bio.base.annotator.Annotator) – If provided, the annotator will be used if the required annotations are missing.

  • kwargs – Remaining keyword parameters passed to the Base constructor, such as color_channel or dtype.
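Example usage (a sketch; all coordinate values below are illustrative and given in (y, x) order, and a random array stands in for a real gray-scale face image):

>>> import numpy
>>> from bob.bio.face.preprocessor import FaceCrop
>>> cropper = FaceCrop(
...     cropped_image_size=(80, 64),
...     cropped_positions={'reye': (16, 15), 'leye': (16, 48)})
>>> image = numpy.random.rand(480, 640)
>>> face = cropper.crop_face(image, annotations={'reye': (190, 150), 'leye': (185, 240)})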

crop_face(image, annotations=None)[source]

Crops the face. Executes the face cropping on the given image and returns the cropped version of it.

Parameters
  • image (2D numpy.ndarray) – The face image to be processed.

  • annotations (dict or None) – The annotations that fit to the given image. None is only accepted, when fixed_positions were specified in the constructor.

Returns

face – The cropped face.

Return type

2D numpy.ndarray (float)

Raises

ValueError – If annotations is None.

is_annotations_valid(annotations)[source]
transform(X, annotations=None)[source]

Aligns the given image according to the given annotations.

First, the desired color channel is extracted from the given image. Afterward, the face is cropped, according to the given annotations (or to fixed_positions, see crop_face()). Finally, the resulting face is converted to the desired data type.

Parameters
  • image (2D or 3D numpy.ndarray) – The face image to be processed.

  • annotations (dict or None) – The annotations that fit to the given image.

Returns

face – The cropped face.

Return type

2D numpy.ndarray

class bob.bio.face.preprocessor.HistogramEqualization(face_cropper, **kwargs)

Bases: bob.bio.face.preprocessor.Base

Crops the face (if desired) and performs histogram equalization to photometrically enhance the image.

Parameters
  • face_cropper (str or bob.bio.face.preprocessor.FaceCrop or bob.bio.face.preprocessor.FaceDetect or None) –

    The face image cropper that should be applied to the image. If None is selected, no face cropping is performed. Otherwise, the face cropper might be specified as a registered resource, a configuration file, or an instance of a preprocessor.

    Note

    The given class needs to contain a crop_face method.

  • kwargs – Remaining keyword parameters passed to the Base constructor, such as color_channel or dtype.

equalize_histogram(image) → equalized[source]

Performs the histogram equalization on the given image.

Parameters:

image : 2D numpy.ndarray

The image to perform histogram equalization with. The image will be transformed to type uint8 before computing the histogram.

Returns:

equalized : 2D numpy.ndarray (float)

The photometrically enhanced image.

transform(X, annotations=None)[source]

Aligns the given image according to the given annotations.

First, the desired color channel is extracted from the given image. Afterward, the face is optionally cropped using the face_cropper specified in the constructor. Then, the image is photometrically enhanced using histogram equalization. Finally, the resulting face is converted to the desired data type.

Parameters:

X : 2D or 3D numpy.ndarray

The face image to be processed.

annotations : dict or None

The annotations that fit to the given image. Might be None, when the face_cropper is None or of type FaceDetect.

Returns:

face : 2D numpy.ndarray

The cropped and photometrically enhanced face.

class bob.bio.face.preprocessor.INormLBP(face_cropper, radius=2, is_circular=True, compare_to_average=False, elbp_type='regular', **kwargs)

Bases: bob.bio.face.preprocessor.Base

Performs I-Norm LBP on the given image

transform(X, annotations=None)[source]

__call__(image, annotations = None) -> face

Aligns the given image according to the given annotations.

First, the desired color channel is extracted from the given image. Afterward, the face is optionally cropped using the face_cropper specified in the constructor. Then, the image is photometrically enhanced by extracting LBP features [HRM06]. Finally, the resulting face is converted to the desired data type.

Parameters:

image : 2D or 3D numpy.ndarray

The face image to be processed.

annotations : dict or None

The annotations that fit to the given image. Might be None, when the face_cropper is None or of type FaceDetect.

Returns:

face : 2D numpy.ndarray

The cropped and photometrically enhanced face.

class bob.bio.face.preprocessor.MultiFaceCrop(cropped_image_size, cropped_positions_list, fixed_positions_list=None, mask_sigma=None, mask_neighbors=5, mask_seed=None, annotator=None, allow_upside_down_normalized_faces=False, **kwargs)[source]

Bases: bob.bio.face.preprocessor.Base

Wraps around FaceCrop to enable a dynamical cropper that can handle several annotation types. Initialization and usage is similar to the FaceCrop, but the main difference here is that one specifies a list of cropped_positions, and optionally a list of associated fixed positions.

For each set of cropped_positions in the list, a new FaceCrop will be instantiated to handle this exact set of annotations. When calling the transform method, the MultiFaceCrop matches each sample to its associated cropper based on the received annotations, then performs the cropping of each subset, and finally gathers the results.

In case of ambiguity (when no cropper is a match for the received annotations, or when several croppers match the received annotations), raises a ValueError.
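Example usage (a sketch of a cropper that accepts either eye or bounding-box annotations; all coordinate values are illustrative):

>>> from bob.bio.face.preprocessor import MultiFaceCrop
>>> cropper = MultiFaceCrop(
...     cropped_image_size=(80, 64),
...     cropped_positions_list=[
...         {'reye': (16, 15), 'leye': (16, 48)},          # matches eye-annotated samples
...         {'topleft': (0, 0), 'bottomright': (80, 64)},  # matches bounding-box samples
...     ])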

transform(X, annotations=None)[source]

Extracts the desired color channel and converts to the desired data type.

Parameters:

image : 2D or 3D numpy.ndarray

The image to preprocess.

annotations : any

Ignored.

Returns:

image : 2D numpy.ndarray

The image converted to the desired color channel and type.

fit(X, y=None)[source]
bob.bio.face.preprocessor.Scale(target_img_size)

A transformer that scales images. It accepts a list of inputs.

Parameters

target_img_size (tuple) – Target image size, specified as a tuple of (H, W)
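Example usage (a sketch, assuming the returned transformer follows the scikit-learn API used throughout this module; a random array stands in for a real image):

>>> import numpy
>>> from bob.bio.face.preprocessor import Scale
>>> scaler = Scale((64, 64))
>>> images = [numpy.random.rand(112, 112)]  # a list of inputs, as stated above
>>> scaled = scaler.transform(images)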

class bob.bio.face.preprocessor.SelfQuotientImage(face_cropper, sigma=1.4142135623730951, **kwargs)

Bases: bob.bio.face.preprocessor.Base

Crops the face (if desired) and applies self quotient image algorithm [WLW04] to photometrically enhance the image.

Parameters:

face_cropper : str or bob.bio.face.preprocessor.FaceCrop or bob.bio.face.preprocessor.FaceDetect or None

The face image cropper that should be applied to the image. If None is selected, no face cropping is performed. Otherwise, the face cropper might be specified as a registered resource, a configuration file, or an instance of a preprocessor.

Note

The given class needs to contain a crop_face method.

sigma : float

Please refer to the original paper [WLW04] (see the bob.ip.base.SelfQuotientImage documentation).

kwargs

Remaining keyword parameters passed to the Base constructor, such as color_channel or dtype.

transform(X, annotations=None)[source]

__call__(image, annotations = None) -> face

Aligns the given image according to the given annotations.

First, the desired color channel is extracted from the given image. Afterward, the face is optionally cropped using the face_cropper specified in the constructor. Then, the image is photometrically enhanced using the self quotient image algorithm [WLW04]. Finally, the resulting face is converted to the desired data type.

Parameters:

image : 2D or 3D numpy.ndarray

The face image to be processed.

annotations : dict or None

The annotations that fit to the given image. Might be None, when the face_cropper is None or of type FaceDetect.

Returns:

face : 2D numpy.ndarray

The cropped and photometrically enhanced face.

class bob.bio.face.preprocessor.TanTriggs(face_cropper, gamma=0.2, sigma0=1, sigma1=2, size=5, threshold=10.0, alpha=0.1, **kwargs)

Bases: bob.bio.face.preprocessor.Base

Crops the face (if desired) and applies Tan&Triggs algorithm [TT10] to photometrically enhance the image.

Parameters:

face_cropper : str or bob.bio.face.preprocessor.FaceCrop or bob.bio.face.preprocessor.FaceDetect or None

The face image cropper that should be applied to the image. If None is selected, no face cropping is performed. Otherwise, the face cropper might be specified as a registered resource, a configuration file, or an instance of a preprocessor.

Note

The given class needs to contain a crop_face method.

gamma, sigma0, sigma1, size, threshold, alpha

Please refer to the original paper [TT10] (see the bob.ip.base.TanTriggs documentation).

kwargs

Remaining keyword parameters passed to the Base constructor, such as color_channel or dtype.
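
A corresponding sketch for Tan&Triggs, here passing the face cropper as a registered resource; the resource name 'face-crop-eyes' is an assumption, not a guaranteed entry point:

    from bob.bio.face.preprocessor import TanTriggs

    preprocessor = TanTriggs(
        face_cropper="face-crop-eyes",  # assumed registered resource name
        gamma=0.2, sigma0=1, sigma1=2, size=5, threshold=10.0, alpha=0.1,
    )
    # enhanced = preprocessor.transform([image], annotations=[annotation_dict])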

transform(X, annotations=None)[source]

__call__(image, annotations = None) -> face

Aligns the given image according to the given annotations.

First, the desired color channel is extracted from the given image. Afterward, the face is cropped, if a face_cropper was specified in the constructor. Then, the image is photometrically enhanced using the Tan&Triggs algorithm [TT10]. Finally, the resulting face is converted to the desired data type.

Parameters:

image : 2D or 3D numpy.ndarray

The face image to be processed.

annotations : dict or None

The annotations that belong to the given image. They may be None when the face_cropper is None or of type FaceDetect.

Returns:

face : 2D numpy.ndarray

The cropped and photometrically enhanced face.

Extractors

class bob.bio.face.extractor.DCTBlocks(block_size=12, block_overlap=11, number_of_dct_coefficients=45, normalize_blocks=True, normalize_dcts=True, auto_reduce_coefficients=False)

Bases: sklearn.base.TransformerMixin, sklearn.base.BaseEstimator

Extracts Discrete Cosine Transform (DCT) features from (overlapping) image blocks. These features are based on the bob.ip.base.DCTFeatures class. The default parametrization is the one that performed best on the BANCA database in [WMM11].

Usually, these features are used in combination with the algorithms defined in bob.bio.gmm. However, you can try to use them with other algorithms.

Parameters:

block_size : int or (int, int)

The size of the blocks that will be extracted. This parameter might be either a single integral value, or a pair (block_height, block_width) of integral values.

block_overlap : int or (int, int)

The overlap of the blocks in vertical and horizontal direction. This parameter might be either a single integral value, or a pair (block_overlap_y, block_overlap_x) of integral values. It needs to be smaller than the block_size.

number_of_dct_coefficients : int

The number of DCT coefficients to use. The actual number will be one less since the first DCT coefficient (which should be 0, if normalization is used) will be removed.

normalize_blocks : bool

Normalize the values of the blocks to zero mean and unit standard deviation before extracting DCT coefficients.

normalize_dcts : bool

Normalize the values of the DCT components to zero mean and unit standard deviation. Default is True.
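
A minimal extraction sketch using the documented defaults. Note that requesting 45 coefficients yields 44 values per block, since the first DCT coefficient is removed:

    import numpy
    from bob.bio.face.extractor import DCTBlocks

    extractor = DCTBlocks(block_size=12, block_overlap=11,
                          number_of_dct_coefficients=45)
    face = numpy.random.rand(80, 64)           # stand-in for a preprocessed gray-level face
    features = extractor.transform([face])[0]  # shape: (number_of_blocks, 44)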

fit(X, y=None)[source]
transform(X)[source]

__call__(image) -> feature

Computes and returns DCT blocks for the given input image.

Parameters:

image : 2D numpy.ndarray (floats)

The image to extract the features from.

Returns:

feature : 2D numpy.ndarray (floats)

The extracted DCT features for all blocks inside the image. The first index is the block index, while the second index is the DCT coefficient.

class bob.bio.face.extractor.GridGraph(gabor_directions=8, gabor_scales=5, gabor_sigma=6.283185307179586, gabor_maximum_frequency=1.5707963267948966, gabor_frequency_step=0.7071067811865476, gabor_power_of_k=0, gabor_dc_free=True, normalize_gabor_jets=True, eyes=None, nodes_between_eyes=4, nodes_along_eyes=2, nodes_above_eyes=3, nodes_below_eyes=7, node_distance=None, first_node=None)

Bases: bob.bio.base.extractor.Extractor

Extracts Gabor jets in a grid structure [GHW12] using functionalities from bob.ip.gabor.

The grid can be either aligned to the eye locations (in which case the grid might be rotated), or a fixed grid graph can be extracted.

In the first case, the eye locations in the aligned image need to be provided. Additionally, the numbers of nodes between, along, above, and below the eyes need to be specified.

In the second case, a regular grid graph is created by specifying the distance between two nodes. Additionally, the coordinate of the first node can be provided; otherwise it is calculated to evenly fill the whole image with nodes.

Parameters:

gabor_directions, gabor_scales, gabor_sigma, gabor_maximum_frequency, gabor_frequency_step, gabor_power_of_k, gabor_dc_free

The parameters of the Gabor wavelet family, with its default values set as given in [WFK97]. Please refer to bob.ip.gabor.Transform for the documentation of these values.

normalize_gabor_jets : bool

Perform Gabor jet normalization during extraction?

eyes : dict or None

If specified, the grid setup will be aligned to the eye positions {‘reye’ : (re_y, re_x), ‘leye’ : (le_y, le_x)}. Otherwise a regular grid graph will be extracted.

nodes_between_eyes, nodes_along_eyes, nodes_above_eyes, nodes_below_eyes : int

Only used when eyes is not None. The number of nodes to be placed between, along, above or below the eyes. The final number of nodes will be: (above + below + 1) times (between + 2*along + 2).

node_distance : (int, int)

Only used when eyes is None. The distance between two nodes in the regular grid graph.

first_node : (int, int) or None

Only used when eyes is None. If None, it is calculated automatically to equally cover the whole image.
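
To make the two modes and the node-count formula concrete, a sketch (the eye coordinates are hypothetical):

    from bob.bio.face.extractor import GridGraph

    # eye-aligned graph; with these values the formula above gives
    # (3 + 7 + 1) * (4 + 2*2 + 2) = 11 * 10 = 110 nodes
    aligned = GridGraph(
        eyes={"reye": (16, 15), "leye": (16, 48)},
        nodes_between_eyes=4, nodes_along_eyes=2,
        nodes_above_eyes=3, nodes_below_eyes=7,
    )

    # regular grid graph: one node every 8 pixels in y and x
    regular = GridGraph(node_distance=(8, 8))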

load(**kwargs)[source]

Loads the parameters required for feature extraction from the extractor file. This function usually is only useful in combination with the train() function. In this base class implementation, it does nothing.

Parameters:

extractor_file : str

The file to read the extractor from.

read_feature(feature_file) → feature[source]

Reads the feature written by the write_feature() function from the given file.

Parameters:

feature_file : str or bob.io.base.HDF5File

The name of the file or the file opened for reading.

Returns:

feature : [bob.ip.gabor.Jet]

The list of Gabor jets read from file.

static serialize_jets(jets)[source]
train(**kwargs)[source]

This function can be overwritten to train the feature extractor. If you do this, please also register the function by calling this base class constructor and enabling the training by requires_training = True.

Parameters:

training_data : [object] or [[object]]

A list of preprocessed data that can be used for training the extractor. Data will be provided in a single list, if split_training_features_by_client = False was specified in the constructor, otherwise the data will be split into lists, each of which contains the data of a single (training-)client.

extractor_file : str

The file to write. This file should be readable with the load() function.

write_feature(feature, feature_file)[source]

Writes the feature extracted by the __call__ function to the given file.

Parameters:

feature : [bob.ip.gabor.Jet]

The list of Gabor jets extracted from the image.

feature_file : str or bob.io.base.HDF5File

The name of the file or the file opened for writing.

class bob.bio.face.extractor.LGBPHS(block_size, block_overlap=0, gabor_directions=8, gabor_scales=5, gabor_sigma=6.283185307179586, gabor_maximum_frequency=1.5707963267948966, gabor_frequency_step=0.7071067811865476, gabor_power_of_k=0, gabor_dc_free=True, use_gabor_phases=False, lbp_radius=2, lbp_neighbor_count=8, lbp_uniform=True, lbp_circular=True, lbp_rotation_invariant=False, lbp_compare_to_average=False, lbp_add_average=False, sparse_histogram=False, split_histogram=None)

Bases: sklearn.base.TransformerMixin, sklearn.base.BaseEstimator

Extracts Local Gabor Binary Pattern Histogram Sequences (LGBPHS) [ZSG05] from the images, using functionality from bob.ip.base and bob.ip.gabor.

The block size and the overlap of the blocks can be varied, as well as the parameters of the Gabor wavelet (bob.ip.gabor.Transform) and the LBP extractor (bob.ip.base.LBP).

Parameters:

block_size : int or (int, int)

The size of the blocks that will be extracted. This parameter might be either a single integral value, or a pair (block_height, block_width) of integral values.

block_overlap : int or (int, int)

The overlap of the blocks in vertical and horizontal direction. This parameter might be either a single integral value, or a pair (block_overlap_y, block_overlap_x) of integral values. It needs to be smaller than the block_size.

gabor_directions, gabor_scales, gabor_sigma, gabor_maximum_frequency, gabor_frequency_step, gabor_power_of_k, gabor_dc_free

The parameters of the Gabor wavelet family, with its default values set as given in [WFK97]. Please refer to bob.ip.gabor.Transform for the documentation of these values.

use_gabor_phases : bool

Also extract the Gabor phases (inline), not only the absolute values. In this case, Extended LGBPHS features [ZSQ09] will be extracted.

lbp_radius, lbp_neighbor_count, lbp_uniform, lbp_circular, lbp_rotation_invariant, lbp_compare_to_average, lbp_add_average

The parameters of the LBP. Please see bob.ip.base.LBP for the documentation of these values.

Note

The default values are as given in [ZSG05] (the values of [ZSQ09] might differ).

sparse_histogram : bool

If set to True, the histograms will be handled in a sparse way. This reduces the size of the extracted features, but the computation will take longer.

Note

Sparse histograms are only supported when split_histogram = None.

split_histogram : one of ('blocks', 'wavelets', 'both') or None

Defines how the histogram sequence is split. This can be useful if the histograms should be used in another way than simply being concatenated into a single histogram sequence (the default).
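
A construction sketch with illustrative block parameters (the extraction call follows the scikit-learn convention):

    from bob.bio.face.extractor import LGBPHS

    extractor = LGBPHS(
        block_size=10,           # 10 x 10 pixel blocks
        block_overlap=4,         # must be smaller than block_size
        use_gabor_phases=False,  # set to True for Extended LGBPHS [ZSQ09]
        sparse_histogram=False,
    )
    # feature = extractor.transform([preprocessed_face])[0]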

fit(X, y=None)[source]
transform(X)[source]

__call__(image) -> feature

Extracts the local Gabor binary pattern histogram sequence from the given image.

Parameters:

image : 2D numpy.ndarray (floats)

The image to extract the features from.

Returns:

feature : 2D or 3D numpy.ndarray (floats)

The extracted local Gabor binary pattern histogram sequence.

Algorithms

class bob.bio.face.algorithm.GaborJet(gabor_jet_similarity_type, multiple_feature_scoring='max_jet', gabor_directions=8, gabor_scales=5, gabor_sigma=6.283185307179586, gabor_maximum_frequency=1.5707963267948966, gabor_frequency_step=0.7071067811865476, gabor_power_of_k=0, gabor_dc_free=True)

Bases: bob.bio.base.algorithm.Algorithm

Computes a comparison of lists of Gabor jets using a similarity function of bob.ip.gabor.Similarity.

The model enrollment simply stores all extracted Gabor jets for all enrollment features. By default (i.e., multiple_feature_scoring = 'max_jet'), the scoring uses an advanced local strategy. For each node, the similarity between the given probe jet and all model jets is computed, and only the highest value is kept. These values are finally averaged over all node positions. Other strategies can be obtained using a different multiple_feature_scoring.
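
Schematically, the default 'max_jet' strategy can be written as follows (a sketch, not the library implementation; sim stands in for the chosen bob.ip.gabor.Similarity):

    import numpy

    def max_jet_score(model, probe, sim):
        # model: one list of enrolled jets per node; probe: one jet per node
        local = [max(sim(enrolled, probe_jet) for enrolled in node_jets)
                 for node_jets, probe_jet in zip(model, probe)]
        return numpy.mean(local)  # average the best local similarities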

Parameters:

gabor_jet_similarity_type : str

The type of Gabor jet similarity to compute. Please refer to the documentation of bob.ip.gabor.Similarity for a list of possible values.

multiple_feature_scoring : str

How to fuse the local similarities into a single similarity value. Possible values are:

  • 'average_model' : During enrollment, an average model is computed using functionality of bob.ip.gabor.

  • 'average' : For each node, the average similarity is computed. Finally, the average of those similarities is returned.

  • 'min_jet', 'max_jet', 'med_jet' : For each node, the minimum, maximum or median similarity is computed. Finally, the average of those similarities is returned.

  • 'min_graph', 'max_graph', 'med_graph' : For each node, the average similarity is computed. Finally, the minimum, maximum or median of those similarities is returned.

gabor_directions, gabor_scales, gabor_sigma, gabor_maximum_frequency, gabor_frequency_step, gabor_power_of_k, gabor_dc_free

These parameters are required by the disparity-based Gabor jet similarity functions, see bob.ip.gabor.Similarity. The default values are identical to the ones in the bob.bio.face.extractor.GridGraph. Please ensure that this class and the bob.bio.face.extractor.GridGraph class get the same configuration, otherwise unexpected things might happen.
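
One way to keep both classes consistent is to share a single parameter dictionary (a sketch; the similarity type and node distance are illustrative, the Gabor values are the documented defaults):

    import math
    from bob.bio.face.extractor import GridGraph
    from bob.bio.face.algorithm import GaborJet

    gabor_config = dict(
        gabor_directions=8, gabor_scales=5,
        gabor_sigma=2 * math.pi, gabor_maximum_frequency=math.pi / 2,
    )
    extractor = GridGraph(node_distance=(4, 4), **gabor_config)
    algorithm = GaborJet("PhaseDiffPlusCanberra", **gabor_config)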

enroll(enroll_features) → model[source]

Enrolls the model using one of several strategies. Commonly, the bunch graph strategy [WFK97] is applied by storing several Gabor jets for each node.

When multiple_feature_scoring = 'average_model', for each node the average bob.ip.gabor.Jet is computed. Otherwise, all enrollment jets are stored, grouped by node.

Parameters:

enroll_features : [[bob.ip.gabor.Jet]]

The list of enrollment features. Each sub-list contains a full graph.

Returns:

model : [[bob.ip.gabor.Jet]]

The enrolled model. Each sub-list contains a list of jets, which correspond to the same node. When multiple_feature_scoring = 'average_model' each sub-list contains a single bob.ip.gabor.Jet.

load_enroller(**kwargs)[source]

Loads the parameters required for model enrollment from file. This function usually is only useful in combination with the train_enroller() function. This function is always called after calling load_projector(). In this base class implementation, it does nothing.

Parameters:

enroller_file : str

The file to read the enroller from.

load_projector(**kwargs)[source]

Loads the parameters required for feature projection from file. This function usually is useful in combination with the train_projector() function. In this base class implementation, it does nothing.

Please register performs_projection = True in the constructor to enable this function.

Parameters:

projector_file : str

The file to read the projector from.

project(feature) → projected[source]

This function will project the given feature. It must be overwritten by derived classes when performs_projection = True is set in the constructor. It is assured that load_projector() was called once before the project function is executed.

Parameters:

feature : object

The feature to be projected.

Returns:

projected : object

The projected features. Must be writable with the write_feature() function and readable with the read_feature() function.

read_feature(feature_file) → feature[source]

Reads the projected feature from file. In this base class implementation, it uses bob.io.base.load() to do that. If you have a different format, please overwrite this function.

Please register performs_projection = True in the constructor to enable this function.

Parameters:

feature_file : str or bob.io.base.HDF5File

The file open for reading, or the file name to read from.

Returns:

feature : object

The feature that was read from file.

read_model(model_file) → model[source]

Reads the model written by the write_model() function from the given file.

Parameters:

model_file : str or bob.io.base.HDF5File

The name of the file or the file opened for reading.

Returns:

model : [[bob.ip.gabor.Jet]]

The list of Gabor jets read from file.

score(model, probe) → score[source]

Computes the score of the probe and the model using the desired Gabor jet similarity function and the desired score fusion strategy.

Parameters:

model : [[bob.ip.gabor.Jet]]

The model enrolled by the enroll() function.

probe : [bob.ip.gabor.Jet]

The probe, e.g., read by the bob.bio.face.extractor.GridGraph.read_feature() function.

Returns:

score : float

The fused similarity score.

score_for_multiple_models(models, probe) → score[source]

This function computes the score between the given model list and the given probe. In this base class implementation, it computes the scores for each model using the score() method, and fuses the scores using the fusion method specified in the constructor of this class. Usually this function is called from derived class score() functions.

Parameters:

models : [object]

A list of model objects.

probe : object

The probe object to compare the models with.

Returns:

score : float

The fused similarity between the given models and the probe.

score_for_multiple_probes(model, probes)[source]

score(model, probes) -> score

This function computes the score between the given model graph(s) and several given probe graphs. The same local scoring strategy as for multiple model jets is applied, but this time between all graphs of the model and all probe graphs.

Parameters:

model : [[bob.ip.gabor.Jet]]

The model enrolled by the enroll() function. The sub-lists are grouped by node.

probes : [[bob.ip.gabor.Jet]]

A list of probe graphs. The sub-lists are grouped by graph.

Returns:

score : float

The fused similarity score.

train_enroller(**kwargs)[source]

This function can be overwritten to train the model enroller. If you do this, please also register the function by calling this base class constructor and enabling the training by require_enroller_training = True.

Parameters:

training_features : [object] or [[object]]

A list of extracted features that can be used for training the enroller. Features will be split into lists, each of which contains the features of a single (training-)client.

enroller_file : str

The file to write. This file should be readable with the load_enroller() function.

train_projector(**kwargs)[source]

This function can be overwritten to train the feature projector. If you do this, please also register the function by calling this base class constructor and enabling the training by requires_projector_training = True.

Parameters:

training_features : [object] or [[object]]

A list of extracted features that can be used for training the projector. Features will be provided in a single list, if split_training_features_by_client = False was specified in the constructor, otherwise the features will be split into lists, each of which contains the features of a single (training-)client.

projector_file : str

The file to write. This file should be readable with the load_projector() function.

write_feature(**kwargs)[source]

Saves the given projected feature to a file with the given name. In this base class implementation:

  • If the given feature has a save attribute, it calls feature.save(bob.io.base.HDF5File(feature_file), 'w'). In this case, the given feature_file might be either a file name or a bob.io.base.HDF5File.

  • Otherwise, it uses bob.io.base.save() to do that.

If you have a different format, please overwrite this function.

Please register performs_projection = True in the constructor to enable this function.

Parameters:

feature : object

A feature as returned by the project() function, which should be written.

feature_file : str or bob.io.base.HDF5File

The file open for writing, or the file name to write to.

write_model(model, model_file)[source]

Writes the model enrolled by the enroll() function to the given file.

Parameters:

model : [[bob.ip.gabor.Jet]]

The enrolled model.

model_file : str or bob.io.base.HDF5File

The name of the file or the file opened for writing.

class bob.bio.face.algorithm.Histogram(distance_function=<built-in function chi_square>, is_distance_function=True, multiple_probe_scoring='average')

Bases: bob.bio.base.algorithm.Algorithm

Computes the distance between histogram sequences.

Both sparse and non-sparse representations of histograms are supported. For enrollment, to date only the averaging of histograms is implemented.

Parameters:

distance_functionfunction

The function to be used to compare two histograms. This function should accept sparse histograms.

is_distance_function : bool

Is the given distance_function a distance function (lower values are better) or a similarity function (higher values are better)?

multiple_probe_scoring : str or None

The way scores are fused when multiple probes are available. See bob.bio.base.score_fusion_strategy() for possible values.
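
Schematically, the default scoring behaves like the following sketch (not the library code; chi_square here is a plain NumPy stand-in for the built-in default):

    import numpy

    def chi_square(h1, h2, eps=1e-10):
        # chi-square distance between two (non-sparse) histograms
        return numpy.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    model = numpy.random.rand(256)     # averaged enrollment histogram
    probe = numpy.random.rand(256)
    score = -chi_square(model, probe)  # is_distance_function=True: negate the distance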

enroll(enroll_features) → model[source]

Enrolls a model by taking the average of all histograms.

enroll_features : [1D or 2D numpy.ndarray]

The histograms that should be averaged. Histograms can be specified sparse (2D) or non-sparse (1D).

Returns:

model : 1D or 2D numpy.ndarray

The averaged histogram, sparse (2D) or non-sparse (1D).

load_enroller(**kwargs)[source]

Loads the parameters required for model enrollment from file. This function usually is only useful in combination with the train_enroller() function. This function is always called after calling load_projector(). In this base class implementation, it does nothing.

Parameters:

enroller_file : str

The file to read the enroller from.

load_projector(**kwargs)[source]

Loads the parameters required for feature projection from file. This function usually is useful in combination with the train_projector() function. In this base class implementation, it does nothing.

Please register performs_projection = True in the constructor to enable this function.

Parameters:

projector_file : str

The file to read the projector from.

project(feature) → projected[source]

This function will project the given feature. It must be overwritten by derived classes when performs_projection = True is set in the constructor. It is assured that load_projector() was called once before the project function is executed.

Parameters:

feature : object

The feature to be projected.

Returns:

projected : object

The projected features. Must be writable with the write_feature() function and readable with the read_feature() function.

read_feature(feature_file) → feature[source]

Reads the projected feature from file. In this base class implementation, it uses bob.io.base.load() to do that. If you have a different format, please overwrite this function.

Please register performs_projection = True in the constructor to enable this function.

Parameters:

feature_file : str or bob.io.base.HDF5File

The file open for reading, or the file name to read from.

Returns:

feature : object

The feature that was read from file.

score(model, probe) → score[source]

Computes the score of the probe and the model using the desired histogram distance function. The resulting score is the negative distance, if is_distance_function = True. Both sparse and non-sparse models and probes are accepted, but their sparseness must agree.

Parameters:

model : 1D or 2D numpy.ndarray

The model enrolled by the enroll() function.

probe : 1D or 2D numpy.ndarray

The probe histograms, which can be specified sparse (2D) or non-sparse (1D).

Returns:

score : float

The resulting similarity score.

score_for_multiple_models(models, probe) → score[source]

This function computes the score between the given model list and the given probe. In this base class implementation, it computes the scores for each model using the score() method, and fuses the scores using the fusion method specified in the constructor of this class. Usually this function is called from derived class score() functions.

Parameters:

models : [object]

A list of model objects.

probe : object

The probe object to compare the models with.

Returns:

score : float

The fused similarity between the given models and the probe.

train_enroller(**kwargs)[source]

This function can be overwritten to train the model enroller. If you do this, please also register the function by calling this base class constructor and enabling the training by require_enroller_training = True.

Parameters:

training_features : [object] or [[object]]

A list of extracted features that can be used for training the enroller. Features will be split into lists, each of which contains the features of a single (training-)client.

enroller_file : str

The file to write. This file should be readable with the load_enroller() function.

train_projector(**kwargs)[source]

This function can be overwritten to train the feature projector. If you do this, please also register the function by calling this base class constructor and enabling the training by requires_projector_training = True.

Parameters:

training_features : [object] or [[object]]

A list of extracted features that can be used for training the projector. Features will be provided in a single list, if split_training_features_by_client = False was specified in the constructor, otherwise the features will be split into lists, each of which contains the features of a single (training-)client.

projector_file : str

The file to write. This file should be readable with the load_projector() function.

write_feature(**kwargs)[source]

Saves the given projected feature to a file with the given name. In this base class implementation:

  • If the given feature has a save attribute, it calls feature.save(bob.io.base.HDF5File(feature_file), 'w'). In this case, the given feature_file might be either a file name or a bob.io.base.HDF5File.

  • Otherwise, it uses bob.io.base.save() to do that.

If you have a different format, please overwrite this function.

Please register performs_projection = True in the constructor to enable this function.

Parameters:

feature : object

A feature as returned by the project() function, which should be written.

feature_file : str or bob.io.base.HDF5File

The file open for writing, or the file name to write to.