Tools implemented in bob.bio.video¶
Summary¶
bob.bio.video.FrameSelector([…])
    A class for selecting frames from videos.
bob.bio.video.FrameContainer([hdf5, …])
    A class for reading, manipulating and saving video content.
bob.bio.video.preprocessor.Wrapper([…])
    Wrapper class to run image preprocessing algorithms on video data.
bob.bio.video.extractor.Wrapper(extractor[, …])
    Wrapper class to run feature extraction algorithms on frame containers.
bob.bio.video.algorithm.Wrapper(algorithm[, …])
    Wrapper class to run face recognition algorithms on video data.
Annotators¶
bob.bio.video.annotator.Base([…])
    The base class for video annotators.
bob.bio.video.annotator.Wrapper(annotator[, …])
    Annotates video files using the provided image annotator.
bob.bio.video.annotator.FailSafeVideo(annotators)
    A fail-safe video annotator.
Databases¶
bob.bio.video.database.MobioBioDatabase([…])
    MOBIO database implementation of the bob.bio.base.database.ZTBioDatabase interface.
bob.bio.video.database.YoutubeBioDatabase([…])
    YouTube Faces database implementation of the bob.bio.base.database.ZTBioDatabase interface.
Details¶
class bob.bio.video.FrameContainer(hdf5=None, load_function=<function load>)[source]¶
Bases: object

A class for reading, manipulating and saving video content.

as_array()[source]¶
Returns the data of the frames as a numpy array.

Returns: The frames, returned as an array with shape (n_frames, …), like a video.
Return type: numpy.ndarray
class bob.bio.video.FrameSelector(max_number_of_frames=20, selection_style='spread', step_size=10)[source]¶
Bases: object

A class for selecting frames from videos. In total, up to max_number_of_frames frames are selected (unless the selection style is 'all'). Different selection styles are supported:

- first : the first frames of the video are selected
- spread : frames are selected to be spread over the whole video
- step : frames are selected every step_size indices, starting at step_size/2; think twice before using this with FrameContainer data!
- all : all frames are stored unconditionally
- quality (only valid for FrameContainer data) : frames are selected based on the highest internally stored quality value
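The first four selection styles can be sketched in plain Python. The function below is an illustrative re-implementation of the documented behaviour, not the library's actual code; in particular, the rounding used for 'spread' and the truncation of 'step' results are assumptions:

```python
# Hypothetical sketch of FrameSelector's index selection (not the real code).
def select_frame_indices(n_frames, max_number_of_frames=20,
                         selection_style='spread', step_size=10):
    """Return the indices of the frames that would be selected."""
    if selection_style == 'all':
        # all frames are kept unconditionally
        return list(range(n_frames))
    if selection_style == 'first':
        # the first max_number_of_frames frames
        return list(range(min(max_number_of_frames, n_frames)))
    if selection_style == 'step':
        # every step_size-th frame, starting at step_size // 2
        return list(range(step_size // 2, n_frames, step_size))[:max_number_of_frames]
    if selection_style == 'spread':
        # spread the selected frames evenly over the whole video
        count = min(max_number_of_frames, n_frames)
        return [int(i * n_frames / count) for i in range(count)]
    raise ValueError('unknown selection style: %s' % selection_style)
```

For example, for a 100-frame video with max_number_of_frames=4 and 'spread', indices 0, 25, 50 and 75 would be chosen.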
bob.bio.video.annotator.normalize_annotations(annotations, validator, max_age=-1)[source]¶
Normalizes the annotations of one video sequence. If the annotations for the current frame are not valid, they are filled in from previous frames.

Parameters:
- annotations (collections.OrderedDict) – A dict of dicts, where the keys of the outer dict are frame indices as strings (starting from 0) and the inner dicts contain the annotations for that frame. The dictionary needs to be ordered for this to work.
- validator (callable) – Takes a dict of annotations and returns True if the annotations are valid. This can be, for example, a check based on a minimal face size; see bob.bio.face.annotator.min_face_size_validator.
- max_age (int, optional) – The number of frames for which a detected face stays valid if no detection occurs in later frames. A value of -1 means forever.

Yields:
- str – The index of the frame.
- dict – The corrected annotations of the frame.
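This fill-forward behaviour can be illustrated with a small stand-alone sketch. It is a hypothetical re-implementation based on the description above; the exact ageing semantics of the real function are an assumption:

```python
# Hypothetical sketch of the documented fill-forward normalization.
from collections import OrderedDict

def normalize_annotations(annotations, validator, max_age=-1):
    """Yield (frame_id, annotations), reusing the last valid annotations
    for up to max_age frames (-1 meaning forever) when validation fails."""
    last_valid, age = None, 0
    for frame_id, annot in annotations.items():
        if validator(annot):
            last_valid, age = annot, 0
        elif last_valid is not None and (max_age < 0 or age < max_age):
            annot = last_valid  # reuse the most recent valid annotations
            age += 1
        yield frame_id, annot
```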
class bob.bio.video.annotator.Base(frame_selector=<bob.bio.video.utils.FrameSelector.FrameSelector object>, read_original_data=None, **kwargs)¶
Bases: bob.bio.base.annotator.Annotator

The base class for video annotators.

Parameters:
- frame_selector (bob.bio.video.FrameSelector) – A frame selector class that defines which frames of the video to use.
- read_original_data (callable) – A function with the signature data = read_original_data(biofile, directory, extension) that is used to load the data from biofiles. By default, the frame_selector is used to load the data.

annotate(frames, **kwargs)[source]¶
Annotates videos.

Parameters:
- frames (bob.bio.video.FrameContainer or numpy.array) – The frames of the video file.
- **kwargs – Extra arguments that annotators may need.

Returns: A dictionary whose keys are frame ids (as strings) and whose values are dictionaries containing the annotations for that frame.
Return type: dict

Note: You can use the Base.frame_ids_and_frames function to normalize the input in your implementation.

static frame_ids_and_frames(frames)[source]¶
Takes the frames and yields frame ids and frames.

Parameters:
- frames (bob.bio.video.FrameContainer or an iterable of arrays) – The frames of the video file.

Yields:
- frame_id (str) – A string that represents the frame id.
- frame (numpy.array) – The frame of the video file as an array.
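For a plain iterable of arrays, the yielded pairs can be sketched as below. That the frame ids are simply stringified indices is an assumption for this illustration; for a FrameContainer the ids come from the container itself:

```python
# Illustrative sketch of frame_ids_and_frames for an iterable of frames.
def frame_ids_and_frames(frames):
    # yield (frame_id, frame) pairs; ids are stringified indices here
    for index, frame in enumerate(frames):
        yield str(index), frame
```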
class bob.bio.video.annotator.FailSafeVideo(annotators, max_age=15, validator=<function min_face_size_validator>, **kwargs)¶
Bases: bob.bio.video.annotator.Base

A fail-safe video annotator. It tries several annotators in order, moving on to the next one when the previous one fails. The difference between this annotator and bob.bio.base.annotator.FailSafe is that this one tries to reuse annotations from older frames (if still valid) before trying the next annotator.

Warning: Be careful when using this annotator, since different annotators can produce very different results; for example, the bounding box of one annotator can be totally different from that of another.

Parameters:
- annotators (list) – A list of annotators to try.
- max_age (int) – The maximum number of subsequent frames for which an annotation stays valid. This value should be positive; if you want an infinite max_age, use bob.bio.video.annotator.Wrapper instead.
- validator (callable) – A function that takes the annotations of a frame and validates them.

Please see Base for more accepted parameters.

annotate(frames, **kwargs)[source]¶
See Base.annotate.
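The strategy described above can be sketched in plain Python: when an annotator fails on a frame, a sufficiently recent valid annotation is reused before the next annotator is tried. All names below are illustrative, not the actual implementation:

```python
# Hypothetical sketch of the FailSafeVideo fallback order.
def annotate_failsafe(frames, annotators, validator, max_age=15):
    annotations = {}
    last_valid, age = None, 0
    for index, frame in enumerate(frames):
        annot = None
        for annotator in annotators:
            candidate = annotator(frame)
            if candidate is not None and validator(candidate):
                annot = candidate
                last_valid, age = candidate, 0
                break
            # this annotator failed: reuse a recent valid annotation from an
            # older frame before trying the next annotator in the list
            if last_valid is not None and age < max_age:
                annot = last_valid
                age += 1
                break
        annotations[str(index)] = annot
    return annotations
```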
class bob.bio.video.annotator.Wrapper(annotator, normalize=False, validator=<function min_face_size_validator>, max_age=-1, **kwargs)¶
Bases: bob.bio.video.annotator.Base

Annotates video files using the provided image annotator. See the documentation of Base, too.

Parameters:
- annotator (bob.bio.base.annotator.Annotator or str) – The image annotator to be used. This can also be the name of a bob.bio.annotator resource, which will be loaded.
- max_age (int) – See normalize_annotations.
- normalize (bool) – If True, annotations are normalized using normalize_annotations.
- validator (object) – See normalize_annotations; bob.bio.face.annotator.min_face_size_validator is one example.

Please see Base for more accepted parameters.

Warning: Set normalize to True only if you are annotating all frames of the video file.

annotate(frames, **kwargs)[source]¶
See Base.annotate.
class bob.bio.video.preprocessor.Wrapper(preprocessor='landmark-detect', frame_selector=<bob.bio.video.utils.FrameSelector.FrameSelector object>, quality_function=None, compressed_io=False, read_original_data=None)¶
Bases: bob.bio.base.preprocessor.Preprocessor

Wrapper class to run image preprocessing algorithms on video data.

This class provides functionality to read original video data from several databases. So far, the video content from bob.db.mobio and the image list content from bob.db.youtube are supported.

Furthermore, frames are extracted from these video data, and a preprocessor algorithm is applied to all selected frames. The preprocessor can either be provided as a registered resource, i.e., one of the Preprocessors, or as an instance of a preprocessing class. Since most databases do not provide annotations for all frames of the videos, the preprocessor commonly needs to apply face detection.

The frame_selector can be chosen to select some frames from the video. By default, a few frames spread over the whole video sequence are selected.

The quality_function is used to assess the quality of a frame. If no quality_function is given, the quality is based on the face detector, or simply left as None. So far, the quality of the frames is not used, but it is foreseen to select frames based on quality.

Parameters:
- preprocessor : str or bob.bio.base.preprocessor.Preprocessor instance – The preprocessor to be used to preprocess the frames.
- frame_selector : bob.bio.video.FrameSelector – A frame selector class that defines which frames of the video to use.
- quality_function : function or None – A function assessing the quality of the preprocessed image. If None, no quality assessment is performed. If the preprocessor contains a quality attribute, this is taken instead.
- compressed_io : bool – Use compression to write the resulting preprocessed HDF5 files. This is experimental and might cause trouble. Use this flag with care.
- read_original_data : callable or None – Function that loads the raw data. If not explicitly defined, the raw data will be loaded by bob.bio.video.database.VideoBioFile.load() using the specified frame_selector.
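Conceptually, the wrapper applies the image preprocessor frame by frame and records an optional quality value per frame. The helper below is a hypothetical plain-Python stand-in for that behaviour; the real class operates on bob.bio.video.FrameContainer objects and bob preprocessor instances:

```python
# Hypothetical stand-in for per-frame preprocessing with quality values.
def preprocess_video(frames, preprocessor, quality_function=None):
    """Run an image preprocessor on every frame; return a dict modelling a
    frame container, mapping frame id to (processed frame, quality)."""
    container = {}
    for frame_id, frame in enumerate(frames):
        processed = preprocessor(frame)
        quality = quality_function(processed) if quality_function else None
        container[str(frame_id)] = (processed, quality)
    return container
```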
read_data(filename) → frames[source]¶
Reads the preprocessed data from file and returns it in a frame container. The preprocessor's read_data function is used to read the data for each frame.

Parameters:
- filename : str – The name of the preprocessed data file.

Returns:
- frames : bob.bio.video.FrameContainer – The read frames, stored in a frame container.

write_data(frames, filename)[source]¶
Writes the preprocessed data to file. The preprocessor's write_data function is used to write the data for each frame.

Parameters:
- frames : bob.bio.video.FrameContainer – The preprocessed frames, as returned by the __call__ function.
- filename : str – The name of the preprocessed data file to write.
class bob.bio.video.extractor.Wrapper(extractor, frame_selector=<bob.bio.video.utils.FrameSelector.FrameSelector object>, compressed_io=False)¶
Bases: bob.bio.base.extractor.Extractor

Wrapper class to run feature extraction algorithms on frame containers.

Features are extracted for all frames in the frame container using the provided extractor. The extractor can either be provided as a registered resource, i.e., one of the Feature extractors, or as an instance of an extractor class.

The frame_selector can be chosen to select some frames from the frame container. By default, all frames from the previous preprocessing step are kept, but fewer frames might be selected in this stage.

Parameters:
- extractor : str or bob.bio.base.extractor.Extractor instance – The extractor to be used to extract features from the frames.
- frame_selector : bob.bio.video.FrameSelector – A frame selector class that defines which frames of the preprocessed frame container to use.
- compressed_io : bool – Use compression to write the resulting features to HDF5 files. This is experimental and might cause trouble. Use this flag with care.
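The per-frame extraction described above can be sketched as follows. This is a hypothetical stand-in in which a frame container is modelled as a dict of frames and the optional frame selector is a plain callable (both assumptions):

```python
# Hypothetical stand-in for per-frame feature extraction.
def extract_features(frame_container, extractor, frame_selector=None):
    """Run the extractor on every frame kept by the frame selector;
    by default, all frames from the previous stage are kept."""
    items = list(frame_container.items())
    if frame_selector is not None:  # e.g. keep only a subset of frames
        items = frame_selector(items)
    return {frame_id: extractor(frame) for frame_id, frame in items}
```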
load(extractor_file)[source]¶
Loads the trained extractor from file. This function calls the wrapped class's load function.

Parameters:
- extractor_file : str – The name of the extractor file that should be loaded.

read_feature(filename) → frames[source]¶
Reads the extracted data from file and returns it in a frame container. The extractor's read_feature function is used to read the data for each frame.

Parameters:
- filename : str – The name of the extracted data file.

Returns:
- frames : bob.bio.video.FrameContainer – The read frames, stored in a frame container.

train(training_frames, extractor_file)[source]¶
Trains the feature extractor with the preprocessed data of the given frames.

Note: This function is not called when the given extractor does not require training.

This function trains the feature extractor using all data from the selected frames of the training data. The training_frames must be aligned by client if the given extractor requires that.

Parameters:
- training_frames : [bob.bio.video.FrameContainer] or [[bob.bio.video.FrameContainer]] – The set of training frames, which will be used to train the extractor.
- extractor_file : str – The name of the extractor file that should be written.

write_feature(frames, filename)[source]¶
Writes the extracted features to file. The extractor's write_feature function is used to write the features for each frame.

Parameters:
- frames : bob.bio.video.FrameContainer – The extracted features for the selected frames, as returned by the __call__ function.
- filename : str – The file name to write the extracted features into.
class bob.bio.video.algorithm.Wrapper(algorithm, frame_selector=<bob.bio.video.utils.FrameSelector.FrameSelector object>, compressed_io=False)¶
Bases: bob.bio.base.algorithm.Algorithm

Wrapper class to run face recognition algorithms on video data.

This class provides a generic interface for all face recognition algorithms to use several frames of a video. The algorithm can either be provided as a registered resource or as an instance of an algorithm class. In the previous stages, features were already extracted from only some selected frames of the video. This algorithm now uses these features to perform face recognition, i.e., by enrolling a model from several frames (possibly of several videos) and by fusing scores from several model frames and several probe frames. Since the functionality to handle several images for enrollment and probing is already implemented in the wrapped class, here we only care about providing the right data at the right time.

Parameters:
- algorithm : str or bob.bio.base.algorithm.Algorithm instance – The algorithm to be used.
- frame_selector : bob.bio.video.FrameSelector – A frame selector class that defines which frames of the extracted features of the frame container to use. By default, all features are selected.
- compressed_io : bool – Use compression to write the projected features to HDF5 files. This is experimental and might cause trouble. Use this flag with care.
enroll(enroll_frames) → model[source]¶
Enrolls a model from the features of all selected frames of all enrollment videos for the current client. This function collects all desired frames from all enrollment videos and enrolls a model with them, using the algorithm's enroll function.

Parameters:
- enroll_frames : [bob.bio.video.FrameContainer] – Extracted or projected features from one or several videos of the same client.

Returns:
- model : object – The model, as created by the algorithm's enroll function.
load_enroller(enroller_file)[source]¶
Loads the trained enroller from file. This function calls the wrapped class's load_enroller function.

Parameters:
- enroller_file : str – The name of the enroller file that should be loaded.

load_projector(projector_file)[source]¶
Loads the trained projector from file. This function calls the wrapped class's load_projector function.

Parameters:
- projector_file : str – The name of the projector file that should be loaded.
project(frames) → projected[source]¶
Projects the features of the extracted frames and returns a frame container. This function projects the features of all frames selected by the frame_selector specified in the constructor of this class, using the desired algorithm.

Parameters:
- frames : bob.bio.video.FrameContainer – The frame container containing the extracted feature frames.

Returns:
- projected : bob.bio.video.FrameContainer – A frame container containing the projected features.
read_feature(projected_file) → frames[source]¶
Reads the projected data from file and returns it in a frame container. The algorithm's read_feature function is used to read the data for each frame.

Parameters:
- projected_file : str – The name of the projected data file.

Returns:
- frames : bob.bio.video.FrameContainer – The read frames, stored in a frame container.
read_model(filename)[source]¶
Reads the model using the algorithm's read_model function.

Parameters:
- filename : str – The file name to read the model from.

Returns:
- model : object – The model read from file.
score(model, probe) → score[source]¶
Computes the score between the given model and the given probe. As the probe is a frame container, several scores are computed, one for each frame of the probe. This is achieved by using the algorithm's score_for_multiple_probes function. The final result is, hence, a fusion of several scores.

Parameters:
- model : object – The model, in the type desired by the wrapped algorithm.
- probe : bob.bio.video.FrameContainer – The selected frames of the probe object, in the type desired by the wrapped algorithm.

Returns:
- score : float – A fused score between the given model and all probe frames.
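The per-frame scoring and fusion described above can be sketched in plain Python. Mean fusion is an assumption made for this illustration; the actual fusion is whatever the wrapped algorithm's score_for_multiple_probes implements:

```python
# Illustrative sketch of fusing per-frame probe scores (mean fusion assumed).
def fuse_probe_scores(model, probe_frames, frame_score):
    """Compute one raw score per probe frame and fuse them into one value."""
    scores = [frame_score(model, frame) for frame in probe_frames]
    return sum(scores) / len(scores)
```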
score_for_multiple_probes(model, probes) → score[source]¶
Computes the score between the given model and the given list of probes. As each probe is a frame container, several scores are computed, one for each frame of each probe. This is achieved by using the algorithm's score_for_multiple_probes function. The final result is, hence, a fusion of several scores.

Parameters:
- model : object – The model, in the type desired by the wrapped algorithm.
- probes : [bob.bio.video.FrameContainer] – The selected frames of the probe objects, in the type desired by the wrapped algorithm.

Returns:
- score : float – A fused score between the given model and all probe frames.
train_enroller(training_frames, enroller_file)[source]¶
Trains the enroller with the features of the given frames.

Note: This function is not called when the given algorithm does not require enroller training.

This function trains the enroller using all data from the selected frames of the training data.

Parameters:
- training_frames : [[bob.bio.video.FrameContainer]] – The set of training frames, aligned by client, which will be used to perform enroller training of the algorithm.
- enroller_file : str – The name of the enroller file that should be written.
train_projector(training_frames, projector_file)[source]¶
Trains the projector with the features of the given frames.

Note: This function is not called when the given algorithm does not require projector training.

This function trains the projector using all data from the selected frames of the training data. The training_frames must be aligned by client if the given algorithm requires that.

Parameters:
- training_frames : [bob.bio.video.FrameContainer] or [[bob.bio.video.FrameContainer]] – The set of training frames, which will be used to perform projector training of the algorithm.
- projector_file : str – The name of the projector file that should be written.
write_feature(frames, projected_file)[source]¶
Writes the projected features to file. The algorithm's write_feature function is used to write the features for each frame.

Parameters:
- frames : bob.bio.video.FrameContainer – The projected features for the selected frames, as returned by the project() function.
- projected_file : str – The file name to write the projected features into.
class bob.bio.video.database.MobioBioDatabase(original_directory=None, original_extension=None, annotation_directory=None, annotation_extension='.pos', **kwargs)¶
Bases: bob.bio.base.database.ZTBioDatabase

MOBIO database implementation of the bob.bio.base.database.ZTBioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the Mobio database, for verification experiments (good to use in the bob.bio.base framework).

class bob.bio.video.database.VideoBioFile(client_id, path, file_id, **kwargs)¶

class bob.bio.video.database.YoutubeBioDatabase(original_directory=None, original_extension='.jpg', annotation_extension='.labeled_faces.txt', **kwargs)¶
Bases: bob.bio.base.database.ZTBioDatabase

YouTube Faces database implementation of the bob.bio.base.database.ZTBioDatabase interface. It is an extension of an SQL-based database interface that directly talks to the bob.db.youtube.Database database, for verification experiments (good to use in the bob.bio framework).

original_directory¶