Python API¶
This section includes information for using the Python API of bob.fusion.base.
Summary¶
Algorithms¶
- Algorithm – A class to be used in score fusion.
- AlgorithmBob – A class to be used in score fusion using bob machines.
- Empty – Empty algorithm; only applies the preprocessors.
- Weighted_Sum – Weighted sum (default: mean).
- GMM – GMM score fusion.
Preprocessors¶
- Tanh – A tanh feature scaler.
- ZNorm – A ZNorm feature scaler; works just like sklearn.preprocessing.StandardScaler.
Fusion Algorithms¶
- class bob.fusion.base.algorithm.Algorithm(preprocessors=None, classifier=None, **kwargs)¶
Bases:
object
A class to be used in score fusion
- classifier¶
- preprocessors¶
- __init__(preprocessors=None, classifier=None, **kwargs)[source]¶
- Parameters:
- preprocessors (list) – An optional list of preprocessors that follow the API of sklearn.preprocessing.StandardScaler. In particular, fit_transform and transform must be implemented.
- classifier – An instance of a class that implements fit(X[, y]) and decision_function(X), like sklearn.linear_model.LogisticRegression.
- **kwargs – All extra keyword arguments.
- fuse(scores)[source]¶
- Parameters:
- scores (numpy.ndarray) – An array with the shape of (n_samples, n_systems).
Returns:
- fused_score (numpy.ndarray) – The fused scores, with the shape of (n_samples,).
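The shape contract of fuse() can be illustrated with a minimal sketch. The MeanClassifier below is a hypothetical stand-in for a classifier implementing decision_function(X), not the library's code; the point is only that (n_samples, n_systems) scores go in and (n_samples,) fused scores come out.

```python
import numpy as np

# Hypothetical stand-in classifier implementing the documented API:
# fit(X[, y]) and decision_function(X).
class MeanClassifier:
    def fit(self, X, y=None):
        return self

    def decision_function(self, X):
        return X.mean(axis=1)  # one fused score per sample

scores = np.random.rand(5, 3)                   # (n_samples, n_systems)
fused = MeanClassifier().fit(scores).decision_function(scores)
assert fused.shape == (5,)                      # (n_samples,)
```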
- load(model_file)[source]¶
Load the algorithm the same way it was saved. A new instance will be returned.
Returns:
- loaded_algorithm (Algorithm) – A new instance of the loaded algorithm.
- preprocess(scores)[source]¶
- Parameters:
- scores (numpy.ndarray) – An array with the shape of (n_samples, n_systems).
Returns:
The transformed scores.
- save(model_file)[source]¶
Save the instance of the algorithm.
- model_file: str
A path to save the file. Note that file objects are not accepted; the filename MUST end with ".pkl". An algorithm may also save itself in multiple files with different extensions, such as model_file and model_file[:-3] + 'hdf5'.
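A hedged sketch of the ".pkl" save/load round-trip the docstring describes, using plain pickle; SimpleAlgorithm is a hypothetical stand-in, not the library's Algorithm class, and the real save()/load() may store additional files.

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for an algorithm object to be persisted.
class SimpleAlgorithm:
    def __init__(self, weights):
        self.weights = weights

model_file = os.path.join(tempfile.mkdtemp(), "fusion.pkl")
assert model_file.endswith(".pkl")  # the documented filename requirement

with open(model_file, "wb") as f:
    pickle.dump(SimpleAlgorithm([0.5, 0.5]), f)

with open(model_file, "rb") as f:
    loaded = pickle.load(f)          # a new instance, as load() promises

assert loaded.weights == [0.5, 0.5]
```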
- train(train_neg, train_pos, devel_neg=None, devel_pos=None)[source]¶
If you use development data for training, you need to override this method.
- Parameters:
- train_neg (numpy.ndarray) – Negative training data, with the shape of (n_samples, n_systems).
- train_pos (numpy.ndarray) – Positive training data, with the shape of (n_samples, n_systems).
- devel_neg, devel_pos (numpy.ndarray) – Same as train_neg and train_pos, but used for development (validation).
- class bob.fusion.base.algorithm.AlgorithmBob(preprocessors=None, classifier=None, **kwargs)¶
Bases:
Algorithm
A class to be used in score fusion using bob machines.
- class bob.fusion.base.algorithm.Empty(**kwargs)¶
Bases:
Algorithm
Empty algorithm. This algorithm does not change scores by itself and only applies the preprocessors.
- __init__(**kwargs)[source]¶
- Parameters:
- preprocessors (list) – An optional list of preprocessors that follow the API of sklearn.preprocessing.StandardScaler. In particular, fit_transform and transform must be implemented.
- classifier – An instance of a class that implements fit(X[, y]) and decision_function(X), like sklearn.linear_model.LogisticRegression.
- **kwargs – All extra keyword arguments.
- class bob.fusion.base.algorithm.GMM(number_of_gaussians=None, gmm_training_iterations=25, training_threshold=0.0005, variance_threshold=0.0005, update_weights=True, update_means=True, update_variances=True, init_seed=5489, **kwargs)¶
Bases:
AlgorithmBob
GMM Score fusion
- __init__(number_of_gaussians=None, gmm_training_iterations=25, training_threshold=0.0005, variance_threshold=0.0005, update_weights=True, update_means=True, update_variances=True, init_seed=5489, **kwargs)[source]¶
- Parameters:
- preprocessors (list) – An optional list of preprocessors that follow the API of sklearn.preprocessing.StandardScaler. In particular, fit_transform and transform must be implemented.
- classifier – An instance of a class that implements fit(X[, y]) and decision_function(X), like sklearn.linear_model.LogisticRegression.
- **kwargs – All extra keyword arguments.
- train(train_neg, train_pos, devel_neg=None, devel_pos=None)[source]¶
If you use development data for training, you need to override this method.
- Parameters:
- train_neg (numpy.ndarray) – Negative training data, with the shape of (n_samples, n_systems).
- train_pos (numpy.ndarray) – Positive training data, with the shape of (n_samples, n_systems).
- devel_neg, devel_pos (numpy.ndarray) – Same as train_neg and train_pos, but used for development (validation).
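As a rough illustration of GMM-style score fusion (the real class trains full Gaussian mixture models via bob machines; this sketch uses a single diagonal Gaussian per class), samples are scored by the log-likelihood ratio of the positive model against the negative model:

```python
import numpy as np

rng = np.random.default_rng(0)
train_neg = rng.normal(-1.0, 1.0, size=(200, 2))   # (n_samples, n_systems)
train_pos = rng.normal(+1.0, 1.0, size=(200, 2))

def log_gauss(X, mu, var):
    # Diagonal-Gaussian log-density, summed over the score dimensions.
    return (-0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)).sum(axis=1)

mu_n, var_n = train_neg.mean(axis=0), train_neg.var(axis=0)
mu_p, var_p = train_pos.mean(axis=0), train_pos.var(axis=0)

test = np.array([[-1.0, -1.0], [1.0, 1.0]])
llr = log_gauss(test, mu_p, var_p) - log_gauss(test, mu_n, var_n)
assert llr[0] < 0 < llr[1]   # impostor-like scores low, genuine-like high
```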
- class bob.fusion.base.algorithm.Weighted_Sum(weights=None, **kwargs)¶
Bases:
Algorithm
weighted sum (default: mean)
- __init__(weights=None, **kwargs)[source]¶
- Parameters:
- preprocessors (list) – An optional list of preprocessors that follow the API of sklearn.preprocessing.StandardScaler. In particular, fit_transform and transform must be implemented.
- classifier – An instance of a class that implements fit(X[, y]) and decision_function(X), like sklearn.linear_model.LogisticRegression.
- **kwargs – All extra keyword arguments.
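The behaviour described above can be sketched in numpy: with no weights the fused score is the mean across systems, and with explicit weights it is a weighted sum. This is an illustrative sketch of the idea, not the class's own code.

```python
import numpy as np

scores = np.array([[0.2, 0.8],
                   [0.6, 0.4]])        # (n_samples, n_systems)

fused_mean = scores.mean(axis=1)       # default behaviour (weights=None)

weights = np.array([0.25, 0.75])
fused_weighted = scores @ weights      # explicit per-system weights

assert np.allclose(fused_mean, [0.5, 0.5])
assert np.allclose(fused_weighted, [0.65, 0.45])
```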
Fusion Preprocessors¶
- class bob.fusion.base.preprocessor.Tanh(copy=True, **kwargs)¶
Bases:
StandardScaler
A tanh feature scaler:
\[0.5 \left( \tanh\left( 0.01 \cdot \frac{X - \mu}{\sigma}\right) + 1 \right)\]
This scaler is both efficient and robust to outliers.
The original implementation in Hampel, Frank R., et al. "Robust Statistics: The Approach Based on Influence Functions." (1986) uses an influence function, but that is not used here.
- __init__(copy=True, **kwargs)[source]¶
Initialize self. See help(type(self)) for accurate signature.
- fit(X, y=None)[source]¶
Estimates the mean and standard deviation of samples. Only positive samples are used in estimation.
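The scaler formula above can be written out directly in numpy. Following the fit() docstring, mu and sigma are estimated on positive samples only; this is an illustrative sketch, not the class's implementation.

```python
import numpy as np

train_pos = np.array([[1.0], [2.0], [3.0]])        # positive scores only
mu, sigma = train_pos.mean(axis=0), train_pos.std(axis=0)

def tanh_scale(X):
    # 0.5 * (tanh(0.01 * (X - mu) / sigma) + 1)
    return 0.5 * (np.tanh(0.01 * (X - mu) / sigma) + 1.0)

scaled = tanh_scale(train_pos)
assert np.allclose(scaled[1], 0.5)                 # X == mu maps to 0.5
assert ((scaled > 0) & (scaled < 1)).all()         # outputs stay in (0, 1)
```

Because of the 0.01 factor inside the tanh, the mapping is nearly linear around the mean while extreme outliers are squashed into (0, 1).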
- set_inverse_transform_request(*, copy: bool | None | str = '$UNCHANGED$') → Tanh¶
Request metadata passed to the inverse_transform method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to inverse_transform if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to inverse_transform.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- set_partial_fit_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → Tanh¶
Request metadata passed to the partial_fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to partial_fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to partial_fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- set_transform_request(*, copy: bool | None | str = '$UNCHANGED$') → Tanh¶
Request metadata passed to the transform method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to transform.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- class bob.fusion.base.preprocessor.ZNorm(copy=True, **kwargs)¶
Bases:
StandardScaler
ZNorm feature scaler. This scaler works just like sklearn.preprocessing.StandardScaler but only takes the zero-effort impostors into account when estimating the mean and standard deviation. You should not use this scaler when PAD scores are present.
- __init__(copy=True, **kwargs)[source]¶
Initialize self. See help(type(self)) for accurate signature.
- fit(X, y=None)[source]¶
Estimates the mean and standard deviation of samples. Only positive samples are used in estimation.
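The idea behind ZNorm can be sketched in numpy: standardize like StandardScaler, but estimate the mean and standard deviation from one subset of the training samples (the class docstring says the zero-effort impostor scores) and then apply the transform to all scores. This is an illustrative sketch, not the class's implementation.

```python
import numpy as np

neg = np.array([[0.0], [2.0], [4.0]])            # zero-effort impostor scores
all_scores = np.array([[0.0], [2.0], [4.0], [10.0]])

# Statistics come from the impostor subset only...
mu, sigma = neg.mean(axis=0), neg.std(axis=0)

# ...but the standardization is applied to every score.
znormed = (all_scores - mu) / sigma

assert np.isclose(znormed[1, 0], 0.0)            # the impostor mean maps to 0
```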
- set_inverse_transform_request(*, copy: bool | None | str = '$UNCHANGED$') → ZNorm¶
Request metadata passed to the inverse_transform method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to inverse_transform if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to inverse_transform.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- set_partial_fit_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → ZNorm¶
Request metadata passed to the partial_fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to partial_fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to partial_fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- set_transform_request(*, copy: bool | None | str = '$UNCHANGED$') → ZNorm¶
Request metadata passed to the transform method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to transform.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
Fusion Scripts¶
- bob.fusion.base.script.routine_fusion(algorithm, model_file, scores_train_lines, scores_train, train_neg, train_pos, fused_train_file, scores_dev_lines=None, scores_dev=None, dev_neg=None, dev_pos=None, fused_dev_file=None, scores_eval_lines=None, scores_eval=None, fused_eval_file=None, force=False, min_file_size=1000, do_training=True)[source]¶