Python API¶
This section includes information for using the Python API of bob.fusion.base.
Summary¶
Algorithms¶
- bob.fusion.base.algorithm.Algorithm — A class to be used in score fusion.
- bob.fusion.base.algorithm.AlgorithmBob — A class to be used in score fusion using bob machines.
- bob.fusion.base.algorithm.Empty — Empty algorithm; applies only the preprocessors.
- bob.fusion.base.algorithm.Weighted_Sum — Weighted sum (default: mean).
- bob.fusion.base.algorithm.GMM — GMM score fusion.
Preprocessors¶
- bob.fusion.base.preprocessor.Tanh — A tanh feature scaler.
- bob.fusion.base.preprocessor.ZNorm — ZNorm feature scaler; works like sklearn.preprocessing.StandardScaler but estimates its statistics from zero-effort impostors only.
Fusion Algorithms¶
- class bob.fusion.base.algorithm.Algorithm(preprocessors=None, classifier=None, **kwargs)¶
Bases:
object
A class to be used in score fusion
- classifier¶
- preprocessors¶
- __init__(preprocessors=None, classifier=None, **kwargs)[source]¶
- Parameters
preprocessors (list) – An optional list of preprocessors that follow the API of sklearn.preprocessing.StandardScaler; in particular, fit_transform and transform must be implemented.
classifier – An instance of a class that implements fit(X[, y]) and decision_function(X), like sklearn.linear_model.LogisticRegression.
**kwargs – All extra keyword arguments.
- fuse(scores)[source]¶
- scores: numpy.ndarray
A numpy.ndarray with the shape of (n_samples, n_systems).
Returns:
- fused_score: numpy.ndarray
The fused scores, with the shape of (n_samples,).
- load(model_file)[source]¶
Load the algorithm the same way it was saved. A new instance will be returned.
Returns:
- loaded_algorithm: Algorithm
A new instance of the loaded algorithm.
- preprocess(scores)[source]¶
scores: a numpy.ndarray with the shape of (n_samples, n_systems). Returns the transformed scores.
- save(model_file)[source]¶
Save the instance of the algorithm.
- model_file: str
A path to save the file. Please note that file objects are not accepted. The filename MUST end with “.pkl”. Also, an algorithm may save itself in multiple files with different extensions such as model_file and model_file[:-3]+’hdf5’.
- train(train_neg, train_pos, devel_neg=None, devel_pos=None)[source]¶
If you use development data for training you need to override this method.
- train_neg: numpy.ndarray
The negative training data; a numpy.ndarray with the shape of (n_samples, n_systems).
- train_pos: numpy.ndarray
The positive training data; a numpy.ndarray with the shape of (n_samples, n_systems).
- devel_neg, devel_pos: numpy.ndarray
Same as the training data but used for development (validation).
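As a sketch of the train/fuse flow this class implements, written with sklearn directly (the synthetic score data and the StandardScaler/LogisticRegression choice here are illustrative assumptions, not mandated by the API): the preprocessors transform the (n_samples, n_systems) score matrix, and the classifier's decision_function produces the fused scores.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Synthetic scores from two systems: impostors around -1, genuines around +1.
rng = np.random.default_rng(0)
train_neg = rng.normal(-1.0, 1.0, size=(200, 2))  # negative (impostor) scores
train_pos = rng.normal(+1.0, 1.0, size=(200, 2))  # positive (genuine) scores

X = np.vstack([train_neg, train_pos])
y = np.hstack([np.zeros(len(train_neg)), np.ones(len(train_pos))])

# Preprocessor following the StandardScaler API, then a classifier
# implementing fit(X, y) and decision_function(X).
scaler = StandardScaler().fit(X)
clf = LogisticRegression().fit(scaler.transform(X), y)

# Fusing new (n_samples, n_systems) scores yields shape (n_samples,).
scores = np.array([[2.0, 1.5], [-2.0, -1.0]])
fused = clf.decision_function(scaler.transform(scores))
```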
- class bob.fusion.base.algorithm.AlgorithmBob(preprocessors=None, classifier=None, **kwargs)¶
Bases:
Algorithm
A class to be used in score fusion using bob machines.
- class bob.fusion.base.algorithm.Empty(**kwargs)¶
Bases:
Algorithm
Empty algorithm. This algorithm does not change the scores itself and only applies the preprocessors.
- __init__(**kwargs)[source]¶
- Parameters
preprocessors (list) – An optional list of preprocessors that follow the API of sklearn.preprocessing.StandardScaler; in particular, fit_transform and transform must be implemented.
classifier – An instance of a class that implements fit(X[, y]) and decision_function(X), like sklearn.linear_model.LogisticRegression.
**kwargs – All extra keyword arguments.
- class bob.fusion.base.algorithm.GMM(number_of_gaussians=None, gmm_training_iterations=25, training_threshold=0.0005, variance_threshold=0.0005, update_weights=True, update_means=True, update_variances=True, init_seed=5489, **kwargs)¶
Bases:
AlgorithmBob
GMM Score fusion
- __init__(number_of_gaussians=None, gmm_training_iterations=25, training_threshold=0.0005, variance_threshold=0.0005, update_weights=True, update_means=True, update_variances=True, init_seed=5489, **kwargs)[source]¶
- Parameters
preprocessors (list) – An optional list of preprocessors that follow the API of sklearn.preprocessing.StandardScaler; in particular, fit_transform and transform must be implemented.
classifier – An instance of a class that implements fit(X[, y]) and decision_function(X), like sklearn.linear_model.LogisticRegression.
**kwargs – All extra keyword arguments.
- train(train_neg, train_pos, devel_neg=None, devel_pos=None)[source]¶
If you use development data for training you need to override this method.
- train_neg: numpy.ndarray
The negative training data; a numpy.ndarray with the shape of (n_samples, n_systems).
- train_pos: numpy.ndarray
The positive training data; a numpy.ndarray with the shape of (n_samples, n_systems).
- devel_neg, devel_pos: numpy.ndarray
Same as the training data but used for development (validation).
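The idea behind GMM score fusion can be sketched as follows. This class trains bob GMM machines; the snippet below uses sklearn.mixture.GaussianMixture as a stand-in (an assumption for illustration, not the actual implementation): fit one mixture to the positive scores and one to the negative scores, then fuse with the log-likelihood ratio.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic two-system scores: impostors around -1, genuines around +1.
rng = np.random.default_rng(5489)  # matches the init_seed default above
train_neg = rng.normal(-1.0, 1.0, size=(500, 2))
train_pos = rng.normal(+1.0, 1.0, size=(500, 2))

# One mixture per class (stand-in for the bob GMM machines).
gmm_pos = GaussianMixture(n_components=2, random_state=5489).fit(train_pos)
gmm_neg = GaussianMixture(n_components=2, random_state=5489).fit(train_neg)

# Fused score = log p(scores | genuine) - log p(scores | impostor).
scores = np.array([[1.5, 1.0], [-1.5, -1.0]])
fused = gmm_pos.score_samples(scores) - gmm_neg.score_samples(scores)
```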
- class bob.fusion.base.algorithm.Weighted_Sum(weights=None, **kwargs)¶
Bases:
Algorithm
Weighted sum (default: mean).
- __init__(weights=None, **kwargs)[source]¶
- Parameters
preprocessors (list) – An optional list of preprocessors that follow the API of sklearn.preprocessing.StandardScaler; in particular, fit_transform and transform must be implemented.
classifier – An instance of a class that implements fit(X[, y]) and decision_function(X), like sklearn.linear_model.LogisticRegression.
**kwargs – All extra keyword arguments.
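What Weighted_Sum computes can be written in a few lines of numpy: a weighted sum of the per-system scores, which falls back to the plain mean when no weights are given (the score matrix and weights below are illustrative):

```python
import numpy as np

# (n_samples, n_systems) score matrix from two samples, three systems.
scores = np.array([[0.2, 0.8, 0.5],
                   [0.9, 0.7, 0.8]])

# Default behaviour (weights=None): the mean across systems.
fused_mean = scores.mean(axis=1)

# With explicit (hypothetical) weights: a weighted sum per sample.
weights = np.array([0.5, 0.3, 0.2])
fused_weighted = scores @ weights
```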
Fusion Preprocessors¶
- class bob.fusion.base.preprocessor.Tanh(copy=True, **kwargs)¶
Bases:
StandardScaler
A tanh feature scaler:
\[0.5 \left( \tanh\left( 0.01 \cdot \frac{X - \mu}{\sigma}\right) + 1 \right)\]
This scaler is both efficient and robust to outliers.
The original formulation in Hampel, Frank R., et al. "Robust statistics: the approach based on influence functions." (1986) uses an influence function, but that is not used here.
- __init__(copy=True, **kwargs)[source]¶
Initialize self. See help(type(self)) for accurate signature.
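The formula above, written out with numpy (the sample data is illustrative; \(\mu\) and \(\sigma\) are the per-column mean and standard deviation estimated at fit time):

```python
import numpy as np

# One system's scores, including a large outlier.
X = np.array([[0.0], [1.0], [2.0], [100.0]])
mu, sigma = X.mean(axis=0), X.std(axis=0)

# 0.5 * (tanh(0.01 * (X - mu) / sigma) + 1): maps every score into
# (0, 1) and strongly compresses the outlier.
scaled = 0.5 * (np.tanh(0.01 * (X - mu) / sigma) + 1.0)
```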
- class bob.fusion.base.preprocessor.ZNorm(copy=True, **kwargs)¶
Bases:
StandardScaler
ZNorm feature scaler. This scaler works just like sklearn.preprocessing.StandardScaler but takes only the zero-effort impostors into account when estimating the mean and standard deviation. You should not use this scaler when PAD scores are present.
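The ZNorm idea can be sketched in plain numpy (the score values are illustrative): estimate the mean and standard deviation on the zero-effort-impostor scores only, then apply that standardization to all scores.

```python
import numpy as np

neg = np.array([[-2.0], [-1.0], [0.0]])  # zero-effort impostor scores
pos = np.array([[3.0], [4.0]])           # genuine scores

# Statistics come from the negatives only, unlike StandardScaler,
# which would fit on the full data.
mu, sigma = neg.mean(axis=0), neg.std(axis=0)

neg_scaled = (neg - mu) / sigma  # zero mean by construction
pos_scaled = (pos - mu) / sigma  # shifted well above zero
```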
Fusion Scripts¶
- bob.fusion.base.script.routine_fusion(algorithm, model_file, scores_train_lines, scores_train, train_neg, train_pos, fused_train_file, scores_dev_lines=None, scores_dev=None, dev_neg=None, dev_pos=None, fused_dev_file=None, scores_eval_lines=None, scores_eval=None, fused_eval_file=None, force=False, min_file_size=1000, do_training=True)[source]¶