Bob 2.0 projection of features on a GMM model.
Algorithms have at least one input and one output. All algorithm endpoints are organized into groups, which the platform uses to indicate which inputs and outputs are synchronized together. The first group is automatically synchronized with the channel defined by the block in which the algorithm is deployed (a sketch of the corresponding JSON declaration follows the table below).
Endpoint Name | Data Format | Nature
---|---|---
features | system/array_2d_floats/1 | Input
classifier | pkorshunov/two-classes-gmm/1 | Input
scores | system/float/1 | Output
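The endpoint declaration itself lives in the algorithm's JSON metadata, which is not reproduced on this page; the sketch below is only an illustration, assuming the usual BEAT algorithm declaration layout (a single ``groups`` entry with per-endpoint ``type`` fields):

```json
{
    "language": "python",
    "splittable": false,
    "groups": [
        {
            "inputs": {
                "features": { "type": "system/array_2d_floats/1" },
                "classifier": { "type": "pkorshunov/two-classes-gmm/1" }
            },
            "outputs": {
                "scores": { "type": "system/float/1" }
            }
        }
    ]
}
```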
The code for this algorithm in Python:
```python
import numpy
import bob.learn.em


def gmm_from_data(data):
    """Loads a bob.learn.em.GMMMachine from a BEAT data object"""
    # the means matrix is (number of components) x (feature dimension)
    dim_c, dim_d = data.means.shape
    gmm = bob.learn.em.GMMMachine(dim_c, dim_d)
    gmm.weights = data.weights
    gmm.means = data.means
    gmm.variances = data.variances
    gmm.variance_thresholds = data.variance_thresholds
    return gmm


class Algorithm:

    def __init__(self):
        self.gmm_one = None
        self.gmm_two = None

    def process(self, inputs, outputs):
        # retrieve the classifier (with its two GMM models) only once
        if self.gmm_one is None:
            two_gmms = inputs['classifier'].data
            self.gmm_one = gmm_from_data(two_gmms.model_one)
            self.gmm_two = gmm_from_data(two_gmms.model_two)

        # read the features and project them on both GMMs; the result of
        # each projection is a log-likelihood
        projection_one = 0
        projection_two = 0
        features = inputs['features'].data.value
        for feature in features:
            projection_one += self.gmm_one.log_likelihood(feature)
            projection_two += self.gmm_two.log_likelihood(feature)
        projection_one /= features.shape[0]
        projection_two /= features.shape[0]

        # the score is the difference between the two average log-likelihoods
        score = projection_one - projection_two

        # output the projection score
        outputs['scores'].write({
            'value': score
        })
        return True
```
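As a quick sanity check of the scoring logic, ``process()`` can be exercised with stand-ins for the BEAT data-flow objects. None of the ``Stub*`` classes below exist in the BEAT API; they are hypothetical mocks that only mimic the attributes the method touches, and the sketch assumes the ``Algorithm`` class above (and its imports) is available:

```python
import numpy


class StubGMM:
    """Hypothetical stand-in for bob.learn.em.GMMMachine: fixed log-likelihood."""

    def __init__(self, log_likelihood_value):
        self.value = log_likelihood_value

    def log_likelihood(self, feature):
        return self.value


class StubData:
    def __init__(self, value):
        self.value = value


class StubInput:
    def __init__(self, data):
        self.data = data


class StubOutput:
    def __init__(self):
        self.written = []

    def write(self, data):
        self.written.append(data)


algorithm = Algorithm()
# install the stub models directly, bypassing the 'classifier' input
algorithm.gmm_one = StubGMM(-1.0)
algorithm.gmm_two = StubGMM(-3.0)

features = numpy.zeros((10, 60))  # 10 frames of 60-dimensional features
inputs = {'features': StubInput(StubData(features))}
outputs = {'scores': StubOutput()}

algorithm.process(inputs, outputs)
print(outputs['scores'].written)  # [{'value': 2.0}]
```

Since every frame gets the same stub log-likelihood, the expected score is simply the difference of the two fixed values, here -1.0 - (-3.0) = 2.0.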
For a given set of feature vectors and two Gaussian Mixture Models (GMMs), this algorithm computes the average log-likelihood of the features under each model and outputs the difference of the two averages as the score.
The underlying GMM-based modeling, including the Maximum-a-posteriori (MAP) adaptation used to build such models, is described in: Reynolds, Douglas A., Thomas F. Quatieri, and Robert B. Dunn. "Speaker verification using adapted Gaussian mixture models." Digital Signal Processing 10.1 (2000): 19-41. A very good description of how MAP estimation works can be found on the `Mathematical Monk <https://www.youtube.com/watch?v=kkhdIriddSI&index=31&list=PLD0F06AA0D2E8FFBA&spfreload=1>`_ YouTube channel.
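In symbols, for the :math:`N` input feature vectors :math:`x_i` and the two models :math:`\lambda_{one}` and :math:`\lambda_{two}`, the score written to the output is exactly what ``process()`` above computes:

.. math::

   \text{score} = \frac{1}{N}\sum_{i=1}^{N}\log p(x_i \mid \lambda_{one}) \;-\; \frac{1}{N}\sum_{i=1}^{N}\log p(x_i \mid \lambda_{two})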
This algorithm relies on the `Bob <http://www.idiap.ch/software/bob/>`_ library.
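To make that dependency concrete, here is a minimal, illustrative sketch of building and querying a ``bob.learn.em.GMMMachine`` directly, as ``gmm_from_data()`` above does. The dimensions and numbers are arbitrary, and Bob 2.x with ``bob.learn.em`` installed is assumed:

```python
import numpy
import bob.learn.em

# a toy GMM with 2 components over 3-dimensional features
gmm = bob.learn.em.GMMMachine(2, 3)
gmm.weights = numpy.array([0.4, 0.6])
gmm.means = numpy.array([[0., 0., 0.],
                         [1., 1., 1.]])
gmm.variances = numpy.ones((2, 3))

# log p(x | model) for a single feature vector, as used in process() above
print(gmm.log_likelihood(numpy.array([0.5, 0.5, 0.5])))
```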
Experiments using this algorithm:

Name | Databases/Protocols | Analyzers
---|---|---
pkorshunov/pkorshunov/isv-asv-pad-fusion-complete/1/asv_isv-pad_gmm-fusion_lr-pa | avspoof/2@physicalaccess_verification, avspoof/2@physicalaccess_verify_train, avspoof/2@physicalaccess_verify_train_spoof, avspoof/2@physicalaccess_antispoofing, avspoof/2@physicalaccess_verification_spoof | pkorshunov/spoof-score-fusion-roc_hist/1
The platform also records the number of times this algorithm has been successfully run in each execution environment. Note that this does not provide sufficient information to evaluate whether the algorithm will run when submitted under different conditions.