Performs the UBM training

This is a legacy algorithm: the API has changed since it was implemented, so new versions and forks will need to be updated.

Algorithms have at least one input and one output. All algorithm endpoints are organized in groups. Groups are used by the platform to indicate which inputs and outputs are synchronized together. The first group is automatically synchronized with the channel defined by the block in which the algorithm is deployed.

Unnamed group

Endpoint Name   Data Format                Nature
features        system/array_2d_floats/1   Input
ubm             tutorial/gmm/1             Output

Parameters allow users to change the configuration of an algorithm when scheduling an experiment.

Name                           Description   Type     Default   Range/Choices
number-of-gaussians                          uint32   100
maximum-number-of-iterations                 uint32   10
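
The setup() method in the code below copies these values onto the algorithm object before any data is processed. As a minimal sketch of how the declared defaults combine with user overrides, assuming the platform hands setup() a plain Python dictionary:

# Minimal sketch (assumption: the platform passes a plain dict containing the
# declared defaults plus whatever the user overrode when scheduling the experiment).
defaults = {'number-of-gaussians': 100, 'maximum-number-of-iterations': 10}
user_overrides = {'number-of-gaussians': 256}

parameters = dict(defaults, **user_overrides)

number_of_gaussians = parameters.get('number-of-gaussians', 100)      # -> 256
max_iterations = parameters.get('maximum-number-of-iterations', 10)   # -> 10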
The code for this algorithm, in Python:

import bob
import numpy


class Algorithm:

    def __init__(self):
        self.number_of_gaussians = 100
        self.max_iterations = 10
        self.data = []

    def setup(self, parameters):
        self.number_of_gaussians = parameters.get('number-of-gaussians',
                                                  self.number_of_gaussians)

        self.max_iterations = parameters.get('maximum-number-of-iterations',
                                             self.max_iterations)

        return True

    def process(self, inputs, outputs):
        # accumulate every incoming block of features
        self.data.append(inputs["features"].data.value)

        if not inputs.hasMoreData():
            # create the array set used for training
            training_set = numpy.vstack(self.data)
            input_size = training_set.shape[1]

            # create the KMeans and UBM machines
            kmeans = bob.machine.KMeansMachine(int(self.number_of_gaussians), input_size)
            ubm = bob.machine.GMMMachine(int(self.number_of_gaussians), input_size)

            # create the KMeansTrainer
            kmeans_trainer = bob.trainer.KMeansTrainer()
            kmeans_trainer.initialization_method = bob.trainer.KMeansTrainer.RANDOM_NO_DUPLICATE
            kmeans_trainer.max_iterations = int(self.max_iterations)

            # train using the KMeansTrainer
            kmeans_trainer.train(kmeans, training_set)

            (variances, weights) = kmeans.get_variances_and_weights_for_each_cluster(training_set)
            means = kmeans.means

            # initialize the GMM with the k-means estimates
            ubm.means = means
            ubm.variances = variances
            ubm.weights = weights

            # train the GMM with the Maximum Likelihood (ML) trainer
            trainer = bob.trainer.ML_GMMTrainer()
            trainer.max_iterations = int(self.max_iterations)
            trainer.train(ubm, training_set)

            # write the trained UBM to the output
            outputs["ubm"].write({
                'weights':              ubm.weights,
                'means':                ubm.means,
                'variances':            ubm.variances,
                'variance_thresholds':  ubm.variance_thresholds,
            })

        return True


For Gaussian Mixture Models (GMMs), this algorithm implements the Universal Background Model (UBM) training described in [Reynolds2000].

First, this algorithm estimates the means, diagonal covariance matrices, and weights of each Gaussian component using k-means clustering. Then, only the means are re-estimated using the Maximum Likelihood (ML) estimator.

This algorithm relies on the Bob library.
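
The code above uses the legacy Bob API. As a rough, non-authoritative illustration of the same two-stage idea with scikit-learn (an assumption for clarity, not part of this algorithm; note that scikit-learn's EM step re-estimates weights and variances as well, not only the means):

import numpy
from sklearn.mixture import GaussianMixture

# Toy training set: rows are samples, columns are feature dimensions.
rng = numpy.random.default_rng(0)
training_set = rng.normal(size=(1000, 20))

ubm = GaussianMixture(
    n_components=100,         # number-of-gaussians
    covariance_type='diag',   # diagonal covariance matrices, as above
    init_params='kmeans',     # stage 1: k-means initialization
    max_iter=10,              # maximum-number-of-iterations
    random_state=0,
)
ubm.fit(training_set)         # stage 2: ML (EM) re-estimation
print(ubm.weights_.shape, ubm.means_.shape, ubm.covariances_.shape)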

The input, features, is a training set of floating-point vectors given as a two-dimensional array of 64-bit floats, where the number of rows corresponds to the number of training samples and the number of columns to their dimensionality. The output, ubm, is the GMM trained using the ML estimator.
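
A small sketch of the expected shapes (the sizes, the threshold value, and the per-Gaussian row layout are assumptions for illustration, based on the dictionary written by the code above):

import numpy

n_samples, dimensionality, n_gaussians = 500, 60, 100

# "features" input: one training sample per row, float64.
features = numpy.zeros((n_samples, dimensionality), dtype=numpy.float64)

# "ubm" output: a dictionary of GMM parameters, one row per Gaussian component.
ubm = {
    'weights':             numpy.full(n_gaussians, 1.0 / n_gaussians),
    'means':               numpy.zeros((n_gaussians, dimensionality)),
    'variances':           numpy.ones((n_gaussians, dimensionality)),
    'variance_thresholds': numpy.full((n_gaussians, dimensionality), 1e-5),  # placeholder floor value
}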

[Reynolds2000] Reynolds, Douglas A., Thomas F. Quatieri, and Robert B. Dunn. "Speaker verification using adapted Gaussian mixture models." Digital Signal Processing 10.1 (2000): 19-41.

Experiments

Updated   Name                                           Databases/Protocols   Analyzers
          martabarrero/smarcel/full_isv/1/Prueba_ISV_2   banca/1@Mc            tutorial/eerhter_postperf/1
[Version history graph: tpereira/ubm_training/1, tpereira/ubm_training/2, tpereira/ubm_training_nomalize_kmeans/1, tpereira/ubm_training/6; timeline spans Aug 2014 to Jul 2015]

This table shows the number of times this algorithm has been successfully run using the given environment. Note that this does not provide enough information to evaluate whether the algorithm will run when submitted under different conditions.
