Bob 2.0 training of two GMMs for two types of features

This is a legacy algorithm: the platform API has changed since it was implemented, so new versions and forks will need to be updated accordingly.

Algorithms have at least one input and one output. All algorithm endpoints are organized in groups. Groups are used by the platform to indicate which inputs and outputs are synchronized together. The first group is automatically synchronized with the channel defined by the block in which the algorithm is deployed.

Group: main

Endpoint Name   Data Format                   Nature
features        system/array_2d_floats/1     Input
class           system/text/1                Input
classifier      pkorshunov/two-classes-gmm/1 Output
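
For illustration only, the "main" group above is what ties these inputs and the output together for synchronization. A rough Python-dictionary sketch of how such a declaration might look follows; the field names are assumptions, not the platform's exact schema:

# Illustrative sketch only: approximate shape of an algorithm declaration
# with one synchronized group; the exact schema used by the platform may differ.
declaration = {
    "language": "python",
    "groups": [
        {
            "name": "main",
            "inputs": {
                "features": {"type": "system/array_2d_floats/1"},
                "class": {"type": "system/text/1"},
            },
            "outputs": {
                "classifier": {"type": "pkorshunov/two-classes-gmm/1"},
            },
        },
    ],
}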

Parameters allow users to change the configuration of an algorithm when scheduling an experiment.

Name                          Description                                             Type    Default  Range/Choices
number-of-gaussians           The number of Gaussian components                       uint32  100
maximum-number-of-iterations  The maximum number of iterations for the EM algorithm   uint32  10
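
For example, values scheduled for an experiment reach the algorithm's setup() method (shown in the listing below) as a dictionary keyed by the parameter names above, and anything not supplied keeps its default. A minimal sketch, assuming the Algorithm class from that listing is available; the call pattern is illustrative:

# Hypothetical illustration: scheduled parameter values arrive as a dictionary
# keyed by parameter name; unspecified parameters keep their defaults
# (100 Gaussians, 10 EM iterations).
algorithm = Algorithm()
algorithm.setup({'number-of-gaussians': 512})
assert algorithm.number_of_gaussians == 512
assert algorithm.max_iterations == 10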
import numpy
import bob.learn.em


class Algorithm:

    def __init__(self):
        self.number_of_gaussians = 100
        self.max_iterations = 10
        self.positives = []
        self.negatives = []

    def setup(self, parameters):
        self.number_of_gaussians = parameters.get('number-of-gaussians',
                                                  self.number_of_gaussians)
        self.max_iterations = parameters.get('maximum-number-of-iterations',
                                             self.max_iterations)
        return True

    def create_gmm(self, training_data):
        input_size = training_data.shape[1]

        # create the k-means and GMM machines
        kmeans = bob.learn.em.KMeansMachine(int(self.number_of_gaussians), input_size)
        gmm = bob.learn.em.GMMMachine(int(self.number_of_gaussians), input_size)

        # create the k-means trainer
        kmeans_trainer = bob.learn.em.KMeansTrainer('RANDOM_NO_DUPLICATE')

        # run k-means to find an initial clustering of the training data
        bob.learn.em.train(kmeans_trainer, kmeans, training_data, int(self.max_iterations))

        (variances, weights) = kmeans.get_variances_and_weights_for_each_cluster(training_data)
        means = kmeans.means

        # initialize the GMM from the k-means solution
        gmm.means = means
        gmm.variances = variances
        gmm.weights = weights

        # train the GMM with maximum-likelihood EM
        trainer = bob.learn.em.ML_GMMTrainer()
        bob.learn.em.train(trainer, gmm, training_data, int(self.max_iterations))

        return gmm

    def process(self, inputs, outputs):
        # accumulate the input data in separate containers
        # for real accesses and attacks
        feature_vector = inputs["features"].data.value
        if inputs["class"].data.text == 'real':
            self.positives.append(feature_vector)
        else:
            self.negatives.append(feature_vector)

        if not inputs.hasMoreData():
            # create the array sets used for training
            self.positives = numpy.vstack(self.positives)
            self.negatives = numpy.vstack(self.negatives)

            gmm_real = self.create_gmm(self.positives)
            gmm_attacks = self.create_gmm(self.negatives)

            # output the two trained models
            outputs["classifier"].write({
                "model_one": {
                    'weights':              gmm_real.weights,
                    'means':                gmm_real.means,
                    'variances':            gmm_real.variances,
                    'variance_thresholds':  gmm_real.variance_thresholds,
                },
                "model_two": {
                    'weights':              gmm_attacks.weights,
                    'means':                gmm_attacks.means,
                    'variances':            gmm_attacks.variances,
                    'variance_thresholds':  gmm_attacks.variance_thresholds,
                },
            })

        return True

The code for this algorithm in Python

Implements GMM-based training, with one GMM model for each of the two types of data.
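
Since the "classifier" output stores two complete GMMs (one trained on real accesses, one on attacks), a downstream block could rebuild both models and score a test feature with a log-likelihood ratio. A minimal sketch, assuming the same legacy bob.learn.em API used in the listing above; the helper names (load_gmm, score) and the scoring rule are illustrative, not part of this algorithm:

import numpy
import bob.learn.em


def load_gmm(model):
    # 'model' is assumed to be one of the dictionaries written to the
    # "classifier" output above (model_one or model_two)
    means = numpy.asarray(model['means'])
    n_gaussians, dim = means.shape
    gmm = bob.learn.em.GMMMachine(int(n_gaussians), int(dim))
    gmm.means = means
    gmm.variances = numpy.asarray(model['variances'])
    gmm.weights = numpy.asarray(model['weights'])
    gmm.variance_thresholds = numpy.asarray(model['variance_thresholds'])
    return gmm


def score(classifier, feature):
    # log-likelihood ratio: positive scores favour the 'real' model
    gmm_real = load_gmm(classifier['model_one'])
    gmm_attack = load_gmm(classifier['model_two'])
    return gmm_real.log_likelihood(feature) - gmm_attack.log_likelihood(feature)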

Experiments

Name: pkorshunov/pkorshunov/isv-asv-pad-fusion-complete/1/asv_isv-pad_gmm-fusion_lr-pa
Databases/Protocols: avspoof/2@physicalaccess_verification, avspoof/2@physicalaccess_verify_train, avspoof/2@physicalaccess_verify_train_spoof, avspoof/2@physicalaccess_antispoofing, avspoof/2@physicalaccess_verification_spoof
Analyzers: pkorshunov/spoof-score-fusion-roc_hist/1

Name: pkorshunov/pkorshunov/speech-pad-simple/1/speech-pad_gmm-pa
Databases/Protocols: avspoof/2@physicalaccess_antispoofing
Analyzers: pkorshunov/simple_antispoofing_analyzer/4
[Version history chart: pkorshunov/gmm-spoofing/4, 2016 Apr 1]

This table shows the number of times this algorithm has been successfully run in the given environment. Note that this does not provide sufficient information to evaluate whether the algorithm will run under different conditions.
