A face recognition algorithm to compare one probe image against a set of template images.

This is a sequential algorithm: the platform calls its process() method once for each unit of data arriving on its inputs.
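A minimal sketch of this sequential contract, with hypothetical stand-in classes for the platform's input/output wrappers (the real objects are provided by the BEAT runtime; only the process() signature below matches the actual API):

```python
import numpy as np


# Hypothetical stand-ins for the platform's wrappers, only so the
# sketch can run outside the platform.
class _Data:
    def __init__(self, value):
        self.value = value


class _Input:
    def __init__(self, value):
        self.data = _Data(value)


class _Output:
    def __init__(self):
        self.written = []

    def write(self, payload):
        self.written.append(payload)


class EchoAlgorithm:
    """process() is called once per unit of data on the synchronized inputs."""

    def process(self, inputs, data_loaders, outputs):
        # Read the current value on one input, write one result.
        probe = inputs["probe_image"].data.value
        outputs["score"].write({"value": float(probe.mean())})
        return True  # returning False aborts the experiment


inputs = {"probe_image": _Input(np.full((4, 4), 10, dtype=np.uint8))}
outputs = {"score": _Output()}
assert EchoAlgorithm().process(inputs, None, outputs)
```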

Algorithms have at least one input and one output. All algorithm endpoints are organized in groups. Groups are used by the platform to indicate which inputs and outputs are synchronized together. The first group is automatically synchronized with the channel defined by the block in which the algorithm is deployed.

Group: main

  Endpoint Name      Data Format               Nature
  probe_image        system/array_2d_uint8/1   Input
  template_images    system/array_3d_uint8/1   Input
  score              system/float/1            Output
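Concretely, system/array_2d_uint8/1 arrives as a single 2-D grayscale array and system/array_3d_uint8/1 as a 3-D stack of such images. A sketch of the expected shapes (the 112x92 size is purely illustrative; the algorithm resizes images internally):

```python
import numpy as np

# Illustrative shapes only: one 2-D grayscale probe and a stack of
# 5 template images of the same (hypothetical) size.
probe_image = np.zeros((112, 92), dtype=np.uint8)         # system/array_2d_uint8/1
template_images = np.zeros((5, 112, 92), dtype=np.uint8)  # system/array_3d_uint8/1

# Iterating over the first axis of the stack yields one 2-D image per
# template, which is how process() loops over the templates.
assert all(img.shape == probe_image.shape for img in template_images)
```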

Group: model

  Endpoint Name      Data Format               Nature
  model_pb           system/text/1             Input
import numpy as np
import scipy.spatial
import tensorflow.compat.v1 as tf
import base64


def gray_to_rgb(img):
    # Replicate a single grayscale plane into 3 channels
    img = img[None, ...]
    img = np.vstack((img, img, img))
    return img


def to_matplotlib(img):
    # Move the channel axis last: (C, H, W) -> (H, W, C)
    if img.ndim < 3:
        return img
    return np.moveaxis(img, -3, -1)


def prewhiten(img):
    # Normalize to zero mean and unit variance, clamping the std away
    # from zero for near-constant images
    mean = np.mean(img)
    std = np.std(img)
    std_adj = np.maximum(std, 1.0 / np.sqrt(img.size))
    y = np.multiply(np.subtract(img, mean), 1 / std_adj)
    return y


class Algorithm:
    def __init__(self):
        self.session = tf.InteractiveSession()
        self.graph = tf.get_default_graph()
        self.image_size = 160
        self.counter = 0

    def prepare(self, data_loaders):
        # Load the serialized model once, before processing starts
        loader = data_loaders.loaderOf("model_pb")
        for i in range(loader.count()):
            view = loader.view("model_pb", i)
            data, _, _ = view[0]
            data = data["model_pb"].text
            data = base64.b64decode(data)
            print("importing model to graph")
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(data)
            tf.import_graph_def(graph_def, name="")

        # Get input and output tensors
        self.images_placeholder = self.graph.get_tensor_by_name("input:0")
        self.embeddings = self.graph.get_tensor_by_name("embeddings:0")
        self.phase_train_placeholder = self.graph.get_tensor_by_name("phase_train:0")

        return True

    def _check_feature(self, img):
        # Convert one image to the (1, height, width, channels) float batch
        # expected by the network
        img = np.ascontiguousarray(img)
        if img.ndim == 2:
            img = gray_to_rgb(img)
        img = to_matplotlib(img)
        with self.graph.as_default():
            img = tf.image.resize_images(
                img, [self.image_size, self.image_size]
            ).eval(session=self.session)
            img = prewhiten(img)
        return img[None, ...]

    def project(self, img):
        # Compute the embedding of one image
        images = self._check_feature(img)
        feed_dict = {
            self.images_placeholder: images,
            self.phase_train_placeholder: False,
        }
        features = self.session.run(self.embeddings, feed_dict=feed_dict)
        return features.flatten()

    def process(self, inputs, data_loaders, outputs):

        # Collect all the image projections for the current template
        self.counter += 1
        print(self.counter, "processing one probe image and its templates")
        probe_image = inputs["probe_image"].data.value.astype("float64")
        probe_image = self.project(probe_image)

        template_images = inputs["template_images"].data.value.astype("float64")
        template_images = [self.project(img) for img in template_images]

        # Score: negated minimum cosine distance between the probe embedding
        # and each template embedding (higher means a better match)
        score = -np.min(
            [
                scipy.spatial.distance.cosine(template_img, probe_image)
                for template_img in template_images
            ]
        )

        outputs["score"].write({"value": score})

        return True

The code for this algorithm in Python
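As a quick sanity check on the prewhiten() helper used above, its output should have (approximately) zero mean and unit standard deviation:

```python
import numpy as np


def prewhiten(img):
    # Same normalization as in the algorithm: zero mean, unit variance,
    # with the std clamped away from zero for near-constant images
    mean = np.mean(img)
    std = np.std(img)
    std_adj = np.maximum(std, 1.0 / np.sqrt(img.size))
    return np.multiply(np.subtract(img, mean), 1 / std_adj)


rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(160, 160)).astype("float64")
out = prewhiten(img)
assert abs(out.mean()) < 1e-9 and abs(out.std() - 1.0) < 1e-9
```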

The input images must be gray-scale and should contain only the face region; internally, they are resized to 160x160 pixels. This algorithm also expects the pre-trained FaceNet model to be provided as an input. The model can be downloaded from https://drive.google.com/file/d/0B5MzpY9kBtDVZ2RpVDYwWmxoSUk and was made available in https://github.com/davidsandberg/facenet/tree/b95c9c3290455cabc425dc3f9435650679a74c50
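The comparison step reduces to a negated minimum cosine distance between the probe embedding and each template embedding. A minimal sketch with toy 3-dimensional vectors standing in for the real FaceNet embeddings (the helper mirrors scipy.spatial.distance.cosine):

```python
import numpy as np


def cosine_distance(u, v):
    # Same definition as scipy.spatial.distance.cosine: 1 - cos(angle)
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))


# Toy vectors standing in for the FaceNet embeddings.
probe = np.array([1.0, 0.0, 0.0])
templates = [np.array([1.0, 0.0, 0.0]),   # identical to the probe
             np.array([0.0, 1.0, 0.0])]   # orthogonal to the probe

# Negated minimum distance over all templates: higher scores mean a
# better match, and 0.0 is a perfect match.
score = -min(cosine_distance(t, probe) for t in templates)
assert score == 0.0
```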

Experiments

  Name:                amohammadi/amohammadi/atnt_facenet/1/atnt_facenet_1
  Databases/Protocols: atnt/5@idiap, facenet-20170512-110547/1@facenet-20170512-110547
  Analyzers:           amohammadi/eer_analyzer/1
[Toolchain diagram: compare block, amohammadi/facenet_projection_and_comparison/1, updated 2021 Mar 9]

This table shows the number of times this algorithm has been successfully run in the given environment. Note that this alone is not sufficient to determine whether the algorithm will run under different conditions.

BEAT platform version 2.2.1b0 | © Idiap Research Institute - 2013-2025