A face recognition algorithm to compare one probe image against a set of template images.
Algorithms have at least one input and one output. All algorithm endpoints are organized in groups. Groups are used by the platform to indicate which inputs and outputs are synchronized together. The first group is automatically synchronized with the channel defined by the block in which the algorithm is deployed.
Endpoint Name | Data Format | Nature |
---|---|---|
probe_image | system/array_2d_uint8/1 | Input |
template_images | system/array_3d_uint8/1 | Input |
score | system/float/1 | Output |
Endpoint Name | Data Format | Nature |
---|---|---|
model_pb | system/text/1 | Input |
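As a sketch of what these data formats look like on the Python side (assuming the platform delivers `system/array_2d_uint8` as a 2-D NumPy array and `system/array_3d_uint8` as a stack of 2-D images; the shapes below are illustrative, not prescribed by the format):

```python
import numpy as np

# A probe image (system/array_2d_uint8): one gray-scale face crop.
probe_image = np.zeros((112, 92), dtype=np.uint8)

# A set of template images (system/array_3d_uint8): N gray-scale face
# crops stacked along the first axis.
template_images = np.zeros((5, 112, 92), dtype=np.uint8)

assert probe_image.ndim == 2
assert template_images.ndim == 3
# Each slice template_images[i] has the same shape as a probe image.
assert template_images[0].shape == probe_image.shape
```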
```python
import base64

import numpy as np
import scipy.spatial
import tensorflow.compat.v1 as tf


def gray_to_rgb(img):
    # Replicate a single-channel image into 3 channels (channels first)
    img = img[None, ...]
    img = np.vstack((img, img, img))
    return img


def to_matplotlib(img):
    # Convert from (channels, height, width) to (height, width, channels)
    if img.ndim < 3:
        return img
    return np.moveaxis(img, -3, -1)


def prewhiten(img):
    # Normalize to zero mean and unit variance, flooring the standard
    # deviation so near-constant images do not blow up
    mean = np.mean(img)
    std = np.std(img)
    std_adj = np.maximum(std, 1.0 / np.sqrt(img.size))
    y = np.multiply(np.subtract(img, mean), 1 / std_adj)
    return y


class Algorithm:
    def __init__(self):
        self.session = tf.InteractiveSession()
        self.graph = tf.get_default_graph()
        self.image_size = 160
        self.counter = 0

    def prepare(self, data_loaders):
        # Load the (base64-encoded) pre-trained model once, at the beginning
        loader = data_loaders.loaderOf("model_pb")
        for i in range(loader.count()):
            view = loader.view("model_pb", i)
            data, _, _ = view[0]
            data = data["model_pb"].text
            data = base64.b64decode(data)
            print("importing model to graph")
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(data)
            tf.import_graph_def(graph_def, name="")
        # Get input and output tensors
        self.images_placeholder = self.graph.get_tensor_by_name("input:0")
        self.embeddings = self.graph.get_tensor_by_name("embeddings:0")
        self.phase_train_placeholder = self.graph.get_tensor_by_name(
            "phase_train:0"
        )
        return True

    def _check_feature(self, img):
        # Prepare one image for the network: RGB, 160x160, prewhitened
        img = np.ascontiguousarray(img)
        if img.ndim == 2:
            img = gray_to_rgb(img)
        img = to_matplotlib(img)
        with self.graph.as_default():
            img = tf.image.resize_images(
                img, [self.image_size, self.image_size]
            ).eval(session=self.session)
        img = prewhiten(img)
        return img[None, ...]

    def project(self, img):
        # Compute the embedding of one image
        images = self._check_feature(img)
        feed_dict = {
            self.images_placeholder: images,
            self.phase_train_placeholder: False,
        }
        features = self.session.run(self.embeddings, feed_dict=feed_dict)
        return features.flatten()

    def process(self, inputs, data_loaders, outputs):
        # Collect the projections of the probe and of all template images
        self.counter += 1
        print(self.counter, "processing one probe image and its templates")
        probe_image = inputs["probe_image"].data.value.astype("float64")
        probe_image = self.project(probe_image)
        template_images = inputs["template_images"].data.value.astype("float64")
        template_images = [self.project(img) for img in template_images]
        # Score: the negated minimum cosine distance to any template image
        score = -np.min(
            [
                scipy.spatial.distance.cosine(template_img, probe_image)
                for template_img in template_images
            ]
        )
        outputs["score"].write({"value": score})
        return True
```
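The scoring step in `process` can be isolated: the score is the negated minimum cosine distance between the probe embedding and each template embedding, so a higher (less negative) score means a closer match. A minimal sketch with plain NumPy (a hand-rolled cosine distance standing in for `scipy.spatial.distance.cosine`, and made-up 2-D embeddings):

```python
import numpy as np

def cosine_distance(u, v):
    # 1 - cosine similarity, matching scipy.spatial.distance.cosine
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

probe = np.array([1.0, 0.0])
templates = [np.array([1.0, 0.0]),   # same direction: distance 0
             np.array([0.0, 1.0])]   # orthogonal: distance 1

# The best-matching template determines the score.
score = -min(cosine_distance(t, probe) for t in templates)
assert abs(score) < 1e-9  # identical template wins, score ~ 0
```

Negating the minimum distance turns a dissimilarity into a similarity, which is the convention analyzers such as an EER analyzer typically expect.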
The code above implements this algorithm in Python.
The images must be gray-scale and should contain only the face region. Internally, the images are resized to 160x160 pixels. This algorithm also expects the pre-trained FaceNet model as an input. The model can be downloaded from https://drive.google.com/file/d/0B5MzpY9kBtDVZ2RpVDYwWmxoSUk and was originally made available at https://github.com/davidsandberg/facenet/tree/b95c9c3290455cabc425dc3f9435650679a74c50
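The preprocessing performed by the helper functions in the code can be sketched on a toy gray-scale input (the channel stacking mirrors `gray_to_rgb` and the normalization mirrors `prewhiten`; the 4x4 image is illustrative only):

```python
import numpy as np

img = np.arange(16, dtype="float64").reshape(4, 4)  # toy 4x4 gray image

# gray_to_rgb: replicate the single channel into 3 channels (channels first).
rgb = np.vstack((img[None, ...],) * 3)
assert rgb.shape == (3, 4, 4)

# prewhiten: subtract the mean and divide by the standard deviation,
# with a floor on the std so near-constant images stay bounded.
mean, std = rgb.mean(), rgb.std()
std_adj = max(std, 1.0 / np.sqrt(rgb.size))
white = (rgb - mean) / std_adj
assert abs(white.mean()) < 1e-9   # zero mean
assert abs(white.std() - 1.0) < 1e-9  # unit variance (std above the floor)
```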
Updated | Name | Databases/Protocols | Analyzers
---|---|---|---
 | amohammadi/amohammadi/atnt_facenet/1/atnt_facenet_1 | atnt/5@idiap, facenet-20170512-110547/1@facenet-20170512-110547 | amohammadi/eer_analyzer/1
This table shows the experiments in which this algorithm has been successfully run. Note that this does not provide enough information to judge whether the algorithm will run under different conditions.