Python API¶
Estimators¶
- Logits: Logits estimator.
- LogitsCenterLoss: Logits estimator with center loss.
- Triplet: NN estimator for Triplet networks.
- Siamese: NN estimator for Siamese networks.
- Regressor: An estimator for regression problems.
- MovingAverageOptimizer: Creates a callable that can be given to bob.learn.tensorflow.estimators.
- learning_rate_decay_fn: A simple learning_rate_decay_fn.
Architectures¶
- chopra: Creates the architecture presented in Chopra, Hadsell and LeCun (CVPR 2005).
- light_cnn9: Creates the graph for the Light CNN-9 (Wu et al., 2015).
- dummy: Creates all the necessary variables for this CNN.
- mlp: A Multi-Layer Perceptron.
- inception_resnet_v2: Creates the Inception Resnet V2 model.
- inception_resnet_v1: Creates the Inception Resnet V1 model.
- inception_resnet_v2_batch_norm: Inception Resnet V2 applying batch norm to each Convolutional and FullyConnected layer.
- inception_resnet_v1_batch_norm: Inception Resnet V1 applying batch norm to each Convolutional and FullyConnected layer.
- vgg_19: Oxford Net VGG 19-Layers version E, example from tf-slim.
- vgg_16: Oxford Net VGG 16-Layers version E, example from tf-slim.
Data¶
- BioGenerator: A generator class which wraps bob.bio.base databases so that they can be used with tf.data.Dataset.from_generator.
- dataset.image.shuffle_data_and_labels_image_augmentation: Dumps random batches from a list of image paths and labels.
- dataset.siamese_image.shuffle_data_and_labels_image_augmentation: Dumps random batches for siamese networks from a list of image paths and labels.
- dataset.triplet_image.shuffle_data_and_labels_image_augmentation: Dumps random batches for triplet networks from a list of image paths and labels.
- dataset.tfrecords.shuffle_data_and_labels_image_augmentation: Dumps random batches from a list of tf-record files and applies image augmentation.
- dataset.tfrecords.shuffle_data_and_labels: Dumps random batches from a list of tf-record files.
- Generator: A generator class which wraps samples so that they can be used with tf.data.Dataset.from_generator.
- to_channels_last: Converts the image to channel_last format.
- to_channels_first: Converts the image to channel_first format.
Style Transfer¶
- do_style_transfer: Trains neural style transfer using the approach presented in Gatys et al. (2015).
Losses¶
- mean_cross_entropy_loss: Simple CrossEntropy loss.
- mean_cross_entropy_center_loss: CrossEntropy + Center Loss from the paper “A Discriminative Feature Learning Approach for Deep Face Recognition” (http://ydwen.github.io/papers/WenECCV16.pdf).
- contrastive_loss: Computes the contrastive loss of Hadsell, Chopra and LeCun (2006).
- triplet_loss / triplet_average_loss: Compute the triplet loss of Schroff et al. (FaceNet, 2015).
- linear_gram_style_loss: Implements the style loss from Gatys et al. (2015).
- content_loss: Implements the content loss from Gatys et al. (2015).
- denoising_loss: Computes the denoising loss as in Gatys et al. (2015).
- balanced_sigmoid_cross_entropy_loss_weights / balanced_softmax_cross_entropy_loss_weights: Compute weights that normalize your loss per class.
Detailed Information¶
-
bob.learn.tensorflow.
get_config
()[source]¶ Returns a string containing the configuration information.
-
class
bob.learn.tensorflow.estimators.
Logits
(architecture, optimizer, loss_op, n_classes, config=None, embedding_validation=False, model_dir='', validation_batch_size=None, params=None, extra_checkpoint=None, apply_moving_averages=True, add_histograms=None, vat_loss=None, architecture_has_logits=False, balanced_loss_weight=False, use_sigmoid=False, labels_are_one_hot=False, optimize_loss=<function optimize_loss>, optimize_loss_learning_rate=None)¶ Bases:
tensorflow_estimator.python.estimator.estimator.Estimator
Logits estimator.
NN estimator with cross-entropy loss in the one-hot encoded layer (bob.learn.tensorflow.estimators.Logits).
The architecture function should follow this pattern:
def my_beautiful_architecture(placeholder, **kwargs):
    end_points = dict()
    graph = convXX(placeholder)
    end_points['conv'] = graph
    return graph, end_points
The loss function should follow this pattern:
def my_beautiful_loss(logits, labels, **kwargs):
    return loss_set_of_ops(logits, labels)
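For instance, a minimal sketch that wires these two patterns into a Logits estimator (layer sizes, names and the model_dir are illustrative):

import tensorflow as tf
from bob.learn.tensorflow.estimators import Logits

def my_beautiful_architecture(placeholder, mode='train', **kwargs):
    # a toy graph following the pattern above
    end_points = dict()
    graph = tf.layers.flatten(placeholder)
    graph = tf.layers.dense(graph, 128, activation=tf.nn.relu)
    end_points['fc1'] = graph
    return graph, end_points

def my_beautiful_loss(logits, labels, **kwargs):
    return tf.losses.sparse_softmax_cross_entropy(logits=logits, labels=labels)

estimator = Logits(
    architecture=my_beautiful_architecture,
    optimizer=tf.train.GradientDescentOptimizer(1e-4),
    loss_op=my_beautiful_loss,
    n_classes=10,
    model_dir='/tmp/logits_model',
)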
-
architecture
¶ Pointer to a function that builds the graph.
-
optimizer
¶ One of the tensorflow solvers
-
config
¶
-
n_classes
¶ Number of classes of your problem. The logits layer is appended by this class
-
loss_op
¶ Pointer to a function that computes the loss.
-
embedding_validation
¶ Run the validation using embeddings? [default: False]
-
model_dir
¶ Model path
-
validation_batch_size
¶ Size of the batch for validation. This value is used when the validation with embeddings is used. This is a hack.
-
params
¶ Extra params for the model function (please see https://www.tensorflow.org/extend/estimators for more info)
-
extra_checkpoint
¶ In case you want to use other model to initialize some variables. This argument should be in the following format:
extra_checkpoint = {
    "checkpoint_path": <YOUR_CHECKPOINT>,
    "scopes": dict({"<SOURCE_SCOPE>/": "<TARGET_SCOPE>/"}),
    "trainable_variables": [<LIST OF VARIABLES OR SCOPES THAT YOU WANT TO RETRAIN>],
}
-
apply_moving_averages
¶ Apply exponential moving average in the training variables and in the loss. https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage By default the decay for the variable averages is 0.9999 and for the loss is 0.9
-
-
class
bob.learn.tensorflow.estimators.
LogitsCenterLoss
(architecture=None, optimizer=None, config=None, n_classes=0, embedding_validation=False, model_dir='', alpha=0.9, factor=0.01, validation_batch_size=None, params=None, extra_checkpoint=None, apply_moving_averages=True, optimize_loss=<function optimize_loss>, optimize_loss_learning_rate=None)¶ Bases:
tensorflow_estimator.python.estimator.estimator.Estimator
Logits estimator with center loss.
NN estimator with Cross entropy loss in the hot-encoded layer
bob.learn.tensorflow.estimators.Logits
plus the center loss implemented in: “Wen, Yandong, et al. “A discriminative feature learning approach for deep face recognition.” European Conference on Computer Vision. Springer, Cham, 2016.” See
Logits
for the description of parameters.
-
class
bob.learn.tensorflow.estimators.
MovingAverageOptimizer
(optimizer, **kwargs)¶ Bases:
object
Creates a callable that can be given to bob.learn.tensorflow.estimators
This class is useful when you want to have a learning_rate_decay_fn and a moving average optimizer and use them with bob.learn.tensorflow.estimators
-
optimizer
¶ A tf.train.Optimizer that is created and wrapped with tf.contrib.opt.MovingAverageOptimizer.
Example
>>> import tensorflow as tf
>>> from bob.learn.tensorflow.estimators import MovingAverageOptimizer
>>> optimizer = MovingAverageOptimizer("adam")
>>> actual_optimizer = optimizer(lr=1e-3)
>>> isinstance(actual_optimizer, tf.train.Optimizer)
True
>>> actual_optimizer is optimizer.optimizer
True
-
-
class
bob.learn.tensorflow.estimators.
Regressor
(architecture, optimizer=<tensorflow.python.training.adam.AdamOptimizer object>, loss_op=<function mean_squared_error>, label_dimension=1, config=None, model_dir=None, apply_moving_averages=True, add_regularization_losses=True, extra_checkpoint=None, add_histograms=None, optimize_loss=<function optimize_loss>, optimize_loss_learning_rate=None, architecture_has_logits=False)¶ Bases:
tensorflow_estimator.python.estimator.estimator.Estimator
An estimator for regression problems
-
class
bob.learn.tensorflow.estimators.
Siamese
(architecture=None, optimizer=None, config=None, loss_op=None, model_dir='', validation_batch_size=None, params=None, extra_checkpoint=None, add_histograms=None, add_regularization_losses=True, optimize_loss=<function optimize_loss>, optimize_loss_learning_rate=None)¶ Bases:
tensorflow_estimator.python.estimator.estimator.Estimator
NN estimator for Siamese Networks. Proposed in: “Chopra, Sumit, Raia Hadsell, and Yann LeCun. “Learning a similarity metric discriminatively, with application to face verification.” Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on. Vol. 1. IEEE, 2005.”
See
Logits
for the description of parameters.
-
class
bob.learn.tensorflow.estimators.
Triplet
(architecture=None, optimizer=None, config=None, loss_op=<function triplet_loss>, model_dir='', validation_batch_size=None, extra_checkpoint=None, optimize_loss=<function optimize_loss>, optimize_loss_learning_rate=None)¶ Bases:
tensorflow_estimator.python.estimator.estimator.Estimator
NN estimator for Triplet networks.
Schroff, Florian, Dmitry Kalenichenko, and James Philbin. “Facenet: A unified embedding for face recognition and clustering.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
See
Logits
for the description of parameters.
-
bob.learn.tensorflow.estimators.
get_trainable_variables
(extra_checkpoint, mode='train')[source]¶ Given the extra_checkpoint dictionary provided to the estimator, extract the content of “trainable_variables”.
If trainable_variables is not provided, all end points are trainable by default. If trainable_variables==[], all end points are NOT trainable. If trainable_variables contains some end_points, ONLY these endpoints will be trainable.
-
bob.learn.tensorflow.estimators.
extra_checkpoint
¶ The extra_checkpoint dictionary provided to the estimator
-
bob.learn.tensorflow.estimators.
mode
¶ The estimator mode. TRAIN, EVAL, and PREDICT. If not TRAIN, None is returned.
- Returns
Returns None if trainable_variables is not in extra_checkpoint;
otherwise returns the content of extra_checkpoint.
-
-
bob.learn.tensorflow.estimators.
learning_rate_decay_fn
(learning_rate, global_step, decay_steps, decay_rate, staircase=False)[source]¶ A simple learning_rate_decay_fn.
To use it with tf.contrib.layer.optimize_loss:
>>> from bob.learn.tensorflow.estimators import learning_rate_decay_fn
>>> from functools import partial
>>> learning_rate_decay_fn = partial(
...     learning_rate_decay_fn,
...     decay_steps=1000,
...     decay_rate=0.9,
...     staircase=True,
... )
-
bob.learn.tensorflow.dataset.
from_filename_to_tensor
(filename, extension=None)[source]¶ Reads a file and converts it to a tensor.
If the file extension is something that tensorflow understands (.jpg, .bmp, .tif, …), it uses tf.image.decode_image; otherwise it uses bob.io.base.load.
-
bob.learn.tensorflow.dataset.
append_image_augmentation
(image, gray_scale=False, output_shape=None, random_flip=False, random_brightness=False, random_contrast=False, random_saturation=False, random_rotate=False, per_image_normalization=True, random_gamma=False, random_crop=False)[source]¶ Append to the current tensor some random image augmentation operation
- Parameters
- gray_scale:
Convert to gray scale?
- output_shape:
If set, will randomly crop the image given the output shape
- random_flip:
Randomly flip an image horizontally (https://www.tensorflow.org/api_docs/python/tf/image/random_flip_left_right)
- random_brightness:
Adjust the brightness of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_brightness)
- random_contrast:
Adjust the contrast of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_contrast)
- random_saturation:
Adjust the saturation of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_saturation)
- random_rotate:
Randomly rotate face images between -5 and 5 degrees
- per_image_normalization:
Linearly scales image to have zero mean and unit norm.
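A rough usage sketch (the input is assumed to be an already-decoded image tensor; the shape is illustrative):

import tensorflow as tf
from bob.learn.tensorflow.dataset import append_image_augmentation

image = tf.random.uniform([112, 112, 3])  # stands in for a decoded [H, W, C] image
augmented = append_image_augmentation(
    image,
    random_flip=True,
    random_brightness=True,
    per_image_normalization=True,
)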
-
bob.learn.tensorflow.dataset.
triplets_random_generator
(input_data, input_labels)[source]¶ Given a list of samples and a list of labels, it dumps a series of triplets for triplet nets.
Parameters
input_data: List of whatever representing the data samples
input_labels: List of the labels (needs to be in EXACT same order as input_data)
-
bob.learn.tensorflow.dataset.
siamease_pairs_generator
(input_data, input_labels)[source]¶ Given a list of samples and a list of labels, it dumps a series of pairs for siamese nets.
Parameters
input_data: List of whatever representing the data samples
input_labels: List of the labels (needs to be in EXACT same order as input_data)
-
bob.learn.tensorflow.dataset.
blocks_tensorflow
(images, block_size)[source]¶ Return all non-overlapping blocks of an image using tensorflow operations.
- Parameters
- Returns
blocks (tf.Tensor) – All the blocks in the batch dimension. The output will be of size [?, block_size[0], block_size[1], C].
n_blocks (int) – The number of blocks that was obtained per image.
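A hedged calling sketch (the batch shape is illustrative; return values follow the Returns description above):

import tensorflow as tf
from bob.learn.tensorflow.dataset import blocks_tensorflow

images = tf.zeros([2, 28, 28, 3])  # a batch of two RGB images
blocks, n_blocks = blocks_tensorflow(images, (14, 14))
# blocks: [?, 14, 14, 3]; n_blocks: 4 (a 2x2 grid of blocks per image)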
-
bob.learn.tensorflow.dataset.
tf_repeat
(tensor, repeats)[source]¶ - Parameters
tensor – A Tensor. 1-D or higher.
repeats – A list. Number of repeats for each dimension; its length must equal the number of dimensions of the input.
- Returns
A Tensor. Has the same type as input. Has the shape of tensor.shape *
repeats
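A small sketch, assuming np.repeat-like semantics per dimension (the expected output is an assumption based on the shape rule above):

import tensorflow as tf
from bob.learn.tensorflow.dataset import tf_repeat

t = tf.constant([1, 2, 3])
r = tf_repeat(t, [2])  # expected shape [6], e.g. [1, 1, 2, 2, 3, 3]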
-
bob.learn.tensorflow.dataset.
all_patches
(image, label, key, size)[source]¶ Extracts all patches of an image
- Parameters
- Returns
blocks – The non-overlapping blocks of the given size from image; labels and keys are repeated accordingly.
label
key
-
class
bob.learn.tensorflow.dataset.generator.
Generator
(samples, reader, multiple_samples=False, shuffle_on_epoch_end=False, **kwargs)[source]¶ Bases:
object
A generator class which wraps samples so that they can be used with tf.data.Dataset.from_generator
-
multiple_samples
¶ If true, it assumes that the bio database’s samples actually contain multiple samples. This is useful when you want to, for example, treat video databases as image databases.
- Type
bool
, optional
-
reader
¶ A callable with the signature of
data, label, key = reader(sample)
which takes a sample and loads it.
- Type
object
, optional
-
output_shapes
¶ The shapes of the returned samples.
- Type
(tf.TensorShape, tf.TensorShape, tf.TensorShape)
-
property
output_types
-
property
output_shapes
-
-
bob.learn.tensorflow.dataset.generator.
dataset_using_generator
(samples, reader, **kwargs)[source]¶ Wraps samples and a reader into a tf.data.Dataset using tf.data.Dataset.from_generator
- Parameters
- Returns
A tf.data.Dataset
- Return type
tf.data.Dataset
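A minimal sketch (the reader and its loader are hypothetical; any callable returning (data, label, key) works):

from bob.learn.tensorflow.dataset.generator import dataset_using_generator

samples = ['img_0.png', 'img_1.png']  # anything your reader understands

def reader(sample):
    data = load_image(sample)  # load_image is a placeholder for your own loading code
    label = 0
    key = sample
    return data, label, key

dataset = dataset_using_generator(samples, reader)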
-
class
bob.learn.tensorflow.dataset.bio.
BioGenerator
(database, biofiles, load_data=None, biofile_to_label=None, multiple_samples=False, **kwargs)[source]¶ Bases:
bob.learn.tensorflow.dataset.generator.Generator
A generator class which wraps bob.bio.base databases so that they can be used with tf.data.Dataset.from_generator
-
biofile_to_label
¶ A callable with the signature of
label = biofile_to_label(biofile)
. By default -1 is returned as the label.
- Type
object
, optional
-
database
¶ The database that you want to use.
-
load_data
¶ A callable with the signature of
data = load_data(database, biofile)
. bob.bio.base.read_original_data is wrapped to be used by default.
- Type
object
, optional
-
biofiles
¶ The list of the bio files.
-
property
labels
-
property
keys
-
property
biofiles
-
-
bob.learn.tensorflow.dataset.image.
shuffle_data_and_labels_image_augmentation
(filenames, labels, data_shape, data_type, batch_size, epochs=None, buffer_size=1000, gray_scale=False, output_shape=None, random_flip=False, random_brightness=False, random_contrast=False, random_saturation=False, random_rotate=False, per_image_normalization=True, extension=None)[source]¶ Dump random batches from a list of image paths and labels:
The list of files and labels should be in the same order, e.g.:
filenames = ['class_1_img1', 'class_1_img2', 'class_2_img1']
labels = [0, 0, 1]
Parameters
- filenames:
List containing the path of the images
- labels:
List containing the labels (needs to be in EXACT same order as filenames)
- data_shape:
Samples shape saved in the tf-record
- data_type:
tf data type (https://www.tensorflow.org/versions/r0.12/resources/dims_types#data_types)
- batch_size:
Size of the batch
- epochs:
Number of epochs to be batched
- buffer_size:
Size of the shuffle bucket
- gray_scale:
Convert to gray scale?
- output_shape:
If set, will randomly crop the image given the output shape
- random_flip:
Randomly flip an image horizontally (https://www.tensorflow.org/api_docs/python/tf/image/random_flip_left_right)
- random_brightness:
Adjust the brightness of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_brightness)
- random_contrast:
Adjust the contrast of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_contrast)
- random_saturation:
Adjust the saturation of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_saturation)
- random_rotate:
Randomly rotate face images between -5 and 5 degrees
- per_image_normalization:
Linearly scales image to have zero mean and unit norm.
- extension:
If None, will load files using tf.image.decode_image; if set to hdf5, will load with bob.io.base.load
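A usage sketch (file names and shapes are illustrative; the returned (data, label) pair is an assumption based on the description above):

import tensorflow as tf
from bob.learn.tensorflow.dataset.image import shuffle_data_and_labels_image_augmentation

filenames = ['class_1_img1.png', 'class_1_img2.png', 'class_2_img1.png']
labels = [0, 0, 1]
data, label = shuffle_data_and_labels_image_augmentation(
    filenames, labels,
    data_shape=(112, 112, 3),
    data_type=tf.uint8,
    batch_size=2,
    epochs=1,
    random_flip=True,
)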
-
bob.learn.tensorflow.dataset.image.
create_dataset_from_path_augmentation
(filenames, labels, data_shape, data_type, gray_scale=False, output_shape=None, random_flip=False, random_brightness=False, random_contrast=False, random_saturation=False, random_rotate=False, per_image_normalization=True, extension=None)[source]¶ Creates a dataset from a list of image paths, applying image augmentation
Parameters
- filenames:
List containing the path of the images
- labels:
List containing the labels (needs to be in EXACT same order as filenames)
- data_shape:
Samples shape saved in the tf-record
- data_type:
tf data type (https://www.tensorflow.org/versions/r0.12/resources/dims_types#data_types)
feature:
-
bob.learn.tensorflow.dataset.image.
image_augmentation_parser
(filename, label, data_shape, data_type, gray_scale=False, output_shape=None, random_flip=False, random_brightness=False, random_contrast=False, random_saturation=False, random_rotate=False, per_image_normalization=True, extension=None)[source]¶ Parses a single tf.Example into image and label tensors.
-
bob.learn.tensorflow.dataset.image.
load_pngs
(img_path, img_shape)[source]¶ Reads png files using the tensorflow API. You must know the shape of the image beforehand to use this function.
-
bob.learn.tensorflow.dataset.siamese_image.
shuffle_data_and_labels_image_augmentation
(filenames, labels, data_shape, data_type, batch_size, epochs=None, buffer_size=1000, gray_scale=False, output_shape=None, random_flip=False, random_brightness=False, random_contrast=False, random_saturation=False, random_rotate=False, per_image_normalization=True, extension=None)[source]¶ Dump random batches for siamese networks from a list of image paths and labels:
The list of files and labels should be in the same order, e.g.:
filenames = ['class_1_img1', 'class_1_img2', 'class_2_img1']
labels = [0, 0, 1]
The batches returned with tf.Session.run() will be in the following format: data, a dictionary containing the keys ['left', 'right'], each one representing one element of the pair, and labels, which is [0, 1] where 0 is the genuine pair and 1 is the impostor pair.
Parameters
- filenames:
List containing the path of the images
- labels:
List containing the labels (needs to be in EXACT same order as filenames)
- data_shape:
Samples shape saved in the tf-record
- data_type:
tf data type (https://www.tensorflow.org/versions/r0.12/resources/dims_types#data_types)
- batch_size:
Size of the batch
- epochs:
Number of epochs to be batched
- buffer_size:
Size of the shuffle bucket
- gray_scale:
Convert to gray scale?
- output_shape:
If set, will randomly crop the image given the output shape
- random_flip:
Randomly flip an image horizontally (https://www.tensorflow.org/api_docs/python/tf/image/random_flip_left_right)
- random_brightness:
Adjust the brightness of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_brightness)
- random_contrast:
Adjust the contrast of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_contrast)
- random_saturation:
Adjust the saturation of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_saturation)
- random_rotate:
Randomly rotate face images between -5 and 5 degrees
- per_image_normalization:
Linearly scales image to have zero mean and unit norm.
- extension:
If None, will load files using tf.image.decode_image; if set to hdf5, will load with bob.io.base.load
-
bob.learn.tensorflow.dataset.siamese_image.
create_dataset_from_path_augmentation
(filenames, labels, data_shape, data_type, gray_scale=False, output_shape=None, random_flip=False, random_brightness=False, random_contrast=False, random_saturation=False, random_rotate=False, per_image_normalization=True, extension=None)[source]¶ Creates a dataset from a list of image paths, applying image augmentation
Parameters
- filenames:
List containing the path of the images
- labels:
List containing the labels (needs to be in EXACT same order as filenames)
- data_shape:
Samples shape saved in the tf-record
- data_type:
tf data type (https://www.tensorflow.org/versions/r0.12/resources/dims_types#data_types)
- batch_size:
Size of the batch
- epochs:
Number of epochs to be batched
- buffer_size:
Size of the shuffle bucket
- gray_scale:
Convert to gray scale?
- output_shape:
If set, will randomly crop the image given the output shape
- random_flip:
Randomly flip an image horizontally (https://www.tensorflow.org/api_docs/python/tf/image/random_flip_left_right)
- random_brightness:
Adjust the brightness of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_brightness)
- random_contrast:
Adjust the contrast of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_contrast)
- random_saturation:
Adjust the saturation of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_saturation)
- random_rotate:
Randomly rotate face images between -10 and 10 degrees
- per_image_normalization:
Linearly scales image to have zero mean and unit norm.
- extension:
If None, will load files using tf.image.decode_image; if set to hdf5, will load with bob.io.base.load
-
bob.learn.tensorflow.dataset.siamese_image.
image_augmentation_parser
(filename_left, filename_right, label, data_shape, data_type, gray_scale=False, output_shape=None, random_flip=False, random_brightness=False, random_contrast=False, random_saturation=False, random_rotate=False, per_image_normalization=True, extension=None)[source]¶ Parses a single tf.Example into image and label tensors.
-
bob.learn.tensorflow.dataset.triplet_image.
shuffle_data_and_labels_image_augmentation
(filenames, labels, data_shape, data_type, batch_size, epochs=None, buffer_size=1000, gray_scale=False, output_shape=None, random_flip=False, random_brightness=False, random_contrast=False, random_saturation=False, random_rotate=False, per_image_normalization=True, extension=None)[source]¶ Dump random batches for triplet networks from a list of image paths and labels:
The list of files and labels should be in the same order, e.g.:
filenames = ['class_1_img1', 'class_1_img2', 'class_2_img1']
labels = [0, 0, 1]
The batches returned with tf.Session.run() will be in the following format: data, a dictionary containing the keys ['anchor', 'positive', 'negative'].
Parameters
- filenames:
List containing the path of the images
- labels:
List containing the labels (needs to be in EXACT same order as filenames)
- data_shape:
Samples shape saved in the tf-record
- data_type:
tf data type (https://www.tensorflow.org/versions/r0.12/resources/dims_types#data_types)
- batch_size:
Size of the batch
- epochs:
Number of epochs to be batched
- buffer_size:
Size of the shuffle bucket
- gray_scale:
Convert to gray scale?
- output_shape:
If set, will randomly crop the image given the output shape
- random_flip:
Randomly flip an image horizontally (https://www.tensorflow.org/api_docs/python/tf/image/random_flip_left_right)
- random_brightness:
Adjust the brightness of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_brightness)
- random_contrast:
Adjust the contrast of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_contrast)
- random_saturation:
Adjust the saturation of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_saturation)
- random_rotate:
Randomly rotate face images between -5 and 5 degrees
- per_image_normalization:
Linearly scales image to have zero mean and unit norm.
- extension:
If None, will load files using tf.image.decode_image; if set to hdf5, will load with bob.io.base.load
-
bob.learn.tensorflow.dataset.triplet_image.
create_dataset_from_path_augmentation
(filenames, labels, data_shape, data_type=tf.float32, gray_scale=False, output_shape=None, random_flip=False, random_brightness=False, random_contrast=False, random_saturation=False, random_rotate=False, per_image_normalization=True, extension=None)[source]¶ Creates a dataset from a list of image paths, applying image augmentation
Parameters
- filenames:
List containing the path of the images
- labels:
List containing the labels (needs to be in EXACT same order as filenames)
- data_shape:
Samples shape saved in the tf-record
- data_type:
tf data type (https://www.tensorflow.org/versions/r0.12/resources/dims_types#data_types)
feature:
-
bob.learn.tensorflow.dataset.triplet_image.
image_augmentation_parser
(anchor, positive, negative, data_shape, data_type=tf.float32, gray_scale=False, output_shape=None, random_flip=False, random_brightness=False, random_contrast=False, random_saturation=False, random_rotate=False, per_image_normalization=True, extension=None)[source]¶ Parses a single tf.Example into image and label tensors.
Utilities for TFRecords
-
bob.learn.tensorflow.dataset.tfrecords.
dataset_to_tfrecord
(dataset, output)[source]¶ Writes a tf.data.Dataset into a TFRecord file.
- Parameters
dataset (
tf.data.Dataset
) – The tf.data.Dataset that you want to write into a TFRecord file.output (str) – Path to the TFRecord file. Besides this file, a .json file is also created. This json file is needed when you want to convert the TFRecord file back into a dataset.
- Returns
A tf.Operation that, when run, writes contents of dataset to a file. When running in eager mode, calling this function will write the file. Otherwise, you have to call session.run() on the returned operation.
- Return type
tf.Operation
-
bob.learn.tensorflow.dataset.tfrecords.
dataset_from_tfrecord
(tfrecord, num_parallel_reads=None)[source]¶ Reads TFRecords and returns a dataset. The TFRecord file must have been created using the
dataset_to_tfrecord
function.- Parameters
- Returns
A dataset that contains the data from the TFRecord file.
- Return type
tf.data.Dataset
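A round-trip sketch based on the two functions above (the file path is illustrative):

import tensorflow as tf
from bob.learn.tensorflow.dataset.tfrecords import dataset_to_tfrecord, dataset_from_tfrecord

dataset = tf.data.Dataset.from_tensor_slices(tf.range(10))
write_op = dataset_to_tfrecord(dataset, '/tmp/example.tfrecord')
with tf.Session() as sess:  # in graph mode; in eager mode the file is written immediately
    sess.run(write_op)
restored = dataset_from_tfrecord('/tmp/example.tfrecord')  # relies on the .json sidecar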
-
bob.learn.tensorflow.dataset.tfrecords.
write_a_sample
(writer, data, label, key, feature=None, size_estimate=False)[source]¶
-
bob.learn.tensorflow.dataset.tfrecords.
example_parser
(serialized_example, feature, data_shape, data_type)[source]¶ Parses a single tf.Example into image and label tensors.
-
bob.learn.tensorflow.dataset.tfrecords.
image_augmentation_parser
(serialized_example, feature, data_shape, data_type, gray_scale=False, output_shape=None, random_flip=False, random_brightness=False, random_contrast=False, random_saturation=False, random_rotate=False, per_image_normalization=True, random_gamma=False, random_crop=False)[source]¶ Parses a single tf.Example into image and label tensors.
-
bob.learn.tensorflow.dataset.tfrecords.
read_and_decode
(filename_queue, data_shape, data_type=tf.float32, feature=None)[source]¶ Simplest possible parser for a tfrecord. It assumes that you have the pair train/data and train/label
-
bob.learn.tensorflow.dataset.tfrecords.
create_dataset_from_records
(tfrecord_filenames, data_shape, data_type, feature=None)[source]¶ Create dataset from a list of tf-record files
Parameters
- tfrecord_filenames:
List containing the tf-record paths
- data_shape:
Samples shape saved in the tf-record
- data_type:
tf data type (https://www.tensorflow.org/versions/r0.12/resources/dims_types#data_types)
feature:
-
bob.learn.tensorflow.dataset.tfrecords.
create_dataset_from_records_with_augmentation
(tfrecord_filenames, data_shape, data_type, feature=None, gray_scale=False, output_shape=None, random_flip=False, random_brightness=False, random_contrast=False, random_saturation=False, random_rotate=False, per_image_normalization=True, random_gamma=False, random_crop=False)[source]¶ Create dataset from a list of tf-record files
Parameters
- tfrecord_filenames:
List containing the tf-record paths
- data_shape:
Samples shape saved in the tf-record
- data_type:
tf data type (https://www.tensorflow.org/versions/r0.12/resources/dims_types#data_types)
feature:
-
bob.learn.tensorflow.dataset.tfrecords.
shuffle_data_and_labels_image_augmentation
(tfrecord_filenames, data_shape, data_type, batch_size, epochs=None, buffer_size=1000, gray_scale=False, output_shape=None, random_flip=False, random_brightness=False, random_contrast=False, random_saturation=False, random_rotate=False, per_image_normalization=True, random_gamma=False, random_crop=False, drop_remainder=False)[source]¶ Dump random batches from a list of tf-record files and applies some image augmentation
- Parameters
tfrecord_filenames – List containing the tf-record paths
data_shape – Samples shape saved in the tf-record
data_type – tf data type (https://www.tensorflow.org/versions/r0.12/resources/dims_types#data_types)
batch_size – Size of the batch
epochs – Number of epochs to be batched
buffer_size – Size of the shuffle bucket
gray_scale – Convert to gray scale?
output_shape – If set, will randomly crop the image given the output shape
random_flip – Randomly flip an image horizontally (https://www.tensorflow.org/api_docs/python/tf/image/random_flip_left_right)
random_brightness – Adjust the brightness of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_brightness)
random_contrast – Adjust the contrast of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_contrast)
random_saturation – Adjust the saturation of an RGB image by a random factor (https://www.tensorflow.org/api_docs/python/tf/image/random_saturation)
random_rotate – Randomly rotate face images between -5 and 5 degrees
per_image_normalization – Linearly scales image to have zero mean and unit norm.
drop_remainder – If True, the last remaining batch that has smaller size than batch_size will be dropped.
-
bob.learn.tensorflow.dataset.tfrecords.
shuffle_data_and_labels
(tfrecord_filenames, data_shape, data_type, batch_size, epochs=None, buffer_size=1000)[source]¶ Dump random batches from a list of tf-record files
Parameters
- tfrecord_filenames:
List containing the tf-record paths
- data_shape:
Samples shape saved in the tf-record
- data_type:
tf data type (https://www.tensorflow.org/versions/r0.12/resources/dims_types#data_types)
- batch_size:
Size of the batch
- epochs:
Number of epochs to be batched
- buffer_size:
Size of the shuffle bucket
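A usage sketch (the tf-record path is illustrative; the (data, labels) return pair is an assumption):

import tensorflow as tf
from bob.learn.tensorflow.dataset.tfrecords import shuffle_data_and_labels

data, labels = shuffle_data_and_labels(
    ['/path/to/train.tfrecord'],
    data_shape=(112, 112, 3),
    data_type=tf.uint8,
    batch_size=32,
    epochs=10,
)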
-
bob.learn.tensorflow.dataset.tfrecords.
batch_data_and_labels
(tfrecord_filenames, data_shape, data_type, batch_size, epochs=1)[source]¶ Dump in order batches from a list of tf-record files
- Parameters
tfrecord_filenames – List containing the tf-record paths
data_shape – Samples shape saved in the tf-record
data_type – tf data type (https://www.tensorflow.org/versions/r0.12/resources/dims_types#data_types)
batch_size – Size of the batch
epochs – Number of epochs to be batched
-
bob.learn.tensorflow.dataset.tfrecords.
batch_data_and_labels_image_augmentation
(tfrecord_filenames, data_shape, data_type, batch_size, epochs=1, gray_scale=False, output_shape=None, random_flip=False, random_brightness=False, random_contrast=False, random_saturation=False, random_rotate=False, per_image_normalization=True, random_gamma=False, random_crop=False, drop_remainder=False)[source]¶ Dump in order batches from a list of tf-record files
- Parameters
tfrecord_filenames – List containing the tf-record paths
data_shape – Samples shape saved in the tf-record
data_type – tf data type (https://www.tensorflow.org/versions/r0.12/resources/dims_types#data_types)
batch_size – Size of the batch
epochs – Number of epochs to be batched
drop_remainder – If True, the last remaining batch that has smaller size than batch_size will be dropped.
-
bob.learn.tensorflow.dataset.tfrecords.
describe_tf_record
(tf_record_path, shape, batch_size=1)[source]¶ Describe the number of samples and the number of classes of a tf-record
-
bob.learn.tensorflow.network.
chopra
(inputs, conv1_kernel_size=[7, 7], conv1_output=15, pooling1_size=[2, 2], conv2_kernel_size=[6, 6], conv2_output=45, pooling2_size=[4, 3], fc1_output=250, seed=10, reuse=False)¶ Class that creates the architecture presented in the paper:
Chopra, Sumit, Raia Hadsell, and Yann LeCun. “Learning a similarity metric discriminatively, with application to face verification.” 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05). Vol. 1. IEEE, 2005.
This is a modified version of the original architecture. It is inspired by https://gitlab.idiap.ch/bob/xfacereclib.cnn/blob/master/lua/network.lua
– C1 : Convolutional, kernel = 7x7 pixels, 15 feature maps
– M2 : MaxPooling, 2x2
– HT : Hard Hyperbolic Tangent
– C3 : Convolutional, kernel = 6x6 pixels, 45 feature maps
– M4 : MaxPooling, 4x3
– HT : Hard Hyperbolic Tangent
– R : Reshaping layer HT 5x5 => 25 (45 times; once for each feature map)
– L5 : Linear 25 => 250
Parameters
conv1_kernel_size:
conv1_output:
pooling1_size:
conv2_kernel_size:
conv2_output:
pooling2_size
fc1_output:
seed:
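A minimal sketch (the input size is illustrative; the (graph, end_points) return pair is assumed to follow the architecture pattern shown for the estimators):

import tensorflow as tf
from bob.learn.tensorflow.network import chopra

inputs = tf.placeholder(tf.float32, [None, 56, 56, 1])
graph, end_points = chopra(inputs)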
-
bob.learn.tensorflow.network.
dummy
(inputs, reuse=False, mode='train', trainable_variables=None, **kwargs)¶ Create all the necessary variables for this CNN
- Parameters
inputs –
reuse –
mode –
trainable_variables –
-
bob.learn.tensorflow.network.
inception_resnet_v1
(inputs, dropout_keep_prob=0.8, bottleneck_layer_size=128, reuse=None, scope='InceptionResnetV1', mode='train', trainable_variables=None, **kwargs)¶ Creates the Inception Resnet V1 model.
- Parameters
inputs – 4-D tensor of size [batch_size, height, width, 3].
num_classes – number of predicted classes.
is_training – whether is training or not.
dropout_keep_prob (float) – the fraction to keep before final layer.
reuse – whether or not the network and its variables should be reused. To be able to reuse ‘scope’ must be given.
scope – Optional variable_scope.
trainable_variables (
list
) – List of variables to be trainable=True
- Returns
logits – the logits outputs of the model.
end_points – the set of end_points from the inception model.
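A minimal sketch (the input size is illustrative):

import tensorflow as tf
from bob.learn.tensorflow.network import inception_resnet_v1

images = tf.placeholder(tf.float32, [None, 160, 160, 3])
logits, end_points = inception_resnet_v1(images, mode=tf.estimator.ModeKeys.PREDICT)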
-
bob.learn.tensorflow.network.
inception_resnet_v1_batch_norm
(inputs, dropout_keep_prob=0.8, bottleneck_layer_size=128, reuse=None, scope='InceptionResnetV1', mode='train', trainable_variables=None, weight_decay=1e-05, **kwargs)¶ Creates the Inception Resnet V1 model applying batch norm to each Convolutional and FullyConnected layer.
- Parameters
inputs – 4-D tensor of size [batch_size, height, width, 3].
num_classes – number of predicted classes.
is_training – whether is training or not.
dropout_keep_prob (float) – the fraction to keep before final layer.
reuse – whether or not the network and its variables should be reused. To be able to reuse ‘scope’ must be given.
scope – Optional variable_scope.
trainable_variables (
list
) – List of variables to be trainable=True
- Returns
logits – the logits outputs of the model.
end_points – the set of end_points from the inception model.
-
bob.learn.tensorflow.network.
inception_resnet_v2
(inputs, dropout_keep_prob=0.8, bottleneck_layer_size=128, reuse=None, scope='InceptionResnetV2', mode='train', trainable_variables=None, **kwargs)¶ Creates the Inception Resnet V2 model.
- Parameters
inputs – 4-D tensor of size [batch_size, height, width, 3].
num_classes – number of predicted classes.
is_training – whether is training or not.
dropout_keep_prob (float) – the fraction to keep before final layer.
reuse – whether or not the network and its variables should be reused. To be able to reuse ‘scope’ must be given.
scope – Optional variable_scope.
trainable_variables (
list
) – List of variables to be trainable=True
- Returns
logits – the logits outputs of the model.
end_points – the set of end_points from the inception model.
-
bob.learn.tensorflow.network.
inception_resnet_v2_batch_norm
(inputs, dropout_keep_prob=0.8, bottleneck_layer_size=128, reuse=None, scope='InceptionResnetV2', mode='train', trainable_variables=None, weight_decay=5e-05, **kwargs)¶ Creates the Inception Resnet V2 model applying batch norm to each Convolutional and FullyConnected layer.
Parameters:
- inputs:
4-D tensor of size [batch_size, height, width, 3].
- num_classes:
number of predicted classes.
- is_training:
whether is training or not.
- dropout_keep_prob: float
the fraction to keep before final layer.
- reuse:
whether or not the network and its variables should be reused. To be able to reuse ‘scope’ must be given.
- scope:
Optional variable_scope.
- trainable_variables: list
List of variables to be trainable=True
Returns:
- logits:
the logits outputs of the model.
- end_points:
the set of end_points from the inception model.
-
bob.learn.tensorflow.network.
light_cnn9
(inputs, seed=10, reuse=False, trainable_variables=None, **kwargs)¶ Creates the graph for the Light CNN-9 in
Wu, Xiang, et al. “A light CNN for deep face representation with noisy labels.” arXiv preprint arXiv:1511.02683 (2015).
-
bob.learn.tensorflow.network.
mlp
(inputs, output_shape, hidden_layers=[10], hidden_activation=<function tanh>, output_activation=None, seed=10, **kwargs)¶ An MLP is a representation of a Multi-Layer Perceptron.
This implementation is feed-forward and fully-connected. The implementation allows setting a global and the output activation functions. References to fully-connected feed-forward networks: Bishop’s Pattern Recognition and Machine Learning, Chapter 5. Figure 5.1 shows what is programmed.
MLPs normally are multi-layered systems, with 1 or more hidden layers.
Parameters
- output_shape: number of neurons in the output.
- hidden_layers: list with one entry per hidden layer, where each element is the number of neurons in that layer.
- hidden_activation: Activation function of the hidden layers. Possible values can be seen here. If you set to None, the activation will be linear.
- output_activation: Activation of the output layer. If you set to None, the activation will be linear.
- seed:
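A minimal sketch (sizes are illustrative; returning a single output tensor is an assumption):

import tensorflow as tf
from bob.learn.tensorflow.network import mlp

inputs = tf.placeholder(tf.float32, [None, 784])
output = mlp(inputs, output_shape=10, hidden_layers=[50, 20])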
-
bob.learn.tensorflow.network.
mlp_with_batchnorm_and_dropout
(inputs, fully_connected_layers, mode='train', trainable_variables=None, **kwargs)[source]¶
-
bob.learn.tensorflow.network.
vgg_16
(inputs, reuse=None, mode='train', trainable_variables=None, scope='vgg_16', **kwargs)¶ Oxford Net VGG 16-Layers version E Example from tf-slim
https://raw.githubusercontent.com/tensorflow/models/master/research/slim/nets/vgg.py
Parameters:
inputs: a 4-D tensor of size [batch_size, height, width, 3].
- reuse: whether or not the network and its variables should be reused. To be able to reuse, ‘scope’ must be given.
- mode:
Estimator mode keys
-
bob.learn.tensorflow.network.
vgg_19
(inputs, reuse=None, mode='train', **kwargs)¶ Oxford Net VGG 19-Layers version E Example from tf-slim
https://raw.githubusercontent.com/tensorflow/models/master/research/slim/nets/vgg.py
Parameters:
inputs: a 4-D tensor of size [batch_size, height, width, 3].
- reuse: whether or not the network and its variables should be reused. To be able to reuse, ‘scope’ must be given.
- mode:
Estimator mode keys
The network using keras (same as new_architecture function below):
from tensorflow.python.keras import *
from tensorflow.python.keras.layers import *
simplecnn = Sequential([
Conv2D(32,(3,3),padding='same',use_bias=False, input_shape=(28,28,3)),
BatchNormalization(scale=False),
Activation('relu'),
MaxPool2D(padding='same'),
Conv2D(64,(3,3),padding='same',use_bias=False),
BatchNormalization(scale=False),
Activation('relu'),
MaxPool2D(padding='same'),
Flatten(),
Dense(1024, use_bias=False),
BatchNormalization(scale=False),
Activation('relu'),
Dropout(rate=0.4),
Dense(2),
])
simplecnn.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 28, 28, 32) 864
_________________________________________________________________
batch_normalization_1 (Batch (None, 28, 28, 32) 96
_________________________________________________________________
activation_1 (Activation) (None, 28, 28, 32) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 14, 14, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 14, 14, 64) 18432
_________________________________________________________________
batch_normalization_2 (Batch (None, 14, 14, 64) 192
_________________________________________________________________
activation_2 (Activation) (None, 14, 14, 64) 0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 7, 7, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 3136) 0
_________________________________________________________________
dense_1 (Dense) (None, 1024) 3211264
_________________________________________________________________
batch_normalization_3 (Batch (None, 1024) 3072
_________________________________________________________________
activation_3 (Activation) (None, 1024) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 1024) 0
_________________________________________________________________
dense_2 (Dense) (None, 2) 2050
=================================================================
Total params: 3,235,970
Trainable params: 3,233,730
Non-trainable params: 2,240
_________________________________________________________________
-
bob.learn.tensorflow.network.SimpleCNN.
architecture
(input_layer, mode='train', kernerl_size=(3, 3), n_classes=2, data_format='channels_last', reuse=False, add_batch_norm=False, trainable_variables=None, **kwargs)[source]¶
-
bob.learn.tensorflow.network.SimpleCNN.
base_architecture
(input_layer, mode='train', kernerl_size=(3, 3), data_format='channels_last', add_batch_norm=False, trainable_variables=None, use_bias_with_batch_norm=True, **kwargs)[source]¶
-
bob.learn.tensorflow.network.SimpleCNN.
create_conv_layer
(inputs, mode, data_format, endpoints, number, filters, kernel_size, pool_size, pool_strides, add_batch_norm=False, trainable_variables=None, use_bias_with_batch_norm=True)[source]¶
-
bob.learn.tensorflow.network.SimpleCNN.
model_fn
(features, labels, mode, params=None, config=None)[source]¶ Model function for CNN.
-
bob.learn.tensorflow.network.SimpleCNN.
new_architecture
(input_layer, mode='train', kernerl_size=(3, 3), data_format='channels_last', add_batch_norm=True, trainable_variables=None, use_bias_with_batch_norm=False, reuse=False, **kwargs)[source]¶
-
bob.learn.tensorflow.network.SimpleCNN.
slim_architecture
(input_layer, mode='train', kernerl_size=(3, 3), data_format='channels_last', add_batch_norm=True, trainable_variables=None, use_bias_with_batch_norm=False, reuse=False, **kwargs)[source]¶
-
bob.learn.tensorflow.utils.util.
compute_euclidean_distance
(x, y)[source]¶ Computes the euclidean distance between two tensorflow variables
-
bob.learn.tensorflow.utils.util.
create_mnist_tfrecord
(tfrecords_filename, data, labels, n_samples=6000)[source]¶
-
bob.learn.tensorflow.utils.util.
compute_eer
(data_train, labels_train, data_validation, labels_validation, n_classes)[source]¶
-
bob.learn.tensorflow.utils.util.
compute_accuracy
(data_train, labels_train, data_validation, labels_validation, n_classes)[source]¶
-
bob.learn.tensorflow.utils.util.
debug_embbeding
(image, architecture, embbeding_dim=2, feature_layer='fc3')[source]¶
-
bob.learn.tensorflow.utils.util.
pdist
(A)[source]¶ Compute a pairwise euclidean distance in the same fashion as in scipy.spatial.distance.pdist
-
bob.learn.tensorflow.utils.util.
predict_using_tensors
(embedding, labels, num=None)[source]¶ Compute the predictions through exhaustive comparisons between embeddings using tensors
-
bob.learn.tensorflow.utils.util.
compute_embedding_accuracy_tensors
(embedding, labels, num=None)[source]¶ Compute the accuracy in a closed-set
Parameters
- embeddings: tf.Tensor
Set of embeddings
- labels: tf.Tensor
Corresponding labels
-
bob.learn.tensorflow.utils.util.
compute_embedding_accuracy
(embedding, labels)[source]¶ Compute the accuracy in a closed-set
Parameters
- embeddings: numpy.array
Set of embeddings
- labels: numpy.array
Corresponding labels
-
bob.learn.tensorflow.utils.util.
get_available_gpus
()[source]¶ Returns the names of the GPU devices that are available.
- Returns
The names of available GPU devices.
- Return type
[str]
-
bob.learn.tensorflow.utils.util.
to_channels_last
(image)[source]¶ Converts the image to channel_last format. This is the same format as in matplotlib, skimage, and etc.
- Parameters
image (tf.Tensor) – At least a 3 dimensional image. If the dimension is more than 3, the last 3 dimensions are assumed to be [C, H, W].
- Returns
image – The image in […, H, W, C] format.
- Return type
tf.Tensor
- Raises
ValueError – If dim of image is less than 3.
-
bob.learn.tensorflow.utils.util.
to_channels_first
(image)[source]¶ Converts the image to channel_first format. This is the same format as in bob.io.image and bob.io.video.
- Parameters
image (tf.Tensor) – At least a 3 dimensional image. If the dimension is more than 3, the last 3 dimensions are assumed to be [H, W, C].
- Returns
image – The image in […, C, H, W] format.
- Return type
tf.Tensor
- Raises
ValueError – If dim of image is less than 3.
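A small sketch showing both conversions round-tripping a bob-style image:

import tensorflow as tf
from bob.learn.tensorflow.utils.util import to_channels_last, to_channels_first

bob_image = tf.zeros([3, 112, 112])     # [C, H, W], as in bob.io.image
tf_image = to_channels_last(bob_image)  # shape [112, 112, 3]
back = to_channels_first(tf_image)      # shape [3, 112, 112]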
-
bob.learn.tensorflow.utils.util.
to_skimage
(image)¶ Converts the image to channel_last format. This is the same format as in matplotlib, skimage, and etc.
- Parameters
image (tf.Tensor) – At least a 3 dimensional image. If the dimension is more than 3, the last 3 dimensions are assumed to be [C, H, W].
- Returns
image – The image in […, H, W, C] format.
- Return type
tf.Tensor
- Raises
ValueError – If dim of image is less than 3.
-
bob.learn.tensorflow.utils.util.
to_matplotlib
(image)¶ Converts the image to channel_last format. This is the same format as in matplotlib, skimage, and etc.
- Parameters
image (tf.Tensor) – At least a 3 dimensional image. If the dimension is more than 3, the last 3 dimensions are assumed to be [C, H, W].
- Returns
image – The image in […, H, W, C] format.
- Return type
tf.Tensor
- Raises
ValueError – If dim of image is less than 3.
-
bob.learn.tensorflow.utils.util.
to_bob
(image)¶ Converts the image to channel_first format. This is the same format as in bob.io.image and bob.io.video.
- Parameters
image (tf.Tensor) – At least a 3 dimensional image. If the dimension is more than 3, the last 3 dimensions are assumed to be [H, W, C].
- Returns
image – The image in […, C, H, W] format.
- Return type
tf.Tensor
- Raises
ValueError – If dim of image is less than 3.
-
bob.learn.tensorflow.utils.util.
bytes2human
(n, format='%(value).1f %(symbol)s', symbols='customary')[source]¶ Convert n bytes into a human readable string based on format. From: https://code.activestate.com/recipes/578019-bytes-to-human-human-to-bytes-converter/ Author: Giampaolo Rodola’ <g.rodola [AT] gmail [DOT] com> License: MIT. symbols can be either “customary”, “customary_ext”, “iec” or “iec_ext”, see: http://goo.gl/kTQMs
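For instance (the expected output is an assumption following the ActiveState recipe this is based on):
>>> from bob.learn.tensorflow.utils.util import bytes2human
>>> bytes2human(10000)
'9.8 K'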
-
bob.learn.tensorflow.utils.util.
random_choice_no_replacement
(one_dim_input, num_indices_to_drop=3, sort=False)[source]¶ Similar to np.random.choice with no replacement. Code from https://stackoverflow.com/a/54755281/1286165
-
bob.learn.tensorflow.style_transfer.
compute_features
(input_image, architecture, checkpoint_dir, target_end_points, preprocess_fn=None)[source]¶ For a given set of end_points, convolve the input image up to those points
- Parameters
input_image (
numpy.array
) – Input image in the format WxHxCarchitecture – Pointer to the architecture function
checkpoint_dir (str) – DCNN checkpoint directory
end_points (dict) – Dictionary containing the end point tensors
preprocess_fn – Pointer to a preprocess function
-
bob.learn.tensorflow.style_transfer.
compute_gram
(features)[source]¶ Given a list of features (as numpy.arrays), compute the Gram matrix of each, pinning the channel, as in:
Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. “A neural algorithm of artistic style.” arXiv preprint arXiv:1508.06576 (2015).
- Parameters
features (
numpy.array
) – Convolved features in the format NxWxHxC
-
bob.learn.tensorflow.style_transfer.
do_style_transfer
(content_image, style_images, architecture, checkpoint_dir, scopes, content_end_points, style_end_points, preprocess_fn=None, un_preprocess_fn=None, pure_noise=False, iterations=1000, learning_rate=0.1, content_weight=5.0, style_weight=500.0, denoise_weight=500.0, start_from='noise')[source]¶ Trains neural style transfer using the approach presented in:
Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. “A neural algorithm of artistic style.” arXiv preprint arXiv:1508.06576 (2015).
- Parameters
content_image (
numpy.array
) – Content image in the Bob format (C x W x H)style_images (
list
) – List of numpy.array (Bob format (C x W x H)) that encodes the stylearchitecture – Point to a function with the base architecture
checkpoint_dir – CNN checkpoint path
scopes – Dictionary containing the mapping of scopes
content_end_points – List of end_points (from the architecture) used to encode the content
style_end_points – List of end_points (from the architecture) used to encode the style
preprocess_fn – Preprocess function. Pointer to a function that preprocess the INPUT signal
un_preprocess_fn – Un-preprocess function. Pointer to a function that un-preprocesses the OUTPUT signal
pure_noise – If set will save the raw noisy generated image. If not set, the output will be RGB = stylizedYUV.Y, originalYUV.U, originalYUV.V
iterations – Number of iterations to generate the image
learning_rate – Adam learning rate
content_weight – Weight of the content loss
style_weight – Weight of the style loss
denoise_weight – Weight of the denoising loss
-
class
bob.learn.tensorflow.loss.
CenterLoss
(n_classes, n_features, alpha=0.9, name='center_loss', **kwargs)[source]¶ Bases:
object
Center loss.
-
property
update_ops
¶
-
class
bob.learn.tensorflow.loss.
PixelWise
(balance_weights=True, n_one_hot_labels=None, label_smoothing=0.5, **kwargs)¶ Bases:
object
A pixel wise loss which is just a cross entropy loss but applied to all pixels
-
class
bob.learn.tensorflow.loss.
VATLoss
(epsilon=8.0, xi=1e-06, num_power_iterations=1, method='vatent', **kwargs)¶ Bases:
object
A class to hold parameters for Virtual Adversarial Training (VAT) Loss and perform it.
-
method
¶ The method for calculating the loss: vatent for VAT loss + entropy, and vat for only VAT loss.
-
-
bob.learn.tensorflow.loss.
balanced_sigmoid_cross_entropy_loss_weights
(labels, dtype='float32')[source]¶ Computes weights that normalizes your loss per class.
Labels must be a batch of binary labels. The function takes the labels and computes the weights per batch. Weights will be smaller for the class that has more samples in this batch. This is useful if you have unbalanced classes in your dataset or batch.
- Parameters
labels (
tf.Tensor
) – Labels of your current input. The shape must be [batch_size] and values must be either 0 or 1.dtype (
tf.dtype
) – The dtype that weights will have. It should be float. Best is to provide logits.dtype as input.
- Returns
Computed weights that will cancel your dataset imbalance per batch.
- Return type
tf.Tensor
Examples
>>> import numpy
>>> import tensorflow as tf
>>> from bob.learn.tensorflow.loss import balanced_sigmoid_cross_entropy_loss_weights
>>> labels = numpy.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0,
...                       1, 1, 0, 1, 1, 1, 0, 1, 0, 1], dtype="int32")
>>> sum(labels), len(labels)
(20, 32)
>>> session = tf.Session()  # Eager execution is also possible, check https://www.tensorflow.org/guide/eager
>>> session.run(balanced_sigmoid_cross_entropy_loss_weights(labels, dtype='float32'))
array([0.8      , 0.8      , 1.3333334, 1.3333334, 1.3333334, 0.8      ,
       0.8      , 1.3333334, 0.8      , 0.8      , 0.8      , 0.8      ,
       0.8      , 0.8      , 1.3333334, 0.8      , 1.3333334, 0.8      ,
       1.3333334, 1.3333334, 0.8      , 1.3333334, 0.8      , 0.8      ,
       1.3333334, 0.8      , 0.8      , 0.8      , 1.3333334, 0.8      ,
       1.3333334, 0.8      ], dtype=float32)
You would use it like this:
>>> #weights = balanced_sigmoid_cross_entropy_loss_weights(labels, dtype=logits.dtype)
>>> #loss = tf.losses.sigmoid_cross_entropy(logits=logits, labels=labels, weights=weights)
-
bob.learn.tensorflow.loss.
balanced_softmax_cross_entropy_loss_weights
(labels, dtype='float32')[source]¶ Computes weights that normalizes your loss per class.
Labels must be a batch of one-hot encoded labels. The function takes the labels and computes the weights per batch. Weights will be smaller for classes that have more samples in this batch. This is useful if you have unbalanced classes in your dataset or batch.
- Parameters
labels (
tf.Tensor
) – Labels of your current input. The shape must be [batch_size, n_classes]. If your labels are not one-hot encoded, you can usetf.one_hot
to convert them first before giving them to this function.dtype (
tf.dtype
) – The dtype that weights will have. It should be float. Best is to provide logits.dtype as input.
- Returns
Computed weights that will cancel your dataset imbalance per batch.
- Return type
tf.Tensor
Examples
>>> import numpy
>>> import tensorflow as tf
>>> from bob.learn.tensorflow.loss import balanced_softmax_cross_entropy_loss_weights
>>> labels = numpy.array([[1, 0, 0],
...                       [1, 0, 0],
...                       [0, 0, 1],
...                       [0, 1, 0],
...                       [0, 0, 1],
...                       [1, 0, 0],
...                       [1, 0, 0],
...                       [0, 0, 1],
...                       [1, 0, 0],
...                       [1, 0, 0],
...                       [1, 0, 0],
...                       [1, 0, 0],
...                       [1, 0, 0],
...                       [1, 0, 0],
...                       [0, 1, 0],
...                       [1, 0, 0],
...                       [0, 1, 0],
...                       [1, 0, 0],
...                       [0, 0, 1],
...                       [0, 0, 1],
...                       [1, 0, 0],
...                       [0, 0, 1],
...                       [1, 0, 0],
...                       [1, 0, 0],
...                       [0, 1, 0],
...                       [1, 0, 0],
...                       [1, 0, 0],
...                       [1, 0, 0],
...                       [0, 1, 0],
...                       [1, 0, 0],
...                       [0, 0, 1],
...                       [1, 0, 0]], dtype="int32")
>>> session = tf.Session()  # Eager execution is also possible, check https://www.tensorflow.org/guide/eager
>>> session.run(tf.reduce_sum(labels, axis=0))
array([20,  5,  7], dtype=int32)
>>> session.run(balanced_softmax_cross_entropy_loss_weights(labels, dtype='float32'))
array([0.53333336, 0.53333336, 1.5238096 , 2.1333334 , 1.5238096 ,
       0.53333336, 0.53333336, 1.5238096 , 0.53333336, 0.53333336,
       0.53333336, 0.53333336, 0.53333336, 0.53333336, 2.1333334 ,
       0.53333336, 2.1333334 , 0.53333336, 1.5238096 , 1.5238096 ,
       0.53333336, 1.5238096 , 0.53333336, 0.53333336, 2.1333334 ,
       0.53333336, 0.53333336, 0.53333336, 2.1333334 , 0.53333336,
       1.5238096 , 0.53333336], dtype=float32)
You would use it like this:
>>> #weights = balanced_softmax_cross_entropy_loss_weights(labels, dtype=logits.dtype)
>>> #loss = tf.losses.softmax_cross_entropy(logits=logits, labels=labels, weights=weights)
-
bob.learn.tensorflow.loss.
content_loss
(noises, content_features)[source]¶ Implements the content loss from:
Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. “A neural algorithm of artistic style.” arXiv preprint arXiv:1508.06576 (2015).
For a given noise signal \(n\), content image \(c\) and convolved with the DCNN \(\phi\) until the layer \(l\) the content loss is defined as:
\(L(n,c) = \sum_{l=?}^{?}({\phi^l(n) - \phi^l(c)})^2\)
-
bob.learn.tensorflow.loss.
contrastive_loss
(left_embedding, right_embedding, labels, contrastive_margin=2.0)¶ Compute the contrastive loss as in
http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
\(L = 0.5 * (1-Y) * D^2 + 0.5 * (Y) * {max(0, margin - D)}^2\)
where 0 is assigned to pairs from the same class and 1 to pairs from different classes.
Parameters
- left_embedding: First element of the pair
- right_embedding: Second element of the pair
- labels: Label of the pair (0 or 1)
- contrastive_margin: Contrastive margin
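A minimal sketch (embedding sizes are illustrative; the labels dtype may need to match the implementation's expectations):

import tensorflow as tf
from bob.learn.tensorflow.loss import contrastive_loss

left = tf.random.normal([8, 128])
right = tf.random.normal([8, 128])
labels = tf.constant([0., 1., 0., 1., 0., 1., 0., 1.])  # 0: same class, 1: different
loss = contrastive_loss(left, right, labels, contrastive_margin=2.0)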
-
bob.learn.tensorflow.loss.
denoising_loss
(noise)[source]¶ Computes the denoising loss as in:
Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. “A neural algorithm of artistic style.” arXiv preprint arXiv:1508.06576 (2015).
- Parameters
noise – Input noise
-
bob.learn.tensorflow.loss.
linear_gram_style_loss
(noises, gram_style_features)[source]¶ Implements the style loss from:
Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. “A neural algorithm of artistic style.” arXiv preprint arXiv:1508.06576 (2015).
For a given noise signal \(n\), content image \(c\) and convolved with the DCNN \(\phi\) until the layer \(l\) the STYLE loss is defined as
\(L(n,c) = \sum_{l=?}^{?}\frac{({\phi^l(n)^T*\phi^l(n) - \phi^l(c)^T*\phi^l(c)})^2}{N*M}\)
-
bob.learn.tensorflow.loss.
mean_cross_entropy_center_loss
(logits, prelogits, labels, n_classes, alpha=0.9, factor=0.01)¶ Implementation of the CrossEntropy + Center Loss from the paper “A Discriminative Feature Learning Approach for Deep Face Recognition”(http://ydwen.github.io/papers/WenECCV16.pdf)
- Parameters
- logits:
- prelogits:
- labels:
- n_classes: Number of classes of your task
- alpha: Alpha factor ((1-alpha)*centers-prelogits)
- factor: Weight factor of the center loss
-
bob.learn.tensorflow.loss.
mean_cross_entropy_loss
(logits, labels, add_regularization_losses=True)¶ Simple CrossEntropy loss. Basically it wraps the function tf.nn.sparse_softmax_cross_entropy_with_logits.
- Parameters
- logits:
- labels:
- add_regularization_losses: Regularize the loss?
-
bob.learn.tensorflow.loss.
mmd
(x, y)[source]¶ Maximum Mean Discrepancy with Gaussian kernel. See: https://stats.stackexchange.com/a/276618/49433
-
bob.learn.tensorflow.loss.
total_pairwise_confusion
(prelogits, name=None)[source]¶ Total Pairwise Confusion Loss
[1]X. Tu et al., “Learning Generalizable and Identity-Discriminative Representations for Face Anti-Spoofing,” arXiv preprint arXiv:1901.05602, 2019.
-
bob.learn.tensorflow.loss.
triplet_average_loss
(anchor_embedding, positive_embedding, negative_embedding, margin=5.0)¶ Compute the triplet loss as in
Schroff, Florian, Dmitry Kalenichenko, and James Philbin. “Facenet: A unified embedding for face recognition and clustering.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
\(L = \sum(\|f_a - f_p\|^2 - \|f_a - f_n\|^2 + \lambda)\)
Parameters
- anchor_embedding: Embedding of the anchor samples
- positive_embedding: Embedding of the positive samples (same class as the anchor)
- negative_embedding: Embedding of the negative samples (different class)
- margin: Triplet margin
-
bob.learn.tensorflow.loss.
triplet_fisher_loss
(anchor_embedding, positive_embedding, negative_embedding)¶
-
bob.learn.tensorflow.loss.
triplet_loss
(anchor_embedding, positive_embedding, negative_embedding, margin=5.0)¶ Compute the triplet loss as in
Schroff, Florian, Dmitry Kalenichenko, and James Philbin. “Facenet: A unified embedding for face recognition and clustering.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
\(L = \sum(\|f_a - f_p\|^2 - \|f_a - f_n\|^2 + \lambda)\)
Parameters
- anchor_embedding: Embedding of the anchor samples
- positive_embedding: Embedding of the positive samples (same class as the anchor)
- negative_embedding: Embedding of the negative samples (different class)
- margin: Triplet margin
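A minimal sketch (embedding sizes are illustrative):

import tensorflow as tf
from bob.learn.tensorflow.loss import triplet_loss

anchor = tf.random.normal([8, 128])
positive = tf.random.normal([8, 128])
negative = tf.random.normal([8, 128])
loss = triplet_loss(anchor, positive, negative, margin=5.0)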