Python API

Models

bob.learn.tensorflow.models.AlexNet_simplified([name])

A simplified implementation of AlexNet.

bob.learn.tensorflow.models.DeepPixBiS(…)

bob.learn.tensorflow.models.DenseNet(*args, …)

Creates the DenseNet architecture.

bob.learn.tensorflow.models.densenet161([…])

bob.learn.tensorflow.models.MineModel(*args, …)

If is_mine_f is True, implements MINE-F (equation 6); otherwise implements equation 5.

Data

bob.learn.tensorflow.data.dataset_using_generator(…)

Wraps samples so that they can be used with tf.data.Dataset.from_generator and returns a tf.data.Dataset.

bob.learn.tensorflow.data.dataset_to_tfrecord(…)

Writes a tf.data.Dataset into a TFRecord file.

bob.learn.tensorflow.data.dataset_from_tfrecord(…)

Reads TFRecords and returns a dataset.

Losses

bob.learn.tensorflow.losses.CenterLossLayer(…)

A layer to be added to the model if you want to use CenterLoss.

bob.learn.tensorflow.losses.CenterLoss(…)

Center loss.

bob.learn.tensorflow.losses.PixelwiseBinaryCrossentropy([…])

A pixel-wise loss which is just a cross-entropy loss applied to all pixels.

bob.learn.tensorflow.losses.balanced_sigmoid_cross_entropy_loss_weights(labels)

Computes weights that normalize your loss per class.

bob.learn.tensorflow.losses.balanced_softmax_cross_entropy_loss_weights(labels)

Computes weights that normalize your loss per class.

Image Utilities

bob.learn.tensorflow.utils.image.to_channels_last(image)

Converts the image to channels_last format.

bob.learn.tensorflow.utils.image.to_channels_first(image)

Converts the image to channels_first format.

bob.learn.tensorflow.utils.image.blocks_tensorflow(…)

Return all non-overlapping blocks of an image using tensorflow operations.

bob.learn.tensorflow.utils.image.tf_repeat(…)

Repeats a tensor along its dimensions; the output has shape tensor.shape * repeats.

bob.learn.tensorflow.utils.image.all_patches(…)

Extracts all patches of an image

Keras Utilities

bob.learn.tensorflow.utils.keras.SequentialLayer(…)

A Layer that does the same thing as tf.keras.Sequential but its variables can be scoped.

bob.learn.tensorflow.utils.keras.keras_channels_index()

bob.learn.tensorflow.utils.keras.keras_model_weights_as_initializers_for_variables(model)

Changes the initialization operations of variables in the model to take the current value as the initial values.

bob.learn.tensorflow.utils.keras.restore_model_variables_from_checkpoint(…)

bob.learn.tensorflow.utils.keras.initialize_model_from_checkpoint(…)

bob.learn.tensorflow.utils.keras.model_summary(model)

Math Utilities

bob.learn.tensorflow.utils.math.gram_matrix(…)

Computes the gram matrix.

bob.learn.tensorflow.utils.math.upper_triangle_and_diagonal(A)

Returns a flat version of upper triangle of a 2D array (including diagonal).

bob.learn.tensorflow.utils.math.upper_triangle(A)

bob.learn.tensorflow.utils.math.pdist(A[, …])

bob.learn.tensorflow.utils.math.cdist(A, B)

bob.learn.tensorflow.utils.math.random_choice_no_replacement(…)

Similar to np.random.choice with no replacement.

Detailed Information

bob.learn.tensorflow.get_config()[source]

Returns a string containing the configuration information.

class bob.learn.tensorflow.data.Generator(samples, reader, multiple_samples=False, shuffle_on_epoch_end=False, **kwargs)

Bases: object

A generator class which wraps samples so that they can be used with tf.data.Dataset.from_generator

epoch (int) – The number of epochs that have been passed so far.

multiple_samples (bool, optional) – If True, it assumes that the bio database's samples actually contain multiple samples. This is useful when, for example, you want to treat video databases as image databases.

reader (object, optional) – A callable with the signature data, label, key = reader(sample) which takes a sample and loads it.

samples ([object]) – A list of samples to be given to reader to load the data.

shuffle_on_epoch_end (bool, optional) – If True, shuffles the samples at the end of each epoch.

property output_shapes

The shapes of the returned samples

property output_types

The types of the returned samples
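
Example

A minimal wiring sketch (not part of the original documentation; the sample list and reader are made up, and passing the Generator instance directly to tf.data.Dataset.from_generator is an assumption based on the class summary):

>>> import numpy as np
>>> import tensorflow as tf
>>> from bob.learn.tensorflow.data import Generator
>>> def reader(sample):
...     # load your real data here; zeros are used as a stand-in
...     data = np.zeros((4, 4), dtype="float32")
...     label = np.int32(0)
...     return data, label, sample
>>> generator = Generator(samples=["sample-1", "sample-2"], reader=reader)
>>> dataset = tf.data.Dataset.from_generator(
...     generator,
...     output_types=generator.output_types,
...     output_shapes=generator.output_shapes)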

bob.learn.tensorflow.data.dataset_from_tfrecord(tfrecord, num_parallel_reads=None)[source]

Reads TFRecords and returns a dataset. The TFRecord file must have been created using the dataset_to_tfrecord function.

Parameters
  • tfrecord (str or list) – Path to the TFRecord file. Pass a list if you are sure several tfrecords need the same map function.

  • num_parallel_reads (int) – A tf.int64 scalar representing the number of files to read in parallel. Defaults to reading files sequentially.

Returns

A dataset that contains the data from the TFRecord file.

Return type

tf.data.Dataset

bob.learn.tensorflow.data.dataset_to_tfrecord(dataset, output)[source]

Writes a tf.data.Dataset into a TFRecord file.

Parameters
  • dataset (tf.data.Dataset) – The tf.data.Dataset that you want to write into a TFRecord file.

  • output (str) – Path to the TFRecord file. Besides this file, a .json file is also created. This json file is needed when you want to convert the TFRecord file back into a dataset.

Returns

A tf.Operation that, when run, writes contents of dataset to a file. When running in eager mode, calling this function will write the file. Otherwise, you have to call session.run() on the returned operation.

Return type

tf.Operation
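
Example

A write-then-read sketch (illustrative; the dataset content and the output path are made up):

>>> import tensorflow as tf
>>> from bob.learn.tensorflow.data import dataset_to_tfrecord, dataset_from_tfrecord
>>> dataset = tf.data.Dataset.from_tensor_slices(tf.range(5))
>>> op = dataset_to_tfrecord(dataset, "/tmp/example.tfrecord")  # in eager mode the file (and its .json companion) is written here
>>> restored = dataset_from_tfrecord("/tmp/example.tfrecord")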

bob.learn.tensorflow.data.dataset_using_generator(samples, reader, **kwargs)[source]

Wraps samples in a Generator so that they can be used with tf.data.Dataset.from_generator and returns the resulting dataset.

Parameters
  • samples ([object]) – A list of samples to be given to reader to load the data.

  • reader (object, optional) – A callable with the signature of data, label, key = reader(sample) which takes a sample and loads it.

  • **kwargs – Extra keyword arguments are passed to Generator

Returns

A tf.data.Dataset

Return type

object
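
Example

A minimal sketch (illustrative; the samples and reader are made up, and the extra keyword argument is assumed to be forwarded to Generator as documented):

>>> import numpy as np
>>> from bob.learn.tensorflow.data import dataset_using_generator
>>> samples = ["sample-1", "sample-2"]
>>> def reader(sample):
...     data = np.ones((2, 2), dtype="float32")
...     label = np.int32(1)
...     return data, label, sample
>>> dataset = dataset_using_generator(samples, reader, shuffle_on_epoch_end=True)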

class bob.learn.tensorflow.losses.CenterLoss(centers_layer, alpha=0.9, update_centers=True, name='center_loss', **kwargs)

Bases: tensorflow.python.keras.losses.Loss

Center loss. Introduced in: A Discriminative Feature Learning Approach for Deep Face Recognition https://ydwen.github.io/papers/WenECCV16.pdf

Warning

This loss MUST NOT be called during evaluation, as it will update the centers! It only works with sparse labels and must be used with a CenterLossLayer embedded in the model.

alpha (float) – The moving average coefficient for updating centers in each batch.

centers – The variable that keeps track of the centers.

centers_layer – The layer that keeps track of the centers.

update_centers (bool) – Whether to update the centers during training.

call(sparse_labels, prelogits)[source]

Invokes the Loss instance.

Parameters
  • y_true – Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]

  • y_pred – The predicted values. shape = [batch_size, d0, .. dN]

Returns

Loss values with the shape [batch_size, d0, .. dN-1].

class bob.learn.tensorflow.losses.CenterLossLayer(*args, **kwargs)

Bases: tensorflow.python.keras.engine.base_layer.Layer

A layer to be added to the model if you want to use CenterLoss.

centers – The variable that keeps track of centers.

n_classes (int) – Number of classes of the task.

n_features (int) – The size of prelogits.

call(x)[source]

This is where the layer’s logic lives.

Note that the call() method in tf.keras is slightly different from the Keras API: in the Keras API, you can pass masking support for layers as additional arguments, whereas tf.keras has a compute_mask() method to support masking.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments. Currently unused.

Returns

A tensor or list/tuple of tensors.

get_config()[source]

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.
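
Example

A sketch of how CenterLossLayer and CenterLoss fit together (illustrative; the CenterLossLayer keyword arguments are assumed from the attributes listed above, and the shapes are made up):

>>> from bob.learn.tensorflow.losses import CenterLoss, CenterLossLayer
>>> centers_layer = CenterLossLayer(n_classes=10, n_features=128)  # assumed constructor kwargs
>>> center_loss = CenterLoss(centers_layer=centers_layer, alpha=0.9)
>>> # embed centers_layer in your model so that its centers variable is created and tracked,
>>> # then, during training only, compute:
>>> # loss_value = center_loss(sparse_labels, prelogits)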

class bob.learn.tensorflow.losses.PixelwiseBinaryCrossentropy(balance_weights=True, label_smoothing=0.5, name='pixel_wise_binary_cross_entropy', **kwargs)

Bases: tensorflow.python.keras.losses.Loss

A pixel-wise loss which is just a cross-entropy loss applied to all pixels. Appeared in:

@inproceedings{GeorgeICB2019,
    author = {Anjith George, Sebastien Marcel},
    title = {Deep Pixel-wise Binary Supervision for Face Presentation Attack Detection},
    year = {2019},
    booktitle = {ICB 2019},
}
call(labels, logits)[source]

Invokes the Loss instance.

Parameters
  • y_true – Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]

  • y_pred – The predicted values. shape = [batch_size, d0, .. dN]

Returns

Loss values with the shape [batch_size, d0, .. dN-1].
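
Example

A minimal sketch (illustrative; the per-pixel label and logit shapes are made up):

>>> import tensorflow as tf
>>> from bob.learn.tensorflow.losses import PixelwiseBinaryCrossentropy
>>> pixel_loss = PixelwiseBinaryCrossentropy(balance_weights=False)
>>> labels = tf.zeros((8, 14, 14))   # per-pixel binary ground truth
>>> logits = tf.zeros((8, 14, 14))   # per-pixel predictions (logits)
>>> loss_value = pixel_loss(labels, logits)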

bob.learn.tensorflow.losses.balanced_sigmoid_cross_entropy_loss_weights(labels, dtype='float32')[source]

Computes weights that normalizes your loss per class.

Labels must be a batch of binary labels. The function takes the labels and computes the weights per batch. Weights will be smaller for the class that has more samples in this batch. This is useful if you have unbalanced classes in your dataset or batch.

Parameters
  • labels (tf.Tensor) – Labels of your current input. The shape must be [batch_size] and values must be either 0 or 1.

  • dtype (tf.dtype) – The dtype that the weights will have. It should be a float dtype; it is best to provide logits.dtype as input.

Returns

Computed weights that will cancel your dataset imbalance per batch.

Return type

tf.Tensor

Examples

>>> import numpy
>>> import tensorflow as tf
>>> from bob.learn.tensorflow.losses import balanced_sigmoid_cross_entropy_loss_weights
>>> labels = numpy.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0,
...                 1, 1, 0, 1, 1, 1, 0, 1, 0, 1], dtype="int32")
>>> sum(labels), len(labels)
(20, 32)
>>> balanced_sigmoid_cross_entropy_loss_weights(labels, dtype='float32').numpy()
array([0.8      , 0.8      , 1.3333334, 1.3333334, 1.3333334, 0.8      ,
       0.8      , 1.3333334, 0.8      , 0.8      , 0.8      , 0.8      ,
       0.8      , 0.8      , 1.3333334, 0.8      , 1.3333334, 0.8      ,
       1.3333334, 1.3333334, 0.8      , 1.3333334, 0.8      , 0.8      ,
       1.3333334, 0.8      , 0.8      , 0.8      , 1.3333334, 0.8      ,
       1.3333334, 0.8      ], dtype=float32)

You would use it like this:

>>> #weights = balanced_sigmoid_cross_entropy_loss_weights(labels, dtype=logits.dtype)
>>> #loss = tf.losses.sigmoid_cross_entropy(logits=logits, labels=labels, weights=weights)
bob.learn.tensorflow.losses.balanced_softmax_cross_entropy_loss_weights(labels, dtype='float32')[source]

Computes weights that normalizes your loss per class.

Labels must be a batch of one-hot encoded labels. The function takes the labels and computes the weights per batch. Weights will be smaller for classes that have more samples in this batch. This is useful if you have unbalanced classes in your dataset or batch.

Parameters
  • labels (tf.Tensor) – Labels of your current input. The shape must be [batch_size, n_classes]. If your labels are not one-hot encoded, you can use tf.one_hot to convert them first before giving them to this function.

  • dtype (tf.dtype) – The dtype that the weights will have. It should be a float dtype; it is best to provide logits.dtype as input.

Returns

Computed weights that will cancel your dataset imbalance per batch.

Return type

tf.Tensor

Examples

>>> import numpy
>>> import tensorflow as tf
>>> from bob.learn.tensorflow.losses import balanced_softmax_cross_entropy_loss_weights
>>> labels = numpy.array([[1, 0, 0],
...                 [1, 0, 0],
...                 [0, 0, 1],
...                 [0, 1, 0],
...                 [0, 0, 1],
...                 [1, 0, 0],
...                 [1, 0, 0],
...                 [0, 0, 1],
...                 [1, 0, 0],
...                 [1, 0, 0],
...                 [1, 0, 0],
...                 [1, 0, 0],
...                 [1, 0, 0],
...                 [1, 0, 0],
...                 [0, 1, 0],
...                 [1, 0, 0],
...                 [0, 1, 0],
...                 [1, 0, 0],
...                 [0, 0, 1],
...                 [0, 0, 1],
...                 [1, 0, 0],
...                 [0, 0, 1],
...                 [1, 0, 0],
...                 [1, 0, 0],
...                 [0, 1, 0],
...                 [1, 0, 0],
...                 [1, 0, 0],
...                 [1, 0, 0],
...                 [0, 1, 0],
...                 [1, 0, 0],
...                 [0, 0, 1],
...                 [1, 0, 0]], dtype="int32")
>>> tf.reduce_sum(labels, axis=0).numpy()
array([20,  5,  7], dtype=int32)
>>> balanced_softmax_cross_entropy_loss_weights(labels, dtype='float32').numpy()
array([0.53333336, 0.53333336, 1.5238096 , 2.1333334 , 1.5238096 ,
       0.53333336, 0.53333336, 1.5238096 , 0.53333336, 0.53333336,
       0.53333336, 0.53333336, 0.53333336, 0.53333336, 2.1333334 ,
       0.53333336, 2.1333334 , 0.53333336, 1.5238096 , 1.5238096 ,
       0.53333336, 1.5238096 , 0.53333336, 0.53333336, 2.1333334 ,
       0.53333336, 0.53333336, 0.53333336, 2.1333334 , 0.53333336,
       1.5238096 , 0.53333336], dtype=float32)

You would use it like this:

>>> #weights = balanced_softmax_cross_entropy_loss_weights(labels, dtype=logits.dtype)
>>> #loss = tf.keras.losses.categorical_crossentropy(y_true=labels, y_pred=logits) * weights
bob.learn.tensorflow.models.AlexNet_simplified(name='AlexNet', **kwargs)

A simplified implementation of AlexNet presented in: Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
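
Example

A minimal instantiation sketch (illustrative; assumes the returned object is a regular tf.keras model):

>>> from bob.learn.tensorflow.models import AlexNet_simplified
>>> model = AlexNet_simplified(name="AlexNet")
>>> # model.summary()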

class bob.learn.tensorflow.models.ArcFaceLayer(*args, **kwargs)

Bases: tensorflow.python.keras.engine.base_layer.Layer

Implements the ArcFace from equation (3) of ArcFace: Additive Angular Margin Loss for Deep Face Recognition

Defined as:

\(s \cos(\theta_i + m)\)

Parameters
  • n_classes (int) – Number of classes

  • m (float) – Margin

  • s (int) – Scale

build(input_shape)[source]

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Parameters

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(X, y, training=None)[source]

This is where the layer’s logic lives.

Note that the call() method in tf.keras is slightly different from the Keras API: in the Keras API, you can pass masking support for layers as additional arguments, whereas tf.keras has a compute_mask() method to support masking.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments. Currently unused.

Returns

A tensor or list/tuple of tensors.
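
Example

A usage sketch (illustrative; the embedding size, number of classes, margin, scale, and the sparse integer label encoding are all assumptions):

>>> import tensorflow as tf
>>> from bob.learn.tensorflow.models import ArcFaceLayer
>>> arcface = ArcFaceLayer(n_classes=10, m=0.5, s=30)
>>> embeddings = tf.random.normal((8, 128))
>>> labels = tf.random.uniform((8,), maxval=10, dtype=tf.int32)
>>> logits = arcface(embeddings, labels, training=True)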

class bob.learn.tensorflow.models.ArcFaceLayer3Penalties(*args, **kwargs)

Bases: tensorflow.python.keras.engine.base_layer.Layer

Implements the ArcFace loss from equation (4) of ArcFace: Additive Angular Margin Loss for Deep Face Recognition

Defined as:

\(s(\cos(m_1\theta_i + m_2) - m_3)\)

build(input_shape)[source]

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Parameters

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(X, y, training=None)[source]

This is where the layer’s logic lives.

Note that the call() method in tf.keras is slightly different from the Keras API: in the Keras API, you can pass masking support for layers as additional arguments, whereas tf.keras has a compute_mask() method to support masking.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments. Currently unused.

Returns

A tensor or list/tuple of tensors.

class bob.learn.tensorflow.models.ArcFaceModel(*args, **kwargs)

Bases: bob.learn.tensorflow.models.EmbeddingValidation

test_step(data)[source]

Test Step

train_step(data)[source]

Train Step

class bob.learn.tensorflow.models.DeepPixBiS(*args, **kwargs)

Bases: tensorflow.python.keras.engine.training.Model

call(x, training=None)[source]

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Parameters
  • inputs – A tensor or list of tensors.

  • training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

  • mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns

A tensor if there is a single output, or a list of tensors if there are more than one outputs.

class bob.learn.tensorflow.models.DenseNet(*args, **kwargs)

Bases: tensorflow.python.keras.engine.training.Model

Creates the DenseNet architecture.

Parameters
  • depth_of_model – number of layers in the model.

  • growth_rate – number of filters to add per conv block.

  • num_of_blocks – number of dense blocks.

  • output_classes – number of output classes.

  • num_layers_in_each_block – number of layers in each block. If -1, then we calculate this by (depth-3)/4. If a positive integer, then it is used as the number of layers per block. If a list or tuple, then it is used directly.

  • data_format – “channels_first” or “channels_last”

  • bottleneck – boolean, to decide which part of conv block to call.

  • compression – factor for reducing the number of inputs (filters) to the transition block.

  • weight_decay – weight decay

  • rate – dropout rate.

  • pool_initial – If True, add a 7x7 conv with stride 2 followed by a 3x3 max-pool; otherwise, do a 3x3 conv with stride 1.

  • include_top – If True, a GlobalAveragePooling layer and a Dense layer are included.

call(x, training=None)[source]

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Parameters
  • inputs – A tensor or list of tensors.

  • training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

  • mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns

A tensor if there is a single output, or a list of tensors if there are more than one outputs.

class bob.learn.tensorflow.models.EmbeddingValidation(*args, **kwargs)

Bases: tensorflow.python.keras.engine.training.Model

Use this model if the validation step should validate the accuracy with respect to embeddings.

In this model, the test_step runs the function bob.learn.tensorflow.metrics.embedding_accuracy.accuracy_from_embeddings

compile(**kwargs)[source]

Compile

test_step(data)[source]

Test Step

train_step(data)[source]

Train Step

class bob.learn.tensorflow.models.MineModel(*args, **kwargs)

Bases: tensorflow.python.keras.engine.training.Model

Parameters

is_mine_f (bool) – If True, implements MINE-F (equation 6); otherwise implements equation 5.

call(inputs)[source]

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Parameters
  • inputs – A tensor or list of tensors.

  • training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

  • mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns

A tensor if there is a single output, or a list of tensors if there are more than one outputs.

get_config()[source]

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

bob.learn.tensorflow.models.densenet161(weights='imagenet', output_classes=1000, data_format='channels_last', weight_decay=0.0001, depth_of_model=161, growth_rate=48, num_of_blocks=4, num_layers_in_each_block=[6, 12, 36, 24], pool_initial=True, **kwargs)[source]
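
Example

A minimal instantiation sketch (illustrative; weights=None is assumed to skip loading pretrained weights):

>>> from bob.learn.tensorflow.models import densenet161
>>> model = densenet161(weights=None, output_classes=10)
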
bob.learn.tensorflow.utils.image.to_channels_last(image)[source]

Converts the image to channels_last format. This is the same format used by matplotlib, skimage, etc.

Parameters

image (tf.Tensor) – At least a 3 dimensional image. If the dimension is more than 3, the last 3 dimensions are assumed to be [C, H, W].

Returns

image – The image in […, H, W, C] format.

Return type

tf.Tensor

Raises

ValueError – If dim of image is less than 3.

bob.learn.tensorflow.utils.image.to_channels_first(image)[source]

Converts the image to channels_first format. This is the same format used by bob.io.image and bob.io.video.

Parameters

image (tf.Tensor) – At least a 3 dimensional image. If the dimension is more than 3, the last 3 dimensions are assumed to be [H, W, C].

Returns

image – The image in […, C, H, W] format.

Return type

tf.Tensor

Raises

ValueError – If dim of image is less than 3.
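
Example

A round-trip sketch between the two layouts (illustrative):

>>> import tensorflow as tf
>>> from bob.learn.tensorflow.utils.image import to_channels_first, to_channels_last
>>> image = tf.zeros((64, 48, 3))      # [H, W, C]
>>> chw = to_channels_first(image)     # shape becomes [3, 64, 48]
>>> hwc = to_channels_last(chw)        # shape back to [64, 48, 3]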

bob.learn.tensorflow.utils.image.blocks_tensorflow(images, block_size)[source]

Return all non-overlapping blocks of an image using tensorflow operations.

Parameters
  • images (tf.Tensor) – The input color images. It is assumed that the image has a shape of [?, H, W, C].

  • block_size ((int, int)) – A tuple of two integers indicating the block size.

Returns

  • blocks (tf.Tensor) – All the blocks in the batch dimension. The output will be of size [?, block_size[0], block_size[1], C].

  • n_blocks (int) – The number of blocks obtained per image.
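
Example

A sketch of splitting a small batch into blocks (illustrative; the image and block sizes are made up):

>>> import tensorflow as tf
>>> from bob.learn.tensorflow.utils.image import blocks_tensorflow
>>> images = tf.zeros((2, 8, 8, 3))                       # [batch, H, W, C]
>>> blocks, n_blocks = blocks_tensorflow(images, (4, 4))  # blocks: [?, 4, 4, 3], 4 blocks per image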

bob.learn.tensorflow.utils.image.tf_repeat(tensor, repeats)[source]
Parameters
  • tensor – A Tensor. 1-D or higher.

  • repeats – A list. The number of repeats for each dimension; its length must equal the number of dimensions of the input.

Returns

A Tensor with the same type as the input and shape tensor.shape * repeats.
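
Example

A shape-only sketch (illustrative), following the documented rule that the output shape is tensor.shape * repeats:

>>> import tensorflow as tf
>>> from bob.learn.tensorflow.utils.image import tf_repeat
>>> x = tf.zeros((2, 3))
>>> y = tf_repeat(x, [2, 1])   # expected shape: (4, 3)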

bob.learn.tensorflow.utils.image.all_patches(image, label, key, size)[source]

Extracts all patches of an image

Parameters
  • image – The image, in channels_last format and already batched.

  • label – The label for the image

  • key – The key for the image

  • size ((int, int)) – The height and width of the blocks.

Returns

  • blocks – The non-overlapping blocks of the given size extracted from image; label and key are repeated to match.

  • label

  • key
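
Example

A sketch with a batched channels_last image (illustrative; the shapes, label, and key values are made up):

>>> import tensorflow as tf
>>> from bob.learn.tensorflow.utils.image import all_patches
>>> image = tf.zeros((1, 28, 28, 3))   # batched, channels_last
>>> label = tf.constant([0])
>>> key = tf.constant(["sample-0"])
>>> blocks, labels, keys = all_patches(image, label, key, size=(14, 14))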

class bob.learn.tensorflow.utils.keras.SequentialLayer(*args, **kwargs)[source]

Bases: tensorflow.python.keras.engine.base_layer.Layer

A Layer that does the same thing as tf.keras.Sequential but its variables can be scoped.

Parameters

layers (list) – List of layers. All layers must be provided at initialization time

call(inputs, training=None, mask=None)[source]

This is where the layer’s logic lives.

Note that the call() method in tf.keras is slightly different from the Keras API: in the Keras API, you can pass masking support for layers as additional arguments, whereas tf.keras has a compute_mask() method to support masking.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments. Currently unused.

Returns

A tensor or list/tuple of tensors.

get_config()[source]

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

classmethod from_config(config, custom_objects=None)[source]

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Parameters

config – A Python dictionary, typically the output of get_config.

Returns

A layer instance.
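
Example

A minimal sketch (illustrative; assumes the list of layers is passed as the first positional argument):

>>> import tensorflow as tf
>>> from bob.learn.tensorflow.utils.keras import SequentialLayer
>>> layer = SequentialLayer([
...     tf.keras.layers.Dense(16, activation="relu"),
...     tf.keras.layers.Dense(1),
... ])
>>> outputs = layer(tf.zeros((4, 8)))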

bob.learn.tensorflow.utils.keras.keras_channels_index()[source]
bob.learn.tensorflow.utils.keras.keras_model_weights_as_initializers_for_variables(model)[source]

Changes the initialization operations of variables in the model to take the current value as the initial values. This is useful when you want to restore a pre-trained Keras model inside the model_fn of an estimator.

Parameters

model (object) – A Keras model.

bob.learn.tensorflow.utils.keras.restore_model_variables_from_checkpoint(model, checkpoint, session=None, normalizer=None)[source]
bob.learn.tensorflow.utils.keras.initialize_model_from_checkpoint(model, checkpoint, normalizer=None)[source]
bob.learn.tensorflow.utils.keras.model_summary(model, do_print=False)[source]
bob.learn.tensorflow.utils.math.gram_matrix(input_tensor)[source]

Computes the gram matrix.

Parameters

input_tensor – The input tensor. Usually it’s the activation of a conv layer. The input shape must be BHWC.

Returns

The computed gram matrix as a tensor.

Return type

object

Example

>>> import tensorflow as tf
>>> from bob.learn.tensorflow.utils import gram_matrix
>>> gram_matrix(tf.zeros((32, 4, 6, 12))).numpy().shape
(32, 12, 12)
bob.learn.tensorflow.utils.math.upper_triangle_and_diagonal(A)[source]

Returns a flat version of upper triangle of a 2D array (including diagonal).

This function is useful for gram matrices, since they contain duplicate information.

Parameters

A – A two dimensional array.

Returns

The flattened upper triangle of array

Return type

object

Example

>>> from bob.learn.tensorflow.utils import upper_triangle_and_diagonal
>>> A = [
...  [1, 2, 3],
...  [4, 5, 6],
...  [7, 8, 9],
... ]
>>> upper_triangle_and_diagonal(A).numpy()
array([1, 2, 3, 5, 6, 9], dtype=int32)
bob.learn.tensorflow.utils.math.upper_triangle(A)[source]
bob.learn.tensorflow.utils.math.pdist(A, metric='sqeuclidean')[source]
bob.learn.tensorflow.utils.math.cdist(A, B, metric='sqeuclidean')[source]
bob.learn.tensorflow.utils.math.random_choice_no_replacement(one_dim_input, num_indices_to_drop=3, sort=False)[source]

Similar to np.random.choice with no replacement. Code from https://stackoverflow.com/a/54755281/1286165
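
Example

A sketch under the assumption that the function returns the surviving elements of the 1-D input after randomly dropping num_indices_to_drop of them:

>>> import tensorflow as tf
>>> from bob.learn.tensorflow.utils.math import random_choice_no_replacement
>>> subset = random_choice_no_replacement(tf.range(10), num_indices_to_drop=3, sort=True)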