bob.trainer.MLPBaseTrainer

class bob.trainer.MLPBaseTrainer((object)self, (MLPBaseTrainer)other) → None :

Bases: Boost.Python.instance

The base python class for MLP trainers based on cost derivatives.

You should use this class when you want to create your own MLP trainers and re-use the base infrastructure provided by this class, such as the computation of partial derivatives (using the backward_step() method).

Initializes a new MLPBaseTrainer, copying data from another instance.

__init__( (object)self, (int)batch_size, (Cost)cost_object) -> None :

Initializes the MLPBaseTrainer with a batch size and a cost object

When using this constructor, you must later call initialize(), passing your own machine, so that the internal buffers are resized correctly. If in doubt, always check machine compatibility with an initialized trainer using is_compatible().

Keyword parameters:

batch_size

The size of each batch used for the forward and backward steps, so as to speed up training. If you set this to 1, you are implementing stochastic training.

cost_object

An object of a class derived from bob.trainer.Cost that can calculate the cost at every iteration.

Note

Good batch sizes are typically in the tens of samples. The batch size may affect convergence.

__init__( (object)self, (int)batch_size, (Cost)cost_object, (MLP)machine) -> None :

Initializes the MLPBaseTrainer with a batch size and a cost object

Keyword parameters:

batch_size

The size of each batch used for the forward and backward steps, so as to speed up training. If you set this to 1, you are implementing stochastic training.

cost_object

An object of a class derived from bob.trainer.Cost that can calculate the cost at every iteration.

Note

Good batch sizes are typically in the tens of samples. The batch size may affect convergence.

machine

A bob.machine.MLP object that will be used as a basis for this trainer’s internal properties.

__init__( (object)self, (int)batch_size, (Cost)cost_object, (MLP)machine, (bool)train_biases) -> None :

Initializes the MLPBaseTrainer with a batch size and a cost object

Keyword parameters:

batch_size

The size of each batch used for the forward and backward steps, so as to speed up training. If you set this to 1, you are implementing stochastic training.

cost_object

An object of a class derived from bob.trainer.Cost that can calculate the cost at every iteration.

Note

Good batch sizes are typically in the tens of samples. The batch size may affect convergence.

machine

A bob.machine.MLP object that will be used as a basis for this trainer’s internal properties.

train_biases

A boolean indicating whether the bias weights should be trained (set it to True) or not (set it to False).
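
For illustration, here is a minimal sketch of the two construction workflows described above. It assumes a bob 1.x-style Python API; the MLP shape, the SquareError constructor argument and the output_activation attribute are assumptions of this sketch, not taken from this page:

    import bob

    # Hypothetical network: 4 inputs, one hidden layer of 10 units, 2
    # outputs (the shape-tuple constructor is an assumption).
    machine = bob.machine.MLP((4, 10, 2))

    # The page names bob.trainer.SquareError as a Cost subclass; passing
    # the machine's output activation to it is an assumption that may
    # vary between versions.
    cost = bob.trainer.SquareError(machine.output_activation)

    # Variant 1: construct without a machine, then call initialize() so
    # the internal buffers are resized for this particular MLP.
    trainer = bob.trainer.MLPBaseTrainer(16, cost)  # batch size of 16
    trainer.initialize(machine)
    assert trainer.is_compatible(machine)

    # Variant 2: pass the machine (and, optionally, train_biases)
    # directly at construction time.
    trainer = bob.trainer.MLPBaseTrainer(16, cost, machine, True)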

Methods

__init__((object)self, (MLPBaseTrainer)other) Initializes a new MLPBaseTrainer, copying data from another instance.
backward_step((MLPBaseTrainer)self, …) Backwards a batch of data through the MLP and updates the internal buffers (errors and derivatives).
cost((MLPBaseTrainer)self, (object)target) Calculates the cost for a given target
forward_step((MLPBaseTrainer)self, (MLP)mlp, …) Forwards a batch of data through the MLP and updates the internal buffers.
hidden_layers((MLPBaseTrainer)arg1) The number of hidden layers on the target machine.
initialize((MLPBaseTrainer)self, (MLP)mlp) Initialize the training process.
is_compatible((MLPBaseTrainer)self, (MLP)machine) Checks if a given machine is compatible with my inner settings
set_bias_derivative((MLPBaseTrainer)self, …) Sets the cost derivative w.r.t. the bias for a given layer.
set_derivative((MLPBaseTrainer)self, …) Sets the cost derivative w.r.t. the weights for a given layer.
set_error((MLPBaseTrainer)self, …) Sets the error for a given layer in the network.
set_output((MLPBaseTrainer)self, …) Sets the output for a given layer in the network.

Attributes

batch_size How many examples should be fed each time through the network for testing or training.
bias_derivatives The calculated derivatives of the cost w.r.t. the biases of the network.
cost_object An object derived from bob.trainer.Cost (e.g. bob.trainer.SquareError) used to evaluate the cost and its derivatives.
derivatives The calculated derivatives of the cost w.r.t. the weights of the network.
error The error (a.k.a. the \(\delta\)’s) back-propagated through the network.
output The outputs of each neuron in the network
train_biases A flag, indicating if this trainer will adjust the biases of the network (True) or not (False).
backward_step((MLPBaseTrainer)self, (MLP)mlp, (object)input, (object)target) → None :

Backwards a batch of data through the MLP and updates the internal buffers (errors and derivatives).

batch_size

How many examples should be fed each time through the network for testing or training. This number reflects the internal sizes of the structures set up to accommodate the input and the output of the network.

bias_derivatives

The calculated derivatives of the cost w.r.t. the specific biases of the network, organized to match the organization of biases of the machine being trained.

cost((MLPBaseTrainer)self, (object)target) → float :

Calculates the cost for a given target

The cost for a given target is defined as the sum of individual costs for every output in the current network, averaged over all the examples.

Note

This variant assumes you have called forward_step() before.

cost( (MLPBaseTrainer)self, (MLP)machine, (object)input, (object)target) -> float :

Calculates the cost for a given target

The cost for a given target is defined as the sum of individual costs for every output in the current network, averaged over all the examples.

Note

This variant will call the forward_step() before calculating the cost. After returning, you can directly call backward_step() to evaluate the derivatives w.r.t. the cost, if you wish to do so.
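
As a sketch of that workflow, assuming the trainer and machine from the constructor example above, and assuming inputs and targets are 2D arrays with one example per row:

    import numpy

    X = numpy.random.rand(16, 4)  # one 4-feature example per row (assumption)
    T = numpy.random.rand(16, 2)  # matching 2-output targets (assumption)

    # This cost() variant runs forward_step() internally...
    J = trainer.cost(machine, X, T)

    # ...so backward_step() can be called right away to fill the error
    # and derivative buffers:
    trainer.backward_step(machine, X, T)

    print("cost = %f" % J)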

cost_object

An object, derived from bob.trainer.Cost (e.g. bob.trainer.SquareError or bob.trainer.CrossEntropyLoss), that is used to evaluate the cost (a.k.a. loss) and the derivatives given the input, the target and the MLP structure.

derivatives

The calculated derivatives of the cost w.r.t. the specific weights of the network, organized to match the organization of weights of the machine being trained.

error

The error (a.k.a. \(\delta\)’s) back-propagated through the network, given an input and a target.

forward_step((MLPBaseTrainer)self, (MLP)mlp, (object)input) → None :

Forwards a batch of data through the MLP and updates the internal buffers.

hidden_layers((MLPBaseTrainer)arg1) → int :

The number of hidden layers on the target machine.

initialize((MLPBaseTrainer)self, (MLP)mlp) → None :

Initialize the training process.

is_compatible((MLPBaseTrainer)self, (MLP)machine) → bool :

Checks if a given machine is compatible with this trainer’s internal settings

output

The outputs of each neuron in the network

set_bias_derivative((MLPBaseTrainer)self, (object)array, (int)k) → None :

Sets the cost derivative w.r.t. the bias for a given layer.

set_derivative((MLPBaseTrainer)self, (object)array, (int)k) → None :

Sets the cost derivative w.r.t. the weights for a given layer.

set_error((MLPBaseTrainer)self, (object)array, (int)k) → None :

Sets the error for a given layer in the network.

set_output((MLPBaseTrainer)self, (object)array, (int)k) → None :

Sets the output for a given layer in the network.
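
These setters are mainly useful when building your own trainer on top of this class. As a hypothetical sketch (the helper name is ours, and we assume derivatives and bias_derivatives are sequences of arrays, one per layer), they can reset the derivative buffers before gradients are accumulated:

    import numpy

    def zero_derivatives(trainer):
        # Zero the weight- and bias-derivative buffers, layer by layer,
        # keeping the shapes the trainer was initialized with.
        for k, dW in enumerate(trainer.derivatives):
            trainer.set_derivative(numpy.zeros_like(dW), k)
        for k, db in enumerate(trainer.bias_derivatives):
            trainer.set_bias_derivative(numpy.zeros_like(db), k)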

train_biases

A flag, indicating if this trainer will adjust the biases of the network (True) or not (False).