bob.trainer.CrossEntropyLoss

class bob.trainer.CrossEntropyLoss((object)self, (Activation)actfun) → None :

Bases: bob.trainer._trainer.Cost

Calculates the Cross-Entropy Loss between output and target. The cross-entropy loss is defined as follows:

\[J = - y \cdot \log{(\hat{y})} - (1-y) \log{(1-\hat{y})}\]

where \(\hat{y}\) is the output estimated by your machine and \(y\) is the expected output.
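
For intuition, the formula can be evaluated directly with numpy. The following is an illustrative sketch of the expression above, not the library implementation:

import numpy as np

def cross_entropy(y_hat, y):
    """Per-output cross-entropy: -y*log(y_hat) - (1-y)*log(1-y_hat)."""
    y_hat = np.asarray(y_hat, dtype=float)
    y = np.asarray(y, dtype=float)
    return -y * np.log(y_hat) - (1.0 - y) * np.log(1.0 - y_hat)

print(cross_entropy(0.9, 1.0))  # ~0.105: estimate close to the target
print(cross_entropy(0.1, 1.0))  # ~2.303: a confident wrong estimate is penalized heavily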

Keyword arguments:

actfun
The activation function object used at the last layer. If you set this to bob.machine.LogisticActivation, backprop_error() can benefit from a mathematical simplification that increases the numerical stability of the training process. The simplification goes as follows:
\[b = \delta \cdot \varphi'(z)\]

But, for the cross-entropy loss:

\[\delta = \frac{\hat{y} - y}{\hat{y}(1 - \hat{y})}\]

and, for the logistic activation, \(\varphi'(z) = \hat{y} (1 - \hat{y})\), so:

\[b = \hat{y} - y\]
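
As a sanity check, the simplified error agrees numerically with the full expression \(\delta \cdot \varphi'(z)\). A plain-numpy sketch, using the logistic derivative \(\hat{y}(1-\hat{y})\):

import numpy as np

y_hat, y = 0.73, 1.0                             # arbitrary estimate and target
delta = (y_hat - y) / (y_hat * (1.0 - y_hat))    # cost derivative w.r.t. the output
phi_prime = y_hat * (1.0 - y_hat)                # derivative of the logistic activation
assert np.isclose(delta * phi_prime, y_hat - y)  # both give -0.27
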
__init__((object)self, (Activation)actfun) → None :

Keyword arguments:

actfun
The activation function object used at the last layer. If you set this to bob.machine.LogisticActivation, backprop_error() can benefit from a mathematical simplification that increases the numerical stability of the training process. The simplification goes as follows:
\[b = \delta \cdot \varphi'(z)\]

But, for the cross-entropy loss:

\[\delta = \frac{\hat{y} - y}{\hat{y}(1 - \hat{y})}\]

and, for the logistic activation, \(\varphi'(z) = \hat{y} (1 - \hat{y})\), so:

\[b = \hat{y} - y\]

Methods

__init__((object)self, (Activation)actfun) Initializes the cost with the activation function used at the last layer
error((Cost)self, (object)output, …) Computes the back-propagated error for a given MLP output layer
f((Cost)self, (object)output, …) Computes the cost, given the current output of the linear machine or MLP
f_prime((Cost)self, (object)output, …) Computes the derivative of the cost w.r.t. output

Attributes

logistic_activation If set to True, the error is calculated using the simplification explained in the class documentation
__call__((Cost)self, (object)output, (object)target, (object)res) → None :
Computes the cost, given the current output of the linear machine or MLP and the expected output.

Keyword arguments:

output
Real output from the linear machine or MLP
target
Target output you are training to achieve
res (optional)
Where to place the result from the calculation. Only available if the inputs are numpy.ndarray objects; in that case, the output will also be a numpy.ndarray.

Returns the cost

__call__( (Cost)self, (object)output, (object)target) -> object :

Computes the cost, given the current output of the linear machine or MLP and the expected output.

Keyword arguments:

output
Real output from the linear machine or MLP
target
Target output you are training to achieve

Returns the cost. If the inputs are numpy.ndarray objects, the result will also be a numpy.ndarray.

__call__( (Cost)self, (float)output, (float)target) -> float :

Computes the cost, given the current output of the linear machine or MLP and the expected output.

Keyword arguments:

output
Real output from the linear machine or MLP
target
Target output you are training to achieve

Returns the cost
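
A usage sketch, assuming the bob.machine and bob.trainer modules are importable and that bob.machine.LogisticActivation is default-constructible; only signatures documented above are used:

import numpy
import bob

# Logistic activation at the last layer enables the simplification described above
cost = bob.trainer.CrossEntropyLoss(bob.machine.LogisticActivation())

output = numpy.array([0.9, 0.1, 0.8])  # machine outputs in (0, 1)
target = numpy.array([1.0, 0.0, 1.0])  # expected outputs
j = cost(output, target)               # per-output cost, returned as a numpy.ndarray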

error((Cost)self, (object)output, (object)target, (object)res) → None :
Computes the back-propagated error for a given MLP output layer, given its activation function and outputs - i.e., the error back-propagated through the last layer neuron up to the synapse connecting the last hidden layer to the output layer.

This entry point allows for optimization in the calculation of the back-propagated errors in cases where a mathematical simplification is possible for a certain combination of cost function and activation - for example, a cross-entropy cost together with a logistic activation function.

Keyword arguments:

output
Real output from the linear machine or MLP
target
Target output you are training to achieve
res (optional)
Where to place the result from the calculation. Only available if the inputs are numpy.ndarray objects.

Returns the calculated error, back-propagated to before the output neuron. If the inputs are numpy.ndarray objects, the result will also be one.

error( (Cost)self, (object)output, (object)target) -> object :

Computes the back-propagated error for a given MLP output layer, given its activation function and outputs - i.e., the error back-propagated through the last layer neuron up to the synapse connecting the last hidden layer to the output layer.

This entry point allows for optimization in the calculation of the back-propagated errors in cases where a mathematical simplification is possible for a certain combination of cost function and activation - for example, a cross-entropy cost together with a logistic activation function.

Keyword arguments:

output
Real output from the linear machine or MLP
target
Target output you are training to achieve

Returns the calculated error, back-propagated to before the output neuron. If the inputs are numpy.ndarray objects, the result will also be one.

error( (Cost)self, (float)output, (float)target) -> float :

Computes the back-propagated error for a given MLP output layer, given its activation function and outputs - i.e., the error back-propagated through the last layer neuron up to the synapse connecting the last hidden layer to the output layer.

This entry point allows for optimization in the calculation of the back-propagated errors in cases where a mathematical simplification is possible for a certain combination of cost function and activation - for example, a cross-entropy cost together with a logistic activation function.

Keyword arguments:

output
Real output from the linear machine or MLP
target
Target output you are training to achieve

Returns the calculated error, back-propagated to before the output neuron.
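
A short sketch of error() under the same assumptions as the earlier example; with the logistic activation set, the result reduces to \(\hat{y} - y\):

import numpy
import bob

cost = bob.trainer.CrossEntropyLoss(bob.machine.LogisticActivation())
output = numpy.array([0.9, 0.1])
target = numpy.array([1.0, 0.0])
b = cost.error(output, target)  # expected to equal output - target, i.e. [-0.1, 0.1]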

f((Cost)self, (object)output, (object)target, (object)res) → None :
Computes the cost, given the current output of the linear machine or MLP and the expected output.

Keyword arguments:

output
Real output from the linear machine or MLP
target
Target output you are training to achieve
res (optional)
Where to place the result from the calculation. Only available if the inputs are numpy.ndarray objects; in that case, the output will also be a numpy.ndarray.

Returns the cost

f( (Cost)self, (object)output, (object)target) -> object :

Computes the cost, given the current output of the linear machine or MLP and the expected output.

Keyword arguments:

output
Real output from the linear machine or MLP
target
Target output you are training to achieve

Returns the cost. If the inputs are numpy.ndarray objects, the result will also be a numpy.ndarray.

f( (Cost)self, (float)output, (float)target) -> float :

Computes the cost, given the current output of the linear machine or MLP and the expected output.

Keyword arguments:

output
Real output from the linear machine or MLP
target
Target output you are training to achieve

Returns the cost

f_prime((Cost)self, (object)output, (object)target, (object)res) → None :

Computes the derivative of the cost w.r.t. output.

Keyword arguments:

output
Real output from the linear machine or MLP
target
Target output you are training to achieve
res (optional)
Where to place the result from the calculation. Only available if the inputs are numpy.ndarray objects.

Returns the calculated derivative. If the inputs are numpy.ndarray objects, the result will also be one.

f_prime( (Cost)self, (object)output, (object)target) -> object :

Computes the derivative of the cost w.r.t. output.

Keyword arguments:

output
Real output from the linear machine or MLP
target
Target output you are training to achieve

Returns the calculated derivative. If the inputs are numpy.ndarray objects, the result will also be one.

f_prime( (Cost)self, (float)output, (float)target) -> float :

Computes the derivative of the cost w.r.t. output.

Keyword arguments:

output
Real output from the linear machine or MLP
target
Target output you are training to achieve

Returns the calculated derivative.
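
For reference, the derivative being computed is \(\frac{\hat{y}-y}{\hat{y}(1-\hat{y})}\). A minimal plain-numpy sketch of that formula (not the library code), checked against a finite difference:

import numpy as np

def cross_entropy_prime(y_hat, y):
    """dJ/d(y_hat) for J = -y*log(y_hat) - (1-y)*log(1-y_hat)."""
    return (y_hat - y) / (y_hat * (1.0 - y_hat))

y_hat, y, eps = 0.6, 1.0, 1e-6
J = lambda p: -y * np.log(p) - (1.0 - y) * np.log(1.0 - p)
fd = (J(y_hat + eps) - J(y_hat - eps)) / (2.0 * eps)  # numerical derivative
assert np.isclose(cross_entropy_prime(y_hat, y), fd)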

logistic_activation

If set to True, the error is calculated using the simplification explained in the class documentation.
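
A minimal sketch of inspecting the flag, under the same assumptions as the earlier examples:

import bob

cost = bob.trainer.CrossEntropyLoss(bob.machine.LogisticActivation())
print(cost.logistic_activation)  # expected to be True for the logistic activation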