Python API

This section describes the pure Python API of bob.learn.activation.

bob.learn.activation.get_include()[source]

Returns the directory containing the C/C++ API include files

bob.learn.activation.get_config()[source]

Returns a string containing the configuration information.
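
For example, a minimal usage sketch (assuming the package is importable under this name):

    import bob.learn.activation

    # Print the compile-time configuration of these bindings
    print(bob.learn.activation.get_config())

    # The header directory, useful when building C/C++ extensions
    # against this package's C API
    include_dir = bob.learn.activation.get_include()
    print(include_dir)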

class bob.learn.activation.Activation

Bases: object

Base class for activation functors.

Warning

You cannot instantiate an object of this type directly; you must use it through one of the derived types.

Warning

You cannot create classes in Python that derive from this one and expect them to work with the C++ code, as no hook is currently implemented to allow for this. Instead, create a class that inherits from the C++ bob::machine::Activation in C++ and then bind it to Python, as was done for the classes available in these bindings.

f(z[, res]) → array | scalar

Computes the activated value, given an input array or scalar z, placing results in res (and returning it).

If z is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array z. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.
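
To illustrate the buffer re-use described above, here is a minimal sketch using the Logistic functor documented later in this section (the expected values follow from f(z) = 1/(1 + e^{-z})):

    import numpy
    import bob.learn.activation

    act = bob.learn.activation.Logistic()

    z = numpy.array([-1.0, 0.0, 1.0])  # float64 by default
    res = numpy.empty_like(z)          # pre-allocated output buffer

    act.f(z, res)  # results are written into res; no new array is allocated
    assert numpy.allclose(res, 1.0 / (1.0 + numpy.exp(-z)))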

f_prime(z[, res]) → array | scalar

Computes the derivative of the activated value, given an input array or scalar z, placing results in res (and returning it).

If z is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array z. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.

f_prime_from_f(a[, res]) → array | scalar

Computes the derivative of the activated value, given the activation value a (i.e. a = f(z)), placing results in res (and returning it).

If a is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array a. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.
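
To make the relationship between f, f_prime and f_prime_from_f concrete, here is a small sketch using the Logistic functor, for which f'(z) = f(z)(1 - f(z)):

    import numpy
    import bob.learn.activation

    act = bob.learn.activation.Logistic()
    z = numpy.array([0.5, 2.0])

    a = act.f(z)                # activation values
    d1 = act.f_prime(z)         # derivative computed from the input z
    d2 = act.f_prime_from_f(a)  # derivative recovered from the activation value

    assert numpy.allclose(d1, d2)
    assert numpy.allclose(d2, a * (1.0 - a))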

load(f) → None

Loads itself from a bob.io.HDF5File

save(f) → None

Saves itself to a bob.io.HDF5File
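
A minimal persistence sketch, assuming bob.io.HDF5File is available under that name (some releases ship it as bob.io.base.HDF5File) and that load restores the saved constants into a fresh functor of the same type:

    import bob.io
    import bob.learn.activation

    act = bob.learn.activation.Linear(0.5)

    # Persist the functor configuration
    h5 = bob.io.HDF5File('activation.hdf5', 'w')
    act.save(h5)
    del h5  # closes the file

    # Restore into a fresh functor of the same type
    h5 = bob.io.HDF5File('activation.hdf5', 'r')
    restored = bob.learn.activation.Linear()
    restored.load(h5)
    assert restored.C == 0.5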

unique_identifier() → str

Returns a unique (string) identifier, used by this class in connection with the Activation registry.

bob.learn.activation.HyperbolicTangent

alias of HyperbolicTangentActivation
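
The alias exposes the same interface as the other functors documented in this section; a brief sketch:

    import numpy
    import bob.learn.activation

    act = bob.learn.activation.HyperbolicTangent()

    z = numpy.array([-1.0, 0.0, 1.0])
    assert numpy.allclose(act.f(z), numpy.tanh(z))

    # The derivative is recoverable from the activation value:
    # f'(z) = 1 - tanh(z)**2
    assert numpy.allclose(act.f_prime_from_f(act.f(z)),
                          1.0 - numpy.tanh(z) ** 2)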

class bob.learn.activation.Identity

Bases: bob.learn.activation.Activation

Identity() -> new Identity activation functor

Computes f(z) = z as the activation function.
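
For example (a minimal sketch):

    import numpy
    import bob.learn.activation

    act = bob.learn.activation.Identity()

    assert act.f(3.0) == 3.0        # f(z) = z
    assert act.f_prime(3.0) == 1.0  # f'(z) = 1

    z = numpy.array([-2.0, 0.0, 2.0])
    assert numpy.allclose(act.f(z), z)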

f(z[, res]) → array | scalar

Computes the activated value, given an input array or scalar z, placing results in res (and returning it).

If z is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array z. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.

f_prime(z[, res]) → array | scalar

Computes the derivative of the activated value, given an input array or scalar z, placing results in res (and returning it).

If z is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array z. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.

f_prime_from_f(a[, res]) → array | scalar

Computes the derivative of the activated value, given the activation value a (i.e. a = f(z)), placing results in res (and returning it).

If a is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array a. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.

load(f) → None

Loads itself from a bob.io.HDF5File

save(f) → None

Saves itself to a bob.io.HDF5File

unique_identifier() → str

Returns a unique (string) identifier, used by this class in connection with the Activation registry.

class bob.learn.activation.Linear

Bases: bob.learn.activation.Activation

Linear([C=1.0]) -> new linear activation functor

Computes f(z) = C \cdot z as the activation function.

The constructor builds a new linear activation functor with the given constant. Don't use this if you just want to keep the constant at its default value (1.0); in that case, prefer the more efficient bob.learn.activation.Identity.

C

The multiplication factor for the linear function (read-only)
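
As an illustration, a short sketch with the constant set to 2.0:

    import numpy
    import bob.learn.activation

    act = bob.learn.activation.Linear(2.0)
    assert act.C == 2.0

    z = numpy.array([-1.0, 0.5, 3.0])
    assert numpy.allclose(act.f(z), 2.0 * z)    # f(z) = C * z
    assert numpy.allclose(act.f_prime(z), 2.0)  # f'(z) = C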

f(z[, res]) → array | scalar

Computes the activated value, given an input array or scalar z, placing results in res (and returning it).

If z is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array z. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.

f_prime(z[, res]) → array | scalar

Computes the derivative of the activated value, given an input array or scalar z, placing results in res (and returning it).

If z is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array z. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.

f_prime_from_f(a[, res]) → array | scalar

Computes the derivative of the activated value, given the activation value a (i.e. a = f(z)), placing results in res (and returning it).

If a is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array a. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.

load(f) → None

Loads itself from a bob.io.HDF5File

save(f) → None

Saves itself to a bob.io.HDF5File

unique_identifier() → str

Returns a unique (string) identifier, used by this class in connection with the Activation registry.

class bob.learn.activation.Logistic

Bases: bob.learn.activation.Activation

Logistic() -> new Logistic activation functor

Computes f(z) = 1/(1 + e^{-z}) as the activation function.
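
For example (a minimal sketch; f(0) = 0.5, and the derivative satisfies f'(z) = f(z)(1 - f(z))):

    import numpy
    import bob.learn.activation

    act = bob.learn.activation.Logistic()
    assert numpy.isclose(act.f(0.0), 0.5)

    z = numpy.array([-1.0, 0.0, 1.0])
    a = act.f(z)
    assert numpy.allclose(act.f_prime(z), a * (1.0 - a))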

f(z[, res]) → array | scalar

Computes the activated value, given an input array or scalar z, placing results in res (and returning it).

If z is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array z. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.

f_prime(z[, res]) → array | scalar

Computes the derivative of the activated value, given an input array or scalar z, placing results in res (and returning it).

If z is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array z. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.

f_prime_from_f(a[, res]) → array | scalar

Computes the derivative of the activated value, given the activation value a (i.e. a = f(z)), placing results in res (and returning it).

If a is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array a. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.

load(f) → None

Loads itself from a bob.io.HDF5File

save(f) → None

Saves itself to a bob.io.HDF5File

unique_identifier() → str

Returns a unique (string) identifier, used by this class in connection with the Activation registry.

class bob.learn.activation.MultipliedHyperbolicTangent

Bases: bob.learn.activation.Activation

MultipliedHyperbolicTangent([C=1.0[, M=1.0]]) -> new multiplied hyperbolic tangent functor

Computes f(z) = C \cdot \tanh(Mz) as the activation function.

Builds a new multiplied hyperbolic tangent activation function with given constants for the inner and outer products. Don't use this if you just want to keep the constants at their default values (1.0); in that case, prefer the more efficient bob.learn.activation.HyperbolicTangent.

C

The outer multiplication factor for the multiplied hyperbolic tangent function (read-only).

M

The inner multiplication factor for the multiplied hyperbolic tangent function (read-only).
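
A brief sketch, using C=1.7159 and M=2/3 (the constants popularized by LeCun) purely for illustration:

    import numpy
    import bob.learn.activation

    act = bob.learn.activation.MultipliedHyperbolicTangent(1.7159, 2.0 / 3.0)
    assert act.C == 1.7159
    assert act.M == 2.0 / 3.0

    z = numpy.array([-1.0, 0.0, 1.0])
    assert numpy.allclose(act.f(z), 1.7159 * numpy.tanh(2.0 / 3.0 * z))

    # f'(z) = C * M * (1 - tanh(M*z)**2)
    assert numpy.allclose(
        act.f_prime(z),
        1.7159 * (2.0 / 3.0) * (1.0 - numpy.tanh(2.0 / 3.0 * z) ** 2))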

f(z[, res]) → array | scalar

Computes the activated value, given an input array or scalar z, placing results in res (and returning it).

If z is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array z. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.

f_prime(z[, res]) → array | scalar

Computes the derivative of the activated value, given an input array or scalar z, placing results in res (and returning it).

If z is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array z. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.

f_prime_from_f(a[, res]) → array | scalar

Computes the derivative of the activated value, given the activation value a (i.e. a = f(z)), placing results in res (and returning it).

If a is an array, then you can pass another array in res to store the results and, in this case, we won’t allocate a new one for that purpose. This can be a speed-up in certain scenarios. Note this does not work for scalars as it makes little sense to avoid scalar allocation at this level.

If you decide to pass an array in res, note this array should have the exact same dimensions as the input array a. It is an error otherwise.

Note

This method only accepts 64-bit float arrays as input or output.

load(f) → None

Loads itself from a bob.io.HDF5File

save(f) → None

Saves itself to a bob.io.HDF5File

unique_identifier() → str

Returns a unique (string) identifier, used by this class in connection with the Activation registry.
