R-prop training for Multi-Layer Perceptron (MLP)
Algorithms have at least one input and one output. All algorithm endpoints are organized in groups. Groups are used by the platform to indicate which inputs and outputs are synchronized together. The first group is automatically synchronized with the channel defined by the block in which the algorithm is deployed.
Endpoint Name | Data Format | Nature |
---|---|---|
class_id | system/uint64/1 | Input |
image | system/array_2d_floats/1 | Input |
model | tutorial/mlp/1 | Output |
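To make the grouping concrete, here is a rough sketch, written as a Python dictionary, of how the three endpoints above could be declared inside a single synchronized group. The field names (`groups`, `inputs`, `outputs`, `type`) are illustrative assumptions about the declaration format, not taken from this page.

```python
# Illustrative sketch only: the platform's actual declaration schema may differ.
algorithm_declaration = {
    "language": "python",
    "groups": [
        {
            # Endpoints in the same group are synchronized together; the first
            # group follows the channel of the block the algorithm runs in.
            "inputs": {
                "image": {"type": "system/array_2d_floats/1"},
                "class_id": {"type": "system/uint64/1"},
            },
            "outputs": {
                "model": {"type": "tutorial/mlp/1"},
            },
        }
    ],
}
```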
Parameters allow users to change the configuration of an algorithm when scheduling an experiment.
Name | Description | Type | Default | Range/Choices
---|---|---|---|---
number-of-hidden-units | Number of units in the hidden layer | uint32 | 10 |
number-of-iterations | Number of R-prop training iterations | uint32 | 50 |
seed | Seed for the random number generator used for weight initialization | uint32 | 0 |
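As a hedged illustration of how these parameters might be consumed, the snippet below reads them from a plain dictionary and falls back to the defaults listed above; `configure` is a hypothetical helper name, and the `parameters` mapping is only assumed to behave like a standard dict.

```python
# Hypothetical helper: reads the three training parameters listed above,
# using the documented defaults when a value is not supplied.
def configure(parameters):
    return {
        "number_of_hidden_units": int(parameters.get("number-of-hidden-units", 10)),
        "number_of_iterations": int(parameters.get("number-of-iterations", 50)),
        "seed": int(parameters.get("seed", 0)),
    }
```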
The code for this algorithm in Python
This algorithm implements a training procedure for a multi-layer perceptron (MLP), a feed-forward neural network architecture. The implementation supports networks with zero or one hidden layer, and the training procedure is based on R-prop [Ri93].
This implementation relies on the Bob library.
The inputs are the `image`, a two-dimensional array of floats containing the sample to learn from, and its `class_id`, an unsigned integer giving the class label.
The output, `model`, is the trained MLP serialized in a Bob-compatible format.
[Ri93] M. Riedmiller and H. Braun, "A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm", IEEE International Conference on Neural Networks, 1993.
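The original Bob-based listing is not reproduced on this page. The sketch below is a minimal, self-contained illustration of the training procedure described above, written with plain NumPy instead of the Bob API; every name in it (`train_mlp_rprop`, `_rprop_step`, and so on) is an assumption for illustration, not part of Bob or of the platform. It trains a single-hidden-layer MLP with full-batch R-prop, using the step-size factors recommended in [Ri93] (eta+ = 1.2, eta- = 0.5, initial step 0.1, maximum step 50) and the common "iRprop-" sign-change rule; the three keyword arguments mirror the parameters listed earlier.

```python
import numpy as np


def _rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                step_min=1e-6, step_max=50.0):
    """One R-prop update for a single weight array (iRprop- variant)."""
    change = grad * prev_grad
    step = np.where(change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(change < 0, 0.0, grad)   # drop gradients that changed sign
    return w - np.sign(grad) * step, grad, step


def train_mlp_rprop(images, class_ids, number_of_hidden_units=10,
                    number_of_iterations=50, seed=0):
    """Full-batch R-prop training of an MLP with a single hidden layer."""
    rng = np.random.RandomState(seed)
    x = np.asarray(images, dtype=float)
    x = x.reshape(x.shape[0], -1)                       # (N, D) flattened images
    class_ids = np.asarray(class_ids, dtype=int)
    n_classes = int(class_ids.max()) + 1
    targets = np.eye(n_classes)[class_ids]              # one-hot labels

    # Small random initial weights, zero biases.
    w1 = rng.uniform(-0.1, 0.1, (x.shape[1], number_of_hidden_units))
    b1 = np.zeros(number_of_hidden_units)
    w2 = rng.uniform(-0.1, 0.1, (number_of_hidden_units, n_classes))
    b2 = np.zeros(n_classes)

    params = [w1, b1, w2, b2]
    prev = [np.zeros_like(p) for p in params]
    steps = [np.full_like(p, 0.1) for p in params]

    for _ in range(number_of_iterations):
        # Forward pass: tanh hidden layer, softmax output.
        h = np.tanh(x @ params[0] + params[1])
        logits = h @ params[2] + params[3]
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)

        # Backward pass for the mean cross-entropy loss.
        d_logits = (probs - targets) / x.shape[0]
        d_h = (d_logits @ params[2].T) * (1.0 - h ** 2)
        grads = [x.T @ d_h, d_h.sum(axis=0),
                 h.T @ d_logits, d_logits.sum(axis=0)]

        # R-prop only uses the sign of each gradient component.
        for i in range(len(params)):
            params[i], prev[i], steps[i] = _rprop_step(
                params[i], grads[i], prev[i], steps[i])

    return params  # in the real algorithm this would be written to `model`
```

Because the update uses only the sign of each gradient component, the per-weight step sizes adapt independently of the scale of the loss, which is the main practical appeal of R-prop for full-batch training.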
Updated | Name | Databases/Protocols | Analyzers
---|---|---|---
 | smarcel/tutorial/digit/2/mnist-mlp-nhu10-niter100-seed2001 | mnist/1@idiap | tutorial/multiclass_postperf/2
This table shows the number of times this algorithm has been successfully run using the given environment. Note that this does not provide enough information to evaluate whether the algorithm will run under different conditions.