Details

Type: New Feature

Status: Closed

Priority: Minor

Resolution: Duplicate

Affects Version/s: 0.7

Fix Version/s: 0.9

Component/s: None

Labels:
Description
Implement a multilayer perceptron:
- via matrix multiplication
- learning by backpropagation; implementing tricks by Yann LeCun et al.: "Efficient BackProp"
- arbitrary number of hidden layers (also 0, i.e. just the linear model)
- connections between proximate layers only
- different cost and activation functions (a different activation function in each layer)
- test of backprop by gradient checking
- normalization of the inputs (storeable) as part of the model
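The core idea above (an MLP as a chain of matrix multiplications, each followed by its own activation function, with 0 hidden layers degenerating to the linear model) can be sketched as follows. All names (`mlp_forward`, `weights`, `activations`) are illustrative, not Mahout API:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def identity(x):
    return x

def matvec(W, x):
    # plain matrix-vector product; W is a list of rows
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def mlp_forward(x, weights, activations):
    """Forward pass: one weight matrix and one activation per layer."""
    for W, act in zip(weights, activations):
        x = [act(v) for v in matvec(W, x)]
    return x

# With no hidden layer the network is just the linear model:
linear = mlp_forward([1.0, 2.0], [[[0.5, 0.25]]], [identity])

# One sigmoid hidden layer plus a linear output layer:
out = mlp_forward([1.0, 2.0],
                  [[[0.5, 0.25], [0.25, 0.5]], [[1.0, 1.0]]],
                  [sigmoid, identity])
```

Gradient checking (comparing backprop gradients against finite differences) would then be run against such a forward pass.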
First:
- implementation of a "stochastic gradient descent"-like gradient machine
- simple gradient descent incl. momentum
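A minimal sketch of the momentum update mentioned above; the function name and the constants are illustrative choices, not part of any existing Mahout class:

```python
def momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    """One update: v <- momentum*v - lr*grad ; w <- w + v."""
    v_new = [momentum * v - lr * g for v, g in zip(velocity, grad)]
    w_new = [wi + vi for wi, vi in zip(w, v_new)]
    return w_new, v_new

w, v = [1.0, 1.0], [0.0, 0.0]
w, v = momentum_step(w, [0.5, -0.5], v)
```

With momentum, successive steps in a consistent direction accumulate velocity, which is one of the tricks that speeds up plain gradient descent.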
Later (new JIRA issues):
- distributed batch learning (see below)
- "Stacked (Denoising) Autoencoder" feature learning
- advanced cost minimization like 2nd-order methods, conjugate gradient, etc.
Distribution of learning can be done by (batch learning):
1. Partitioning the data into x chunks
2. Learning the weight changes as matrices in each chunk
3. Combining the matrices and updating the weights, then back to 2
Maybe this procedure can be done with random parts of the chunks (distributed quasi-online learning).
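The three steps above can be simulated in a single process; this sketch assumes mean-squared error on a 1-d linear model, and the function names (`gradient_on_chunk`, `batch_round`) are invented for illustration:

```python
def gradient_on_chunk(w, chunk):
    # chunk: list of (x, y) pairs for a 1-d linear model y ≈ w*x (MSE gradient)
    g = 0.0
    for x, y in chunk:
        g += 2.0 * (w * x - y) * x
    return g / len(chunk)

def batch_round(w, data, num_chunks, lr=0.05):
    # 1. partition the data into chunks
    chunks = [data[i::num_chunks] for i in range(num_chunks)]
    # 2. compute the weight change on each chunk independently
    grads = [gradient_on_chunk(w, c) for c in chunks]
    # 3. combine (average) the per-chunk results and update, then repeat
    return w - lr * sum(grads) / len(grads)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w = 0.0
for _ in range(100):
    w = batch_round(w, data, num_chunks=2)
# w converges toward the true slope 2.0
```

In a real distributed setting step 2 runs on separate workers and step 3 is the reduce/combine phase; sampling random parts of the chunks per round gives the quasi-online variant.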
Batch learning with delta-bar-delta heuristics for adapting the learning rates.
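A hedged sketch of the delta-bar-delta heuristic: each weight keeps its own learning rate, increased additively while the current gradient agrees in sign with an exponential average of past gradients, and decreased multiplicatively when they disagree. The constants `kappa`, `phi`, `theta` are illustrative choices, not values from the issue:

```python
def delta_bar_delta(lr, grad, bar, kappa=0.01, phi=0.5, theta=0.7):
    s = grad * bar           # sign agreement of current and averaged gradient
    if s > 0:
        lr += kappa          # consistent direction: grow the rate linearly
    elif s < 0:
        lr *= phi            # oscillation: shrink the rate geometrically
    bar = (1 - theta) * grad + theta * bar  # update the averaged gradient
    return lr, bar

lr, bar = 0.1, 0.0
lr, bar = delta_bar_delta(lr, 1.0, bar)   # bar was 0: lr unchanged
lr, bar = delta_bar_delta(lr, 1.0, bar)   # same sign: lr grows by kappa
lr, bar = delta_bar_delta(lr, -1.0, bar)  # sign flip: lr is multiplied by phi
```

In batch learning this rule is applied per weight after each combined update.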
Issue Links
 is superseded by

MAHOUT-1265 Add Multilayer Perceptron
 Closed
Activity
Although it's not the same (it is again a NN) and AFAIK the learning is sequential, it's worth checking out the restricted Boltzmann machine implementation that was just submitted to
MAHOUT-968