Details
- Type: New Feature
- Status: Resolved
- Priority: Major
- Resolution: Fixed
Description
Implementation of a Multilayer Perceptron (Neural Network)
- Learning by backpropagation
- Distributed learning
The implementation should be the basis for the long-range goals:
- more efficient learning (Adagrad, L-BFGS)
- highly efficient distributed learning
- Autoencoder - sparse (denoising) autoencoder
- Deep Learning
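To make the first goal concrete, here is a minimal sketch of a single-hidden-layer perceptron trained by backpropagation in plain Java, with no Mahout or Hama dependencies. All class and method names (SimpleMlp, train, forward) are illustrative, not part of any existing codebase:

```java
import java.util.Random;

/** Minimal single-hidden-layer MLP with sigmoid units, trained by
 *  stochastic backpropagation on squared error. Illustrative sketch only. */
public class SimpleMlp {
    final int nIn, nHid;
    final double[][] w1;   // input -> hidden weights
    final double[] w2;     // hidden -> output weights
    final double[] b1;     // hidden biases
    double b2;             // output bias
    final double lr;       // learning rate

    SimpleMlp(int nIn, int nHid, double lr, long seed) {
        this.nIn = nIn; this.nHid = nHid; this.lr = lr;
        Random r = new Random(seed);
        w1 = new double[nHid][nIn];
        w2 = new double[nHid];
        b1 = new double[nHid];
        for (int j = 0; j < nHid; j++) {
            w2[j] = r.nextGaussian() * 0.5;
            for (int i = 0; i < nIn; i++) w1[j][i] = r.nextGaussian() * 0.5;
        }
    }

    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    /** Forward pass; fills hidOut with hidden activations, returns the output. */
    double forward(double[] x, double[] hidOut) {
        for (int j = 0; j < nHid; j++) {
            double z = b1[j];
            for (int i = 0; i < nIn; i++) z += w1[j][i] * x[i];
            hidOut[j] = sigmoid(z);
        }
        double z = b2;
        for (int j = 0; j < nHid; j++) z += w2[j] * hidOut[j];
        return sigmoid(z);
    }

    /** One gradient step on one example; returns the squared error before the update. */
    double train(double[] x, double target) {
        double[] h = new double[nHid];
        double out = forward(x, h);
        double delta = (out - target) * out * (1 - out);       // output-layer error term
        for (int j = 0; j < nHid; j++) {
            double deltaH = delta * w2[j] * h[j] * (1 - h[j]); // backpropagated error
            w2[j] -= lr * delta * h[j];
            for (int i = 0; i < nIn; i++) w1[j][i] -= lr * deltaH * x[i];
            b1[j] -= lr * deltaH;
        }
        b2 -= lr * delta;
        return 0.5 * (out - target) * (out - target);
    }
}
```

Trained repeatedly on a small dataset such as XOR, the per-epoch error of this network decreases from its initial random state.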
—
Due to its overhead, MapReduce (MR) did not seem to be the best strategy for distributing the learning of MLPs.
Therefore the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on Mahout (its matrix library) must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.
Different strategies for efficient synchronized weight updates have to be evaluated.
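One candidate strategy for synchronized weight updates is gradient averaging per superstep: each peer computes a gradient on its data shard, all peers exchange gradients at the barrier, and every peer applies the same averaged step so the model replicas stay identical. The sketch below simulates this sequentially in plain Java (in Hama the exchange phase would use message passing followed by a barrier sync inside bsp()); the class name, the linear model, and all method names are illustrative assumptions:

```java
/** Sketch of one BSP superstep for synchronized weight updates via
 *  gradient averaging. Peers are simulated sequentially; illustrative only. */
public class BspWeightSync {
    /** Mean gradient of squared error for a linear model on one peer's shard. */
    static double[] localGradient(double[] w, double[][] xs, double[] ys) {
        double[] g = new double[w.length];
        for (int n = 0; n < xs.length; n++) {
            double pred = 0;
            for (int i = 0; i < w.length; i++) pred += w[i] * xs[n][i];
            double err = pred - ys[n];
            for (int i = 0; i < w.length; i++) g[i] += err * xs[n][i];
        }
        for (int i = 0; i < g.length; i++) g[i] /= xs.length;
        return g;
    }

    /** One synchronized update: local compute phase per peer, then averaging
     *  (standing in for the message exchange + barrier), then the same step
     *  applied everywhere so all weight replicas stay in sync. */
    static void superstep(double[] w, double[][][] shardsX, double[][] shardsY, double lr) {
        int peers = shardsX.length;
        double[] avg = new double[w.length];
        for (int p = 0; p < peers; p++) {              // each peer's local phase
            double[] g = localGradient(w, shardsX[p], shardsY[p]);
            for (int i = 0; i < w.length; i++) avg[i] += g[i] / peers;
        }
        // after the barrier: every peer holds the averaged gradient
        for (int i = 0; i < w.length; i++) w[i] -= lr * avg[i];
    }
}
```

With two shards drawn from y = 2x, repeated supersteps drive the single weight toward 2 on every peer simultaneously, which is the property the synchronized-update strategies under evaluation must preserve.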
Resources:
Videos:
MLP and Deep Learning Tutorial:
Scientific Papers:
- Google's "Brain" project: http://research.google.com/archive/large_deep_networks_nips2012.html
- Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
- Stacked denoising autoencoders (Vincent et al., JMLR 2010): http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf