Implementation of a Multilayer Perceptron (Neural Network)
- Learning by Backpropagation
- Distributed Learning
The implementation should be the basis for the long-range goals:
- more efficient learning (Adagrad, L-BFGS)
- highly efficient distributed learning
- Autoencoder - Sparse (denoising) Autoencoder
- Deep Learning
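As a starting point for the first goal, the core of learning by backpropagation can be sketched in plain Java without any matrix library. The class below is a minimal, illustrative single-hidden-layer MLP with sigmoid units and squared-error loss; all names (SimpleMlp, trainStep, etc.) are hypothetical and not taken from MAHOUT-976.

```java
import java.util.Random;

/** Minimal sketch: single-hidden-layer MLP, sigmoid units, squared-error loss. */
public class SimpleMlp {
    final int nIn, nHid;
    final double[][] wHid;  // hidden weights, nHid x nIn
    final double[] bHid;    // hidden biases
    final double[] wOut;    // output weights (single output unit)
    double bOut;            // output bias

    SimpleMlp(int nIn, int nHid, long seed) {
        this.nIn = nIn; this.nHid = nHid;
        Random r = new Random(seed);
        wHid = new double[nHid][nIn];
        bHid = new double[nHid];
        wOut = new double[nHid];
        for (int j = 0; j < nHid; j++) {
            for (int i = 0; i < nIn; i++) wHid[j][i] = r.nextGaussian() * 0.5;
            wOut[j] = r.nextGaussian() * 0.5;
        }
    }

    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    /** Activation of hidden unit j for input x. */
    double hidden(double[] x, int j) {
        double z = bHid[j];
        for (int i = 0; i < nIn; i++) z += wHid[j][i] * x[i];
        return sigmoid(z);
    }

    /** Forward pass: network output for one example. */
    double predict(double[] x) {
        double z = bOut;
        for (int j = 0; j < nHid; j++) z += wOut[j] * hidden(x, j);
        return sigmoid(z);
    }

    /** One backpropagation step on one example; returns the loss before the update. */
    double trainStep(double[] x, double t, double lr) {
        double[] h = new double[nHid];
        for (int j = 0; j < nHid; j++) h[j] = hidden(x, j);
        double z = bOut;
        for (int j = 0; j < nHid; j++) z += wOut[j] * h[j];
        double y = sigmoid(z);
        double loss = 0.5 * (y - t) * (y - t);
        // Output error term: dL/dz = (y - t) * sigmoid'(z)
        double dOut = (y - t) * y * (1.0 - y);
        for (int j = 0; j < nHid; j++) {
            // Hidden error term, propagated back through wOut (old value).
            double dHid = dOut * wOut[j] * h[j] * (1.0 - h[j]);
            wOut[j] -= lr * dOut * h[j];
            for (int i = 0; i < nIn; i++) wHid[j][i] -= lr * dHid * x[i];
            bHid[j] -= lr * dHid;
        }
        bOut -= lr * dOut;
        return loss;
    }
}
```

With a small learning rate, one such step strictly decreases the loss on the trained example, which gives a quick sanity check for the gradient code.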
Due to its overhead, MapReduce (MR) did not seem to be the best strategy for distributing the learning of MLPs.
Therefore the current implementation of the MLP (see MAHOUT-976) should be migrated to Hama. First, all dependencies on Mahout (its matrix library) must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.
Different strategies for efficient, synchronized weight updates have to be evaluated.
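One candidate strategy is a BSP-style superstep: each peer computes a gradient on its data partition, all peers synchronize at a barrier, and the averaged gradient is applied identically everywhere. The sketch below simulates this sequentially with a simple linear unit and squared-error loss instead of a full MLP and uses no actual Hama API; the class and method names are illustrative only.

```java
/**
 * Sketch of one BSP-style superstep for synchronized weight updates:
 * per-partition gradients, a conceptual barrier, one shared averaged update.
 */
public class SyncUpdateSketch {

    /** Gradient of 0.5*(w.x - t)^2 summed over the given partition's rows. */
    static double[] partitionGradient(double[] w, double[][] xs, double[] ts) {
        double[] g = new double[w.length];
        for (int n = 0; n < xs.length; n++) {
            double err = -ts[n];
            for (int i = 0; i < w.length; i++) err += w[i] * xs[n][i];
            for (int i = 0; i < w.length; i++) g[i] += err * xs[n][i];
        }
        return g;
    }

    /**
     * One "superstep": local gradients per peer, then a single averaged
     * update. The accumulation loop stands in for message send + sync.
     */
    static double[] superstep(double[] w, double[][][] partsX, double[][] partsT,
                              double lr, int totalExamples) {
        double[] sum = new double[w.length];
        for (int p = 0; p < partsX.length; p++) {  // each peer, conceptually in parallel
            double[] g = partitionGradient(w, partsX[p], partsT[p]);
            for (int i = 0; i < w.length; i++) sum[i] += g[i];
        }
        double[] next = w.clone();
        for (int i = 0; i < w.length; i++) next[i] -= lr * sum[i] / totalExamples;
        return next;
    }
}
```

A useful property to verify: with equal treatment of all examples, the synchronized update over several partitions matches the single-worker full-batch update, so distribution changes only throughput, not the learning trajectory.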
MLP and Deep Learning Tutorial:
- Google's "Brain" project:
- Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf