HAMA-681

Multi Layer Perceptron


Details

    Description

      Implementation of a Multilayer Perceptron (Neural Network)

      • Learning by backpropagation (a minimal sketch follows this list)
      • Distributed learning
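
      For reference, here is a minimal, self-contained sketch of learning by backpropagation. It is independent of the attached patch; the class name, hyperparameters, and the XOR toy task are illustrative assumptions only:

{code:java}
import java.util.Random;

/** Illustrative single-hidden-layer MLP trained by online backpropagation on XOR. */
public class BackpropSketch {

  static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

  /** Forward pass through the trained network. */
  static double predict(double[] xi, double[][] wHid, double[] wOut) {
    int hid = wHid.length, in = xi.length;
    double out = wOut[hid];                                 // output bias
    for (int j = 0; j < hid; j++) {
      double sum = wHid[j][in];                             // hidden bias
      for (int i = 0; i < in; i++) sum += wHid[j][i] * xi[i];
      out += wOut[j] * sigmoid(sum);
    }
    return sigmoid(out);
  }

  public static void main(String[] args) {
    double[][] x = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };      // XOR inputs
    double[] t = { 0, 1, 1, 0 };                            // XOR targets
    int in = 2, hid = 3;
    double rate = 0.5;                                      // illustrative hyperparameters
    Random rnd = new Random(42);

    // last column of each weight row holds the bias weight
    double[][] wHid = new double[hid][in + 1];
    double[] wOut = new double[hid + 1];
    for (double[] row : wHid)
      for (int i = 0; i <= in; i++) row[i] = rnd.nextDouble() - 0.5;
    for (int j = 0; j <= hid; j++) wOut[j] = rnd.nextDouble() - 0.5;

    for (int epoch = 0; epoch < 20000; epoch++) {
      for (int n = 0; n < x.length; n++) {
        // forward pass, keeping the hidden activations for the backward pass
        double[] h = new double[hid];
        for (int j = 0; j < hid; j++) {
          double sum = wHid[j][in];
          for (int i = 0; i < in; i++) sum += wHid[j][i] * x[n][i];
          h[j] = sigmoid(sum);
        }
        double out = wOut[hid];
        for (int j = 0; j < hid; j++) out += wOut[j] * h[j];
        out = sigmoid(out);

        // backward pass: squared-error delta at the output, then hidden deltas
        double dOut = (out - t[n]) * out * (1 - out);
        for (int j = 0; j < hid; j++) {
          double dHid = dOut * wOut[j] * h[j] * (1 - h[j]); // uses wOut before its update
          for (int i = 0; i < in; i++) wHid[j][i] -= rate * dHid * x[n][i];
          wHid[j][in] -= rate * dHid;                       // hidden bias update
          wOut[j] -= rate * dOut * h[j];
        }
        wOut[hid] -= rate * dOut;                           // output bias update
      }
    }
    for (int n = 0; n < x.length; n++)
      System.out.printf("f(%.0f, %.0f) = %.3f%n", x[n][0], x[n][1], predict(x[n], wHid, wOut));
  }
}
{code}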

      The implementation should be the basis for the following long-range goals:

      • More efficient learning (Adagrad, L-BFGS; an Adagrad sketch follows this list)
      • Highly efficient distributed learning
      • Autoencoders: sparse (denoising) autoencoders
      • Deep learning
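
      Adagrad, mentioned above, keeps a per-weight sum of squared gradients and divides the learning rate by its square root, so frequently updated weights get a smaller effective step. A hedged sketch of the update (method name and signature are illustrative, not from the patch):

{code:java}
/**
 * Per-weight Adagrad step: cache_i += g_i^2 and
 * w_i -= eta * g_i / (sqrt(cache_i) + eps).
 */
static void adagradStep(double[] weights, double[] grad, double[] cache,
                        double eta, double eps) {
  for (int i = 0; i < weights.length; i++) {
    cache[i] += grad[i] * grad[i];              // accumulated squared gradient
    weights[i] -= eta * grad[i] / (Math.sqrt(cache[i]) + eps);
  }
}
{code}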


      Due to its overhead, MapReduce (MR) did not seem to be the best strategy for distributing the learning of MLPs.
      Therefore the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on Mahout (its matrix library) must be removed to obtain a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning; a skeleton of one possible layout follows.
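
      The following sketch assumes the generic Hama BSP API (BSP#bsp(BSPPeer), peer.send, peer.sync, peer.getCurrentMessage) and shows one possible training layout, not the attached patch: every peer computes a gradient over its data shard, a master peer averages the gradients and broadcasts the updated weights. The class, constants, and the master-peer convention are illustrative assumptions:

{code:java}
import java.io.IOException;

import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Writable;
import org.apache.hama.bsp.BSP;
import org.apache.hama.bsp.BSPPeer;
import org.apache.hama.bsp.sync.SyncException;

/** Hypothetical BSP job: gradient averaging with a fixed master peer. */
public class MLPTrainerBSP extends
    BSP<NullWritable, NullWritable, NullWritable, NullWritable,
        MLPTrainerBSP.DoubleArrayWritable> {

  /** ArrayWritable subclass so Hama can deserialize the message type. */
  public static class DoubleArrayWritable extends ArrayWritable {
    public DoubleArrayWritable() { super(DoubleWritable.class); }
    public DoubleArrayWritable(double[] v) {
      super(DoubleWritable.class);
      DoubleWritable[] boxed = new DoubleWritable[v.length];
      for (int i = 0; i < v.length; i++) boxed[i] = new DoubleWritable(v[i]);
      set(boxed);
    }
    public double[] toDoubles() {
      Writable[] w = get();
      double[] v = new double[w.length];
      for (int i = 0; i < w.length; i++) v[i] = ((DoubleWritable) w[i]).get();
      return v;
    }
  }

  private static final int ROUNDS = 100;            // illustrative
  private static final double LEARNING_RATE = 0.1;  // illustrative
  private double[] weights = new double[10];        // illustrative size

  @Override
  public void bsp(BSPPeer<NullWritable, NullWritable, NullWritable, NullWritable,
      DoubleArrayWritable> peer) throws IOException, SyncException, InterruptedException {
    String master = peer.getAllPeerNames()[0];      // peer 0 acts as master
    for (int round = 0; round < ROUNDS; round++) {
      double[] grad = localGradient();              // backprop over the local shard
      peer.send(master, new DoubleArrayWritable(grad));
      peer.sync();                                  // barrier: all gradients delivered

      if (peer.getPeerName().equals(master)) {
        double[] sum = new double[weights.length];
        int peers = 0;
        DoubleArrayWritable msg;
        while ((msg = peer.getCurrentMessage()) != null) {
          double[] g = msg.toDoubles();
          for (int i = 0; i < sum.length; i++) sum[i] += g[i];
          peers++;
        }
        for (int i = 0; i < weights.length; i++)
          weights[i] -= LEARNING_RATE * sum[i] / peers;  // averaged gradient step
        for (String p : peer.getAllPeerNames())          // broadcast new weights
          peer.send(p, new DoubleArrayWritable(weights));
      }
      peer.sync();                                  // barrier: weights delivered
      weights = peer.getCurrentMessage().toDoubles();
    }
  }

  private double[] localGradient() {
    return new double[weights.length];              // placeholder for real backprop
  }
}
{code}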

      Different strategies for efficient, synchronized weight updates have to be evaluated; one candidate, parameter averaging, is sketched below.
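
      As an alternative to per-superstep gradient averaging (sketched above), parameter averaging lets each peer train on its shard for several iterations and only averages the weight vectors at the barrier, trading synchronization cost against model staleness. A minimal sketch (hypothetical helper, not from the patch):

{code:java}
/** Parameter averaging: combine per-peer weight vectors into one model. */
static double[] averageWeights(double[][] peerWeights) {
  int peers = peerWeights.length, dim = peerWeights[0].length;
  double[] avg = new double[dim];
  for (double[] w : peerWeights)
    for (int i = 0; i < dim; i++) avg[i] += w[i] / peers;
  return avg;
}
{code}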

      Resources:

      Videos:
        MLP and Deep Learning Tutorial:

      Scientific Papers:

      Attachments

        1. HAMA-681.patch (70 kB), Yexi Jiang
        2. HAMA-681.patch (70 kB), Yexi Jiang
        3. HAMA-681.patch (67 kB), Yexi Jiang
        4. perception.patch (55 kB), Yexi Jiang


People

  Assignee: Yexi Jiang (yxjiang)
  Reporter: Christian Herta (chrisberlin)
  Votes: 0
  Watchers: 8
