SINGA-7

Implement shared memory Hogwild algorithm


Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed

    Description

      The original Hogwild [1] algorithm works on a multi-core machine with shared memory. There are two ways to implement it in SINGA:
      1. Follow the worker-server architecture: launch multiple worker groups and one server group, and share the memory space of parameter values among the worker groups and the server group. Worker groups compute gradients, and the server group updates parameter values.

      2. Use a worker-only architecture, as in Caffe: share the memory space of parameter values among worker groups. Workers compute gradients and update parameters locally.

      To simplify the implementation, we can initially restrict the group size to 1 (a minimal sketch of the resulting lock-free updates follows).
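
      For illustration, here is a minimal C++ sketch of option 2 with group size 1: several threads share one parameter vector and apply SGD updates without any locks, as Hogwild [1] prescribes. All names below (kNumParams, kNumWorkers, Worker, the toy quadratic objective) are hypothetical and are not SINGA APIs.

      {code:cpp}
// Minimal sketch of Hogwild-style lock-free SGD on shared memory.
// Names are illustrative only, not SINGA APIs.
#include <cstdio>
#include <thread>
#include <vector>

constexpr int   kNumParams  = 1024;   // size of the shared parameter vector
constexpr int   kNumWorkers = 4;      // worker groups, each of size 1
constexpr int   kSteps      = 100;    // SGD steps per worker
constexpr float kLr         = 0.01f;  // learning rate

// Parameter values shared by all workers; Hogwild deliberately uses no
// locks, accepting occasional lost updates from the (formally racy)
// concurrent read-modify-writes.
static std::vector<float> params(kNumParams, 0.0f);

void Worker() {
  for (int step = 0; step < kSteps; ++step) {
    for (int idx = 0; idx < kNumParams; ++idx) {
      // Toy objective 0.5*(w - 1)^2 per coordinate; a real worker would
      // compute gradients from a mini-batch of training data.
      float grad = params[idx] - 1.0f;
      params[idx] -= kLr * grad;  // unsynchronized update
    }
  }
}

int main() {
  std::vector<std::thread> workers;
  for (int i = 0; i < kNumWorkers; ++i) workers.emplace_back(Worker);
  for (auto& t : workers) t.join();
  std::printf("params[0] after training: %f\n", params[0]);
  return 0;
}
      {code}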

      There are also two choices for the frequency of reporting the training/test performance (a small sketch contrasting the two follows):
      1. based on training iterations
      2. based on wall-clock training time (e.g., seconds)
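
      Both policies reduce to a check inside the training loop. In the sketch below, disp_freq_steps and disp_freq_secs are hypothetical names, not actual SINGA configuration fields.

      {code:cpp}
#include <chrono>
#include <cstdio>

int main() {
  const int    disp_freq_steps = 100;   // option 1: report every N iterations
  const double disp_freq_secs  = 10.0;  // option 2: report every N seconds
  auto last_report = std::chrono::steady_clock::now();
  for (int step = 1; step <= 1000; ++step) {
    // ... run one training iteration here ...
    if (step % disp_freq_steps == 0)  // iteration-based reporting
      std::printf("step %d: report training/test metrics\n", step);
    auto now = std::chrono::steady_clock::now();
    double elapsed = std::chrono::duration<double>(now - last_report).count();
    if (elapsed >= disp_freq_secs) {  // time-based reporting
      std::printf("step %d: report training/test metrics (timed)\n", step);
      last_report = now;
    }
  }
  return 0;
}
      {code}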

      Once the shared-memory version is finished, we will extend it to a distributed environment.

      [1] B. Recht, C. Re, S. J. Wright, and F. Niu. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, pages 693–701, 2011.

          People

            Assignee: wangwei (wangwei.cs)
            Reporter: wangwei (wangwei.cs)
