Details
- Type: Improvement
- Status: In Progress
- Priority: Major
- Resolution: Unresolved
- Sprint: Sprint 2
Description
Currently, we have a mathematical framework in place for training with distributed SGD in a distributed MNIST LeNet example. This task aims to push that example to scale in order to determine (1) the current behavior of the engine (i.e., does the optimizer actually run the training in a distributed fashion?), and (2) ways to improve the robustness and performance of this scenario. The distributed SGD framework from this example has already been ported into Caffe2DML, so improvements made for this task will directly benefit our efforts towards distributed training of Caffe models (and Keras models in the future). A sketch of the underlying synchronous SGD pattern is included below for reference.
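For context, the synchronous SGD pattern the example implements is: each of P workers computes a gradient on its own mini-batch, the gradients are averaged, and a single update is applied to the shared model per iteration. The following is a minimal NumPy sketch of that pattern, not the project's DML script; the toy linear model, worker count, and hyperparameters are illustrative assumptions (in SystemML the per-worker loop body would run inside a parfor so the engine can parallelize it).

```python
# Minimal sketch of synchronous data-parallel SGD (illustrative only).
import numpy as np

rng = np.random.default_rng(42)

# Toy linear regression stands in for the LeNet model.
X = rng.standard_normal((1024, 20))
y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(1024)
w = np.zeros(20)                 # shared model parameters

P = 4                            # number of parallel workers (assumption)
batch_size = 64
lr = 0.05

def gradient(w, Xb, yb):
    """Gradient of mean squared error on one mini-batch."""
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

for it in range(100):
    # Each "worker" computes a gradient on its own mini-batch; in the DML
    # example this loop is the part intended to execute in parallel (parfor).
    grads = []
    for p in range(P):
        idx = rng.choice(len(X), size=batch_size, replace=False)
        grads.append(gradient(w, X[idx], y[idx]))

    # Synchronous step: average the workers' gradients and apply one update.
    w -= lr * np.mean(grads, axis=0)

print(f"final training loss: {np.mean((X @ w - y) ** 2):.4f}")
```

Determining whether the engine actually distributes this amounts to checking that the per-worker gradient computations (the inner loop above) are executed concurrently rather than serialized, which is what SYSTEMDS-1774 targets on the parfor side.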
Attachments
Issue Links
- is a parent of
  - SYSTEMDS-1774 Improve Parfor parallelism for deep learning (Closed)
- relates to
  - SYSTEMDS-1563 Add a distributed synchronous SGD MNIST LeNet example (Closed)