Details
- Type: New Feature
- Status: Closed
- Priority: Major
- Resolution: Fixed
Description
Mini-batch gradient descent is typically the algorithm of choice when training a neural network.
MADlib currently supports IGD (incremental gradient descent); we may need to add extensions that offer mini-batch as a solver for MLP. Other modules will continue to use the existing IGD, which does not support mini-batching. Follow-up JIRAs will migrate the other modules to the new mini-batch gradient descent one at a time.
Related JIRA that will pre-process the input data to be consumed by mini-batch is https://issues.apache.org/jira/browse/MADLIB-1200
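For illustration only (not MADlib's actual implementation), the difference between IGD and mini-batch GD can be sketched in NumPy on a least-squares problem: IGD would update the weights after every row, whereas the loop below averages the gradient over `batch_size` rows before each update. The function name `minibatch_gd` and all parameter defaults are hypothetical.

```python
import numpy as np

def minibatch_gd(X, y, lr=0.1, batch_size=4, epochs=200, seed=0):
    """Mini-batch gradient descent for least-squares regression.

    Unlike IGD, which applies one update per row, each update here
    uses the gradient averaged over `batch_size` shuffled rows.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)  # reshuffle rows each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            # averaged gradient of 0.5 * ||X_b w - y_b||^2 over the batch
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w
```

With noise-free data the per-batch gradients all vanish at the true weights, so the iterates converge to them; the pre-processing ticket referenced above (MADLIB-1200) corresponds to the batching step that the shuffle-and-slice loop performs here in memory.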