Details
- Type: Umbrella
- Status: Resolved
- Priority: Major
- Resolution: Incomplete
Description
Goal: Improve APIs for optimization
Motivation: There have been several scattered discussions about making the optimization APIs more pluggable and extensible. This JIRA is a place to discuss what API changes are needed for the long term, and to collect links to the other relevant JIRAs.
Eventually, I hope this leads to a design doc outlining:
- current issues
- requirements such as supporting many types of objective functions, optimization algorithms, and parameters to those algorithms
- ideal API
- breakdown of smaller JIRAs needed to achieve that API
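To make the "pluggable, extensible" requirement concrete, here is a minimal sketch of what such an API could look like. This is a hypothetical illustration, not the actual MLlib API: the `Objective` and `Optimizer` traits and all method names below are assumptions, chosen to show objective functions and algorithms varying independently, with builder-style setters in the spirit of SPARK-6682 and the optimizer passed as a parameter in the spirit of SPARK-1457.

```scala
// Hypothetical sketch of a pluggable optimization API (names are illustrative).

// An objective function supplies loss and gradient at a given weight vector.
trait Objective {
  def lossAndGradient(weights: Array[Double]): (Double, Array[Double])
}

// Any optimization algorithm implements one entry point, so algorithms
// can be swapped without changing the calling learner.
trait Optimizer {
  def optimize(objective: Objective, init: Array[Double]): Array[Double]
}

// A gradient-descent implementation configured with builder-style setters.
class GradientDescent extends Optimizer {
  private var stepSize = 0.1
  private var numIterations = 100

  def setStepSize(s: Double): this.type = { stepSize = s; this }
  def setNumIterations(n: Int): this.type = { numIterations = n; this }

  override def optimize(objective: Objective, init: Array[Double]): Array[Double] = {
    var w = init.clone()
    for (_ <- 0 until numIterations) {
      val (_, grad) = objective.lossAndGradient(w)
      w = w.zip(grad).map { case (wi, gi) => wi - stepSize * gi }
    }
    w
  }
}

object Example extends App {
  // Toy objective: f(w) = (w - 3)^2, with gradient 2 * (w - 3).
  val quadratic = new Objective {
    def lossAndGradient(w: Array[Double]): (Double, Array[Double]) = {
      val d = w(0) - 3.0
      (d * d, Array(2.0 * d))
    }
  }
  val optimizer = new GradientDescent().setStepSize(0.1).setNumIterations(200)
  val solution = optimizer.optimize(quadratic, Array(0.0))
  println(solution(0)) // converges toward the minimizer, 3.0
}
```

Under this shape, a learner would accept any `Optimizer` instance, and new algorithm parameters stay on the algorithm's own builder rather than leaking into every training method signature.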
Issue Links
- blocks
  - SPARK-6849 The constructor of GradientDescent should be public (Resolved)
  - SPARK-6682 Deprecate static train and use builder instead for Scala/Java (Closed)
- is cloned by
  - SPARK-18303 CLONE - Improving MLlib optimization APIs (Resolved)
- is related to
  - SPARK-2372 Grouped Optimization/Learning (Resolved)
  - SPARK-2505 Weighted Regularizer (Closed)
  - SPARK-1486 Support multi-model training in MLlib (Resolved)
  - SPARK-1227 Diagnostics for Classification&Regression (Resolved)
  - SPARK-4526 Gradient should be added batch computing interface (Resolved)
  - SPARK-5362 Gradient and Optimizer to support generic output (instead of label) and data batches (Resolved)
  - SPARK-1457 Change APIs for training algorithms to take optimizer as parameter (Resolved)
  - SPARK-11696 MLLIB:Optimization - Extend optimizer output for GradientDescent and LBFGS (Closed)
- relates to
  - SPARK-17136 Design optimizer interface for ML algorithms (Resolved)