In Spark 2.0, we are going to have 4 ML models in SparkR: GLMs, k-means, naive Bayes, and AFT survival regression. Users can fit models, get model summaries, and make predictions. However, they cannot save or load these models yet.
ML models in SparkR are wrappers around ML pipelines, so implementing model persistence should be straightforward. The API needs more thought, though. R uses save/load for objects and datasets (datasets are also objects). It is possible to overload save for ML models, e.g., save.NaiveBayesWrapper, but I'm not sure whether load can be overloaded as easily. I propose the following API:
model <- glm(formula, data = df)          # fit as usual
ml.save(model, path, mode = "overwrite")  # persist the fitted wrapper
model2 <- ml.load(path)                   # restore it later
We defined the wrappers as S4 classes. So `ml.save` would be an S4 method, while `ml.load` would be a plain function, since there is no R object to dispatch on before loading (correct me if I'm wrong).
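For illustration, here is a minimal sketch of what that could look like, assuming each wrapper is an S4 class holding a `jobj` reference to its JVM-side counterpart (as existing SparkR wrappers do). The class name, the writer calls, and the `RWrappers` loader below are assumptions for the sketch, not a final API:

library(SparkR)

# Illustrative wrapper class; SparkR wrappers hold a reference to the
# JVM-side wrapper object in a "jobj" slot.
setClass("NaiveBayesWrapper", representation(jobj = "jobj"))

# ml.save dispatches on the S4 wrapper class ...
setGeneric("ml.save", function(object, path, ...) {
  standardGeneric("ml.save")
})

setMethod("ml.save", signature(object = "NaiveBayesWrapper"),
          function(object, path, mode = "error") {
            # Delegate to a JVM-side writer on the wrapper; calling
            # overwrite() when mode = "overwrite" mirrors the Scala
            # MLWriter API. These calls assume the JVM wrapper is
            # MLWritable.
            writer <- callJMethod(object@jobj, "write")
            if (mode == "overwrite") {
              writer <- callJMethod(writer, "overwrite")
            }
            invisible(callJMethod(writer, "save", path))
          })

# ... while ml.load can be a plain function: it reads the saved
# metadata and constructs the right wrapper. "RWrappers" is a
# hypothetical JVM-side helper for this sketch.
ml.load <- function(path) {
  jobj <- callJStatic("org.apache.spark.ml.r.RWrappers", "load", path)
  new("NaiveBayesWrapper", jobj = jobj)
}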
Issue links:
- is related to: SPARK-6725 Model export/import for Pipeline API (Scala) (Resolved)
- relates to: SPARK-14831 Make ML APIs in SparkR consistent (Resolved)
Sub-tasks:
1. NaiveBayes model persistence in SparkR (Resolved, Yanbo Liang)
2. AFTSurvivalRegression model persistence in SparkR (Resolved, Yanbo Liang)
3. K-means model persistence in SparkR (Resolved, Gayathri Murali)
4. GLMs model persistence in SparkR (Resolved, Gayathri Murali)