Description
This is an umbrella JIRA for evaluation metrics in Python. They should be defined under `pyspark.mllib.evaluation`. We should try wrapping Scala's implementations instead of reimplementing them in Python.
Issue Links
- is contained by SPARK-7536 Audit MLlib Python API for 1.4 (Resolved)
- relates to SPARK-6254 MLlib Python API parity check at 1.3 release (Closed)
Sub-Tasks
1. Add BinaryClassificationMetrics in PySpark/MLlib (Resolved, Xiangrui Meng)
2. Add MulticlassMetrics in PySpark/MLlib (Resolved, Yanbo Liang)
3. Add RankingMetrics in PySpark/MLlib (Resolved, Yanbo Liang)
4. Add RegressionMetrics in PySpark/MLlib (Resolved, Yanbo Liang)
5. Add MultilabelMetrics in PySpark/MLlib (Resolved, Yanbo Liang)