Spark / SPARK-19053

Supporting multiple evaluation metrics in DataFrame-based API: discussion


    Details

    • Type: Brainstorming
    • Status: In Progress
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: ML
    • Labels: None

      Description

      This JIRA is to discuss how to support efficient computation of multiple evaluation metrics in the DataFrame-based API for MLlib.

      In the RDD-based API, RegressionMetrics and other *Metrics classes support efficient computation of multiple metrics.
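
      For reference, the RDD-based classes aggregate sufficient statistics once and then expose all metrics cheaply. A minimal sketch with RegressionMetrics (assuming an existing RDD of (prediction, label) pairs called predictionAndLabels):

      {code:scala}
      import org.apache.spark.mllib.evaluation.RegressionMetrics
      import org.apache.spark.rdd.RDD

      // Assumed input: (prediction, label) pairs.
      val predictionAndLabels: RDD[(Double, Double)] = ???

      // The constructor makes one pass to collect summary statistics;
      // every metric below is derived from them without rescanning the RDD.
      val metrics = new RegressionMetrics(predictionAndLabels)
      println(s"MSE  = ${metrics.meanSquaredError}")
      println(s"RMSE = ${metrics.rootMeanSquaredError}")
      println(s"MAE  = ${metrics.meanAbsoluteError}")
      println(s"R^2  = ${metrics.r2}")
      {code}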

      In the DataFrame-based API, there are a few options:

      • Model/result summaries (e.g., LogisticRegressionSummary): These currently provide the desired functionality, but they require a model; users cannot compute metrics directly from a DataFrame of predictions and true labels.
      • Evaluator classes (e.g., RegressionEvaluator): These do not require a model, but each evaluation computes only a single metric in one pass over the data, so computing several metrics means several passes (see the first sketch after this list).
      • New classes analogous to the *Metrics classes: We could introduce DataFrame-based equivalents that compute multiple metrics in a single pass (see the second sketch after this list). Model/result summaries could use these internally as a replacement for the spark.mllib Metrics classes, or they could (maybe) inherit from them.
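
      To make the trade-offs concrete, here are two rough sketches (not exact API proposals). First, with the current RegressionEvaluator, each metric costs a separate pass over the predictions DataFrame (column names below are the defaults):

      {code:scala}
      import org.apache.spark.ml.evaluation.RegressionEvaluator
      import org.apache.spark.sql.DataFrame

      // Assumed input: a DataFrame with "prediction" and "label" columns.
      val predictions: DataFrame = ???

      val evaluator = new RegressionEvaluator()
        .setLabelCol("label")
        .setPredictionCol("prediction")

      // Each evaluate() call scans the data once and returns a single metric.
      val rmse = evaluator.setMetricName("rmse").evaluate(predictions)
      val mae  = evaluator.setMetricName("mae").evaluate(predictions)
      val r2   = evaluator.setMetricName("r2").evaluate(predictions)
      {code}

      Second, a DataFrame-based analogue of the *Metrics classes could aggregate the needed statistics once and derive several metrics from them. This is purely hypothetical; the helper name and signature below are made up for illustration:

      {code:scala}
      import org.apache.spark.sql.DataFrame
      import org.apache.spark.sql.functions._

      // Hypothetical helper, not an existing API: several regression metrics
      // from a single aggregation pass over predictions and labels.
      def regressionMetrics(
          df: DataFrame,
          labelCol: String = "label",
          predictionCol: String = "prediction"): Map[String, Double] = {
        val err = col(predictionCol) - col(labelCol)
        val row = df.agg(
          avg(err * err).as("mse"),          // mean squared error
          avg(abs(err)).as("mae"),           // mean absolute error
          var_pop(col(labelCol)).as("varY")  // population variance of labels, for R^2
        ).head()
        val mse = row.getDouble(0)
        Map(
          "mse"  -> mse,
          "rmse" -> math.sqrt(mse),
          "mae"  -> row.getDouble(1),
          "r2"   -> (1.0 - mse / row.getDouble(2)))
      }
      {code}

      Model/result summaries could then be thin wrappers over such a helper instead of going through the spark.mllib Metrics classes.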

      Thoughts?


    People

    • Assignee: Unassigned
    • Reporter: Joseph K. Bradley (josephkb)
    • Votes: 0
    • Watchers: 7
