[SPARK-2012] PySpark StatCounter with numpy arrays


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version: 1.0.0
    • Fix Version: 1.1.0
    • Component: PySpark
    • Labels: None

    Description

      In Spark 0.9, the PySpark version of StatCounter worked with an RDD of numpy arrays just as with an RDD of scalars, which was very useful (e.g. for computing stats on a set of vectors in ML analyses). In 1.0.0 this broke because the added functionality for computing the minimum and maximum, as implemented, doesn't work on arrays.
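
      For context, a minimal illustration of why array values trip up scalar-style min/max tracking. This assumes the 1.0.0 code updates min/max via ordinary Python comparisons, which is the usual pattern; the exact implementation may differ:

        import numpy as np

        a = np.array([1.0, 5.0])
        b = np.array([3.0, 2.0])

        a < b      # element-wise comparison: array([ True, False])
        # Treating whole arrays as a single scalar condition is ambiguous,
        # so the built-in max fails:
        max(a, b)  # ValueError: The truth value of an array with more than
                   # one element is ambiguous. Use a.any() or a.all()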

      I have a PR ready that re-enables this functionality by having StatCounter use the numpy element-wise functions "maximum" and "minimum", which work on both numpy arrays and scalars (and I've added new tests for this capability).
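
      As a sketch of that approach (a simplified stand-in, not the actual pyspark.statcounter.StatCounter or the PR code), the merge step can track min/max with the element-wise functions so that scalars and arrays follow the same code path:

        import numpy as np

        class StatCounterSketch(object):
            def __init__(self, values=()):
                self.n = 0
                self.mu = 0.0
                self.maxValue = float("-inf")
                self.minValue = float("inf")
                for v in values:
                    self.merge(v)

            def merge(self, value):
                # Running count and mean; ordinary arithmetic already
                # broadcasts over numpy arrays, so this part was never
                # the problem.
                self.n += 1
                delta = value - self.mu
                self.mu += delta / self.n
                # numpy.maximum/minimum are element-wise on arrays and
                # reduce to plain comparisons on scalars, so one code
                # path handles both.
                self.maxValue = np.maximum(self.maxValue, value)
                self.minValue = np.minimum(self.minValue, value)
                return self

      For example, StatCounterSketch([np.array([1.0, 5.0]), np.array([3.0, 2.0])]).maxValue yields array([3., 5.]), while scalar inputs behave exactly as before.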

      However, I realize this adds a dependency on NumPy outside of MLlib. If that's not ok, maybe it'd be worth adding this functionality as a util within PySpark MLlib?


          People

            Assignee: Jeremy Freeman (freeman-lab)
            Reporter: Jeremy Freeman (freeman-lab)
            Votes: 0
            Watchers: 3

