Spark / SPARK-7685

Handle highly imbalanced data and apply weights to different samples in Logistic Regression


Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.6.0
    • Component/s: ML
    • Labels: None

    Description

      In a fraud detection dataset, almost all of the samples are negative while only a handful are positive. Such highly imbalanced data biases the model toward the negative class, resulting in poor performance. scikit-learn provides a correction that lets users over-/undersample the samples of each class according to given weights; in "auto" mode, it selects weights inversely proportional to the class frequencies in the training set. The same correction can be applied much more efficiently by multiplying the weights into the loss and gradient rather than actually over-/undersampling the training dataset, which is very expensive.

      http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
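
      Concretely, the correction amounts to scaling each sample's contribution to the objective by a per-sample weight c_i. A minimal sketch of the weighted logistic loss, in notation assumed here rather than taken from the issue, for labels y_i in {-1, +1} and coefficients w:

          L(w) = \sum_{i=1}^{n} c_i \log(1 + \exp(-y_i w^T x_i)), \qquad c_i = \frac{n}{k \, n_{y_i}} \text{ (balanced mode)}

      where k is the number of classes and n_{y_i} is the number of training samples in the class of sample i, so each class contributes equally to the total loss regardless of how rare it is. Minimizing this weighted loss has the same effect as replicating each sample in proportion to its weight, without the cost of materializing a resampled dataset.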

      On the other hand, some training samples may be more important than others: for example, samples from tenured users may matter more than samples from new users. We should be able to provide an additional "weight: Double" field in LabeledPoint so that the learning algorithm can weight samples differently.
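
      As of the 1.6.0 fix version, the DataFrame-based ML API exposes per-sample weights through LogisticRegression.setWeightCol. The sketch below is illustrative rather than authoritative: the column names and the trainBalanced helper are hypothetical, but it shows how weights inversely proportional to class frequency reproduce the balancing correction described above without any resampling.

      {code:scala}
      import org.apache.spark.ml.classification.LogisticRegression
      import org.apache.spark.sql.DataFrame
      import org.apache.spark.sql.functions.{col, when}

      // Hypothetical helper: derives per-sample weights inversely proportional
      // to class frequency, then trains a weighted logistic regression.
      // Assumes a binary "label" column (0.0 / 1.0) and a "features" column.
      def trainBalanced(training: DataFrame) = {
        val numExamples   = training.count().toDouble
        val numPositives  = training.filter(col("label") === 1.0).count()
        val positiveRatio = numPositives / numExamples

        // The rare positive class gets the larger weight so that both classes
        // contribute comparably to the loss and gradient.
        val weighted = training.withColumn("weight",
          when(col("label") === 1.0, 1.0 - positiveRatio).otherwise(positiveRatio))

        new LogisticRegression()
          .setWeightCol("weight") // weights are multiplied into loss and gradient
          .fit(weighted)
      }
      {code}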


People

    Assignee: DB Tsai
    Reporter: DB Tsai
    Shepherd: Joseph K. Bradley
    Votes: 3
    Watchers: 10

Dates

    Created:
    Updated:
    Resolved: