Description
`bin/spark-submit examples/src/main/python/ml/simple_text_classification_pipeline.py`
Row(id=4, text=u'spark i j k', words=[u'spark', u'i', u'j', u'k'], features=SparseVector(262144, {105: 1.0, 106: 1.0, 107: 1.0, 62173: 1.0}), rawPrediction=DenseVector([0.1629, -0.1629]), probability=DenseVector([0.5406, 0.4594]), prediction=0.0)
Row(id=5, text=u'l m n', words=[u'l', u'm', u'n'], features=SparseVector(262144, {108: 1.0, 109: 1.0, 110: 1.0}), rawPrediction=DenseVector([2.6407, -2.6407]), probability=DenseVector([0.9334, 0.0666]), prediction=0.0)
Row(id=6, text=u'mapreduce spark', words=[u'mapreduce', u'spark'], features=SparseVector(262144, {62173: 1.0, 140738: 1.0}), rawPrediction=DenseVector([1.2651, -1.2651]), probability=DenseVector([0.7799, 0.2201]), prediction=0.0)
Row(id=7, text=u'apache hadoop', words=[u'apache', u'hadoop'], features=SparseVector(262144, {128334: 1.0, 134181: 1.0}), rawPrediction=DenseVector([3.7429, -3.7429]), probability=DenseVector([0.9769, 0.0231]), prediction=0.0)
In Scala:
$ bin/run-example ml.SimpleTextClassificationPipeline
(4, spark i j k) --> prob=[0.5406433544851436,0.45935664551485655], prediction=0.0
(5, l m n) --> prob=[0.9334382627383263,0.06656173726167364], prediction=0.0
(6, mapreduce spark) --> prob=[0.7799076868203896,0.22009231317961045], prediction=0.0
(7, apache hadoop) --> prob=[0.9768636139518304,0.023136386048169616], prediction=0.0
All predictions are 0.0, while some should be 1.0 (the test documents containing "spark" are the intended positives). It seems to be an issue with regularization: a regularization parameter that is too strong can shrink the weights until the model always predicts the majority class.
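To illustrate the suspected mechanism (not the Spark code itself), here is a minimal pure-Python sketch of logistic regression with L2 regularization on hypothetical toy data. With a small regularization parameter the separating feature gets enough weight to predict the positive example; with a large one the weights are crushed toward zero and every prediction collapses to the majority class, which is the behavior reported above. All names and data here are made up for illustration.

```python
import math

def train(X, y, reg, epochs=5000, lr=0.5):
    """Gradient descent for logistic regression with an L2 penalty on w.

    `reg` plays the role of the regularization parameter; the intercept b
    is left unregularized, as is conventional.
    """
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [reg * wi for wi in w]  # gradient of (reg/2) * ||w||^2
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = (p - yi) / n
            gw = [g + err * xj for g, xj in zip(gw, xi)]
            gb += err
        w = [wi - lr * g for wi, g in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 if z > 0 else 0.0

# Toy imbalanced data: feature 0 stands for "document contains 'spark'".
X = [[1.0], [0.0], [0.0], [0.0]]
y = [1, 0, 0, 0]

results = {}
for reg in (0.001, 3.0):
    w, b = train(X, y, reg)
    results[reg] = [predict(w, b, x) for x in X]
    print("reg=%g -> predictions %s" % (reg, results[reg]))
# With reg=0.001 the positive example is predicted 1.0; with reg=3.0 the
# weight on the separating feature is regularized away and every
# prediction collapses to the majority class 0.0.
```

The sketch suggests that lowering the regularization parameter in the example pipeline (or verifying how it is set) is the first thing to check.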