PredictionIO (Retired) / PIO-118

ClassCastException from NullWritable to Text in ESEventsUtil


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.12.0-incubating
    • Component/s: Core
    • Labels: None

    Description

      Caused by: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.io.Text
          at org.apache.predictionio.data.storage.elasticsearch.ESEventsUtil$.getOptStringCol$1(ESEventsUtil.scala:58)
          at org.apache.predictionio.data.storage.elasticsearch.ESEventsUtil$.resultToEvent(ESEventsUtil.scala:68)
          at org.apache.predictionio.data.storage.elasticsearch.ESPEvents$$anonfun$5.apply(ESPEvents.scala:89)
          at org.apache.predictionio.data.storage.elasticsearch.ESPEvents$$anonfun$5.apply(ESPEvents.scala:87)
          at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
          at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
          at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
          at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
          at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
          at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
          at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:231)
          at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
          at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
          at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
          at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
          at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
          at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
          at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
          at org.apache.spark.scheduler.Task.run(Task.scala:99)
          at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
          ... 1 more
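
      The cast fails inside getOptStringCol because elasticsearch-hadoop hands back a NullWritable for a document field that is missing or null, and the accessor casts it straight to Text. Below is a minimal Scala sketch of a null-tolerant accessor; the name getOptStringCol and the MapWritable/Text layout mirror the stack trace, but this is an illustration of the defensive pattern, not the project's actual fix.

      import org.apache.hadoop.io.{MapWritable, NullWritable, Text, Writable}

      object NullSafeColumns {
        // elasticsearch-hadoop represents missing or null document fields as
        // NullWritable, so an unconditional cast to Text fails as in the trace
        // above. Matching on the Writable type returns None instead of throwing.
        def getOptStringCol(result: MapWritable, col: String): Option[String] =
          result.get(new Text(col)) match {
            case null            => None                 // key not present in the result map
            case _: NullWritable => None                 // field present but null in the document
            case t: Text         => Some(t.toString)
            case other: Writable => Some(other.toString) // unexpected type: keep its string form
          }
      }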
      


People

    • Assignee: Shinsuke Sugaya (shinsuke)
    • Reporter: Shinsuke Sugaya (shinsuke)
    • Votes: 0
    • Watchers: 2
