Spark / SPARK-4814

Enable assertions in SBT, Maven tests / AssertionError from Hive's LazyBinaryInteger


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.1.0
    • Fix Version/s: 1.1.2, 1.2.2, 1.3.0
    • Component/s: Spark Core, SQL
    • Labels: None

    Description

      Follow-up to SPARK-4159, wherein we noticed that Java tests weren't running in Maven, in part because a Java test actually fails with an AssertionError. That code/test was fixed in SPARK-4850.

      The reason the SBT tests didn't catch it is that they run with assertions disabled, while Maven's surefire plugin runs with them enabled.
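
      Java assertions are off by default and only fire when the JVM is started with -ea. A small self-contained sketch (the class name here is illustrative, not from Spark) shows the standard idiom for detecting at runtime whether assertions are enabled, using the side effect of an assert statement:

      ```java
      public class AssertionStatus {
          // Returns true iff assertions are enabled for this class.
          public static boolean assertionsEnabled() {
              boolean enabled = false;
              // The assignment only executes when assertions are on (-ea);
              // its value is true, so the assert itself never fails.
              assert enabled = true;
              return enabled;
          }

          public static void main(String[] args) {
              System.out.println("assertions "
                  + (assertionsEnabled() ? "enabled" : "disabled"));
          }
      }
      ```

      Run with `java -ea AssertionStatus` to see "enabled"; without the flag, the assert is a no-op and it prints "disabled".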

      Turning on assertions in the SBT build is trivial; it takes one line:

          javaOptions in Test += "-ea",
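
      In context, note that sbt only passes javaOptions to forked test JVMs, so forking must also be on. A minimal sketch of the relevant settings (sbt 0.13 syntax; Spark's build already forks its tests, so only the second line is the actual change):

      ```scala
      // javaOptions only reaches the test JVM when fork is enabled.
      fork in Test := true,
      javaOptions in Test += "-ea",
      ```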
      

      However, this reveals a test failure in the Scala test suites:

      [info] - alter_merge_2 *** FAILED *** (1 second, 305 milliseconds)
      [info]   Failed to execute query using catalyst:
      [info]   Error: Job aborted due to stage failure: Task 1 in stage 551.0 failed 1 times, most recent failure: Lost task 1.0 in stage 551.0 (TID 1532, localhost): java.lang.AssertionError
      [info]   	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryInteger.init(LazyBinaryInteger.java:51)
      [info]   	at org.apache.hadoop.hive.serde2.columnar.ColumnarStructBase$FieldInfo.uncheckedGetField(ColumnarStructBase.java:110)
      [info]   	at org.apache.hadoop.hive.serde2.columnar.ColumnarStructBase.getField(ColumnarStructBase.java:171)
      [info]   	at org.apache.hadoop.hive.serde2.objectinspector.ColumnarStructObjectInspector.getStructFieldData(ColumnarStructObjectInspector.java:166)
      [info]   	at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$1.apply(TableReader.scala:318)
      [info]   	at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$1.apply(TableReader.scala:314)
      [info]   	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      [info]   	at org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1$$anonfun$6.apply(Aggregate.scala:132)
      [info]   	at org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1$$anonfun$6.apply(Aggregate.scala:128)
      [info]   	at org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:615)
      [info]   	at org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:615)
      [info]   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      [info]   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:264)
      [info]   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:231)
      [info]   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      [info]   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:264)
      [info]   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:231)
      [info]   	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
      [info]   	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
      [info]   	at org.apache.spark.scheduler.Task.run(Task.scala:56)
      [info]   	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:195)
      [info]   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      [info]   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      [info]   	at java.lang.Thread.run(Thread.java:745)
      

      The items for this JIRA are therefore:

      • Enable assertions in SBT
      • Fix this failure
      • Figure out why Maven's scalatest run didn't trigger it; assertions may need to be explicitly turned on there too.
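
      For the third item, one possibility is a pom.xml fragment along these lines (a sketch, assuming scalatest-maven-plugin forwards an argLine setting to its forked test JVM the way surefire does):

      ```xml
      <!-- Sketch: pass -ea to the forked test JVM via argLine. -->
      <plugin>
        <groupId>org.scalatest</groupId>
        <artifactId>scalatest-maven-plugin</artifactId>
        <configuration>
          <argLine>-ea</argLine>
        </configuration>
      </plugin>
      ```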

      People

      Assignee: srowen Sean R. Owen
      Reporter: srowen Sean R. Owen
      Votes: 0
      Watchers: 6
