Phoenix / PHOENIX-3540

Fix Time data type in Phoenix Spark integration


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 4.10.0
    • Component/s: None
    • Labels: None

    Description

      2016-12-13 07:56:07,773 DEBUG [main] repl.SparkILoop$SparkILoopInterpreter: Invoking: public static java.lang.String $line20.$eval.$print()
      org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, ctr-e77-1481596162056-0246-01-000003.hwx.site): java.lang.ClassCastException: java.sql.Time cannot be cast to java.sql.Timestamp
        at org.apache.spark.sql.catalyst.CatalystTypeConverters$TimestampConverter$.toCatalystImpl(CatalystTypeConverters.scala:313)
        at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
        at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
        at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
        at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
        at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
        at org.apache.spark.sql.SQLContext$$anonfun$6.apply(SQLContext.scala:492)
        at org.apache.spark.sql.SQLContext$$anonfun$6.apply(SQLContext.scala:492)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply$mcV$sp(PairRDDFunctions.scala:1112)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1111)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1111)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1277)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1119)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1091)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
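      The cast fails inside Spark's Catalyst conversion layer: the phoenix-spark integration describes the Phoenix TIME column to Spark as a timestamp, but the rows it produces still carry java.sql.Time values, which TimestampConverter cannot cast. A minimal sketch that should reproduce the failure, assuming a hypothetical table TIME_TEST with a TIME column and a ZooKeeper quorum on localhost (Spark 1.x API, matching the SQLContext frames above; table and app names are illustrative):

        import org.apache.spark.{SparkConf, SparkContext}
        import org.apache.spark.sql.SQLContext
        import org.apache.phoenix.spark._

        // Hypothetical DDL: CREATE TABLE TIME_TEST (ID BIGINT NOT NULL PRIMARY KEY, T TIME)
        val sc = new SparkContext(new SparkConf().setAppName("phoenix-time-repro"))
        val sqlContext = new SQLContext(sc)

        // Load the table through the phoenix-spark integration. The reported schema
        // calls column T a timestamp, but each row actually holds a java.sql.Time.
        val df = sqlContext.phoenixTableAsDataFrame(
          "TIME_TEST", Seq("ID", "T"), zkUrl = Some("localhost:2181"))

        // Writing the frame back (the saveAsNewAPIHadoopDataset path in the trace)
        // forces the row-to-Catalyst conversion and throws the ClassCastException.
        df.saveToPhoenix("TIME_TEST_COPY", zkUrl = Some("localhost:2181"))

      A fix has to widen TIME values to java.sql.Timestamp before the rows reach Catalyst. A sketch of that row-level conversion, not necessarily the committed patch:

        import java.sql.{Time, Timestamp}

        // Spark's TimestampConverter accepts only java.sql.Timestamp, so TIME values
        // read from the Phoenix ResultSet are widened; the epoch millis carry over.
        def toSparkValue(v: Any): Any = v match {
          case t: Time => new Timestamp(t.getTime)
          case other   => other
        }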
      

    Attachments

    Issue Links

    Activity


    People

      Assignee: Ankit Singhal (ankit@apache.org)
      Reporter: Sergio Peleato (speleato)
      Votes: 0
      Watchers: 3

    Dates

      Created:
      Updated:
      Resolved:
