
SPARK-20859: SQL Loader does not recognize multidimensional columns in PostgreSQL (like integer[][])


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Incomplete
    • Affects Version/s: 2.1.1
    • Fix Version/s: None
    • Component/s: SQL

    Description

      The fix from SPARK-14536 does not accept columns of type integer[][] (multidimensional arrays).

      To reproduce this error:

      1) Create a table in PostgreSQL:

      CREATE TABLE arrays_test
      (
        eid integer NOT NULL,
        simple integer[],
        multi integer[][]
      );
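
      Note that PostgreSQL does not store or enforce the declared dimensionality of an array column: at the catalog level, simple and multi have exactly the same type (_int4), so a JDBC client cannot distinguish them from metadata alone. This can be verified with a quick check (a sketch using psycopg2; the connection parameters are placeholders):

      import psycopg2

      conn = psycopg2.connect("dbname=test user=user password=password")
      cur = conn.cursor()
      cur.execute("""
          SELECT column_name, data_type, udt_name
          FROM information_schema.columns
          WHERE table_name = 'arrays_test'
      """)
      for row in cur.fetchall():
          print row   # 'simple' and 'multi' both report as ('...', 'ARRAY', '_int4')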
      

      2) Insert a row like this one:

      insert into arrays_test (eid, simple, multi)
      values
      (1, '{1, 1}', NULL);
      

      3) Run a PySpark script like the following:

      from pyspark import SparkConf
      from pyspark import SparkContext
      from pyspark.sql import SQLContext
      
      master = "spark://spark211:7077"  # local is OK too
      conf = (
          SparkConf()
              .setMaster(master)
              .setAppName("Connection Test 5")
              .set("spark.jars.packages", "org.postgresql:postgresql:9.4.1212")   ## This one works ok
              .set("spark.driver.memory", "2G")
              .set("spark.executor.memory", "2G")
              .set("spark.driver.cores", "10")
      )
      
      sc = SparkContext(conf=conf)
      # sc.setLogLevel("ALL")
      
      print "====>", 1
      print sc
      
      sqlContext = SQLContext(sc)
      
      print "====>", 2
      print sqlContext
      
      url = "postgresql://localhost:5432/test"   # change properly
      url = 'jdbc:'+url
      properties = {'user': 'user', 'password': 'password'}   # change user and password if needed
      
      df = sqlContext.read.format("jdbc"). \
          option("url", url). \
          option("driver", "org.postgresql.Driver"). \
          option("useUnicode", "true"). \
          option("continueBatchOnError","true"). \
          option("useSSL", "false"). \
          option("user", "user"). \
          option("password", "password"). \
          option("dbtable", "arrays_test"). \
          option("partitionColumn", "eid"). \
          option("lowerBound", "1000015"). \
          option("upperBound", "6026289"). \
          option("numPartitions", "100"). \
          load()
      
      print "====>", 3
      
      df.registerTempTable("arrays_test")
      df = sqlContext.sql("SELECT * FROM arrays_test limit 5")
      
      
      print "====>", 4
      print df.collect()
      
      

      4) Observe that the script runs successfully.
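
      With only the first row present, the final collect() should print something like the following (the exact Row formatting may vary by version):

      [Row(eid=1, simple=[1, 1], multi=None)]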

      5) Now, to reproduce the error, insert a row containing a multidimensional array:

      insert into arrays_test (eid, simple, multi)
      values
      (2, '{1, 1}', '{{1, 1},{2, 2}}');
      

      6) Execute step 3) again.

      7) Observe the exception:

      17/05/23 15:23:38 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
      Traceback (most recent call last):
        File "/home/pablo/develop/physiosigns/livebetter/modelling2/modelling2/scripts/runSparkTest2.py", line 65, in <module>
          print df.collect()
        File "/home/pablo/myProgs/virt-pablo/local/lib/python2.7/site-packages/pyspark/sql/dataframe.py", line 391, in collect
          port = self._jdf.collectToPython()
        File "/home/pablo/myProgs/virt-pablo/local/lib/python2.7/site-packages/py4j/java_gateway.py", line 1133, in __call__
          answer, self.gateway_client, self.target_id, self.name)
        File "/home/pablo/myProgs/virt-pablo/local/lib/python2.7/site-packages/pyspark/sql/utils.py", line 63, in deco
          return f(*a, **kw)
        File "/home/pablo/myProgs/virt-pablo/local/lib/python2.7/site-packages/py4j/protocol.py", line 319, in get_return_value
          format(target_id, ".", name), value)
      py4j.protocol.Py4JJavaError: An error occurred while calling o49.collectToPython.
      : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 172.17.0.58, executor 0): java.lang.ClassCastException: [Ljava.lang.Integer; cannot be cast to java.lang.Integer
      	at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:101)
      	at org.apache.spark.sql.catalyst.util.GenericArrayData.getInt(GenericArrayData.scala:62)
      	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
      	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
      	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
      	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:231)
      	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
      	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
      	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
      	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
      	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
      	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
      	at org.apache.spark.scheduler.Task.run(Task.scala:99)
      	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      	at java.lang.Thread.run(Thread.java:748)
      
      Driver stacktrace:
      	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
      	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
      	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
      	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
      	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
      	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
      	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
      	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
      	at scala.Option.foreach(Option.scala:257)
      	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
      	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
      	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
      	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
      	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
      	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
      	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1925)
      	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1938)
      	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1951)
      	at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:333)
      	at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
      	at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply$mcI$sp(Dataset.scala:2768)
      	at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:2765)
      	at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:2765)
      	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
      	at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2788)
      	at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:2765)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:498)
      	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
      	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
      	at py4j.Gateway.invoke(Gateway.java:280)
      	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
      	at py4j.commands.CallCommand.execute(CallCommand.java:79)
      	at py4j.GatewayConnection.run(GatewayConnection.java:214)
      	at java.lang.Thread.run(Thread.java:748)
      Caused by: java.lang.ClassCastException: [Ljava.lang.Integer; cannot be cast to java.lang.Integer
      	at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:101)
      	at org.apache.spark.sql.catalyst.util.GenericArrayData.getInt(GenericArrayData.scala:62)
      	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
      	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
      	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
      	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:231)
      	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
      	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
      	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
      	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
      	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
      	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
      	at org.apache.spark.scheduler.Task.run(Task.scala:99)
      	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      	... 1 more
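
      The stack trace suggests the likely cause: the JDBC relation infers ArrayType(IntegerType) for both integer[] and integer[][] columns (PostgreSQL metadata reports the same type for both, per the note above), so when a value actually contains nested arrays, GenericArrayData.getInt ends up trying to unbox an element that is an Integer[] rather than an Integer, hence the ClassCastException.

      Until multidimensional arrays are supported, one possible workaround (a sketch, not a fix; the subquery alias t is arbitrary, and url is the JDBC URL from the script above) is to let PostgreSQL serialize the nested column to text inside a derived table, so Spark only sees types it already handles:

      dbtable = "(SELECT eid, simple, multi::text AS multi FROM arrays_test) AS t"

      df = sqlContext.read.format("jdbc"). \
          option("url", url). \
          option("driver", "org.postgresql.Driver"). \
          option("user", "user"). \
          option("password", "password"). \
          option("dbtable", dbtable). \
          load()

      print df.collect()   # multi arrives as the string '{{1,1},{2,2}}' and can be parsed client-side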
      
      


People

    Assignee: Unassigned
    Reporter: Pablo Alcaraz (pabloa)
    Votes: 1
    Watchers: 4
