Description
Repro script for PySpark, deployed via spark-ec2 at git revision 9d5ecf8205b924dc8a3c13fed68beb78cc5c7553:
from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)

raw = sc.parallelize(["""
{
    "name": "Nick",
    "history": {
        "countries": []
    }
}
"""])

profiles = sqlContext.jsonRDD(raw)
profiles.registerAsTable("profiles")

profiles.printSchema()

sqlContext.sql("SELECT name FROM profiles;").collect()     # works fine
sqlContext.sql("SELECT history FROM profiles;").collect()  # raises exception
Attempting to select the top-level struct that has a nested list value yields the following error:
14/07/06 00:10:26 INFO scheduler.TaskSetManager: Loss was due to net.razorvine.pickle.PickleException: couldn't introspect javabean: java.lang.IllegalArgumentException: wrong number of arguments [duplicate 3]
14/07/06 00:10:26 ERROR scheduler.TaskSetManager: Task 26.0:15 failed 4 times; aborting job
14/07/06 00:10:26 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 26.0, whose tasks have all completed, from pool
14/07/06 00:10:26 INFO scheduler.TaskSchedulerImpl: Cancelling stage 26
14/07/06 00:10:26 INFO scheduler.DAGScheduler: Failed to run collect at <stdin>:1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/spark/python/pyspark/rdd.py", line 649, in collect
    bytesInJava = self._jrdd.collect().iterator()
  File "/root/spark/python/lib/py4j-0.8.1-src.zip/py4j/java_gateway.py", line 537, in __call__
  File "/root/spark/python/lib/py4j-0.8.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o286.collect.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 26.0:15 failed 4 times, most recent failure: Exception failure in TID 394 on host ip-10-183-59-125.ec2.internal: net.razorvine.pickle.PickleException: couldn't introspect javabean: java.lang.IllegalArgumentException: wrong number of arguments
        net.razorvine.pickle.Pickler.put_javabean(Pickler.java:603)
        net.razorvine.pickle.Pickler.dispatch(Pickler.java:299)
        net.razorvine.pickle.Pickler.save(Pickler.java:125)
        net.razorvine.pickle.Pickler.put_map(Pickler.java:322)
        net.razorvine.pickle.Pickler.dispatch(Pickler.java:286)
        net.razorvine.pickle.Pickler.save(Pickler.java:125)
        net.razorvine.pickle.Pickler.put_map(Pickler.java:322)
        net.razorvine.pickle.Pickler.dispatch(Pickler.java:286)
        net.razorvine.pickle.Pickler.save(Pickler.java:125)
        net.razorvine.pickle.Pickler.put_arrayOfObjects(Pickler.java:392)
        net.razorvine.pickle.Pickler.dispatch(Pickler.java:195)
        net.razorvine.pickle.Pickler.save(Pickler.java:125)
        net.razorvine.pickle.Pickler.dump(Pickler.java:95)
        net.razorvine.pickle.Pickler.dumps(Pickler.java:80)
        org.apache.spark.sql.SchemaRDD$$anonfun$javaToPython$1$$anonfun$apply$3.apply(SchemaRDD.scala:385)
        org.apache.spark.sql.SchemaRDD$$anonfun$javaToPython$1$$anonfun$apply$3.apply(SchemaRDD.scala:385)
        scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:317)
        org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:203)
        org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:178)
        org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:178)
        org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1213)
        org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:177)
Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1041)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1025)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1023)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1023)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:631)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:631)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:631)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1226)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
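The net.razorvine.pickle.Pickler frames above show the failure occurring while the collected struct row is being pickled for the Python side (SchemaRDD.javaToPython). One workaround sketch, not from the original report and not verified at this revision: skip the SchemaRDD pickling path entirely and parse the JSON on the Python side, so only plain Python dicts cross the Java-to-Python boundary.

import json

# Workaround sketch (an assumption, not verified at revision 9d5ecf8): parse
# the JSON strings in Python instead of collecting through the SchemaRDD, so
# the Java-side pickler never sees the struct-of-array value.
histories = raw.map(lambda s: json.loads(s)["history"]).collect()
print(histories)  # [{u'countries': []}]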
The original error persists regardless of whether the nested list is empty, contains base types, or contains structs.
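To make those three cases concrete, here is an illustrative sketch (not part of the original report; the table name "variant" is arbitrary). Per the observation above, each variant is expected to raise the same PickleException when the enclosing struct is selected.

from py4j.protocol import Py4JJavaError

# Three shapes for the nested list; selecting the enclosing struct is
# expected to fail identically for each.
variants = [
    '{ "name": "Nick", "history": { "countries": [] } }',                 # empty
    '{ "name": "Nick", "history": { "countries": ["us", "ca"] } }',       # base types
    '{ "name": "Nick", "history": { "countries": [{ "code": "us" }] } }'  # structs
]

for doc in variants:
    table = sqlContext.jsonRDD(sc.parallelize([doc]))
    table.registerAsTable("variant")
    try:
        sqlContext.sql("SELECT history FROM variant").collect()
    except Py4JJavaError:
        print("failed as described above")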