Spark / SPARK-14229

PySpark DataFrame.rdd can't be saved to an arbitrary Hadoop OutputFormat


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 1.6.1
    • Fix Version/s: None
    • Component/s: None

    Description

      I am able to save data to MongoDB from any RDD, provided that the RDD does not come from a DataFrame. If I use DataFrame.rdd, saving via saveAsNewAPIHadoopFile fails every time. I have verified that this applies to saving to MongoDB, BSON files, and Elasticsearch.
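      For context, a minimal sketch of that difference (column names and sample values here are illustrative, not from my actual dataset): DataFrame.rdd yields pyspark.sql.Row objects, whereas a hand-built RDD of dicts does not.

      # The elements of DataFrame.rdd are pyspark.sql.Row objects:
      on_time_dataframe.rdd.first()
      # -> Row(Origin=u'SFO', Dest=u'JFK', ...)
      # The JVM-side unpickler cannot reconstruct Row when
      # saveAsNewAPIHadoopFile samples the RDD, hence the error below.

      # An RDD built from plain dicts does not hit this:
      sc.parallelize([{"Origin": "SFO", "Dest": "JFK"}]).first()
      # -> {'Origin': 'SFO', 'Dest': 'JFK'}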

      I get the following error when I try to save to a Hadoop OutputFormat (a workaround sketch follows the full stack trace below):

      config = {"mongo.output.uri": "mongodb://localhost:27017/agile_data_science.on_time_performance"}

      In [3]: on_time_dataframe.rdd.saveAsNewAPIHadoopFile(
      ...: path='file://unused',
      ...: outputFormatClass='com.mongodb.hadoop.MongoOutputFormat',
      ...: keyClass='org.apache.hadoop.io.Text',
      ...: valueClass='org.apache.hadoop.io.MapWritable',
      ...: conf=config
      ...: )
      16/03/28 19:59:57 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 62.7 KB, free 147.3 KB)
      16/03/28 19:59:57 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 20.4 KB, free 167.7 KB)
      16/03/28 19:59:57 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:61301 (size: 20.4 KB, free: 511.1 MB)
      16/03/28 19:59:57 INFO spark.SparkContext: Created broadcast 1 from javaToPython at NativeMethodAccessorImpl.java:-2
      16/03/28 19:59:57 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
      16/03/28 19:59:57 INFO parquet.ParquetRelation: Reading Parquet file(s) from file:/Users/rjurney/Software/Agile_Data_Code_2/data/on_time_performance.parquet/part-r-00000-32089f1b-5447-4a75-b008-4fd0a0a8b846.gz.parquet
      16/03/28 19:59:57 INFO spark.SparkContext: Starting job: take at SerDeUtil.scala:231
      16/03/28 19:59:57 INFO scheduler.DAGScheduler: Got job 1 (take at SerDeUtil.scala:231) with 1 output partitions
      16/03/28 19:59:57 INFO scheduler.DAGScheduler: Final stage: ResultStage 1 (take at SerDeUtil.scala:231)
      16/03/28 19:59:57 INFO scheduler.DAGScheduler: Parents of final stage: List()
      16/03/28 19:59:57 INFO scheduler.DAGScheduler: Missing parents: List()
      16/03/28 19:59:57 INFO scheduler.DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[6] at mapPartitions at SerDeUtil.scala:146), which has no missing parents
      16/03/28 19:59:57 INFO storage.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 14.9 KB, free 182.6 KB)
      16/03/28 19:59:57 INFO storage.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 7.5 KB, free 190.1 KB)
      16/03/28 19:59:57 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:61301 (size: 7.5 KB, free: 511.1 MB)
      16/03/28 19:59:57 INFO spark.SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1006
      16/03/28 19:59:57 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[6] at mapPartitions at SerDeUtil.scala:146)
      16/03/28 19:59:57 INFO scheduler.TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
      16/03/28 19:59:57 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1.0 (TID 8, localhost, partition 0,PROCESS_LOCAL, 2739 bytes)
      16/03/28 19:59:57 INFO executor.Executor: Running task 0.0 in stage 1.0 (TID 8)
      16/03/28 19:59:58 INFO parquet.ParquetRelation$$anonfun$buildInternalScan$1$$anon$1: Input split: ParquetInputSplit{part: file:/Users/rjurney/Software/Agile_Data_Code_2/data/on_time_performance.parquet/part-r-00000-32089f1b-5447-4a75-b008-4fd0a0a8b846.gz.parquet start: 0 end: 134217728 length: 134217728 hosts: []}

      16/03/28 19:59:59 INFO compress.CodecPool: Got brand-new decompressor [.gz]
      16/03/28 19:59:59 ERROR executor.Executor: Exception in task 0.0 in stage 1.0 (TID 8)
      net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for pyspark.sql.types._create_row)
      at net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
      at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:707)
      at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:175)
      at net.razorvine.pickle.Unpickler.load(Unpickler.java:99)
      at net.razorvine.pickle.Unpickler.loads(Unpickler.java:112)
      at org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:150)
      at org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:149)
      at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
      at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
      at scala.collection.Iterator$class.foreach(Iterator.scala:727)
      at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
      at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
      at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
      at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
      at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
      at scala.collection.AbstractIterator.to(Iterator.scala:1157)
      at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
      at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
      at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
      at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
      at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
      at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
      at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
      at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      at org.apache.spark.scheduler.Task.run(Task.scala:89)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:745)
      Traceback (most recent call last):
      File "/Users/rjurney/Software/Agile_Data_Code_2/spark/python/lib/pyspark.zip/pyspark/daemon.py", line 157, in manager
      File "/Users/rjurney/Software/Agile_Data_Code_2/spark/python/lib/pyspark.zip/pyspark/daemon.py", line 61, in worker
      File "/Users/rjurney/Software/Agile_Data_Code_2/spark/python/lib/pyspark.zip/pyspark/worker.py", line 136, in main
      if read_int(infile) == SpecialLengths.END_OF_STREAM:
      File "/Users/rjurney/Software/Agile_Data_Code_2/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 545, in read_int
      raise EOFError
      EOFError
      16/03/28 19:59:59 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 1.0 (TID 8, localhost): net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for pyspark.sql.types._create_row)
      at net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
      at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:707)
      at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:175)
      at net.razorvine.pickle.Unpickler.load(Unpickler.java:99)
      at net.razorvine.pickle.Unpickler.loads(Unpickler.java:112)
      at org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:150)
      at org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:149)
      at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
      at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
      at scala.collection.Iterator$class.foreach(Iterator.scala:727)
      at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
      at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
      at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
      at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
      at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
      at scala.collection.AbstractIterator.to(Iterator.scala:1157)
      at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
      at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
      at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
      at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
      at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
      at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
      at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
      at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      at org.apache.spark.scheduler.Task.run(Task.scala:89)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:745)

      16/03/28 19:59:59 ERROR scheduler.TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting job
      16/03/28 19:59:59 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
      16/03/28 19:59:59 INFO scheduler.TaskSchedulerImpl: Cancelling stage 1
      16/03/28 19:59:59 INFO scheduler.DAGScheduler: ResultStage 1 (take at SerDeUtil.scala:231) failed in 1.683 s
      16/03/28 19:59:59 INFO scheduler.DAGScheduler: Job 1 failed: take at SerDeUtil.scala:231, took 1.703169 s
      ---------------------------------------------------------------------------
      Py4JJavaError Traceback (most recent call last)
      <ipython-input-3-c91c1bc7b72a> in <module>()
      4 keyClass='org.apache.hadoop.io.Text',
      5 valueClass='org.apache.hadoop.io.MapWritable',
      ----> 6 conf=config
      7 )

      /Users/rjurney/Software/Agile_Data_Code_2/spark/python/pyspark/rdd.pyc in saveAsNewAPIHadoopFile(self, path, outputFormatClass, keyClass, valueClass, keyConverter, valueConverter, conf)
      1372 outputFormatClass,
      1373 keyClass, valueClass,
      -> 1374 keyConverter, valueConverter, jconf)
      1375
      1376 def saveAsHadoopDataset(self, conf, keyConverter=None, valueConverter=None):

      /Users/rjurney/Software/Agile_Data_Code_2/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
      811 answer = self.gateway_client.send_command(command)
      812 return_value = get_return_value(
      --> 813 answer, self.gateway_client, self.target_id, self.name)
      814
      815 for temp_arg in temp_args:

      /Users/rjurney/Software/Agile_Data_Code_2/spark/python/pyspark/sql/utils.pyc in deco(*a, **kw)
      43 def deco(*a, **kw):
      44 try:
      ---> 45 return f(*a, **kw)
      46 except py4j.protocol.Py4JJavaError as e:
      47 s = e.java_exception.toString()

      /Users/rjurney/Software/Agile_Data_Code_2/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
      306 raise Py4JJavaError(
      307 "An error occurred while calling {0}{1}{2}.\n".
      --> 308 format(target_id, ".", name), value)
      309 else:
      310 raise Py4JError(

      Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.saveAsNewAPIHadoopFile.
      : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 8, localhost): net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for pyspark.sql.types._create_row)
      at net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
      at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:707)
      at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:175)
      at net.razorvine.pickle.Unpickler.load(Unpickler.java:99)
      at net.razorvine.pickle.Unpickler.loads(Unpickler.java:112)
      at org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:150)
      at org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:149)
      at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
      at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
      at scala.collection.Iterator$class.foreach(Iterator.scala:727)
      at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
      at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
      at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
      at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
      at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
      at scala.collection.AbstractIterator.to(Iterator.scala:1157)
      at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
      at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
      at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
      at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
      at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
      at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
      at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
      at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      at org.apache.spark.scheduler.Task.run(Task.scala:89)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:745)

      Driver stacktrace:
      at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
      at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
      at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
      at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
      at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
      at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
      at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
      at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
      at scala.Option.foreach(Option.scala:236)
      at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
      at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
      at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
      at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
      at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
      at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
      at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
      at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
      at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
      at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1328)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
      at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
      at org.apache.spark.rdd.RDD.take(RDD.scala:1302)
      at org.apache.spark.api.python.SerDeUtil$.pythonToPairRDD(SerDeUtil.scala:231)
      at org.apache.spark.api.python.PythonRDD$.saveAsNewAPIHadoopFile(PythonRDD.scala:775)
      at org.apache.spark.api.python.PythonRDD.saveAsNewAPIHadoopFile(PythonRDD.scala)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:497)
      at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
      at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
      at py4j.Gateway.invoke(Gateway.java:259)
      at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
      at py4j.commands.CallCommand.execute(CallCommand.java:79)
      at py4j.GatewayConnection.run(GatewayConnection.java:209)
      at java.lang.Thread.run(Thread.java:745)
      Caused by: net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for pyspark.sql.types._create_row)
      at net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
      at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:707)
      at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:175)
      at net.razorvine.pickle.Unpickler.load(Unpickler.java:99)
      at net.razorvine.pickle.Unpickler.loads(Unpickler.java:112)
      at org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:150)
      at org.apache.spark.api.python.SerDeUtil$$anonfun$pythonToJava$1$$anonfun$apply$1.apply(SerDeUtil.scala:149)
      at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
      at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
      at scala.collection.Iterator$class.foreach(Iterator.scala:727)
      at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
      at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
      at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
      at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
      at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
      at scala.collection.AbstractIterator.to(Iterator.scala:1157)
      at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
      at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
      at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
      at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
      at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
      at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1328)
      at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
      at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      at org.apache.spark.scheduler.Task.run(Task.scala:89)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      ... 1 more
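
      A workaround that seems to avoid the failure is to convert each Row to a plain Python dict (or tuple) before saving, so the JVM never has to unpickle a pyspark.sql.Row. A sketch, assuming the same mongo-hadoop classes as above and that the connector accepts dict values for MapWritable (nested struct columns would need their nested Rows converted as well):

      # Map Row -> (key, plain dict) before handing the RDD to the OutputFormat.
      # Using None as the key is an assumption about the MongoDB connector;
      # row.asDict() returns an ordinary dict in place of the Row.
      as_pairs = on_time_dataframe.rdd.map(lambda row: (None, row.asDict()))

      as_pairs.saveAsNewAPIHadoopFile(
          path='file://unused',
          outputFormatClass='com.mongodb.hadoop.MongoOutputFormat',
          keyClass='org.apache.hadoop.io.Text',
          valueClass='org.apache.hadoop.io.MapWritable',
          conf=config
      )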

      Attachments

        Activity

          People

            Assignee: Unassigned
            Reporter: Russell Jurney
            Matei Alexandru Zaharia
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated:
              Resolved: