Spark / SPARK-13082

sqlCtx.read.json() doesn't work with PythonRDD


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.6.0
    • Fix Version/s: 1.6.1, 2.0.0
    • Component/s: PySpark
    • Labels: None
    • Environment: Tested on Mac OS X 10.10 using Spark 1.6

    Description

      This code works without problem:

      sqlCtx.read.json(sqlCtx.range(10).toJSON())

      but these fail with the traceback below:

      sqlCtx.read.json(sc.parallelize(['{"id":1}']*10))
      sqlCtx.read.json(sqlCtx.range(10).toJSON().pipe("cat"))
      sqlCtx.read.json(sqlCtx.range(10).toJSON().map(lambda x: x))

      ---------------------------------------------------------------------------
      Py4JJavaError Traceback (most recent call last)
      <ipython-input-93-91a986fee7f9> in <module>()
      ----> 1 sqlCtx.read.json(sqlCtx.range(10).toJSON().map(lambda x: x))

      /usr/local/Cellar/apache-spark/1.6.0/libexec/python/pyspark/sql/readwriter.pyc in json(self, path, schema)
      178 return self._df(self._jreader.json(self._sqlContext._sc._jvm.PythonUtils.toSeq(path)))
      179 elif isinstance(path, RDD):
      --> 180 return self._df(self._jreader.json(path._jrdd))
      181 else:
      182 raise TypeError("path can be only string or RDD")

      /usr/local/Cellar/apache-spark/1.6.0/libexec/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
      811 answer = self.gateway_client.send_command(command)
      812 return_value = get_return_value(
      --> 813 answer, self.gateway_client, self.target_id, self.name)
      814
      815 for temp_arg in temp_args:

      /usr/local/Cellar/apache-spark/1.6.0/libexec/python/pyspark/sql/utils.pyc in deco(*a, **kw)
      43 def deco(*a, **kw):
      44 try:
      ---> 45 return f(*a, **kw)
      46 except py4j.protocol.Py4JJavaError as e:
      47 s = e.java_exception.toString()

      /usr/local/Cellar/apache-spark/1.6.0/libexec/python/lib/py4j-0.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
      306 raise Py4JJavaError(
      307 "An error occurred while calling {0}{1}{2}.\n".
      --> 308 format(target_id, ".", name), value)
      309 else:
      310 raise Py4JError(

      Py4JJavaError: An error occurred while calling o961.json.
      : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 55.0 failed 1 times, most recent failure: Lost task 0.0 in stage 55.0 (TID 149, localhost): java.lang.ClassCastException: [B cannot be cast to java.lang.String
      at org.apache.spark.sql.execution.datasources.json.InferSchema$$anonfun$1$$anonfun$apply$1.apply(InferSchema.scala:53)
      at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      at scala.collection.Iterator$class.foreach(Iterator.scala:727)
      at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
      at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144)
      at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157)
      at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:201)
      at scala.collection.AbstractIterator.aggregate(Iterator.scala:1157)
      at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$23.apply(RDD.scala:1121)
      at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$23.apply(RDD.scala:1121)
      at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1122)
      at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1122)
      at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
      at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      at org.apache.spark.scheduler.Task.run(Task.scala:89)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:745)

      Driver stacktrace:
      at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
      at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
      at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
      at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
      at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
      at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
      at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
      at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
      at scala.Option.foreach(Option.scala:236)
      at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
      at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
      at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
      at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
      at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
      at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
      at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
      at org.apache.spark.SparkContext.runJob(SparkContext.scala:1952)
      at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1025)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
      at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
      at org.apache.spark.rdd.RDD.reduce(RDD.scala:1007)
      at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1.apply(RDD.scala:1136)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
      at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
      at org.apache.spark.rdd.RDD.treeAggregate(RDD.scala:1113)
      at org.apache.spark.sql.execution.datasources.json.InferSchema$.infer(InferSchema.scala:65)
      at org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4.apply(JSONRelation.scala:114)
      at org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4.apply(JSONRelation.scala:109)
      at scala.Option.getOrElse(Option.scala:120)
      at org.apache.spark.sql.execution.datasources.json.JSONRelation.dataSchema$lzycompute(JSONRelation.scala:109)
      at org.apache.spark.sql.execution.datasources.json.JSONRelation.dataSchema(JSONRelation.scala:108)
      at org.apache.spark.sql.sources.HadoopFsRelation.schema$lzycompute(interfaces.scala:636)
      at org.apache.spark.sql.sources.HadoopFsRelation.schema(interfaces.scala:635)
      at org.apache.spark.sql.execution.datasources.LogicalRelation.<init>(LogicalRelation.scala:37)
      at org.apache.spark.sql.SQLContext.baseRelationToDataFrame(SQLContext.scala:442)
      at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:288)
      at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:275)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:497)
      at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
      at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
      at py4j.Gateway.invoke(Gateway.java:259)
      at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
      at py4j.commands.CallCommand.execute(CallCommand.java:79)
      at py4j.GatewayConnection.run(GatewayConnection.java:209)
      at java.lang.Thread.run(Thread.java:745)
      Caused by: java.lang.ClassCastException: [B cannot be cast to java.lang.String
      at org.apache.spark.sql.execution.datasources.json.InferSchema$$anonfun$1$$anonfun$apply$1.apply(InferSchema.scala:53)
      at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      at scala.collection.Iterator$class.foreach(Iterator.scala:727)
      at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
      at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144)
      at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157)
      at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:201)
      at scala.collection.AbstractIterator.aggregate(Iterator.scala:1157)
      at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$23.apply(RDD.scala:1121)
      at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$23.apply(RDD.scala:1121)
      at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1122)
      at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1122)
      at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
      at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      at org.apache.spark.scheduler.Task.run(Task.scala:89)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      ... 1 more
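
      The ClassCastException is consistent with how PySpark hands RDDs to the
      JVM: whenever the elements are produced on the Python side (parallelize,
      pipe, map, ...), path._jrdd is a JavaRDD of pickled byte arrays ([B),
      not an RDD[String], so the cast in the JSON schema inference fails.
      Below is a minimal sketch of the conversion that appears to be needed,
      modeled on what the deprecated SQLContext.jsonRDD already does in the
      1.6 source (Python 2 names, as in the pyspark of that release); treat it
      as an illustration of the mismatch, not as the actual patch:

      def to_utf8_bytes(iterator):
          # Force every element to a UTF-8 encoded str before it crosses to
          # the JVM, so the raw bytes decode cleanly into java.lang.String.
          for x in iterator:
              if not isinstance(x, basestring):
                  x = unicode(x)
              if isinstance(x, unicode):
                  x = x.encode("utf-8")
              yield x

      rdd = sqlCtx.range(10).toJSON().map(lambda x: x)  # a PythonRDD
      keyed = rdd.mapPartitions(to_utf8_bytes)
      keyed._bypass_serializer = True  # ship raw bytes instead of pickles
      # BytesToString is the JVM-side helper jsonRDD uses to map byte[] back
      # to String before the Scala JSON reader sees the data.
      jrdd = keyed._jrdd.map(sqlCtx._jvm.BytesToString())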

      This seems related to SPARK-9964.
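
      Until a fixed release is available, one possible workaround (a sketch,
      untested here beyond the example shown) is the deprecated
      SQLContext.jsonRDD, which performs this bytes-to-String conversion
      internally and therefore accepts PythonRDDs:

      df = sqlCtx.jsonRDD(sqlCtx.range(10).toJSON().map(lambda x: x))
      df.show()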

            People

              Assignee: zsxwing (Shixiong Zhu)
              Reporter: glehmann (Gaëtan Lehmann)
              Votes: 0
              Watchers: 5
