Spark / SPARK-1560

PySpark SQL depends on Java 7 only jars

Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.0.0
    • Component/s: SQL
    • Labels: None

    Description

      We need to republish the pickler jar; it was built with Java 7, so loading it on a Java 6 runtime fails with an UnsupportedClassVersionError (class file major version 51.0 corresponds to Java 7). Details below:

      14/04/19 12:31:29 INFO rdd.HadoopRDD: Input split: file:/Users/ceteri/opt/spark-branch-1.0/examples/src/main/resources/people.txt:0+16
      Exception in thread "Local computation of job 1" java.lang.UnsupportedClassVersionError: net/razorvine/pickle/Unpickler : Unsupported major.minor version 51.0
      	at java.lang.ClassLoader.defineClass1(Native Method)
      	at java.lang.ClassLoader.defineClassCond(ClassLoader.java:637)
      	at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
      	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
      	at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
      	at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
      	at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
      	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
      	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
      	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
      	at org.apache.spark.api.python.PythonRDD$$anonfun$pythonToJavaMap$1.apply(PythonRDD.scala:295)
      	at org.apache.spark.api.python.PythonRDD$$anonfun$pythonToJavaMap$1.apply(PythonRDD.scala:294)
      	at org.apache.spark.rdd.RDD$$anonfun$3.apply(RDD.scala:518)
      	at org.apache.spark.rdd.RDD$$anonfun$3.apply(RDD.scala:518)
      	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:243)
      	at org.apache.spark.rdd.RDD.iterator(RDD.scala:234)
      	at org.apache.spark.scheduler.DAGScheduler.runLocallyWithinThread(DAGScheduler.scala:700)
      	at org.apache.spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:685)
      
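The `major.minor version 51.0` in the stack trace is the JVM class-file version embedded in `net/razorvine/pickle/Unpickler.class`. As a minimal sketch of how to diagnose this, the snippet below parses a class-file header and maps the major version to the Java release that produced it (the mapping is `major - 44`, so 50 is Java 6 and 51 is Java 7); the synthetic header bytes are an illustration, not taken from the actual jar:

```python
import struct

def class_file_java_version(data: bytes) -> int:
    """Return the Java release a class file targets, from its 8-byte header."""
    # A class file starts with magic 0xCAFEBABE, then minor and major version,
    # all big-endian (">IHH" = u32, u16, u16).
    magic, minor, major = struct.unpack(">IHH", data[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a Java class file")
    # Major 45 is Java 1.1, 50 is Java 6, 51 is Java 7, 52 is Java 8, ...
    return major - 44

# Hypothetical header for a class compiled with Java 7 (major version 51):
header = struct.pack(">IHH", 0xCAFEBABE, 0, 51)
print(class_file_java_version(header))  # -> 7
```

The same check can be run against the real jar by reading `net/razorvine/pickle/Unpickler.class` out of it with `zipfile`, or with `javap -verbose` on the class; a jar meant to support Java 6 must report major version 50 or lower.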

Attachments

Activity

People

    Assignee: ahirreddy Ahir Reddy
    Reporter: marmbrus Michael Armbrust
    Votes: 0
    Watchers: 1

Dates

    Created:
    Updated:
    Resolved: