Spark / SPARK-24018

Spark-without-hadoop package fails to create or read parquet files with snappy compression


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.3.0
    • Fix Version/s: 2.3.2
    • Component/s: Deploy
    • Labels: None

    Description

      On a brand-new installation of Spark 2.3.0 with a user-provided Hadoop 2.8.3, Spark fails to read or write DataFrames in Parquet format with Snappy compression.

      This is due to an incompatibility between the snappy-java version required by Parquet (Parquet is bundled in Spark's jars, but snappy-java is not) and the version that is available from hadoop-2.8.3.
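
      A quick way to confirm which snappy-java actually resolves on the classpath (a diagnostic sketch, not part of the original reproduction; the exact jar path and version depend on the local installation):

      // In spark-shell: ask the JVM which jar provided org.xerial.snappy.Snappy.
      // With SPARK_DIST_CLASSPATH pointing at hadoop-2.8.3, this should resolve
      // to the Hadoop-provided copy rather than anything shipped with Spark.
      val snappySource = classOf[org.xerial.snappy.Snappy]
        .getProtectionDomain.getCodeSource.getLocation
      println(s"snappy-java loaded from: $snappySource")

      // The manifest version may be null if the jar's manifest omits it.
      val snappyVersion = classOf[org.xerial.snappy.Snappy]
        .getPackage.getImplementationVersion
      println(s"snappy-java Implementation-Version: $snappyVersion")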

       

      Steps to reproduce:

      • Download and extract hadoop-2.8.3
      • Download and extract spark-2.3.0-without-hadoop
      • export JAVA_HOME, HADOOP_HOME, SPARK_HOME, PATH
      • Following instructions from https://spark.apache.org/docs/latest/hadoop-provided.html, set SPARK_DIST_CLASSPATH=$(hadoop classpath) in spark-env.sh
      • Start a spark-shell and enter the following:

       

      import spark.implicits._
      val df = List(1, 2, 3, 4).toDF
      df.write
        .format("parquet")
        .option("compression", "snappy")
        .mode("overwrite")
        .save("test.parquet")
      

       

       

      This fails with the following:

      java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.maxCompressedLength(I)I
          at org.xerial.snappy.SnappyNative.maxCompressedLength(Native Method)
          at org.xerial.snappy.Snappy.maxCompressedLength(Snappy.java:316)
          at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
          at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
          at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
          at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
          at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:93)
          at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:150)
          at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:238)
          at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:121)
          at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:167)
          at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:109)
          at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:163)
          at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
          at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:405)
          at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:396)
          at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269)
          at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267)
          at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1411)
          at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
          at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
          at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
          at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
          at org.apache.spark.scheduler.Task.run(Task.scala:109)
          at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
          at java.lang.Thread.run(Thread.java:748)
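
      As a cross-check (not part of the original report, just a sketch): writing the same DataFrame with compression disabled does not go through snappy-java at all, so it is expected to succeed on the same misconfigured classpath.

      // Same df as above; "none" bypasses the Snappy codec entirely,
      // so this write should succeed even before applying the fix below.
      df.write
        .format("parquet")
        .option("compression", "none")
        .mode("overwrite")
        .save("test-uncompressed.parquet")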

       

        Downloading snappy-java-1.1.2.6.jar and placing it in Spark's jars folder solves the issue.
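
        After dropping the newer snappy-java jar into $SPARK_HOME/jars and restarting spark-shell, a round trip like the following (same data and path as above) should confirm that both the write and read paths now link correctly:

      import spark.implicits._

      // Re-run the original Snappy write, then read the file back.
      val df = List(1, 2, 3, 4).toDF
      df.write
        .format("parquet")
        .option("compression", "snappy")
        .mode("overwrite")
        .save("test.parquet")

      spark.read.parquet("test.parquet").show()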


      People

        Assignee: Unassigned
        Reporter: Jean-Francis Roy (jeanfrancisroy)
        Votes: 2
        Watchers: 5
