Spark / SPARK-44111 Prepare Apache Spark 4.0.0 / SPARK-46752

Use default ORC compression in data source benchmarks


Description

      $ git grep OrcCompressionCodec | grep Benchmark
      sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/BuiltInDataSourceWriteBenchmark.scala:import org.apache.spark.sql.execution.datasources.orc.OrcCompressionCodec
      sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/BuiltInDataSourceWriteBenchmark.scala:      OrcCompressionCodec.SNAPPY.lowerCaseName())
      sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/DataSourceReadBenchmark.scala:import org.apache.spark.sql.execution.datasources.orc.OrcCompressionCodec
      sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/DataSourceReadBenchmark.scala:      OrcCompressionCodec.SNAPPY.lowerCaseName()).orc(dir)
      sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/FilterPushdownBenchmark.scala:import org.apache.spark.sql.execution.datasources.orc.OrcCompressionCodec
      sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/FilterPushdownBenchmark.scala:      .setIfMissing("orc.compression", OrcCompressionCodec.SNAPPY.lowerCaseName())
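
The grep above shows the three benchmarks pinning OrcCompressionCodec.SNAPPY explicitly. As a rough sketch of the kind of change implied (the object name, scratch directory, and sample DataFrame below are illustrative, not taken from the issue or its patch), dropping the explicit option lets the writer fall back to whatever the session default codec, spark.sql.orc.compression.codec, is set to:

      // Illustrative sketch only: assumes a throwaway local SparkSession and a
      // scratch directory; neither comes from the issue or its patch.
      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.execution.datasources.orc.OrcCompressionCodec

      object OrcDefaultCompressionSketch {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder()
            .master("local[*]")
            .appName("OrcDefaultCompressionSketch")
            .getOrCreate()

          val df = spark.range(0L, 1000L).toDF("id")
          val dir = "/tmp/orc-default-compression-sketch"  // hypothetical path

          // Before: the benchmark pins the codec explicitly, as in the grep output.
          df.write.mode("overwrite")
            .option("compression", OrcCompressionCodec.SNAPPY.lowerCaseName())
            .orc(s"$dir/snappy")

          // After: no compression option, so the write follows the session default
          // configured by spark.sql.orc.compression.codec.
          df.write.mode("overwrite").orc(s"$dir/default")

          spark.stop()
        }
      }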
      

People

  Assignee: Dongjoon Hyun (dongjoon)
  Reporter: Dongjoon Hyun (dongjoon)
