Apache Sedona / SEDONA-224

java.lang.NoSuchMethodError when loading GeoParquet files using Spark 3.0.x ~ 3.2.x


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.3.0, 1.3.1
    • Fix Version/s: 1.4.0
    • Component/s: None

    Description

      spark.read.format("geoparquet").load("/path/to/geoparquet.parquet") does not work on Spark 3.0.x ~ 3.2.x; it raises a java.lang.NoSuchMethodError:

      spark.read.format("geoparquet").load("/path/to/example1.parquet")
      22/12/29 15:53:44 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 2)
      java.lang.NoSuchMethodError: org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter$.$lessinit$greater$default$3()Z
      	at org.apache.spark.sql.execution.datasources.parquet.GeoParquetToSparkSchemaConverter.<init>(GeoParquetSchemaConverter.scala:48)
      	at org.apache.spark.sql.execution.datasources.parquet.GeoParquetFileFormat$.$anonfun$mergeSchemasInParallel$1(GeoParquetFileFormat.scala:265)
      	at org.apache.spark.sql.execution.datasources.parquet.GeoParquetFileFormat$.$anonfun$mergeSchemasInParallel$1$adapted(GeoParquetFileFormat.scala:261)
      	at org.apache.spark.sql.execution.datasources.parquet.GeoSchemaMergeUtils$.$anonfun$mergeSchemasInParallel$2(GeoSchemaMergeUtils.scala:69)
      	at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:863)
      	at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:863)
      	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
      	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
      	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
      	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
      	at org.apache.spark.scheduler.Task.run(Task.scala:131)
      	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
      	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
      	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      	at java.lang.Thread.run(Thread.java:750)
      
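      For reference, a minimal reproduction sketch in Scala (assumptions: a Sedona 1.3.x build providing the "geoparquet" data source is on the classpath, and /path/to/example1.parquet is any existing GeoParquet file; in spark-shell the SparkSession is already available as spark):

      // Build a local SparkSession (not needed in spark-shell, where `spark` already exists).
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder()
        .appName("geoparquet-nosuchmethoderror-repro")
        .master("local[*]")
        .getOrCreate()

      // On Spark 3.0.x ~ 3.2.x this load fails while merging GeoParquet schemas:
      // GeoParquetToSparkSchemaConverter invokes the third default constructor
      // parameter of ParquetToSparkSchemaConverter ($lessinit$greater$default$3),
      // which does not exist in these Spark versions, hence the
      // java.lang.NoSuchMethodError in the stack trace above.
      val df = spark.read
        .format("geoparquet")
        .load("/path/to/example1.parquet")

      df.printSchema()

      According to the Fix Version/s above, the error is fixed in Sedona 1.4.0.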


            People

              Assignee: Unassigned
              Reporter: Kristin Cowalcijk (kontinuation)
              Votes: 0
              Watchers: 1
