Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 1.3.0, 1.3.1
- None
Description
The GeoParquet reader works as expected when loading one of the example Parquet files in local mode:
scala> spark.read.format("geoparquet").load("/path/to/example1.parquet").printSchema
root
 |-- pop_est: long (nullable = true)
 |-- continent: string (nullable = true)
 |-- name: string (nullable = true)
 |-- iso_a3: string (nullable = true)
 |-- gdp_md_est: double (nullable = true)
 |-- geometry: geometry (nullable = true)
When running the same code in standalone cluster mode, the type of the geometry column is binary instead of geometry:
scala> spark.read.format("geoparquet").load("/path/to/example1.parquet").printSchema
root
 |-- pop_est: long (nullable = true)
 |-- continent: string (nullable = true)
 |-- name: string (nullable = true)
 |-- iso_a3: string (nullable = true)
 |-- gdp_md_est: double (nullable = true)
 |-- geometry: binary (nullable = true)
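The difference can also be checked programmatically. The snippet below is a minimal sketch, assuming a Sedona 1.3.x setup where the Sedona jars are on the classpath of both the driver and the executors, and reusing the placeholder path /path/to/example1.parquet from above:

// Register Sedona's SQL types and functions (Sedona 1.3.x API).
import org.apache.sedona.sql.utils.SedonaSQLRegistrator
SedonaSQLRegistrator.registerAll(spark)

// Placeholder path, matching the example above.
val df = spark.read.format("geoparquet").load("/path/to/example1.parquet")

// In local mode this resolves to Sedona's geometry UDT;
// in standalone cluster mode it comes back as BinaryType.
println(df.schema("geometry").dataType)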
Issue Links
- is related to
  - SEDONA-207 Faster serialization/deserialization of geometry objects (Resolved)
- relates to
  - SEDONA-224 java.lang.NoSuchMethodError when loading GeoParquet files using Spark 3.0.x ~ 3.2.x (Resolved)
  - SEDONA-225 Cannot count dataframes loaded from GeoParquet files (Resolved)