Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 3.1.2, 3.2.0
- Fix Version/s: None
- Component/s: None
Description
The repro below fails on both Spark 3.1 and Spark 3.2.
Part of the issue is that fileSchema has logicalTypeAnnotation == null (https://github.com/apache/spark/blob/5013171fd36e6221a540c801cb7fd9e298a6b5ba/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java#L92), which makes isUnsignedTypeMatched always return false. Even if logicalTypeAnnotation were not null, I am not sure whether isUnsignedTypeMatched is supposed to return true for this use case.
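For context, the footer of the file written by the repro below can be inspected with pyarrow (a sketch; it assumes pyarrow is installed and that a part file has been copied to a hypothetical local path). A plain signed INT32 column carries no logical type annotation in the footer, which is consistent with logicalTypeAnnotation being null on the Spark side:

import glob
import pyarrow.parquet as pq

# Hypothetical local copy of the output directory from the repro below.
part = glob.glob("/tmp/test/part-*.parquet")[0]
col = pq.read_metadata(part).schema.column(0)
print(col.physical_type)  # INT32
print(col.logical_type)   # no annotation for a plain signed int column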
Python repro:
import os
from pyspark.sql.functions import *
from pyspark.sql import SparkSession
from pyspark.sql.types import *

spark = SparkSession.builder \
    .config("spark.hadoop.fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem") \
    .config("spark.hadoop.fs.AbstractFileSystem.s3.impl", "org.apache.hadoop.fs.s3a.S3A") \
    .getOrCreate()

# Write a single int column to Parquet.
df = spark.createDataFrame(
    [(1, 2), (2, 3)],
    StructType([StructField("id", IntegerType(), True),
                StructField("id2", IntegerType(), True)])
).select("id")
df.write.mode("overwrite").parquet("s3://bucket/test")

# Read it back with a bigint (LongType) read schema; this fails.
df = spark.read.schema(StructType([StructField("id", LongType(), True)])).parquet("s3://bucket/test")
df.show(1, False)
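The S3A configuration above appears incidental to the bug. A local-filesystem sketch of the same repro (an assumption — the reporter only ran it against S3, and the temp path is hypothetical) would be:

import tempfile
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, LongType

spark = SparkSession.builder.getOrCreate()
path = tempfile.mkdtemp() + "/test"  # hypothetical local output directory

# Write a small int column to Parquet.
spark.createDataFrame(
    [(1, 2), (2, 3)],
    StructType([StructField("id", IntegerType(), True),
                StructField("id2", IntegerType(), True)])
).select("id").write.mode("overwrite").parquet(path)

# Reading the int32 data back with a bigint read schema should hit the same error.
spark.read.schema(StructType([StructField("id", LongType(), True)])).parquet(path).show(1, False)

A small write like this is dictionary-encoded by default (parquet.enable.dictionary is on by default), which matches the dictionary-encoding condition described in the linked SPARK-35461.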
Issue Links
- Blocked
  - SPARK-35461 Error when reading dictionary-encoded Parquet int column when read schema is bigint (Open)