
Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.3.0
    • Fix Version/s: 3.3.0
    • Component/s: SQL
    • Labels: None

    Description

      As per https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#timestamp, Parquet's TIMESTAMP logical type can represent both TIMESTAMP_NTZ and TIMESTAMP_LTZ (Spark's current default timestamp type):

      • A TIMESTAMP with isAdjustedToUTC=true => TIMESTAMP_LTZ
      • A TIMESTAMP with isAdjustedToUTC=false => TIMESTAMP_NTZ
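
      As a rough illustration of the annotation, here is a minimal sketch using the parquet-mr schema builder API (the field names `ts_ltz` and `ts_ntz` are hypothetical):

        import org.apache.parquet.schema.LogicalTypeAnnotation
        import org.apache.parquet.schema.LogicalTypeAnnotation.TimeUnit
        import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName
        import org.apache.parquet.schema.Types

        // isAdjustedToUTC = true: instant semantics; Spark reads this as TIMESTAMP_LTZ.
        val ltz = Types.required(PrimitiveTypeName.INT64)
          .as(LogicalTypeAnnotation.timestampType(true, TimeUnit.MICROS))
          .named("ts_ltz")

        // isAdjustedToUTC = false: wall-clock semantics; Spark reads this as TIMESTAMP_NTZ.
        val ntz = Types.required(PrimitiveTypeName.INT64)
          .as(LogicalTypeAnnotation.timestampType(false, TimeUnit.MICROS))
          .named("ts_ntz")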

      In Spark 3.1 and earlier, the Parquet writer follows the definition and sets `isAdjustedToUTC` to `true`, while the Parquet reader ignores the `isAdjustedToUTC` flag and converts any Parquet timestamp type to TIMESTAMP_LTZ.

      Since 3.2, with the support of the timestamp without time zone type:

      • The Parquet writer follows the definition and sets `isAdjustedToUTC` to `false` when writing TIMESTAMP_NTZ.
      • The Parquet reader
        • For schema inference, Spark converts the Parquet timestamp type to the corresponding Catalyst timestamp type according to the `isAdjustedToUTC` annotation flag.
        • If schema merging is enabled during inference and some files are inferred as TIMESTAMP_NTZ while others are TIMESTAMP_LTZ, the resulting type is TIMESTAMP_LTZ, which is considered the “wider” type.
        • If a column of a user-provided schema is TIMESTAMP_LTZ and the column was written as TIMESTAMP_NTZ, Spark allows the read operation (see the sketch after this list).
        • If a column of a user-provided schema is TIMESTAMP_NTZ and the column was written as TIMESTAMP_LTZ, the read operation is not allowed, since TIMESTAMP_NTZ is considered narrower than TIMESTAMP_LTZ.
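
      A minimal end-to-end sketch of the 3.2+ behavior, assuming a local Spark session with TIMESTAMP_NTZ support; the output path `/tmp/ntz_demo` is illustrative:

        import java.time.LocalDateTime
        import java.util.Arrays
        import org.apache.spark.sql.{Row, SparkSession}
        import org.apache.spark.sql.types.{StructField, StructType, TimestampNTZType, TimestampType}

        val spark = SparkSession.builder().master("local[*]").getOrCreate()

        // Writing a TIMESTAMP_NTZ column annotates it with isAdjustedToUTC = false.
        val ntzSchema = StructType(Seq(StructField("ts", TimestampNTZType)))
        val rows = Arrays.asList(Row(LocalDateTime.of(2021, 6, 1, 0, 0)))
        spark.createDataFrame(rows, ntzSchema).write.mode("overwrite").parquet("/tmp/ntz_demo")

        // Schema inference maps isAdjustedToUTC = false back to TIMESTAMP_NTZ.
        spark.read.parquet("/tmp/ntz_demo").printSchema()
        // root
        //  |-- ts: timestamp_ntz (nullable = true)

        // A user-provided TIMESTAMP_LTZ schema over NTZ data is allowed (LTZ is wider);
        // the reverse direction is rejected.
        val ltzSchema = StructType(Seq(StructField("ts", TimestampType)))
        spark.read.schema(ltzSchema).parquet("/tmp/ntz_demo").show()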


          People

            Assignee: Gengliang Wang
            Reporter: Gengliang Wang
