Spark / SPARK-26990

Difference in handling of mixed-case partition column names after SPARK-26188


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.1
    • Fix Version/s: 2.4.1, 3.0.0
    • Component/s: SQL
    • Labels: None

    Description

      I noticed that the PR for SPARK-26188 changed how mixed-cased partition columns are handled when the user provides a schema.

      Say I have this file structure (note that each instance of `pS` is mixed case):

      bash-3.2$ find partitioned5 -type d
      partitioned5
      partitioned5/pi=2
      partitioned5/pi=2/pS=foo
      partitioned5/pi=2/pS=bar
      partitioned5/pi=1
      partitioned5/pi=1/pS=foo
      partitioned5/pi=1/pS=bar
      bash-3.2$
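
      The same mixed-case layout can be recreated outside Spark for experimentation. A minimal Scala sketch using `java.nio.file` (the `MakeLayout` object and temp-directory location are illustrative, not from this report):

      ```scala
      import java.nio.file.{Files, Path}

      object MakeLayout {
        // Recreate the mixed-case partition directories listed above:
        // pi=1/pS=foo, pi=1/pS=bar, pi=2/pS=foo, pi=2/pS=bar
        def build(base: Path): Unit =
          for (pi <- Seq(1, 2); ps <- Seq("foo", "bar"))
            Files.createDirectories(base.resolve(s"pi=$pi/pS=$ps"))

        def main(args: Array[String]): Unit = {
          val base = Files.createTempDirectory("partitioned5")
          build(base)
          // The directory names keep their original casing on disk.
          assert(Files.isDirectory(base.resolve("pi=2/pS=foo")))
          println("created " + base)
        }
      }
      ```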
      

      If I load the file with a user-provided schema in 2.4 (before the PR was committed) or 2.3, I see:

      scala> val df = spark.read.schema("intField int, pi int, ps string").parquet("partitioned5")
      df: org.apache.spark.sql.DataFrame = [intField: int, pi: int ... 1 more field]
      scala> df.printSchema
      root
       |-- intField: integer (nullable = true)
       |-- pi: integer (nullable = true)
       |-- ps: string (nullable = true)
      scala>
      

      However, using 2.4 after the PR was committed, I see:

      scala> val df = spark.read.schema("intField int, pi int, ps string").parquet("partitioned5")
      df: org.apache.spark.sql.DataFrame = [intField: int, pi: int ... 1 more field]
      scala> df.printSchema
      root
       |-- intField: integer (nullable = true)
       |-- pi: integer (nullable = true)
       |-- pS: string (nullable = true)
      scala>
      

      Spark is picking up the mixed-case column name pS from the directory name, not the lower-case ps from my specified schema.

      In all tests, spark.sql.caseSensitive is set to the default (false).
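
      For context, with `spark.sql.caseSensitive` set to false, column names are expected to match case-insensitively, with the user-supplied spelling winning. A simplified sketch of that matching (the `Resolve.resolve` helper is hypothetical, not Spark's actual analyzer code):

      ```scala
      object Resolve {
        // Resolve an inferred partition column name against a user-provided
        // schema, mimicking case-(in)sensitive name matching.
        def resolve(userSchema: Seq[String],
                    inferredName: String,
                    caseSensitive: Boolean): Option[String] =
          if (caseSensitive) userSchema.find(_ == inferredName)
          else userSchema.find(_.equalsIgnoreCase(inferredName))

        def main(args: Array[String]): Unit = {
          val schema = Seq("intField", "pi", "ps")
          // Pre-SPARK-26188 expectation: directory name pS resolves to the
          // user-specified lower-case ps.
          assert(resolve(schema, "pS", caseSensitive = false).contains("ps"))
          println(resolve(schema, "pS", caseSensitive = false).get)
        }
      }
      ```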

      Not sure if this is a bug, but it is a difference.

      Attachments

        Activity


          People

            Assignee: Gengliang Wang
            Reporter: Bruce Robbins
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated:
              Resolved:
