Spark / SPARK-21085

Failed to read the partitioned table created by Spark 2.1

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 2.2.0
    • Fix Version/s: 2.2.0
    • Component/s: SQL
    • Labels: None

      Description

      Spark 2.2 is unable to read a partitioned table created by Spark 2.1 when the table schema does not place the partition columns at the end of the schema. Reading such a table fails the following assertion:

      assert(partitionFields.map(_.name) == partitionColumnNames)
      

      The assertion comes from the following file:

      https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala#L234-L236

      When reading the table metadata back from the metastore, we also need to reorder the columns so that the partition columns end up at the end of the schema.
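      The needed reordering can be sketched as follows. This is a minimal, self-contained illustration, not Spark's actual implementation: `Field` stands in for Spark's `StructField`, and `reorderSchema` is a hypothetical helper showing the invariant the catalog assertion expects.

```scala
// Illustrative sketch only: `Field` stands in for Spark's StructField,
// and `reorderSchema` is a hypothetical helper, not code from Spark.
case class Field(name: String, dataType: String)

def reorderSchema(fields: Seq[Field], partitionColumnNames: Seq[String]): Seq[Field] = {
  // Split the stored schema into partition columns and data columns.
  val (partCols, dataCols) = fields.partition(f => partitionColumnNames.contains(f.name))
  // Data columns keep their relative order; partition columns move to the
  // end, ordered as in partitionColumnNames, so that
  // partitionFields.map(_.name) == partitionColumnNames holds.
  dataCols ++ partitionColumnNames.flatMap(name => partCols.find(_.name == name))
}

// A Spark 2.1 table whose stored schema lists the partition column first:
val stored = Seq(Field("part", "int"), Field("id", "bigint"), Field("value", "string"))
val fixed  = reorderSchema(stored, Seq("part"))
// fixed.map(_.name) == Seq("id", "value", "part")
```

      Applying this reordering when loading the metadata makes the schema consistent regardless of the column order the table was created with.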

            People

            • Assignee:
              Xiao Li (smilegator)
            • Reporter:
              Xiao Li (smilegator)
            • Votes: 0
            • Watchers: 5
