Apache Hudi / HUDI-6245

Automatically downgrade table version of metadata table


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: 1.1.0
    • Component/s: None
    • Story Points: 2

    Description

      Currently, when the metadata table is enabled, the user has to manually downgrade the metadata table from table version 5 to 4 along with the data table (e.g., when moving from the 0.13.0 release back to the 0.11.1 release). Otherwise, if only the data table is downgraded from version 5 to 4, the following exception is thrown when writing to the table with an old release (e.g., 0.11.1). It would be good to automatically downgrade the MDT as well; a rough sketch of the idea follows the stack trace below.

      org.apache.hudi.exception.HoodieException: Unknown versionCode:5                
        at org.apache.hudi.common.table.HoodieTableVersion.lambda$versionFromCode$1(HoodieTableVersion.java:58)
        at java.util.Optional.orElseThrow(Optional.java:290)
        at org.apache.hudi.common.table.HoodieTableVersion.versionFromCode(HoodieTableVersion.java:58)
        at org.apache.hudi.common.table.HoodieTableConfig.getTableVersion(HoodieTableConfig.java:465)
        at org.apache.hudi.table.upgrade.UpgradeDowngrade.needsUpgradeOrDowngrade(UpgradeDowngrade.java:66)
        at org.apache.hudi.client.BaseHoodieWriteClient.tryUpgrade(BaseHoodieWriteClient.java:1554)
        at org.apache.hudi.client.BaseHoodieWriteClient.initTable(BaseHoodieWriteClient.java:1457)
        at org.apache.hudi.client.BaseHoodieWriteClient.initTable(BaseHoodieWriteClient.java:1490)
        at org.apache.hudi.client.SparkRDDWriteClient.upsertPreppedRecords(SparkRDDWriteClient.java:166)
        at org.apache.hudi.metadata.SparkHoodieBackedTableMetadataWriter.commit(SparkHoodieBackedTableMetadataWriter.java:166)
        at org.apache.hudi.metadata.HoodieBackedTableMetadataWriter.processAndCommit(HoodieBackedTableMetadataWriter.java:803)
        at org.apache.hudi.metadata.HoodieBackedTableMetadataWriter.update(HoodieBackedTableMetadataWriter.java:870)
        at org.apache.hudi.client.BaseHoodieWriteClient.lambda$writeTableMetadata$0(BaseHoodieWriteClient.java:334)
        at org.apache.hudi.common.util.Option.ifPresent(Option.java:97)
        at org.apache.hudi.client.BaseHoodieWriteClient.writeTableMetadata(BaseHoodieWriteClient.java:334)
        at org.apache.hudi.client.BaseHoodieWriteClient.commit(BaseHoodieWriteClient.java:269)
        at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:234)
        at org.apache.hudi.client.SparkRDDWriteClient.commit(SparkRDDWriteClient.java:122)
        at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:651)
        at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:315)
        at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:171)
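
      For illustration only, here is a rough sketch of the direction this could take: when downgrading the data table, also run the same downgrade on the metadata table under <basePath>/.hoodie/metadata if it exists. It is written against the classes that appear in the stack trace (UpgradeDowngrade, HoodieTableVersion, HoodieTableMetaClient); the helper classes and exact signatures used here (SparkUpgradeDowngradeHelper, HoodieTableMetadata.getMetadataTableBasePath, the UpgradeDowngrade constructor/run arguments) are assumptions based on the 0.13.x code and may differ between releases, so treat this as a sketch rather than the proposed implementation.

      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hudi.client.common.HoodieSparkEngineContext;
      import org.apache.hudi.common.engine.HoodieEngineContext;
      import org.apache.hudi.common.table.HoodieTableMetaClient;
      import org.apache.hudi.common.table.HoodieTableVersion;
      import org.apache.hudi.config.HoodieWriteConfig;
      import org.apache.hudi.metadata.HoodieTableMetadata;
      import org.apache.hudi.table.upgrade.SparkUpgradeDowngradeHelper;
      import org.apache.hudi.table.upgrade.UpgradeDowngrade;
      import org.apache.spark.api.java.JavaSparkContext;

      import java.io.IOException;

      public class DowngradeWithMetadataTableSketch {

        // Downgrade the data table to version 4 and, if a metadata table exists,
        // downgrade it as well, so that older writers (e.g., 0.11.1) do not fail
        // with "Unknown versionCode:5" when committing to the MDT.
        public static void downgradeToVersionFour(JavaSparkContext jsc, String basePath) throws IOException {
          HoodieEngineContext context = new HoodieSparkEngineContext(jsc);

          // 1. Downgrade the data table itself.
          runDowngrade(context, jsc, basePath);

          // 2. Downgrade the metadata table, which lives under <basePath>/.hoodie/metadata.
          String mdtBasePath = HoodieTableMetadata.getMetadataTableBasePath(basePath);
          Path mdtMetaPath = new Path(mdtBasePath, HoodieTableMetaClient.METAFOLDER_NAME);
          FileSystem fs = mdtMetaPath.getFileSystem(jsc.hadoopConfiguration());
          if (fs.exists(mdtMetaPath)) {
            runDowngrade(context, jsc, mdtBasePath);
          }
        }

        private static void runDowngrade(HoodieEngineContext context, JavaSparkContext jsc, String tablePath) {
          HoodieTableMetaClient metaClient = HoodieTableMetaClient.builder()
              .setConf(jsc.hadoopConfiguration())
              .setBasePath(tablePath)
              .build();
          HoodieWriteConfig config = HoodieWriteConfig.newBuilder().withPath(tablePath).build();
          // No instant time is supplied for a standalone downgrade invocation.
          new UpgradeDowngrade(metaClient, config, context, SparkUpgradeDowngradeHelper.getInstance())
              .run(HoodieTableVersion.FOUR, null);
        }
      }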

      Attachments

        Activity


          People

            Assignee: guoyihua Ethan Guo (this is the old account; please use "yihua")
            Reporter: guoyihua Ethan Guo (this is the old account; please use "yihua")

            Dates

              Created:
              Updated:
