Apache Hudi / HUDI-1658 [UMBRELLA] Spark Sql Support For Hudi / HUDI-2251

[SQL] Fix Exception Caused By Table Name Case Sensitivity For Append Mode Write


    Details

      Description

      When a table name containing uppercase characters is written to hoodie.properties and data is then written to the table through Spark SQL, the following exception is thrown:

      org.apache.hudi.exception.HoodieException: hoodie table with name hudi_17Gb_ext1 already exists at s3a://siva-test-bucket-june-16/hudi_testing/gh_arch_dump/hudi_5
      	at org.apache.hudi.HoodieSparkSqlWriter$.handleSaveModes(HoodieSparkSqlWriter.scala:424)
      	at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:116)
      	at org.apache.spark.sql.hudi.command.MergeIntoHoodieTableCommand.executeUpsert(MergeIntoHoodieTableCommand.scala:265)
      	at org.apache.spark.sql.hudi.command.MergeIntoHoodieTableCommand.run(MergeIntoHoodieTableCommand.scala:151)
      	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
      	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
      	at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
      	at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
      	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3618)
      	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
      	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
      	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
      	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
      	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
      	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616)
      	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
      	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
      	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
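
      The failure comes from the Append-mode table-name check: the name stored in hoodie.properties is compared case-sensitively against the name used by the Spark SQL write, so two spellings of the same table are treated as different tables. Below is a minimal, self-contained Scala sketch (not the actual Hudi code; the object and method names are illustrative, the table name and path are taken from the stack trace above) of how a case-insensitive comparison avoids the spurious exception:

      // Minimal sketch, not the actual Hudi implementation: the object,
      // method, and values below are illustrative. It contrasts the failing
      // case-sensitive comparison with a case-insensitive variant.
      object AppendModeTableNameCheck {

        // Table name and base path as recorded in hoodie.properties
        // (values copied from the stack trace above).
        val existingTableName = "hudi_17Gb_ext1"
        val basePath = "s3a://siva-test-bucket-june-16/hudi_testing/gh_arch_dump/hudi_5"

        def validateAppend(incomingTableName: String): Unit = {
          // A plain `!=` comparison rejects "HUDI_17GB_EXT1" even though it
          // refers to the same table, raising the exception shown above.
          // equalsIgnoreCase treats the two spellings as the same table.
          if (!existingTableName.equalsIgnoreCase(incomingTableName)) {
            throw new IllegalStateException(
              s"hoodie table with name $existingTableName already exists at $basePath")
          }
        }

        def main(args: Array[String]): Unit = {
          validateAppend("HUDI_17GB_EXT1") // passes with the case-insensitive check
          println("Append-mode table name check passed")
        }
      }

      An alternative is to normalize the table name (for example, lower-case it) before it is persisted to hoodie.properties, so later comparisons do not depend on the caller's casing.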
      


              People

              • Assignee: pengzhiwei (pzw2018)
              • Reporter: pengzhiwei (pzw2018)
              • Votes: 0
              • Watchers: 2
