Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- None
Description
Create a table with hoodie.datasource.write.operation set to upsert.
When a SQL DELETE is executed, the delete operation key is overwritten by the hoodie.datasource.write.operation value coming from the table properties or the environment.
withSparkConf(sparkSession, hoodieCatalogTable.catalogProperties) {
  Map(
    "path" -> path,
    RECORDKEY_FIELD.key -> hoodieCatalogTable.primaryKeys.mkString(","),
    TBL_NAME.key -> tableConfig.getTableName,
    HIVE_STYLE_PARTITIONING.key -> tableConfig.getHiveStylePartitioningEnable,
    URL_ENCODE_PARTITIONING.key -> tableConfig.getUrlEncodePartitioning,
    KEYGENERATOR_CLASS_NAME.key -> classOf[SqlKeyGenerator].getCanonicalName,
    SqlKeyGenerator.ORIGIN_KEYGEN_CLASS_NAME -> tableConfig.getKeyGeneratorClassName,
    OPERATION.key -> DataSourceWriteOptions.DELETE_OPERATION_OPT_VAL,
    PARTITIONPATH_FIELD.key -> tableConfig.getPartitionFieldProp,
    HiveSyncConfig.HIVE_SYNC_MODE.key -> HiveSyncMode.HMS.name(),
    HiveSyncConfig.HIVE_SUPPORT_TIMESTAMP_TYPE.key -> "true",
    HoodieWriteConfig.DELETE_PARALLELISM_VALUE.key -> "200",
    SqlKeyGenerator.PARTITION_SCHEMA -> partitionSchema.toDDL
  )
}
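The root cause is a merge-order problem: when per-statement options such as OPERATION.key are merged with table/environment properties, whichever map is applied last wins. A minimal, self-contained sketch of this precedence behavior (the property key is real; the helper object and values are illustrative, not Hudi's actual implementation):

```scala
// Demonstrates Scala Map merge precedence: in `left ++ right`,
// keys from the right-hand operand override keys from the left.
object MapPrecedence {
  def main(args: Array[String]): Unit = {
    // Value carried by the table/environment configuration.
    val tableProps = Map("hoodie.datasource.write.operation" -> "upsert")
    // Value the SQL DELETE statement explicitly requests.
    val sqlOptions = Map("hoodie.datasource.write.operation" -> "delete")

    // Buggy order: table/env properties merged last clobber the SQL intent.
    val buggy = sqlOptions ++ tableProps
    // Fixed order: explicit per-statement options are merged last and win.
    val fixed = tableProps ++ sqlOptions

    println(buggy("hoodie.datasource.write.operation"))
    println(fixed("hoodie.datasource.write.operation"))
  }
}
```

With the fixed merge order, the delete operation requested by the SQL statement survives even when the table or environment defaults the operation to upsert.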