Details
- Type: Bug
- Status: Closed
- Priority: Minor
- Resolution: Invalid
Description
Environment:
hudi: 0.10.1
flink: 1.13.2

The Flink cluster is set up in standalone mode, and the Flink SQL below is used to launch the job:
-- create the COW sink table
CREATE TABLE t1(
uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,
name VARCHAR(10),
age INT,
ts TIMESTAMP(3),
`partition` VARCHAR(20)
)
PARTITIONED BY (`partition`)
WITH (
'connector' = 'hudi',
'path' = '/user/hive/warehouse/hudi.db/t1',
'write.tasks' = '1',
'table.type' = 'COPY_ON_WRITE'
);
-- first write
INSERT INTO t1 VALUES ('id1','Danny',20,TIMESTAMP '1970-01-01 00:00:01','par1');
-- second write (identical statement, submitted as a new job)
INSERT INTO t1 VALUES ('id1','Danny',20,TIMESTAMP '1970-01-01 00:00:01','par1');
The first Flink job finished successfully, but the second one failed with the exception below:
org.apache.hudi.common.fs.HoodieWrapperFileSystem cannot be cast to org.apache.hudi.common.fs.HoodieWrapperFileSystem
I searched the existing issues and found that issue #3885 mentions the same problem, but the root cause here is different: my problem is caused by the Hadoop FileSystem cache. The HoodieWrapperFileSystem instance is cached in the FileSystem cache when it is created by the first Flink job inside the Flink TaskManager. When I launch the second Flink job to write data, it picks up the cached HoodieWrapperFileSystem, whose child (per-job) Flink classloader is different from the one used by the second job, so the exception occurs.
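To make the classloader part concrete, here is a minimal sketch (the bundle-jar path is a hypothetical placeholder of mine, not from the report): the JVM keys class identity on the pair (binary name, defining classloader), so the "same" class loaded by two per-job classloaders is two distinct runtime classes, and a cast between them produces exactly the "HoodieWrapperFileSystem cannot be cast to HoodieWrapperFileSystem" message above.

import java.net.URL;
import java.net.URLClassLoader;

// Minimal sketch of the class-identity rule. The jar path is a hypothetical
// placeholder and would need to contain HoodieWrapperFileSystem plus its dependencies.
public class SameNameDifferentLoader {
    public static void main(String[] args) throws Exception {
        URL[] bundle = { new URL("file:///tmp/hudi-flink-bundle.jar") }; // hypothetical

        // parent = null mimics two isolated per-job user-code classloaders,
        // like the ones a shared Flink TaskManager creates for two jobs.
        try (URLClassLoader job1 = new URLClassLoader(bundle, null);
             URLClassLoader job2 = new URLClassLoader(bundle, null)) {
            Class<?> c1 = job1.loadClass("org.apache.hudi.common.fs.HoodieWrapperFileSystem");
            Class<?> c2 = job2.loadClass("org.apache.hudi.common.fs.HoodieWrapperFileSystem");

            System.out.println(c1.getName().equals(c2.getName())); // true: same binary name
            System.out.println(c1 == c2); // false: different defining classloaders,
                                          // so a cast between them is a ClassCastException
        }
    }
}

The stale instance itself comes from Hadoop's FileSystem cache, as a second sketch illustrates (the HDFS URI is likewise a placeholder). FileSystem.get() caches instances keyed by scheme, authority, and user; the caller's classloader is not part of the key, which is why the second job on the same TaskManager JVM receives the object created under the first job's classloader. In plain Hadoop the cache can be bypassed with FileSystem.newInstance(...) or the per-scheme fs.<scheme>.impl.disable.cache setting; whether either is an appropriate fix for Hudi's wrapper is a separate question.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Sketch of the Hadoop FileSystem cache behavior; the cluster URI is a placeholder.
public class FsCacheSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        URI uri = URI.create("hdfs://namenode:8020/"); // hypothetical

        FileSystem a = FileSystem.get(uri, conf);
        FileSystem b = FileSystem.get(uri, conf);
        System.out.println(a == b); // true: FileSystem.get() returns the cached instance

        FileSystem fresh = FileSystem.newInstance(uri, conf); // always a new instance
        System.out.println(a == fresh); // false

        conf.setBoolean("fs.hdfs.impl.disable.cache", true); // per-scheme cache opt-out
        System.out.println(a == FileSystem.get(uri, conf)); // false: cache bypassed
    }
}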