We are evaluating Hudi for our near real-time ingestion needs, compared against other solutions (Delta/Iceberg). We picked Hudi because it comes pre-installed with Amazon EMR. However, adoption is blocked by an issue with concurrent small-batch write jobs (256 files per batch) against the same S3 path.
Using Livy, we trigger Spark jobs that write Hudi tables to S3 on EMR with EMRFS enabled; paths use the "s3://" prefix. We write Spark SQL Datasets promoted up from RDDs. "hoodie.consistency.check.enabled" is set to true, the Spark serializer is Kryo, and the Hudi version is 0.5.0-incubating.
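For context, a minimal sketch of a write job with the settings described above (this is not our exact code; the table name, key fields, and bucket path are hypothetical placeholders):

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.HoodieWriteConfig

// Kryo serializer, as in our cluster configuration
val spark = SparkSession.builder()
  .appName("hudi-ingest")
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .getOrCreate()

// Placeholder Dataset; in our jobs this is promoted from an RDD
val df = spark.range(0, 10).selectExpr("id", "current_date() as dt")

df.write.format("org.apache.hudi")
  .option(HoodieWriteConfig.TABLE_NAME, "events")                  // hypothetical table name
  .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "id")
  .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "dt")
  .option("hoodie.consistency.check.enabled", "true")              // as in our setup
  .mode(SaveMode.Append)
  .save("s3://our-bucket/hudi/events")                             // hypothetical basePath
```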
On both COW and MOR tables, some of the submitted jobs fail with the exception below:
The jobs are submitted in concurrent batches of 256 files against the same S3 path, roughly 8k files in total, covering 6 hours of our data.
Writing happens with the following code (basePath is an S3 bucket):