Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version: 3.3.5
Description
Currently the Spark integration with PathOutputCommitters rejects attempts to instantiate them if dynamic partitioning is enabled. That is because the Spark partitioning code assumes that
- file rename works as a fast and safe commit algorithm
- the working directory is in the same FS as the final directory
Assumption 1 doesn't hold on s3a, and assumption 2 isn't true for the staging committers.
The new abfs/gcs manifest committer and its target stores do meet both requirements, so we no longer need to reject the operation, provided the Spark-side binding code can identify when all is good.
Proposed: add a new hasCapability() probe which, if a committer implements StreamCapabilities, can be used to check whether the committer will work. ManifestCommitter will declare that it holds the capability. As the API has existed since Hadoop 2.10, it will be immediately available.
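As a sketch of the committer side, the declaration could look like the following. This is illustrative only: the stand-in StreamCapabilities interface mirrors org.apache.hadoop.fs.StreamCapabilities so the example compiles without Hadoop on the classpath, and the capability string is an assumed value, not necessarily the constant the real ManifestCommitter uses.

```java
// Local stand-in for org.apache.hadoop.fs.StreamCapabilities,
// so this sketch is self-contained.
interface StreamCapabilities {
  boolean hasCapability(String capability);
}

// Sketch of how a committer such as ManifestCommitter could declare
// support for dynamic partition overwrite. The capability string here
// is an assumption for illustration.
class ManifestCommitterSketch implements StreamCapabilities {
  static final String DYNAMIC_PARTITIONING =
      "mapreduce.job.committer.dynamic.partitioning";

  @Override
  public boolean hasCapability(String capability) {
    // Declare only the capabilities this committer actually supports.
    return DYNAMIC_PARTITIONING.equals(capability);
  }
}
```

Because hasCapability() takes an opaque string, new capabilities can be probed without any further interface changes, which is why reusing the long-established StreamCapabilities API avoids a compatibility problem.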
Spark's PathOutputCommitProtocol will query the committer in setupCommitter(), and fail if dynamicPartitionOverwrite is requested but not available.
BindingParquetOutputCommitter will implement StreamCapabilities and forward hasCapability() to the wrapped committer.
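The two Spark-side changes can be sketched together as follows. All names here are illustrative stand-ins (the StreamCapabilities interface mimics Hadoop's, ForwardingCommitter stands in for BindingParquetOutputCommitter, and the capability string is assumed); the point is the shape of the probe, not the exact API.

```java
// Local stand-in for org.apache.hadoop.fs.StreamCapabilities.
interface StreamCapabilities {
  boolean hasCapability(String capability);
}

// Sketch of a wrapper like BindingParquetOutputCommitter: it forwards
// hasCapability() to the committer it wraps, so the probe below sees
// the underlying committer's declaration rather than the wrapper's.
class ForwardingCommitter implements StreamCapabilities {
  private final Object inner; // the wrapped committer

  ForwardingCommitter(Object inner) {
    this.inner = inner;
  }

  @Override
  public boolean hasCapability(String capability) {
    return inner instanceof StreamCapabilities
        && ((StreamCapabilities) inner).hasCapability(capability);
  }
}

// Sketch of the check PathOutputCommitProtocol.setupCommitter() could run:
// fail fast when dynamic partition overwrite is requested but the
// committer does not declare support for it.
class DynamicPartitionCheck {
  // Assumed capability string for illustration.
  static final String DYNAMIC_PARTITIONING =
      "mapreduce.job.committer.dynamic.partitioning";

  static void verify(Object committer, boolean dynamicPartitionOverwrite) {
    if (dynamicPartitionOverwrite
        && !(committer instanceof StreamCapabilities
             && ((StreamCapabilities) committer)
                 .hasCapability(DYNAMIC_PARTITIONING))) {
      throw new UnsupportedOperationException(
          "Committer does not declare support for dynamic partition overwrite");
    }
  }
}
```

Failing in setupCommitter() surfaces the incompatibility at job setup rather than after tasks have run, which is the whole point of probing up front.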
Issue Links
- is related to: SPARK-40034 PathOutputCommitters to work with dynamic partition overwrite (Resolved)
- relates to: MAPREDUCE-7341 Add a task-manifest output committer for Azure and GCS (Resolved)