Details
- Type: Improvement
- Status: Resolved
- Priority: Blocker
- Resolution: Fixed
- Fix Version/s: 0.9.0
Description
Even though one could argue that RFC-15 (consolidated metadata) removes the need to delete partial files written due to Spark task failures/stage retries, it would still leave extra files inside the table (which users pay for every month), so we need the marker mechanism to be able to delete these partial files.
Here we explore whether we can improve the current marker file mechanism, which creates one marker file per data file written, by:
- Delegating the createMarker() call to the driver/timeline server and having it write marker metadata into a single file handle that is flushed for durability guarantees (see the sketch below).
P.S.: I was tempted to think the Spark listener mechanism could help us deal with failed tasks, but it provides no guarantees: the writer job could die without deleting a partial file. i.e., it can improve things, but it cannot provide guarantees.
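To make the idea concrete, below is a minimal, hypothetical sketch (not the actual Hudi implementation or API) of delegating createMarker() to a single service on the driver/timeline server: executors report each data file they are about to write, and the service appends a marker entry to one file handle and flushes it, instead of creating a separate marker file per data file. The class name (ConsolidatedMarkerService), the MARKERS file name, and the paths are illustrative assumptions only.

```java
// Hypothetical sketch: all markers for one write go through a single service
// and a single file handle, rather than one marker file per data file.
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Set;
import java.util.TreeSet;

public class ConsolidatedMarkerService {

  private final FileChannel markerLog;                     // single file handle for all markers
  private final Set<String> createdMarkers = new TreeSet<>();

  public ConsolidatedMarkerService(Path markerDir) throws IOException {
    Files.createDirectories(markerDir);
    // One consolidated MARKERS file per write, instead of one marker file per data file.
    this.markerLog = FileChannel.open(
        markerDir.resolve("MARKERS"),
        StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
  }

  /** Called by executors (e.g. over HTTP to the timeline server) before writing a data file. */
  public synchronized void createMarker(String partitionPath, String dataFileName) {
    String marker = partitionPath + "/" + dataFileName + ".marker";
    if (!createdMarkers.add(marker)) {
      return; // already recorded, e.g. a task retry
    }
    try {
      markerLog.write(ByteBuffer.wrap((marker + "\n").getBytes(StandardCharsets.UTF_8)));
      markerLog.force(true); // flush for durability before the executor starts writing data
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  /** Markers with no matching committed data file identify partial files to delete on rollback. */
  public synchronized Set<String> allMarkers() {
    return new TreeSet<>(createdMarkers);
  }

  public synchronized void close() throws IOException {
    markerLog.close();
  }

  public static void main(String[] args) throws IOException {
    // Illustrative marker directory for a single write instant (path is hypothetical).
    ConsolidatedMarkerService service =
        new ConsolidatedMarkerService(Paths.get("/tmp/hoodie/.temp/20210101000000"));
    service.createMarker("2021/01/01", "file-1_0-1-0_20210101000000.parquet");
    service.createMarker("2021/01/01", "file-1_0-1-0_20210101000000.parquet"); // retry, deduped
    System.out.println(service.allMarkers());
    service.close();
  }
}
```

Under this sketch, rollback can read the consolidated marker entries and delete any partial file that never made it into the timeline, while the per-data-file marker creation round trips to the storage layer go away.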