Description
When using the default CheckpointFileManager, HDFSBackedStateStoreProvider leaves '.crc' files behind: one .crc file is created for each `atomicFile` operation of the CheckpointFileManager.
Over time, the number of files becomes very large. This makes the state store file system grow constantly and, in our case, degrades file system performance. The sketch below illustrates the mechanism.
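A minimal sketch of why the sidecars appear, assuming a locally mounted volume, hadoop-common on the classpath, and illustrative names and paths (CrcSidecarDemo, /tmp/crc-demo are not from this issue): Hadoop's LocalFileSystem is a ChecksumFileSystem, so every file it creates gets a hidden ".<name>.crc" sidecar, and any delete or rename that bypasses the checksummed layer leaves that sidecar orphaned.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object CrcSidecarDemo {
  def main(args: Array[String]): Unit = {
    // LocalFileSystem extends ChecksumFileSystem: each create() also writes a
    // hidden ".<name>.crc" sidecar next to the data file.
    val fs = FileSystem.getLocal(new Configuration())
    val dir = new Path("/tmp/crc-demo") // illustrative path
    fs.mkdirs(dir)
    val out = fs.create(new Path(dir, "1.delta"))
    out.writeBytes("state bytes")
    out.close()
    // List with java.io.File, because ChecksumFileSystem hides .crc files from
    // its own listStatus(); expect both "1.delta" and ".1.delta.crc".
    new java.io.File("/tmp/crc-demo").listFiles().foreach(f => println(f.getName))
  }
}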
Here's a sample from one of our Spark storage volumes after 2 days of execution (4 stateful streaming jobs, each on a different sub-directory):
# Total files in PVC (used for checkpoints and state store)
$ find . | wc -l
431796
# .crc files
$ find . -name "*.crc" | wc -l
418053
With each .crc file taking up at least one storage block, the used storage runs into gigabytes of data.
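For a rough sense of scale, assuming a 4 KiB minimum allocation unit (a common file system default, not stated in this issue): 418053 .crc files × 4 KiB ≈ 1.6 GiB consumed by checksum sidecars alone, and larger allocation units inflate this proportionally.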
These jobs are running on Kubernetes. Our shared storage provider, GlusterFS, shows serious performance deterioration with this large number of files:
DEBUG HDFSBackedStateStoreProvider: fetchFiles() took 29164ms
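As a stopgap while the leak is unfixed, orphaned sidecars can be swept out of band. The sketch below is an illustration under assumptions, not the upstream fix: it walks a locally mounted checkpoint directory (the /data/checkpoints default and the OrphanCrcCleaner name are hypothetical) and deletes any ".<name>.crc" file whose data file no longer exists. Test on a copy before pointing it at production checkpoints.

import java.nio.file.{Files, Paths}

object OrphanCrcCleaner {
  def main(args: Array[String]): Unit = {
    // Root of the checkpoint/state volume; hypothetical default for illustration.
    val root = Paths.get(if (args.nonEmpty) args(0) else "/data/checkpoints")
    val stream = Files.walk(root)
    try {
      stream
        .filter(p => Files.isRegularFile(p))
        .filter { p =>
          val n = p.getFileName.toString
          n.startsWith(".") && n.endsWith(".crc") // Hadoop checksum sidecar naming
        }
        .forEach { crc =>
          // ".1.delta.crc" -> "1.delta" in the same directory
          val dataName = crc.getFileName.toString.stripPrefix(".").stripSuffix(".crc")
          if (!Files.exists(crc.resolveSibling(dataName))) Files.delete(crc) // orphaned
        }
    } finally stream.close()
  }
}

Deleting only orphaned sidecars, rather than every .crc file, keeps checksum verification intact for files that still exist.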
Issue Links
- causes
  - SPARK-28712 spark structured stream with kafka don't really delete temp files in spark standalone cluster (Closed)
- is related to
  - SPARK-17475 HDFSMetadataLog should not leak CRC files (Resolved)
- links to