Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
Description
Currently, the Retention job runs out of memory (OOM) and fails while cleaning log files when the number of log files is very large. While fetching all the dataset versions, the job loads every file status into memory at once, which causes the issue.

The Retention job should therefore avoid loading all data into memory and use an iterator-based approach instead. This keeps only a limited number of file statuses in memory at a time, making the retention pipeline more robust against OOM errors.
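A minimal sketch of the eager vs. iterator-based listing, using plain `java.nio` as a stand-in (the actual job presumably lists file statuses through its own filesystem API; the class and method names here are illustrative):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class RetentionSketch {

    // Eager approach: materializes the entire listing into one list,
    // so memory use grows with the number of log files and can OOM.
    static List<Path> listAllEager(Path dir) throws IOException {
        List<Path> all = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path p : stream) {
                all.add(p); // whole listing retained in memory at once
            }
        }
        return all;
    }

    // Iterator-based approach: entries are consumed one at a time, so
    // only a bounded amount of file status is in memory regardless of
    // how many log files exist.
    static int deleteExpiredLazily(Path dir, long cutoffMillis) throws IOException {
        int deleted = 0;
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path p : stream) { // lazily populated directory stream
                if (Files.getLastModifiedTime(p).toMillis() < cutoffMillis) {
                    Files.delete(p);
                    deleted++;
                }
            }
        }
        return deleted;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("retention-demo");
        for (int i = 0; i < 5; i++) {
            Files.createFile(dir.resolve("log-" + i + ".txt"));
        }
        // A cutoff in the future marks all five files as expired.
        int deleted = deleteExpiredLazily(dir, System.currentTimeMillis() + 60_000);
        System.out.println("deleted=" + deleted);
        Files.delete(dir);
    }
}
```

The key difference is that the iterator variant never holds more than one entry's status at a time, so peak memory stays flat as the file count grows.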