Details
- Type: Sub-task
- Status: Closed
- Priority: Major
- Resolution: Fixed
Description
Right now, for simplicity, the entire split's worth of decompressed buffers is locked in the cache, because some buffers may be shared between RGs. This avoids the situation where we decompress some data, pass it to the processor for RG N, the processor finishes and unlocks it, and the data is evicted before we can pass it on for RG N+1.
However, if the split is too big and the cache is small, or many splits are processed at the same time, this can deadlock because the entire cache ends up locked. We need to make locking more granular, and probably also avoid deadlocks in general (e.g., by bypassing the cache); see the sketch below.
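A minimal sketch of one possible direction, not the actual LLAP cache API (class names like DecompressedBuffer and PinningCache are hypothetical): each buffer carries its own pin count, a buffer shared with the next RG gets an extra pin that the last RG using it releases, and when the pinned-byte budget is exhausted the reader gets an uncached (bypass) buffer instead of blocking on eviction, so readers cannot deadlock on space that other readers have pinned.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical per-buffer pin state: only buffers a RG actually reads stay locked. */
final class DecompressedBuffer {
  final byte[] data;
  final boolean cached; // false => allocated outside the cache (bypass path)
  private final AtomicInteger pins = new AtomicInteger();

  DecompressedBuffer(byte[] data, boolean cached) {
    this.data = data;
    this.cached = cached;
  }

  void pin()   { pins.incrementAndGet(); }
  int  unpin() { return pins.decrementAndGet(); }
}

/** Toy cache with a hard cap on pinned bytes; over budget it hands out bypass buffers. */
final class PinningCache {
  private final long maxPinnedBytes;
  private final AtomicLong pinnedBytes = new AtomicLong();

  PinningCache(long maxPinnedBytes) { this.maxPinnedBytes = maxPinnedBytes; }

  /** Decompress-and-cache path; falls back to an uncached buffer under pressure. */
  DecompressedBuffer addPinned(byte[] decompressed) {
    long after = pinnedBytes.addAndGet(decompressed.length);
    boolean cached = after <= maxPinnedBytes;
    if (!cached) {
      // Pinning this buffer would exceed the budget: bypass the cache
      // rather than wait for other readers to release space.
      pinnedBytes.addAndGet(-decompressed.length);
    }
    DecompressedBuffer buf = new DecompressedBuffer(decompressed, cached);
    buf.pin();
    return buf;
  }

  /** Extra pin for a buffer that is shared with the next RG. */
  void pinForNextRg(DecompressedBuffer buf) {
    buf.pin();
  }

  /** Called by each RG that is done with the buffer. */
  void unpin(DecompressedBuffer buf) {
    if (buf.unpin() == 0 && buf.cached) {
      // Last user released it; its bytes become evictable again.
      pinnedBytes.addAndGet(-buf.data.length);
    }
  }
}
```

With something along these lines, a reader would pin only the buffers needed for the RG it is currently handing to the processor (plus one extra pin on buffers shared with RG N+1), unpin after processing, and never hold the whole split locked at once.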