Details
- Type: New Feature
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Hadoop Flags: Reviewed
- Release Note: The default compaction policy has been changed to a new policy that explores more groups of files and is stricter about enforcing the size ratio requirements.
Description
Some workloads that are less stable can produce compactions that are too large or too small with the current storefile selection algorithm.
Currently (a rough sketch in code follows this list):
- Find the first file fi such that FileSize(fi) <= Sum(0, i-1, FileSize(fx))
- Ensure that there are at least the minimum number of files (if there aren't, bail out)
- If there are too many files, keep the larger ones.
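This is a minimal, hypothetical sketch of the three steps above, not the actual HBase code; the method and variable names are made up, files are assumed to be ordered oldest to newest, and taking the matching file plus everything after it as the candidate set is an assumption made for illustration.

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class CurrentSelectionSketch {

    /**
     * Returns the sizes of the files chosen for compaction, or an empty list
     * if the selection bails out.
     */
    static List<Long> selectCurrent(List<Long> sizes, int minFiles, int maxFiles) {
        // Step 1: find the first file fi with FileSize(fi) <= Sum(0, i-1, FileSize(fx)).
        long sumBefore = 0;
        int start = -1;
        for (int i = 0; i < sizes.size(); i++) {
            if (i > 0 && sizes.get(i) <= sumBefore) {
                start = i;
                break;
            }
            sumBefore += sizes.get(i);
        }
        if (start < 0) {
            return Collections.emptyList();
        }
        List<Long> candidate = new ArrayList<>(sizes.subList(start, sizes.size()));

        // Step 2: bail out if there are fewer than the minimum number of files.
        if (candidate.size() < minFiles) {
            return Collections.emptyList();
        }

        // Step 3: if there are too many files, keep the larger ones.
        if (candidate.size() > maxFiles) {
            candidate.sort(Comparator.reverseOrder());
            candidate = new ArrayList<>(candidate.subList(0, maxFiles));
        }
        return candidate;
    }

    public static void main(String[] args) {
        // Example: one large old file followed by several small flushes.
        List<Long> sizes = List.of(100L, 20L, 12L, 12L, 10L);
        System.out.println(selectCurrent(sizes, 3, 10)); // prints [20, 12, 12, 10]
    }
}
{code}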
I would propose something like the following (sketched in code after the list):
- Find all sets of storefiles where:
  - every file satisfies FileSize(fi) <= Sum(0, i-1, FileSize(fx))
  - the number of files in the set is <= max
  - the number of files in the set is >= min
- Then pick the set of files that maximizes ((# storefiles in set) / Sum(FileSize(fx)))
The thinking is that the above algorithm is fairly easy to reason about, every file in the chosen set satisfies the ratio, and it should rewrite the least amount of data to get the biggest impact on seeks.
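Here is a hypothetical sketch of the proposed selection (the names are made up, not HBase APIs). It assumes candidate sets are contiguous runs of age-ordered files and that the first file in a set is exempted from the ratio check, since the sum over an empty prefix is zero.

{code:java}
import java.util.Collections;
import java.util.List;

public class ProposedSelectionSketch {

    /** Picks the qualifying set that maximizes (# files) / (total bytes rewritten). */
    static List<Long> selectProposed(List<Long> sizes, int minFiles, int maxFiles) {
        List<Long> best = Collections.emptyList();
        double bestScore = -1.0;
        for (int start = 0; start < sizes.size(); start++) {
            for (int end = start + minFiles - 1;
                 end < sizes.size() && end - start + 1 <= maxFiles;
                 end++) {
                List<Long> candidate = sizes.subList(start, end + 1);
                if (!everyFileSatisfiesRatio(candidate)) {
                    continue;
                }
                long total = candidate.stream().mapToLong(Long::longValue).sum();
                // More seeks saved per byte rewritten is better.
                double score = candidate.size() / (double) total;
                if (score > bestScore) {
                    bestScore = score;
                    best = candidate;
                }
            }
        }
        return best;
    }

    /** Checks FileSize(fi) <= Sum(0, i-1, FileSize(fx)) for every file after the first. */
    static boolean everyFileSatisfiesRatio(List<Long> files) {
        long sumBefore = 0;
        for (int i = 0; i < files.size(); i++) {
            if (i > 0 && files.get(i) > sumBefore) {
                return false;
            }
            sumBefore += files.get(i);
        }
        return true;
    }

    public static void main(String[] args) {
        // Same example files: the three small files give the best score,
        // so the 100-unit file is not rewritten.
        List<Long> sizes = List.of(100L, 20L, 12L, 12L, 10L);
        System.out.println(selectProposed(sizes, 3, 10)); // prints [12, 12, 10]
    }
}
{code}

With the same example files, the scoring prefers compacting the three small files rather than pulling in the 100-unit file, which matches the goal of exploring more storefile groups while rewriting as little data as possible.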
Attachments
Issue Links
- relates to: HBASE-8283 Backport HBASE-7842 Add compaction policy that explores more storefile groups to 0.94 (Closed)