Type: New Feature
Affects Version/s: None
Release Note: Default compaction policy has been changed to a new policy that will explore more groups of files and is more strict about enforcing the size ratio requirements.
Workloads that are less stable can produce compactions that are too large or too small under the current storefile selection algorithm, which works as follows:
- Find the first file fi such that FileSize(fi) <= Ratio * Sum(0, i-1, FileSize(fx))
- Ensure there are at least the minimum number of files (if not, bail out)
- If there are too many files, keep the larger ones.
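A minimal sketch of the three steps above, assuming candidate files are ordered and their sizes are given as a list. The class, method, and parameter names (select, ratio, minFiles, maxFiles) are illustrative, not the actual HBase API:

```java
import java.util.Comparator;
import java.util.List;

public final class RatioBasedSelectionSketch {
    /** Returns the candidate file sizes chosen for compaction,
     *  or an empty list when the minimum-file check fails. */
    static List<Long> select(List<Long> sizes, double ratio, int minFiles, int maxFiles) {
        long prefixSum = 0;
        int start = sizes.size(); // default: no file satisfied the ratio
        for (int i = 0; i < sizes.size(); i++) {
            // Step 1: first file with FileSize(fi) <= Ratio * Sum(0, i-1, FileSize(fx))
            if (sizes.get(i) <= ratio * prefixSum) {
                start = i;
                break;
            }
            prefixSum += sizes.get(i);
        }
        List<Long> chosen = sizes.subList(start, sizes.size());
        // Step 2: bail out if there are fewer than the minimum number of files.
        if (chosen.size() < minFiles) {
            return List.of();
        }
        // Step 3: too many files -> keep the larger ones.
        return chosen.stream()
                .sorted(Comparator.reverseOrder())
                .limit(maxFiles)
                .toList();
    }
}
```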
I would propose something like:
- Find all sets of storefiles where every file satisfies:
  - FileSize(fi) <= Ratio * Sum(0, i-1, FileSize(fx))
  - Num files in set <= max
  - Num files in set >= min
- Then pick the set of files that maximizes ((# storefiles in set) / Sum(FileSize(fx)))
The thinking is that the above algorithm is pretty easy to reason about: all files satisfy the ratio, and it should rewrite the least amount of data to get the biggest impact on seeks.
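The proposal can be sketched as a brute-force search over contiguous windows of the candidate list. This is only an illustrative sketch (the feature that actually shipped is HBase's ExploringCompactionPolicy, which differs in detail); names are made up, and the ratio condition is read set-wide, i.e. each file is at most Ratio times the combined size of the other files in the set:

```java
import java.util.List;

public final class ExploringSelectionSketch {
    /** Considers every contiguous window of candidate files, keeps windows
     *  where each file satisfies the ratio and the min/max count bounds, and
     *  returns the window maximizing (file count / total size). */
    static List<Long> select(List<Long> sizes, double ratio, int minFiles, int maxFiles) {
        List<Long> best = List.of();
        double bestScore = -1;
        for (int start = 0; start < sizes.size(); start++) {
            int maxEnd = Math.min(sizes.size(), start + maxFiles);
            for (int end = start + minFiles; end <= maxEnd; end++) {
                List<Long> window = sizes.subList(start, end);
                if (!everyFileSatisfiesRatio(window, ratio)) {
                    continue;
                }
                long total = window.stream().mapToLong(Long::longValue).sum();
                // Score per the proposal: (# storefiles in set) / Sum(FileSize(fx)).
                double score = (double) window.size() / total;
                if (score > bestScore) {
                    bestScore = score;
                    best = window;
                }
            }
        }
        return best;
    }

    // Each file must be no larger than ratio * (sum of the other files in the set).
    static boolean everyFileSatisfiesRatio(List<Long> window, double ratio) {
        long total = window.stream().mapToLong(Long::longValue).sum();
        for (long s : window) {
            if (s > ratio * (total - s)) {
                return false;
            }
        }
        return true;
    }
}
```

Note how the score rewards small, many-file sets: rewriting less data while removing more files gives the biggest seek reduction per byte rewritten.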
Change history:
- Attachment: HBASE-7842-1.patch
- Affects Version/s: 0.96.0
- Status: Open → Patch Available → Resolved → Closed
- Hadoop Flags: Reviewed
- Fix Version/s: 0.95.1, 0.98.0
- Resolution: Fixed