I have a busy region with 43 StoreFiles (> compactionThreshold=8).
Now I have stopped the client, so no new data is being put in, and I expected these StoreFiles to be compacted eventually.
But almost a day later, those 43 StoreFiles are still there.
(Note: major compaction is disabled in my HBase instance.)
It seems that minor compactions are not started continuously to compact the remaining StoreFiles.
I checked the code, and this is indeed the case.
After more testing, the obvious problem is that the completion of a minor compaction does not check whether the current StoreFiles need another minor compaction.
I think this is a bug, or at least an oversight.
Try this test:
1. Put data into a region quickly until 30 StoreFiles accumulate, because the background compactions cannot keep up with the fast puts. (hbase.hstore.compactionThreshold=8, hbase.hstore.compaction.max=12; see the configuration sketch after this list.)
2. Then stop the puts.
3. These 30 StoreFiles remain for a long time (no automatic minor compaction runs).
4. Submit a compaction on this region: only 12 files are compacted, leaving 19 StoreFiles, and minor compaction stops there.
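For reference, here is a minimal sketch of the test configuration, set programmatically (the same values can go in hbase-site.xml). The property names are the standard ones; hbase.hregion.majorcompaction=0 is my assumption for how major compaction was disabled, as noted above.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTestConfig {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    // A minor compaction is considered once a store has more than 8 files.
    conf.setInt("hbase.hstore.compactionThreshold", 8);
    // At most 12 StoreFiles are merged in one minor compaction.
    conf.setInt("hbase.hstore.compaction.max", 12);
    // Disable time-based major compaction (assumption: this is how it was turned off).
    conf.setLong("hbase.hregion.majorcompaction", 0);
    return conf;
  }
}
```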
I think that when a minor compaction completes, it should check whether the number of StoreFiles is still over the threshold; if so, another minor compaction should be started immediately. A sketch of this re-check follows.
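Here is a minimal sketch of the proposed re-check. Store, getStorefilesCount(), and the compaction queue are simplified stand-ins for the region server internals, not the actual patch; only the re-check logic is the point.

```java
import java.util.Queue;

// Hypothetical stand-in for the region server's store abstraction.
interface Store {
  void compact();           // merges at most hbase.hstore.compaction.max files
  int getStorefilesCount(); // number of StoreFiles currently in the store
}

class CompactionWorker {
  private final int compactionThreshold;
  private final Queue<Store> compactionQueue;

  CompactionWorker(int compactionThreshold, Queue<Store> compactionQueue) {
    this.compactionThreshold = compactionThreshold;
    this.compactionQueue = compactionQueue;
  }

  void runCompaction(Store store) {
    store.compact();
    // The fix: after a minor compaction completes, re-check the store.
    // If it is still over compactionThreshold, queue it again instead of
    // waiting for the next flush to trigger a new compaction request.
    if (store.getStorefilesCount() > compactionThreshold) {
      compactionQueue.add(store);
    }
  }
}
```

With this change, the test above would compact 12 files, see 19 > 8, and immediately queue another minor compaction until the store drops to the threshold.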
| Field | Original Value | New Value |
| --- | --- | --- |
| Summary | Minor compaction does not be started continuiously to compact remaining storefiles (>compactionThreshold) | Minor compaction needs to check if still over compactionThreshold after compacting |
| Fix Version/s | | 0.92.0 [ 12314223 ] |
| Assignee | | Nicolas Spiegelberg [ nspiegelberg ] |
| Status | Open [ 1 ] | Patch Available [ 10002 ] |
| Status | Patch Available [ 10002 ] | Resolved [ 5 ] |
| Resolution | | Fixed [ 1 ] |
| Status | Resolved [ 5 ] | Closed [ 6 ] |