- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: 3.0.0-alpha-1, 2.3.0
- Component/s: None
- Labels: None
- Hadoop Flags: Reviewed
Address the comment in HBASE-22387:
ByteBuffAllocator#getFreeBufferCount has O(N) complexity, because the buffers are kept in a ConcurrentLinkedQueue. It's worth filing an issue for this.
Also, I think we should use the allocated bytes instead of the allocation count to evaluate the heap allocation percentage, so that we can decide whether the ByteBuffers are too small and whether there will be higher GC pressure. Consider this case: the buffer size is 64KB, and each time we have a block of 65KB, so each block causes one heap allocation (1KB) and one pool allocation (64KB). If we only consider the allocation count, the heap allocation ratio is 1 / (1 + 1) = 50%, but if we consider the allocated bytes, the ratio is 1KB / 65KB ≈ 1.5%.
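As a hedged illustration of the arithmetic above (the class and counter names here are hypothetical, not the actual ByteBuffAllocator API), tracking allocations by bytes rather than by count gives a very different ratio for the 64KB-buffer / 65KB-block case:

```java
// Hypothetical sketch: comparing a count-based vs. a bytes-based
// heap allocation ratio, as discussed in the comment above.
public class AllocRatioSketch {
    static final int BUFFER_SIZE = 64 * 1024; // pool buffer size (64KB)

    // Assumed counters; the real allocator's bookkeeping may differ.
    long heapAllocBytes = 0;
    long poolAllocBytes = 0;
    long heapAllocCount = 0;
    long poolAllocCount = 0;

    // Allocate `len` bytes: whole 64KB buffers come from the pool,
    // the remainder falls back to a heap allocation.
    void allocate(int len) {
        int pooled = (len / BUFFER_SIZE) * BUFFER_SIZE;
        int onHeap = len - pooled;
        if (pooled > 0) {
            poolAllocBytes += pooled;
            poolAllocCount += pooled / BUFFER_SIZE;
        }
        if (onHeap > 0) {
            heapAllocBytes += onHeap;
            heapAllocCount++;
        }
    }

    double heapRatioByCount() {
        return (double) heapAllocCount / (heapAllocCount + poolAllocCount);
    }

    double heapRatioByBytes() {
        return (double) heapAllocBytes / (heapAllocBytes + poolAllocBytes);
    }

    public static void main(String[] args) {
        AllocRatioSketch a = new AllocRatioSketch();
        a.allocate(65 * 1024); // one 65KB block: 64KB pooled + 1KB on heap
        // by count: 1 / (1 + 1) = 50%; by bytes: 1KB / 65KB ≈ 1.5%
        System.out.printf("byCount=%.1f%% byBytes=%.1f%%%n",
            a.heapRatioByCount() * 100, a.heapRatioByBytes() * 100);
    }
}
```

Running this for a single 65KB block reproduces the numbers from the comment: 50% by count but only about 1.5% by bytes, which better reflects the real GC pressure.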
If the heap allocation percentage is less than hbase.ipc.server.reservoir.minimal.allocating.size / hbase.ipc.server.allocator.buffer.size, then the allocator works fine; otherwise it is overloaded.
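The overload check described above could be sketched as follows; the configuration names come from the issue text, but the method, class, and the concrete default values are assumptions for illustration:

```java
// Hypothetical sketch of the "allocator is overloaded" check.
public class AllocatorHealthCheck {
    // Values assumed for illustration; actual HBase defaults may differ.
    // hbase.ipc.server.reservoir.minimal.allocating.size (assumed 8KB)
    static final int MIN_ALLOCATE_SIZE = 8 * 1024;
    // hbase.ipc.server.allocator.buffer.size (64KB)
    static final int BUFFER_SIZE = 64 * 1024;

    // Requests below MIN_ALLOCATE_SIZE always go on heap, so a bytes-based
    // heap ratio up to MIN_ALLOCATE_SIZE / BUFFER_SIZE is expected even
    // when the pool is healthy; anything above that suggests overload.
    static boolean isOverloaded(double heapAllocBytesRatio) {
        double expected = (double) MIN_ALLOCATE_SIZE / BUFFER_SIZE; // 12.5% here
        return heapAllocBytesRatio > expected;
    }

    public static void main(String[] args) {
        System.out.println(isOverloaded(0.015)); // 1.5% <= 12.5% -> false
        System.out.println(isOverloaded(0.50));  // 50%  >  12.5% -> true
    }
}
```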