Details
Type: Bug
Status: Open
Priority: Major
Resolution: Unresolved
Description
A busy, smart internal user was finding their logs filled with messages like this:
BucketCache: Failed allocation for c326ba1d8a134b4487539239fce60a99_0; org.apache.hadoop.hbase.io.hfile.bucket.BucketAllocatorException: Allocation too big size=1113254; adjust BucketCache sizes hbase.bucketcache.bucket.sizes to accomodate if size seems reasonable and you want it cached.
... and wasn't sure how to address them.
Had to explain what the above is about: the BucketCache has no bucket size large enough to hold this block, so the allocation fails and the block is simply not cached.
Need to doc that c326ba1d8a134b4487539239fce60a99_0 is a store file name plus an offset into that file. Need to doc how to inspect the block we are trying to cache using the hfile tool; see the example below.
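For example, something like the following prints the file's metadata and, if your version of the hfile tool supports the -s option, key/value size statistics, which helps judge whether a ~1MB block is expected. The HDFS path here is hypothetical; substitute the table, region, and column family directory the store file actually lives under.
# Hypothetical path; only the file name comes from the log message above.
# -m prints the HFile meta block, -s prints key/value size statistics.
hbase hfile -m -s -f hdfs://namenode:8020/hbase/data/default/mytable/<region>/<cf>/c326ba1d8a134b4487539239fce60a99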
Then had to describe how you'd add a bucket size that is big enough. Below I add 786432 and 1048576 beyond the default bucket sizes. (Note that the failed allocation above was 1113254 bytes, so a block that size still needs a bucket of at least 1113254 bytes; the point is to add sizes that cover the blocks your workload actually serves.)
<property>
  <name>hbase.bucketcache.bucket.sizes</name>
  <value>4096, 8192, 16384, 32768, 40960, 49152, 57344, 65536, 98304, 131072, 196608, 262144, 393216, 524288, 786432, 1048576</value>
</property>
The operator also asked how they could tell the rates at which Cells of different sizes are being served.
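There is no per-cell-size rate metric that directly answers this; as a rough starting point (assuming the default RegionServer info port of 16030 and an HBase 1.x-style metrics bean name), the RegionServer's JMX servlet exposes block cache and request counters:
# Assumes the default RegionServer info port (16030); adjust host and port for your deployment.
# The qry parameter narrows the output to the RegionServer's server-side metrics bean.
curl 'http://regionserver.example.com:16030/jmx?qry=Hadoop:service=HBase,name=RegionServer,sub=Server'
This gives block cache hit/miss counts and sizes rather than a cell-size histogram; per-file key/value size distributions can be pulled from the hfile tool stats shown earlier.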
Also, while looking at the bucket numbers in the L2 tab of the UI, it became obvious that bucket sizes, accesses, and frees should be shown as summary stats rather than per bucket instance. TODO.
Anyway, this issue is about doc'ing how to deal with the above. We'll see this message more often once the bucket cache is on by default, as we'd like to do in hbase 2.0.