On a production cluster with a complex iterator tree, a single large value (~350 MB) was causing a tserver with a 4 GB heap to fail with an OutOfMemoryError.
There were several factors contributing to the problem:
- a bug: the query should not have been reading the large value at all
- a complex iterator tree, which caused many copies of the value to be held in memory at the same time
- RFile doubles the buffer it uses to load values, and continues to use that enlarged buffer for subsequent values
This ticket addresses the last point. If we know we are never going to look at a value, we can read past it without storing it in memory. A value of this size should fit in memory the few times needed to return it to the caller, so merely skipping past it should not run the server out of memory.
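A minimal sketch of the idea, in plain Java rather than Accumulo's actual RFile code: when a value will not be examined, skip its bytes on the stream instead of allocating a buffer for them. The names `readValue` and `skipValue` and the length-prefixed encoding are illustrative assumptions, not RFile's real format.

```java
import java.io.DataInputStream;
import java.io.IOException;

// Sketch: length-prefixed values on a stream; skip unwanted values
// instead of buffering them. Illustrative only, not RFile's format.
public class ValueSkipper {

  // Load the next value into a buffer sized exactly for it.
  static byte[] readValue(DataInputStream in) throws IOException {
    int len = in.readInt();
    byte[] buf = new byte[len];
    in.readFully(buf);
    return buf;
  }

  // Advance past the next value without allocating a buffer for it.
  static void skipValue(DataInputStream in) throws IOException {
    int len = in.readInt();
    int skipped = 0;
    while (skipped < len) {
      long n = in.skip(len - skipped);
      if (n <= 0) { // skip() may return 0; fall back to a single read
        if (in.read() < 0) throw new IOException("unexpected EOF");
        n = 1;
      }
      skipped += n;
    }
  }
}
```

With this pattern, a 350 MB value that the query will never inspect costs only stream positioning, not a 350 MB (or doubled) heap allocation.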
The iterators provided under core/org/apache/accumulo/iterators should be revisited to ensure that they properly set the seekColumnFamilies where necessary, the IntersectingIterator in particular.
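To illustrate why passing column families at seek time matters, here is a toy in-memory iterator (not Accumulo's SortedKeyValueIterator interface) that honors a seek-time family set, so entries in unrequested families are never surfaced to the caller. All names here are hypothetical.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.Set;
import java.util.SortedMap;

// Toy sketch of seek-time column-family filtering; keys are encoded
// as "family:qualifier" strings for simplicity.
public class FamilyFilterIterator {
  private final SortedMap<String, byte[]> data;
  private Iterator<Map.Entry<String, byte[]>> it;
  private Map.Entry<String, byte[]> top;
  private Set<String> families;
  private boolean inclusive;

  public FamilyFilterIterator(SortedMap<String, byte[]> data) {
    this.data = data;
  }

  // Analogous to seek(range, columnFamilies, inclusive): remember the
  // requested families and advance to the first matching entry.
  public void seek(Set<String> columnFamilies, boolean inclusive) {
    this.families = columnFamilies;
    this.inclusive = inclusive;
    it = data.entrySet().iterator();
    next();
  }

  public boolean hasTop() { return top != null; }

  public String topKey() { return top.getKey(); }

  public void next() {
    top = null;
    while (it.hasNext()) {
      Map.Entry<String, byte[]> e = it.next();
      String family = e.getKey().split(":", 2)[0];
      // Keep the entry only if its family membership matches the
      // inclusive/exclusive request made at seek time.
      if (families.contains(family) == inclusive) {
        top = e;
        return;
      }
    }
  }
}
```

An iterator like IntersectingIterator that only consults its index families could seek with those families, letting lower layers skip the (potentially huge) values in other families entirely.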