This bug is rather easy to hit if the TimeoutMonitor is on; otherwise I think it's still possible to hit if a region fails to open for more obscure reasons like HDFS errors.
Consider a region that just went through distributed splitting and is now being opened by a new RS. The first thing the RS does is read the recovery files and put the edits into the MemStores. If this process takes a long time, the master will move the region away. At that point the edits are still accounted for in the global MemStore size, but they are dropped when the HRegion gets cleaned up. The leak is completely invisible until the MemStoreFlusher needs to force flush a region and finds that none of them has any edits:
2012-03-21 00:33:39,303 DEBUG org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush thread woke up because memory above low water=5.9g
2012-03-21 00:33:39,303 ERROR org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Cache flusher failed for entry null
The null here is a region. In my case there were so many edits in the MemStores during recovery that the global size is over the low-water mark even though it is actually 0. This started yesterday and it's still printing this out.
To fix this we need to decrement the global MemStore size when a region fails to open, at the point where its HRegion is cleaned up.
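The leak and the proposed fix can be sketched with a minimal counter, loosely modeled on how a region server tracks its global MemStore size. This is a hedged illustration, not actual HBase code: the class and method names below (GlobalMemStoreAccounting, addRegionEdits, dropRegionBuggy, dropRegionFixed) are hypothetical.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical stand-in for the region server's global MemStore
// size accounting; not a real HBase class.
class GlobalMemStoreAccounting {
    private final AtomicLong globalMemStoreSize = new AtomicLong(0);

    // Replaying recovered edits adds their size to the global counter.
    long addRegionEdits(long bytes) {
        return globalMemStoreSize.addAndGet(bytes);
    }

    // Buggy cleanup path: the region's MemStore contents are thrown
    // away when the open fails, but the global counter is never
    // decremented, so the flusher keeps seeing phantom memory above
    // the low-water mark with no region to flush.
    void dropRegionBuggy(long regionMemStoreSize) {
        // intentionally does not touch the counter: this is the leak
    }

    // Proposed fix: when the region can't open, subtract its
    // MemStore size from the global total during cleanup.
    void dropRegionFixed(long regionMemStoreSize) {
        globalMemStoreSize.addAndGet(-regionMemStoreSize);
    }

    long size() {
        return globalMemStoreSize.get();
    }
}
```

With the buggy path the counter stays inflated forever after the region is moved away; with the fixed path it returns to the true total, so the flusher's low-water check reflects reality again.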