The following sequence of events can occur during a call to HRegion.batchMutate():
1. a caller invokes HRegion.batchMutate() with a batch of N > 1 mutations
2. batchMutate() acquires the region lock in startRegionOperation(), then calls doMiniBatchMutation()
3. doMiniBatchMutation() acquires the row lock for the first row
4. the region begins closing
5. doMiniBatchMutation() attempts to acquire the row lock for the second row
At this point, the row-lock acquisition also attempts to acquire the region lock, which fails because the region is closing. doMiniBatchMutation() then stops writing further rows, BUT it still writes the data for the rows whose locks were already acquired, and advances the index in MiniBatchOperationInProgress. After it returns successfully, batchMutate() loops around a second time and again attempts to acquire the region lock; this time a NotServingRegionException is thrown back to the caller.
Thus, we have a race condition in which partial data can be written while a region is closing.
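The failure mode above can be illustrated with a small standalone model (plain Java, not actual HBase code; names mirror the HRegion methods but the bodies are simplifications). Here the region begins closing immediately after the first row lock is granted, matching step 4:

```java
import java.util.ArrayList;
import java.util.List;

public class PartialWriteRace {
    static class NotServingRegionException extends RuntimeException {}

    boolean closing = false;            // region close flag
    int locksGranted = 0;
    final List<String> written = new ArrayList<>();

    // models getRowLock(): each acquisition also checks region state;
    // here the region begins closing right after the first lock (step 4)
    void getRowLock(String row) {
        if (closing) throw new NotServingRegionException();
        locksGranted++;
        if (locksGranted == 1) closing = true;   // region closes mid-batch
    }

    // models doMiniBatchMutation(): stops at the first failed lock but
    // still writes the rows whose locks were already acquired, and
    // advances the batch index
    int doMiniBatchMutation(String[] rows, int firstIndex) {
        int i = firstIndex;
        List<String> locked = new ArrayList<>();
        while (i < rows.length) {
            try {
                getRowLock(rows[i]);
            } catch (NotServingRegionException e) {
                break;                  // terminate the mini-batch early
            }
            locked.add(rows[i]);
            i++;
        }
        written.addAll(locked);         // the partial write happens here
        return i;
    }

    // models batchMutate(): re-checks region state on every iteration,
    // so the second pass throws after the first pass already wrote data
    void batchMutate(String[] rows) {
        int index = 0;
        while (index < rows.length) {
            if (closing) throw new NotServingRegionException();
            index = doMiniBatchMutation(rows, index);
        }
    }

    public static void main(String[] args) {
        PartialWriteRace region = new PartialWriteRace();
        try {
            region.batchMutate(new String[]{"row1", "row2", "row3"});
        } catch (NotServingRegionException e) {
            System.out.println("caller sees NotServingRegionException, yet written = "
                + region.written);      // row1 was persisted anyway
        }
    }
}
```

The caller observes only the exception, with no indication that the first row was durably written.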
The main problem stems from where the startRegionOperation() calls sit in batchMutate() and doMiniBatchMutation():
1. batchMutate() re-acquires the region lock on each iteration of its loop, so some writes can succeed while later ones fail
2. getRowLock() attempts to acquire the region lock once per row, which lets doMiniBatchMutation() terminate early; this forces batchMutate() into multiple iterations and triggers condition 1.
There appear to be two parts to the solution as well:
1. open an internal path so that doMiniBatchMutation() can acquire row locks without checking for region closure; this also brings a significant performance improvement for large batch mutations.
2. move startRegionOperation() out of the loop in batchMutate() so that multiple iterations of doMiniBatchMutation() cannot cause the operation to fail.
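The two-part fix can be sketched with the same kind of standalone model (not actual HBase code; getRowLockInternal() is a hypothetical name for the internal lock path). With the region-state check done once up front, the batch either starts and runs to completion or is rejected before anything is written:

```java
import java.util.ArrayList;
import java.util.List;

public class AtomicBatchSketch {
    static class NotServingRegionException extends RuntimeException {}

    boolean closing = false;
    final List<String> written = new ArrayList<>();

    void startRegionOperation() {
        if (closing) throw new NotServingRegionException();
    }

    // part 1 of the fix: an internal path that takes only the per-row
    // lock, without re-checking region state for every row
    void getRowLockInternal(String row) {
        // per-row lock acquisition only; region closure was already
        // checked once in batchMutate()
    }

    int doMiniBatchMutation(String[] rows, int firstIndex) {
        for (int i = firstIndex; i < rows.length; i++) {
            getRowLockInternal(rows[i]);
            written.add(rows[i]);
        }
        return rows.length;             // the whole mini-batch completed
    }

    // part 2 of the fix: startRegionOperation() hoisted out of the loop,
    // so extra iterations of doMiniBatchMutation() cannot fail it
    void batchMutate(String[] rows) {
        startRegionOperation();
        try {
            int index = 0;
            while (index < rows.length) {
                index = doMiniBatchMutation(rows, index);
            }
        } finally {
            // closeRegionOperation() would release the region lock here
        }
    }

    public static void main(String[] args) {
        AtomicBatchSketch region = new AtomicBatchSketch();
        region.batchMutate(new String[]{"row1", "row2", "row3"});
        System.out.println("written = " + region.written);  // all three rows

        AtomicBatchSketch closed = new AtomicBatchSketch();
        closed.closing = true;
        try {
            closed.batchMutate(new String[]{"row1"});
        } catch (NotServingRegionException e) {
            System.out.println("rejected up front, written = " + closed.written);
        }
    }
}
```

In this shape a close that begins mid-batch waits for (or is serialized against) the in-flight operation rather than interleaving with it, so the caller never sees an exception after a partial write.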