Description
I'm inspecting an interesting case where, in a production cluster, some regionservers end up accumulating hundreds of WAL files, even with force flushes going on due to max logs. This has happened multiple times on this cluster, but not on other clusters. The cluster has the periodic memstore flusher disabled; however, that still does not explain why the force flush of regions due to the max-logs limit is not working. I think the periodic memstore flusher just masks the underlying problem, which is why we do not see this on other clusters.
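For context, here is a minimal, self-contained sketch of the max-logs flush mechanism described above. This is not the actual FSHLog/SequenceIdAccounting code; all class, method, and variable names are illustrative. The idea is that the WAL tracks the lowest unflushed sequence id per region, and when the WAL file count exceeds maxlogs it forces a flush of whichever regions still pin the oldest WAL file.

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only, not the HBase implementation.
public class MaxLogsFlushSketch {

  // encodedRegionName -> lowest sequence id that has not been flushed yet
  private final Map<String, Long> lowestUnflushedSeqIds = new HashMap<>();

  void onAppend(String encodedRegionName, long seqId) {
    // Remember the first unflushed edit per region.
    lowestUnflushedSeqIds.putIfAbsent(encodedRegionName, seqId);
  }

  void onFlushCompleted(String encodedRegionName) {
    // Once a region's memstore is flushed, its edits no longer pin any WAL file.
    lowestUnflushedSeqIds.remove(encodedRegionName);
  }

  /** Regions whose oldest unflushed edit is at or below the oldest WAL file's highest seqId. */
  List<String> findRegionsToForceFlush(long oldestWalHighestSeqId) {
    List<String> regions = new ArrayList<>();
    for (Map.Entry<String, Long> e : lowestUnflushedSeqIds.entrySet()) {
      if (e.getValue() <= oldestWalHighestSeqId) {
        regions.add(e.getKey());
      }
    }
    return regions;
  }

  public static void main(String[] args) {
    MaxLogsFlushSketch wal = new MaxLogsFlushSketch();
    wal.onAppend("d4cf39dc40ea79f5da4d0cf66d03cb1f", 100L);
    wal.onAppend("anotherRegion", 5000L);
    // Suppose the oldest WAL file covers seqIds up to 1000 and logs > maxlogs:
    // only the first region still pins that file, so it is the flush candidate.
    System.out.println(wal.findRegionsToForceFlush(1000L));
  }
}
{code}

In this model, a region stops pinning old WAL files only when its entry is removed on flush completion; if that removal never happens, the same region is nominated on every log roll.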
The problem starts like this:
2016-09-21 17:49:18,272 INFO [regionserver//10.2.0.55:16020.logRoller] wal.FSHLog: Too many wals: logs=33, maxlogs=32; forcing flush of 1 regions(s): d4cf39dc40ea79f5da4d0cf66d03cb1f
2016-09-21 17:49:18,273 WARN [regionserver//10.2.0.55:16020.logRoller] regionserver.LogRoller: Failed to schedule flush of d4cf39dc40ea79f5da4d0cf66d03cb1f, region=null, requester=null
Then it continues until the RS is restarted:
2016-09-23 17:43:49,356 INFO [regionserver//10.2.0.55:16020.logRoller] wal.FSHLog: Too many wals: logs=721, maxlogs=32; forcing flush of 1 regions(s): d4cf39dc40ea79f5da4d0cf66d03cb1f
2016-09-23 17:43:49,357 WARN [regionserver//10.2.0.55:16020.logRoller] regionserver.LogRoller: Failed to schedule flush of d4cf39dc40ea79f5da4d0cf66d03cb1f, region=null, requester=null
The problem is that region d4cf39dc40ea79f5da4d0cf66d03cb1f was already split some time ago, and it was able to flush its data and split without any problems. However, the FSHLog still thinks that there is unflushed data for this region.
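To make the failure mode concrete, here is a small hypothetical sketch of what the logs suggest is happening; this is again not the HBase source, and the region names, sequence ids, and counts are made up. If the unflushed-seqId entry for the split parent is never cleaned up, every log roll nominates the same offline region, the flush can never be scheduled (hence "region=null, requester=null"), and no WAL file below the stale sequence id can ever be archived, so the file count grows without bound.

{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch only, not the HBase implementation.
public class StaleSeqIdSketch {

  public static void main(String[] args) {
    Map<String, Long> lowestUnflushedSeqIds = new HashMap<>();
    Set<String> onlineRegions = new HashSet<>();

    // The daughter regions are online; the split parent is not, yet its
    // unflushed-seqId entry was (incorrectly) left behind.
    onlineRegions.add("daughterA");
    onlineRegions.add("daughterB");
    lowestUnflushedSeqIds.put("d4cf39dc40ea79f5da4d0cf66d03cb1f", 100L);

    int walFiles = 33;
    int maxLogs = 32;
    for (int roll = 0; roll < 3 && walFiles > maxLogs; roll++) {
      for (Map.Entry<String, Long> e : lowestUnflushedSeqIds.entrySet()) {
        String region = e.getKey();
        System.out.printf("Too many wals: logs=%d, maxlogs=%d; forcing flush of %s%n",
            walFiles, maxLogs, region);
        if (!onlineRegions.contains(region)) {
          // Mirrors "Failed to schedule flush of ..., region=null, requester=null":
          // the flush cannot run, the stale entry is never cleared, and the oldest
          // WAL file can never be archived.
          System.out.println("Failed to schedule flush of " + region);
        }
      }
      walFiles++; // each roll adds a file; none can be archived
    }
  }
}
{code}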
Issue Links
- is related to
  - HBASE-23181 Blocked WAL archive: "LogRoller: Failed to schedule flush of XXXX, because it is not online on us" (Resolved)
- relates to
  - HBASE-16820 BulkLoad mvcc visibility only works accidentally (Resolved)
  - HBASE-23157 WAL unflushed seqId tracking may wrong when Durability.ASYNC_WAL is used (Resolved)