Description
ACCUMULO-2264 revealed a problem in how the tserver handles conflicts between its own settings and the restrictions of the underlying filesystem.
In the case of ACCUMULO-2264, if the tserver is configured with a WAL block size smaller than the minimum allowed by HDFS, the tserver sits in an infinite loop.
The tserver should check the minimum block size (the property is dfs.namenode.fs-limits.min-block-size) and then either issue a WARN/ERROR to the client and fall back to the minimum, or fail loudly and refuse to start. I favor the latter.
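A minimal sketch of the fail-fast option, assuming the tserver can read both its configured WAL block size and the value of dfs.namenode.fs-limits.min-block-size at startup. The class and method names here are illustrative, not Accumulo's actual API; a real implementation would pull both values from Accumulo's and Hadoop's configuration objects:

```java
public class WalBlockSizeCheck {

    // Hypothetical startup check: validate the configured WAL block size
    // against the HDFS minimum (dfs.namenode.fs-limits.min-block-size) and
    // refuse to start rather than looping forever on failed WAL creation.
    static long validateWalBlockSize(long configuredWalBlockSize, long hdfsMinBlockSize) {
        if (configuredWalBlockSize < hdfsMinBlockSize) {
            throw new IllegalStateException(
                "Configured WAL block size " + configuredWalBlockSize
                + " is below the HDFS minimum (dfs.namenode.fs-limits.min-block-size="
                + hdfsMinBlockSize + "); refusing to start");
        }
        return configuredWalBlockSize;
    }

    public static void main(String[] args) {
        // A configured size at or above the minimum passes through unchanged.
        System.out.println(validateWalBlockSize(2L * 1024 * 1024, 1024L * 1024));

        // A size below the minimum fails loudly at startup.
        try {
            validateWalBlockSize(512L * 1024, 1024L * 1024);
        } catch (IllegalStateException e) {
            System.out.println("refused: " + e.getMessage());
        }
    }
}
```

The alternative (WARN and clamp to the minimum) would replace the throw with a log statement and `return hdfsMinBlockSize;`, at the cost of silently overriding the operator's configuration.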
Issue Links
- relates to
  - ACCUMULO-2264 KilledTabletServerSplitTest fails on Hadoop2 (Resolved)
  - HDFS-4305 Add a configurable limit on number of blocks per file, and min block size (Closed)