HDFS-457 introduced an improvement that allows a DataNode to continue operating when a volume used for replica storage fails. Previously, a DataNode shut itself down if any volume failed.
Before that change, the implementation shut the DataNode down completely when any one of the configured storage volumes failed. This is wasteful behavior because it decreases utilization (good storage becomes unavailable) and imposes extra load on the system (re-replication of the blocks from the good volumes). These problems will become even more prominent as we move to mixed (heterogeneous) clusters with many more volumes per DataNode.
I suggest the following additional tests for this improvement.
#1 Test successive volume failures (with a minimum of 4 volumes).
#2 Test that each volume failure is reported as a reduction in available DFS space and remaining space.
#3 Test that failure of all volumes on a DataNode leads to failure of the DataNode itself.
#4 Test that repairing a failed storage disk is detected and increments the available DFS space.
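The scenarios above can be sketched with a small, self-contained model of the expected behavior: tolerate individual volume failures, reduce reported capacity per failure, and shut down only when too few valid volumes remain. This is illustrative only; names such as `VolumeSet`, `failVolume`, and `minValidVolumes` are hypothetical, not the actual HDFS classes or configuration keys.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical model of a DataNode's volume set, sketching what the
 *  proposed tests #1-#4 would verify. Not actual HDFS code. */
public class VolumeSet {
    private final List<Boolean> healthy = new ArrayList<>();
    private final long bytesPerVolume;
    private final int minValidVolumes; // cf. the configurable minimum in HDFS-1161

    public VolumeSet(int volumes, long bytesPerVolume, int minValidVolumes) {
        for (int i = 0; i < volumes; i++) healthy.add(true);
        this.bytesPerVolume = bytesPerVolume;
        this.minValidVolumes = minValidVolumes;
    }

    /** Mark one volume as failed (tests #1 and #3). */
    public void failVolume(int index) { healthy.set(index, false); }

    /** Mark a repaired volume as healthy again (test #4). */
    public void repairVolume(int index) { healthy.set(index, true); }

    public int validVolumes() {
        int n = 0;
        for (boolean h : healthy) if (h) n++;
        return n;
    }

    /** Reported capacity drops with each failed volume (test #2). */
    public long availableCapacity() {
        return (long) validVolumes() * bytesPerVolume;
    }

    /** The DataNode should stop only when fewer than the configured
     *  minimum of valid volumes remain (tests #1 and #3). */
    public boolean shouldShutdown() {
        return validVolumes() < minValidVolumes;
    }
}
```

A real test would drive the same assertions against a mini cluster by injecting disk failures, but the invariants checked would be the ones modeled here.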
- is blocked by
HDFS-1161 Make DN minimum valid volumes configurable
- is depended upon by
HDFS-556 Provide info on failed volumes in the web ui