Details
Type: Sub-task
Status: Resolved
Priority: Minor
Resolution: Duplicate
Description
Dear HDFS developers, we are developing a tool to detect exception-related bugs in Java. Our prototype has spotted the following throw statement whose exception class and error message indicate different error conditions.
Version: Hadoop-3.1.2
File: HADOOP-ROOT/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
Lines: 294-297
throw new DiskErrorException("Invalid value configured for "
    + "dfs.datanode.failed.volumes.tolerated - " + volFailuresTolerated
    + ". Value configured is either less than maxVolumeFailureLimit or greater than "
    + "to the number of configured volumes (" + volsConfigured + ").");
A DiskErrorException means an error has occurred while the process is interacting with the disk. For example, org.apache.hadoop.util.DiskChecker.checkDirInternal() contains the following code (lines 97-98):
throw new DiskErrorException("Cannot create directory: " + dir.toString());
However, the error message of the first exception indicates that dfs.datanode.failed.volumes.tolerated is configured incorrectly, which means there is nothing wrong with the disk (yet). This mismatch could be a problem. For example, callers trying to handle other DiskErrorExceptions may accidentally (and incorrectly) handle this configuration error as well.
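One possible direction, offered only as a sketch and not a tested patch, would be to throw a configuration-oriented exception for this validation failure so that callers catching DiskErrorException only see genuine disk failures. The choice of org.apache.hadoop.HadoopIllegalArgumentException below is an assumption; the exact exception class would be up to the maintainers:
// Sketch only (assumption, not a patch): report the bad configuration value
// with an exception class that matches the actual error condition, instead
// of DiskErrorException, which implies a disk-level failure.
throw new org.apache.hadoop.HadoopIllegalArgumentException(
    "Invalid value configured for dfs.datanode.failed.volumes.tolerated - "
    + volFailuresTolerated + ". Value configured is either less than "
    + "maxVolumeFailureLimit or greater than the number of configured "
    + "volumes (" + volsConfigured + ").");
With a change along these lines, any caller that retries or takes a volume offline in response to DiskErrorException would no longer be triggered by a purely configuration-related problem.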