Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Not A Bug
- Affects Version/s: 2.7.1
Description
The datanode failed to start and exited when I reused the dfs.datanode.failed.volumes.tolerated=5 setting from another of my clusters, but the new cluster has only one data dir path. This triggered the "Invalid volume failure config value" check and threw a DiskErrorException, so the datanode shut down. The log is below:
2016-04-07 09:34:45,358 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage for block pool: BP-1239160341-xx.xx.xx.xx-1459929303126 : BlockPoolSliceStorage.recoverTransitionRead: attempt to load an used block storage: /home/data/hdfs/data/current/BP-1239160341-xx.xx.xx.xx-1459929303126
2016-04-07 09:34:45,358 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to /xx.xx.xx.xx:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:477)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1361)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1326)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:316)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:801)
        at java.lang.Thread.run(Thread.java:745)
2016-04-07 09:34:45,358 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to /xx.xx.xx.xx:9000. Exiting.
org.apache.hadoop.util.DiskChecker$DiskErrorException: Invalid volume failure config value: 5
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:281)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1374)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1326)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:316)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:801)
        at java.lang.Thread.run(Thread.java:745)
2016-04-07 09:34:45,358 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to /xx.xx.xx.xx:9000
2016-04-07 09:34:45,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to /xx.xx.xx.xx:9000
2016-04-07 09:34:45,460 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2016-04-07 09:34:47,460 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2016-04-07 09:34:47,462 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2016-04-07 09:34:47,463 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
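From the stack trace, the DiskErrorException comes from the FsDatasetImpl constructor, which rejects a tolerated-failures value that is not smaller than the number of configured data dirs. The small demo below is illustrative only (not the 2.7.1 source); it just reproduces the kind of guard that rejects 5 when only one dfs.datanode.data.dir entry exists:

import org.apache.hadoop.conf.Configuration;

/** Illustrative only: shows why tolerated=5 is rejected when one data dir is configured. */
public class GuardDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setInt("dfs.datanode.failed.volumes.tolerated", 5); // value reused from the other cluster
    int volsConfigured = 1; // the new cluster has a single dfs.datanode.data.dir entry

    int tolerated = conf.getInt("dfs.datanode.failed.volumes.tolerated", 0);
    if (tolerated < 0 || tolerated >= volsConfigured) {
      // Roughly the condition that ends in
      // DiskChecker$DiskErrorException: Invalid volume failure config value: 5
      System.out.println("Invalid volume failure config value: " + tolerated);
    }
  }
}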
IMO, this is a bad experience for users, since only a single value was configured incorrectly. Instead of failing, we could log a warning for this and reset the value to the default. That would be a better way to handle this case.
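A minimal sketch of the proposed behavior, under the same assumptions as the guard above: warn and fall back to the default value (0) instead of throwing. The helper class and method below are hypothetical, not actual HDFS code:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

/** Hypothetical helper sketching the proposed "warn and reset to default" handling. */
public class VolumeFailureConfigCheck {
  private static final Log LOG = LogFactory.getLog(VolumeFailureConfigCheck.class);

  /** Returns a usable tolerated-failures value, warning and resetting to the default if invalid. */
  static int resolveVolFailuresTolerated(Configuration conf, int volsConfigured) {
    int tolerated = conf.getInt(
        DFSConfigKeys.DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY,
        DFSConfigKeys.DFS_DATANODE_FAILED_VOLUMES_TOLERATED_DEFAULT);
    if (tolerated < 0 || tolerated >= volsConfigured) {
      // Instead of throwing DiskErrorException, warn and continue with the default value.
      LOG.warn("Invalid value " + tolerated + " for "
          + DFSConfigKeys.DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY
          + " with only " + volsConfigured + " configured volume(s); "
          + "resetting to the default value "
          + DFSConfigKeys.DFS_DATANODE_FAILED_VOLUMES_TOLERATED_DEFAULT);
      tolerated = DFSConfigKeys.DFS_DATANODE_FAILED_VOLUMES_TOLERATED_DEFAULT;
    }
    return tolerated;
  }
}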
Attachments
Issue Links
- is related to: HDFS-10279 Improve validation of the configured number of tolerated failed volumes (Resolved)