Description
This sub-task is a placeholder for all disk-fail-inplace issues related to the Datanode.
Attachments
Issue Links
- incorporates
  - HDFS-1592 Datanode startup doesn't honor volumes.tolerated (Closed)
  - HDFS-1692 In secure mode, Datanode process doesn't exit when disks fail. (Closed)
  - HDFS-1847 Datanodes should decommission themselves on volume failure (Open)
  - HDFS-1848 Datanodes should shutdown when a critical volume fails (Open)
  - HDFS-1849 Respect failed.volumes.tolerated on startup (Closed)
- is part of
  - HADOOP-7123 Hadoop Disk Fail Inplace (Open)
- is related to
  - HADOOP-7431 Test DiskChecker's functionality in identifying bad directories (Part 2 of testing DiskChecker) (Closed)
  - HDFS-2111 Add tests for ensuring that the DN will start with a few bad data directories (Part 1 of testing DiskChecker) (Closed)
- relates to
  - HDFS-1940 Datanode can have more than one copy of same block when a failed disk is coming back in datanode (Open)
  - HADOOP-7040 DiskChecker:mkdirsWithExistsCheck swallows FileNotFoundException. (Closed)
  - HDFS-1934 Fix NullPointerException when File.listFiles() API returns null (Closed)
  - HADOOP-7322 Adding a util method in FileUtil for JDK File.listFiles (Closed)
  - HADOOP-7342 Add an utility API in FileUtil for JDK File.list (Closed)
  - HDFS-2019 Fix all the places where Java method File.list is used with FileUtil.list API (Closed)
  - HDFS-664 Add a way to efficiently replace a disk in a live datanode (Resolved)
  - HDFS-1362 Provide volume management functionality for DataNode (Closed)
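Several of the linked issues (HDFS-1934, HADOOP-7322, HADOOP-7342, HDFS-2019) stem from the same pitfall: `java.io.File.listFiles()` returns `null` rather than throwing when a directory cannot be read, so a failed disk surfaces as a NullPointerException far from the actual I/O error. A minimal sketch of the kind of null-safe wrapper those issues call for (the class name `ListFilesExample` and the exception message are illustrative, not Hadoop's actual `FileUtil` code):

```java
import java.io.File;
import java.io.IOException;

public class ListFilesExample {

    // java.io.File.listFiles() returns null when the path is not a
    // directory or the listing fails (bad disk, permissions, etc.).
    // Converting that null into an IOException forces callers to
    // handle the failure instead of tripping over an NPE later.
    static File[] listFiles(File dir) throws IOException {
        File[] files = dir.listFiles();
        if (files == null) {
            throw new IOException("Could not list the contents of " + dir);
        }
        return files;
    }

    public static void main(String[] args) {
        try {
            listFiles(new File("/definitely/not/a/real/dir"));
            System.out.println("unexpected: listing succeeded");
        } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

With a wrapper like this, callers scanning data directories can distinguish "directory is empty" (empty array) from "directory is unreadable" (exception), which is exactly the distinction a failed volume requires.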