Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: None
- Component/s: None
- Hadoop Flags: Reviewed
Description
Dear HDFS developers, we are developing a tool to detect exception-related bugs in Java. Our prototype has spotted a few throw statements whose exception class does not accurately describe why they are thrown. This is dangerous because it makes handling them correctly challenging. For example, in an old bug (HDFS-8224), throwing a general IOException made it difficult to perform data recovery specifically when a metadata file was corrupted.
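For illustration, here is a minimal sketch of the pattern the tool flags. The names (CorruptedMetadataException, MetadataLoader, loadMetadata*) are hypothetical and not taken from HDFS:

```java
import java.io.IOException;

// Hypothetical exception type: a specific class lets callers recover precisely.
class CorruptedMetadataException extends IOException {
    CorruptedMetadataException(String message) {
        super(message);
    }
}

class MetadataLoader {
    // Flagged pattern: a general IOException hides *why* the load failed,
    // so a caller cannot distinguish a corrupt file from, say, a missing one.
    byte[] loadMetadataGeneral(byte[] bytes) throws IOException {
        if (!checksumOk(bytes)) {
            throw new IOException("metadata file is corrupted");
        }
        return bytes;
    }

    // Suggested pattern: the exception class states the cause, so a caller
    // can catch CorruptedMetadataException and trigger data recovery.
    byte[] loadMetadataSpecific(byte[] bytes) throws IOException {
        if (!checksumOk(bytes)) {
            throw new CorruptedMetadataException("metadata file is corrupted");
        }
        return bytes;
    }

    private boolean checksumOk(byte[] bytes) {
        return bytes.length > 0; // placeholder check for the sketch
    }
}
```

With the specific class, a caller can write `catch (CorruptedMetadataException e)` and start recovery, while letting other I/O failures propagate.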
Issue Links
- is related to: HADOOP-16300 "ebugs automated bug checker is reporting exception issues" (Open)
1. PeerCache.close() throws a RuntimeException when it is interrupted (see the sketch after this list) | Resolved | Unassigned
2. DatasetVolumeChecker() throws a DiskErrorException when the configuration has wrong values | Resolved | Unassigned
3. StorageLocationChecker methods throw DiskErrorExceptions when the configuration has wrong values | Resolved | Unassigned
4. FsDatasetImpl() throws a DiskErrorException when the configuration has wrong values, resulting in unnecessary retries | Resolved | Unassigned
5. DataNode.startDataNode() throws a DiskErrorException when the configuration has wrong values | Resolved | Unassigned
6. FSDirectory.resolveDotInodesPath() throws a FileNotFoundException when the path is malformed | Resolved | Unassigned
7. FSImageHandler.getPath() throws a FileNotFoundException when the path is malformed | Resolved | Unassigned
8. BlockReceiver.receiveBlock() throws an IOException when interrupted | Resolved | Unassigned
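Items 1 and 8 both concern interruption being surfaced as an unrelated exception type. A minimal sketch of the conventional alternative follows; CacheCloser and its methods are hypothetical and not the actual PeerCache or BlockReceiver code:

```java
// Hypothetical resource with a background worker; not actual HDFS code.
class CacheCloser {
    private final Thread worker = new Thread(() -> { /* evict entries */ });

    // Flagged pattern: wrapping InterruptedException in RuntimeException
    // discards the interruption semantics, so callers cannot react to it.
    void closeFlagged() {
        try {
            worker.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    // Conventional alternative: restore the interrupt flag so code further
    // up the stack can observe and handle the interruption itself.
    void closePreservingInterrupt() {
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```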