Description
In a downstream project, I saw the following code:
FSDataInputStream inputStream = hdfs.open(new Path(path));
...
if (options.getRecoverFailedOpen() && dfs != null && e.getMessage().toLowerCase()
    .startsWith("cannot obtain block length for")) {
The above depends tightly on the following line in DFSInputStream#readBlockLength:
throw new IOException("Cannot obtain block length for " + locatedblock);
A check based on string matching like this is brittle in production deployments.
After discussing with stevel@apache.org, a better approach is to introduce a specialized IOException, e.g. CannotObtainBlockLengthException, so that downstream projects do not have to rely on string matching.
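A rough sketch of what such an exception and the downstream handling could look like (the constructor and the surrounding recovery logic below are assumptions for illustration, not a committed API; hdfs, options and dfs are the variables from the original snippet's context):

import java.io.IOException;

/**
 * Sketch: specialized exception that DFSInputStream#readBlockLength could
 * throw instead of a plain IOException carrying a magic message string.
 */
public class CannotObtainBlockLengthException extends IOException {
  public CannotObtainBlockLengthException(String message) {
    super(message);
  }
}

// Downstream code could then catch the type instead of matching the message:
try {
  FSDataInputStream inputStream = hdfs.open(new Path(path));
  // ... use the stream ...
} catch (CannotObtainBlockLengthException e) {
  if (options.getRecoverFailedOpen() && dfs != null) {
    // attempt to recover the failed open
  }
}

With a dedicated type, the downstream logic survives any rewording of the exception message.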
Attachments
Issue Links
- is related to: HDFS-11711 DN should not delete the block On "Too many open files" Exception (Resolved)