Hadoop HDFS / HDFS-13511

Provide specialized exception when block length cannot be obtained


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.2.0, 3.1.1, 2.9.3, 2.10.1
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      In a downstream project, I saw the following code:

          FSDataInputStream inputStream = hdfs.open(new Path(path));
          ...
          if (options.getRecoverFailedOpen() && dfs != null && e.getMessage().toLowerCase()
              .startsWith("cannot obtain block length for")) {


      The check above depends tightly on the following line in DFSInputStream#readBlockLength:

          throw new IOException("Cannot obtain block length for " + locatedblock);
      

      A check based on string matching is brittle in production deployments.

      After discussing with stevel@apache.org, the better approach is to introduce a specialized IOException, e.g. CannotObtainBlockLengthException, so that downstream projects don't have to rely on string matching.
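      A minimal sketch of the idea, assuming the class name suggested above (the package, constructor signature, and the stand-in readBlockLength method here are illustrative assumptions, not the committed patch):

```java
import java.io.IOException;

// Hypothetical sketch of the proposed exception type; in Hadoop it would be
// thrown from DFSInputStream#readBlockLength instead of a bare IOException.
class CannotObtainBlockLengthException extends IOException {
    CannotObtainBlockLengthException(String locatedBlock) {
        // Keep the existing message text for backward compatibility with
        // callers that still match on the string.
        super("Cannot obtain block length for " + locatedBlock);
    }
}

public class Demo {
    // Stand-in for the HDFS call that currently throws a plain IOException.
    static void readBlockLength(String locatedBlock) throws IOException {
        throw new CannotObtainBlockLengthException(locatedBlock);
    }

    public static void main(String[] args) {
        try {
            readBlockLength("LocatedBlock{...}");
        } catch (CannotObtainBlockLengthException e) {
            // Downstream code catches by type -- no brittle string matching.
            System.out.println("recovering: " + e.getMessage());
        } catch (IOException e) {
            System.out.println("unrelated I/O failure");
        }
    }
}
```

      Because the new class extends IOException, existing catch (IOException e) handlers keep working unchanged; only callers that want the specific recovery path need to add the narrower catch clause.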

      Attachments

        1. HDFS-13511.003.patch
          4 kB
          Gabor Bota
        2. HDFS-13511.002.patch
          4 kB
          Gabor Bota
        3. HDFS-13511.001.patch
          4 kB
          Gabor Bota


            People

              Assignee: gabor.bota (Gabor Bota)
              Reporter: yuzhihong@gmail.com (Ted Yu)
              Votes: 0
              Watchers: 8
