Hadoop HDFS / HDFS-13511

Provide specialized exception when block length cannot be obtained

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.2.0, 3.1.1
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      In a downstream project, I saw the following code:

          FSDataInputStream inputStream = hdfs.open(new Path(path));
          ...
          if (options.getRecoverFailedOpen() && dfs != null && e.getMessage().toLowerCase()
              .startsWith("cannot obtain block length for")) {

      The above tightly depends on the following in DFSInputStream#readBlockLength

          throw new IOException("Cannot obtain block length for " + locatedblock);
      

      The check based on string matching is brittle in production deployments: any change to the exception message text silently breaks the caller.

      After discussing with Steve Loughran, a better approach is to introduce a specialized IOException, e.g. CannotObtainBlockLengthException, so that downstream projects don't have to rely on string matching.
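      A minimal sketch of what the downstream check could look like once the specialized exception exists. The local CannotObtainBlockLengthException class below is a stand-in declared here only so the example compiles on its own; the real class would live in the HDFS client package and be thrown by DFSInputStream#readBlockLength in place of the plain IOException.

      ```java
      import java.io.IOException;

      // Stand-in for the proposed specialized exception (hypothetical local
      // declaration; in Hadoop it would subclass IOException the same way).
      class CannotObtainBlockLengthException extends IOException {
          CannotObtainBlockLengthException(String message) {
              super(message);
          }
      }

      public class Demo {
          // Downstream code can now check by type instead of matching the
          // message prefix "cannot obtain block length for".
          public static boolean isBlockLengthFailure(IOException e) {
              return e instanceof CannotObtainBlockLengthException;
          }

          public static void main(String[] args) {
              IOException specific =
                  new CannotObtainBlockLengthException("Cannot obtain block length for blk_1");
              IOException generic = new IOException("some other failure");
              System.out.println(isBlockLengthFailure(specific)); // true
              System.out.println(isBlockLengthFailure(generic));  // false
          }
      }
      ```

      Catching by type survives any rewording of the exception message, which is exactly the brittleness the string match suffers from.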

        Attachments

        1. HDFS-13511.001.patch
          4 kB
          Gabor Bota
        2. HDFS-13511.002.patch
          4 kB
          Gabor Bota
        3. HDFS-13511.003.patch
          4 kB
          Gabor Bota


              People

              • Assignee: Gabor Bota
              • Reporter: Ted Yu
              • Votes: 0
              • Watchers: 8

                Dates

                • Created:
                • Updated:
                • Resolved: