Hadoop HDFS / HDFS-12619

Do not catch and throw unchecked exceptions if IBRs fail to process


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.8.0, 2.7.3, 3.0.0-alpha1
    • Fix Version/s: 2.9.0, 2.8.3, 3.0.0
    • Component/s: namenode
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      HDFS-9198 added the following code

      BlockManager#processIncrementalBlockReport
      public void processIncrementalBlockReport(final DatanodeID nodeID,
          final StorageReceivedDeletedBlocks srdb) throws IOException {
        ...
        try {
          processIncrementalBlockReport(node, srdb);
        } catch (Exception ex) {
          node.setForceRegistration(true);
          throw ex;
        }
      }
      

      In Apache Hadoop 2.7.x ~ 3.0, the Java compiler accepts this snippet. However, when I attempted to backport it to a CDH5.3 release (based on Apache Hadoop 2.5.0), the compiler complained that the exception is unhandled, because the method declares throws IOException rather than Exception. (The newer releases likely compile because they build against Java 7+, whose precise-rethrow analysis narrows the rethrown ex to the checked exceptions the try block can actually throw.)
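      The compile-time difference can be reproduced outside Hadoop with a minimal standalone class (class and method names here are illustrative, not from the HDFS source):

      ```java
      import java.io.IOException;

      public class PreciseRethrow {
        static void mayThrow() throws IOException {
          throw new IOException("boom");
        }

        // Compiles under Java 7+: precise-rethrow analysis sees that ex can
        // only be an IOException (or an unchecked exception), so declaring
        // throws IOException suffices. A Java 6-targeted build rejects this
        // with "unreported exception java.lang.Exception".
        static void rethrow() throws IOException {
          try {
            mayThrow();
          } catch (Exception ex) {
            throw ex;
          }
        }

        public static void main(String[] args) {
          try {
            rethrow();
          } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
          }
        }
      }
      ```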

      While the code compiles for Apache Hadoop 2.7.x ~ 3.0, I feel it is not a good practice to catch an unchecked exception and then rethrow it. How about rewriting it with a finally block and a conditional variable?
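      A minimal sketch of the suggested finally-based rewrite, shown here as a standalone demo (the boolean flag name and the stub standing in for the real BlockManager logic are illustrative; the actual patch is attached below):

      ```java
      import java.io.IOException;

      public class FinallyRethrowDemo {
        static boolean forceRegistration = false;

        // Stub for the inner processIncrementalBlockReport(node, srdb) call.
        static void process(boolean fail) throws IOException {
          if (fail) throw new IOException("IBR processing failed");
        }

        // Instead of catch-and-rethrow, track success with a flag and run the
        // failure handling in finally. No Exception is caught, so the method
        // needs to declare only the checked exceptions the body can throw.
        static void processIncrementalBlockReport(boolean fail) throws IOException {
          boolean successful = false;
          try {
            process(fail);
            successful = true;
          } finally {
            if (!successful) {
              forceRegistration = true; // stands in for node.setForceRegistration(true)
            }
          }
        }
      }
      ```

      This keeps the original behavior (force re-registration on any failure, propagate the exception unchanged) while staying compilable under a Java 6 target.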

      Attachments

        1. HDFS-12619.001.patch
          1 kB
          Wei-Chiu Chuang

        Issue Links

        Activity


          People

            Assignee: Wei-Chiu Chuang (weichiu)
            Reporter: Wei-Chiu Chuang (weichiu)
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated:
              Resolved:
