Hadoop HDFS / HDFS-12619

Do not catch and throw unchecked exceptions if IBRs fail to process

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.8.0, 2.7.3, 3.0.0-alpha1
    • Fix Version/s: 2.9.0, 2.8.3, 3.0.0
    • Component/s: namenode
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      HDFS-9198 added the following code

      BlockManager#processIncrementalBlockReport
      public void processIncrementalBlockReport(final DatanodeID nodeID,
          final StorageReceivedDeletedBlocks srdb) throws IOException {
        ...
        try {
          processIncrementalBlockReport(node, srdb);
        } catch (Exception ex) {
          node.setForceRegistration(true);
          throw ex;
        }
      }

      In Apache Hadoop 2.7.x ~ 3.0, this snippet is accepted by the Java compiler. However, when I attempted to backport it to a CDH5.3 release (based on Apache Hadoop 2.5.0), the compiler complained that the exception is unhandled, because the method declares that it throws IOException rather than Exception.
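      (The difference presumably comes from Java 7's "precise rethrow": when the catch parameter is effectively final, the compiler only requires the rethrown exception to be handled for the checked types the try block can actually throw. Under a Java 6 source level that analysis is unavailable, so the rethrown value is treated as Exception. A minimal standalone illustration follows; the helper mayThrowIOException is hypothetical and used only for this example.)

      import java.io.IOException;

      class PreciseRethrowDemo {
        // Hypothetical stand-in for processIncrementalBlockReport.
        void mayThrowIOException() throws IOException { }

        void demo() throws IOException {
          try {
            mayThrowIOException();
          } catch (Exception ex) {
            // Compiles on Java 7+ (precise rethrow); under a Java 6 source
            // level the compiler reports an unhandled Exception here.
            throw ex;
          }
        }
      }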

      While the code compiles for Apache Hadoop 2.7.x ~ 3.0, I feel it is not good practice to catch an unchecked exception and then rethrow it. How about rewriting it with a boolean flag and a finally block, as sketched below?
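      Something along these lines (only a sketch mirroring the snippet above; the final patch may differ):

      BlockManager#processIncrementalBlockReport (proposed)
      public void processIncrementalBlockReport(final DatanodeID nodeID,
          final StorageReceivedDeletedBlocks srdb) throws IOException {
        ...
        boolean successful = false;
        try {
          processIncrementalBlockReport(node, srdb);
          successful = true;
        } finally {
          if (!successful) {
            // Still force the DataNode to re-register on any failure, but
            // without catching and rethrowing unchecked exceptions.
            node.setForceRegistration(true);
          }
        }
      }

      This keeps the re-registration behavior for every failure path while letting the throws clause stay IOException, so it also compiles at a Java 6 source level.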

    People

    • Assignee: Wei-Chiu Chuang (jojochuang)
    • Reporter: Wei-Chiu Chuang (jojochuang)
    • Votes: 0
    • Watchers: 4
