Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.23.0
    • Component/s: datanode
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      There is repeated code for constructing log/error messages in BlockReceiver. Also, some comments in the code are incorrect, e.g.

      private int numTargets;     // number of downstream datanodes including myself
      

      but the count actually excludes the current datanode.
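      The patch consolidates this kind of repetition. A minimal sketch of the idea — building the log prefix once instead of reconstructing the same string in every message — with all names here being illustrative, not the actual patch:

```java
// Illustrative sketch only: cache the responder's log prefix once instead of
// rebuilding the same string in every log/error message.
public class PacketResponderSketch {
    private final String myString; // built once in the constructor, reused everywhere

    public PacketResponderSketch(String block, String[] downstreams) {
        // the count covers only the downstream datanodes, excluding this one
        this.myString = "PacketResponder: " + block
            + ", downstreams=" + downstreams.length;
    }

    String prefix() {
        return myString; // every message reuses this instead of reconcatenating
    }

    public static void main(String[] args) {
        PacketResponderSketch r =
            new PacketResponderSketch("blk_1", new String[] {"dn2", "dn3"});
        System.out.println(r.prefix());
    }
}
```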

      1. h1833_20110413.patch
        18 kB
        Tsz Wo Nicholas Sze
      2. h1833_20110412.patch
        17 kB
        Tsz Wo Nicholas Sze

          Activity

          Tsz Wo Nicholas Sze added a comment -

          This is a follow-up to HDFS-1789.

          Tsz Wo Nicholas Sze added a comment -

          h1833_20110412.patch: code from h1789_20110325b.patch, plus fixes for the fault injection tests.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12476189/h1833_20110412.patch
          against trunk revision 1091619.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.fs.TestHDFSFileContextMainOperations
          org.apache.hadoop.hdfs.TestDecommission

          -1 contrib tests. The patch failed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/355//testReport/
          Findbugs warnings: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/355//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/355//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12476189/h1833_20110412.patch
          against trunk revision 1091619.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.fs.TestHDFSFileContextMainOperations
          org.apache.hadoop.hdfs.server.datanode.TestBlockReport
          org.apache.hadoop.hdfs.TestFileAppend4
          org.apache.hadoop.hdfs.TestFileConcurrentReader
          org.apache.hadoop.hdfs.TestLargeBlock
          org.apache.hadoop.hdfs.TestWriteConfigurationToDFS

          -1 contrib tests. The patch failed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/353//testReport/
          Findbugs warnings: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/353//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/353//console

          This message is automatically generated.

          Daryn Sharp added a comment -

          I'm a bit confused by the semantics of this new method:

          private boolean isDownstreamLengthZero() {
            return downstreams != null && downstreams.length == 0;
          }
          

          It seems like it means "isLastPacketResponder()". However, it appears that downstreams == null is not treated the same as a 0-length array?

          Later there is new code for "downstreams != null && downstreams.length > 0". Why not use "!isDownstreamLengthZero()"?

          Would it maybe make sense to ensure downstreams is initialized to an empty array even when null is passed in, and then use downstreams.length?
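          Daryn's suggested normalization could look roughly like the following sketch (illustrative names; as the reply below explains, null and empty actually carry different meanings in this code, so this simplification does not apply here):

```java
// Sketch of the suggested normalization: coerce a null downstream list to an
// empty array so callers can always test downstreams.length with no null check.
// (Illustrative only: in the real code, null and a 0-length array differ.)
public class DownstreamNormalization {
    private final String[] downstreams;

    public DownstreamNormalization(String[] targets) {
        this.downstreams = targets == null ? new String[0] : targets;
    }

    boolean isLastInPipeline() {
        return downstreams.length == 0; // no null check needed after normalization
    }

    public static void main(String[] args) {
        System.out.println(new DownstreamNormalization(null).isLastInPipeline());
        System.out.println(new DownstreamNormalization(new String[] {"dn2"}).isLastInPipeline());
    }
}
```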

          Tsz Wo Nicholas Sze added a comment -

          Hi Daryn, downstreams replaces numTargets, whose value could be -1, 0, or >0; the value -1 is used when verifying checksums locally rather than writing in a pipeline. In the patch, downstreams == null corresponds to the case numTargets == -1.

          I got similar questions from Jitendra earlier. It probably is still confusing; let me think about how to make it clearer.

          Tsz Wo Nicholas Sze added a comment -

          h1833_20110413.patch: added PacketResponderType.
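          A sketch of how a PacketResponderType enum can make the three former numTargets cases explicit. LAST_IN_PIPELINE and HAS_DOWNSTREAM_IN_PIPELINE are quoted in the review comments below; NON_PIPELINE is an assumed name for the numTargets == -1 case:

```java
// Sketch: an enum encoding the three cases formerly represented by numTargets.
// NON_PIPELINE is an assumed constant name; the other two appear in the review
// discussion in this thread.
public class PacketResponderTypeSketch {
    enum PacketResponderType {
        NON_PIPELINE,               // numTargets == -1: local checksum verification only
        LAST_IN_PIPELINE,           // numTargets == 0:  no downstream datanode
        HAS_DOWNSTREAM_IN_PIPELINE  // numTargets  > 0:  acks arrive from downstream
    }

    static PacketResponderType typeOf(String[] downstreams) {
        if (downstreams == null) {
            return PacketResponderType.NON_PIPELINE;
        }
        return downstreams.length == 0
            ? PacketResponderType.LAST_IN_PIPELINE
            : PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE;
    }

    public static void main(String[] args) {
        System.out.println(typeOf(null));
        System.out.println(typeOf(new String[0]));
        System.out.println(typeOf(new String[] {"dn2"}));
    }
}
```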

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12476286/h1833_20110413.patch
          against trunk revision 1091874.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.TestFileConcurrentReader

          -1 contrib tests. The patch failed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/365//testReport/
          Findbugs warnings: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/365//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/365//console

          This message is automatically generated.

          Daryn Sharp added a comment -

          I'm still trying to completely wrap my head around the logic, but here's a few questions.

          1) "UNKNOWN" is misspelled in PipelineAck.UNKOWN_SEQNO

          2) Should this be checking for type == PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE? Otherwise this will try to read a packet when there is no pipeline.

          if (type != PacketResponderType.LAST_IN_PIPELINE && !mirrorError) {
            // read an ack from downstream datanode
            ack.readFields(downstreamIn);
          }

          3) Thread.interrupted() seems to be unnecessarily invoked multiple times. Once in the IOException handler, and then immediately thereafter. Perhaps the call should be removed in the exception handler? Or at a minimum flip these operands to avoid the second call?

          if (Thread.interrupted() || isInterrupted) {
          

          4) The "continue" appears to be equivalent to performing a "break" because the loop condition requests running to be true. This isn't a big deal, but would a break be more obvious?

          LOG.info(myString + ": Thread is interrupted.");
          running = false;
          continue;
          

          5) Should this:

          short ackLen = type == PacketResponderType.LAST_IN_PIPELINE ? 0 : ack.getNumOfReplies();
          

          be:
          short ackLen = type == PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE ? ack.getNumOfReplies() : 0;

          Tsz Wo Nicholas Sze added a comment -

          Thanks Daryn for the review.

          1) PipelineAck is declared in DataTransferProtocol. Changing it requires a protocol change. Let's do it later when we are changing DataTransferProtocol.

          2) The original code is numTargets != 0. numTargets could be -1 or >0. So checking type != PacketResponderType.LAST_IN_PIPELINE is correct.

          3) I think it is necessary. It was introduced earlier for some bug fixes.

          4) You are right. I did not study the loop structure. Let's do this further improvement later.

          5) The original code is

          short ackLen = numTargets == 0 ? 0 : ack.getNumOfReplies();
          

          So type == PacketResponderType.LAST_IN_PIPELINE is correct.
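          The two ternaries differ precisely in the non-pipeline case, which is why preserving the LAST_IN_PIPELINE check matters. A small check (NON_PIPELINE is an assumed name for the numTargets == -1 case, and getNumOfReplies() is a stub standing in for ack.getNumOfReplies()):

```java
// Shows that testing LAST_IN_PIPELINE vs HAS_DOWNSTREAM_IN_PIPELINE yields
// different ackLen values for the non-pipeline (numTargets == -1) case.
public class AckLenCheck {
    enum PacketResponderType { NON_PIPELINE, LAST_IN_PIPELINE, HAS_DOWNSTREAM_IN_PIPELINE }

    static short getNumOfReplies() { return 2; } // stub for ack.getNumOfReplies()

    // Original semantics: numTargets == 0 ? 0 : ack.getNumOfReplies()
    static short ackLenOriginal(PacketResponderType type) {
        return type == PacketResponderType.LAST_IN_PIPELINE ? 0 : getNumOfReplies();
    }

    // The proposed alternative form from the review comment
    static short ackLenProposed(PacketResponderType type) {
        return type == PacketResponderType.HAS_DOWNSTREAM_IN_PIPELINE ? getNumOfReplies() : (short) 0;
    }

    public static void main(String[] args) {
        // The two forms agree for the pipeline cases...
        System.out.println(ackLenOriginal(PacketResponderType.LAST_IN_PIPELINE)
            + " " + ackLenProposed(PacketResponderType.LAST_IN_PIPELINE));
        // ...but diverge for the non-pipeline case: original keeps the replies.
        System.out.println(ackLenOriginal(PacketResponderType.NON_PIPELINE)
            + " " + ackLenProposed(PacketResponderType.NON_PIPELINE));
    }
}
```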

          Daryn Sharp added a comment -

          +1 Per our discussion, some of the existing logic appears dubious, but your changes do preserve the original logic.

          Tsz Wo Nicholas Sze added a comment -

          I have committed this.

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #592 (See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/592/)
          HDFS-1833. Reduce repeated string constructions and unnecessary fields, and fix comments in BlockReceiver.PacketResponder.

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #643 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/643/)


            People

            • Assignee:
              Tsz Wo Nicholas Sze
            • Reporter:
              Tsz Wo Nicholas Sze
            • Votes:
              0
            • Watchers:
              3
