Hadoop HDFS / HDFS-9618

Fix mismatch between log level and guard in BlockManager#computeRecoveryWorkForBlocks

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.8.0
    • Fix Version/s: 2.9.0, 3.0.0-alpha1
    • Component/s: namenode
    • Labels:
      None
    • Target Version/s:

      Description

      A debug-level log message in BlockManager#computeRecoveryWorkForBlocks is built under a Logger#isInfoEnabled guard, so the message is constructed whenever INFO is enabled even though it is only emitted at DEBUG.
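
      The effect of the mismatch can be sketched with a toy logger (all names below are illustrative stand-ins, not the real slf4j blockLog or BlockManager types): at INFO level the guard passes, the target list is built, and the debug() call then silently drops the message.

```java
import java.util.ArrayList;
import java.util.List;

// Toy reproduction of the guard mismatch. FakeLogger is a stand-in for
// the real slf4j blockLog; only two levels are modeled.
public class GuardDemo {
    enum Level { DEBUG, INFO } // DEBUG is more verbose than INFO

    static class FakeLogger {
        final Level threshold;
        final List<String> lines = new ArrayList<>();
        FakeLogger(Level threshold) { this.threshold = threshold; }
        boolean isInfoEnabled()  { return threshold.ordinal() <= Level.INFO.ordinal(); }
        boolean isDebugEnabled() { return threshold == Level.DEBUG; }
        void debug(String msg)   { if (isDebugEnabled()) lines.add(msg); }
    }

    // Buggy shape: guard checks INFO, message goes out at DEBUG.
    // Returns how many target entries were appended (i.e. work done).
    static int buggy(FakeLogger log, String[] targets) {
        int built = 0;
        if (log.isInfoEnabled()) {
            StringBuilder targetList = new StringBuilder("datanode(s)");
            for (String t : targets) { targetList.append(' ').append(t); built++; }
            log.debug("BLOCK* ask to replicate to " + targetList);
        }
        return built;
    }

    // Fixed shape: guard level matches the call level.
    static int fixed(FakeLogger log, String[] targets) {
        int built = 0;
        if (log.isDebugEnabled()) {
            StringBuilder targetList = new StringBuilder("datanode(s)");
            for (String t : targets) { targetList.append(' ').append(t); built++; }
            log.debug("BLOCK* ask to replicate to " + targetList);
        }
        return built;
    }

    public static void main(String[] args) {
        FakeLogger infoLog = new FakeLogger(Level.INFO);
        String[] targets = {"dn1", "dn2"};
        System.out.println(buggy(infoLog, targets));  // 2: string built anyway
        System.out.println(infoLog.lines.size());     // 0: message dropped
        System.out.println(fixed(infoLog, targets));  // 0: work skipped
    }
}
```

      At INFO level the buggy version pays the cost of building the string and then logs nothing; the fix makes the string-building conditional on the level that is actually used.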

      1. HDFS-9618.001.patch
        0.9 kB
        Masatake Iwasaki
      2. HDFS-9618.002.patch
        1 kB
        Masatake Iwasaki

        Issue Links

          Activity

          iwasakims Masatake Iwasaki added a comment -
              if (blockLog.isInfoEnabled()) {
                // log which blocks have been scheduled for replication
                for(BlockRecoveryWork rw : recovWork){
                  DatanodeStorageInfo[] targets = rw.getTargets();
                  if (targets != null && targets.length != 0) {
                    StringBuilder targetList = new StringBuilder("datanode(s)");
                    for (DatanodeStorageInfo target : targets) {
                      targetList.append(' ');
                      targetList.append(target.getDatanodeDescriptor());
                    }
                    blockLog.debug("BLOCK* ask {} to replicate {} to {}", rw.getSrcNodes(),
                        rw.getBlock(), targetList);
                  }
                }
              }
          
          iwasakims Masatake Iwasaki added a comment -

          The log level had been info, but it seems to have been changed to debug in the EC branch (6b6a63bb).

          drankye Kai Zheng added a comment -

          Good catch! The logger.isInfoEnabled pattern shouldn't be used without a reason. I guess the case in question uses the blockLog.isInfoEnabled() condition to decide whether to compose and write the log message, as a performance consideration. Then is there any reason for the following block? Better to change it as well in the fix.

              if (blockLog.isDebugEnabled()) {
                blockLog.debug("BLOCK* neededReplications = {} pendingReplications = {}",
                    neededReplications.size(), pendingReplications.size());
              }
          
          iwasakims Masatake Iwasaki added a comment -

          This was wrong. The log level was changed by HDFS-6860.

          iwasakims Masatake Iwasaki added a comment -

          Thanks for the comment, Kai Zheng. I attached 001.

          Then is there any reason for the following block?

          The reason seems to be that UnderReplicatedBlocks#size is not just an accessor; it does some aggregation. I left that part as-is in the attached patch.

          liuml07 Mingliang Liu added a comment -

          Thanks for working on this, Masatake Iwasaki.

          Calling neededReplications.size() or pendingReplications.size() seems to have low overhead so the guard can be removed, as Kai Zheng proposed. The code that logs which blocks have been scheduled for replication should be kept as we iterate all recovery work and its targets.

          liuml07 Mingliang Liu added a comment -

          We have 5 levels of priority queues and the aggregation should be fast. Leaving it as-is may be better, though.

          liuml07 Mingliang Liu added a comment -

          +1 (non-binding)

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 8m 8s trunk passed
          +1 compile 0m 46s trunk passed with JDK v1.8.0_66
          +1 compile 0m 44s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 20s trunk passed
          +1 mvnsite 0m 57s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 2m 6s trunk passed
          +1 javadoc 1m 12s trunk passed with JDK v1.8.0_66
          +1 javadoc 1m 54s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 49s the patch passed
          +1 compile 0m 40s the patch passed with JDK v1.8.0_66
          +1 javac 0m 40s the patch passed
          +1 compile 0m 42s the patch passed with JDK v1.7.0_91
          +1 javac 0m 42s the patch passed
          +1 checkstyle 0m 19s the patch passed
          +1 mvnsite 0m 53s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 2m 14s the patch passed
          +1 javadoc 1m 9s the patch passed with JDK v1.8.0_66
          +1 javadoc 1m 51s the patch passed with JDK v1.7.0_91
          -1 unit 57m 56s hadoop-hdfs in the patch failed with JDK v1.8.0_66.
          +1 unit 54m 36s hadoop-hdfs in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 21s Patch does not generate ASF License warnings.
          140m 42s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.hdfs.server.datanode.TestBlockScanner
            hadoop.hdfs.server.namenode.TestFileTruncate
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
            hadoop.hdfs.server.namenode.TestNNThroughputBenchmark



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12780805/HDFS-9618.001.patch
          JIRA Issue HDFS-9618
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux a3ad2e04f169 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / b993668
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/14046/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/14046/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14046/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Max memory used 75MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14046/console

          This message was automatically generated.

          ajisakaa Akira Ajisaka added a comment -

          Mostly looks good to me. Would you join the following two if statements together? I'm +1 if that is addressed.

              if (blockLog.isDebugEnabled()) {
                // log which blocks have been scheduled for replication
          (snip)
                    blockLog.debug("BLOCK* ask {} to replicate {} to {}", rw.getSrcNodes(),
                        rw.getBlock(), targetList);
              }
              if (blockLog.isDebugEnabled()) {
                blockLog.debug("BLOCK* neededReplications = {} pendingReplications = {}",
                    neededReplications.size(), pendingReplications.size());
              }
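
          Joining the two blocks as suggested would look roughly like the sketch below (simplified stand-in types and names, not the actual 002 patch). One side effect of the merge is that neededReplications.size() and pendingReplications.size() are only computed when debug logging is enabled:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of folding both debug messages under a single isDebugEnabled()
// guard. FakeLogger and the String[]-based work items are illustrative
// stand-ins for blockLog and BlockRecoveryWork.
public class MergedGuardDemo {
    static class FakeLogger {
        boolean debugEnabled = true;
        final List<String> lines = new ArrayList<>();
        boolean isDebugEnabled() { return debugEnabled; }
        void debug(String msg) { if (debugEnabled) lines.add(msg); }
    }

    static List<String> logRecoveryWork(FakeLogger blockLog,
                                        List<String[]> recovWork,
                                        int needed, int pending) {
        if (blockLog.isDebugEnabled()) {   // single guard for both messages
            for (String[] targets : recovWork) {
                if (targets.length != 0) {
                    StringBuilder targetList = new StringBuilder("datanode(s)");
                    for (String t : targets) targetList.append(' ').append(t);
                    blockLog.debug("BLOCK* ask to replicate to " + targetList);
                }
            }
            blockLog.debug("BLOCK* neededReplications = " + needed
                + " pendingReplications = " + pending);
        }
        return blockLog.lines;
    }

    public static void main(String[] args) {
        FakeLogger log = new FakeLogger();
        List<String[]> work = new ArrayList<>();
        work.add(new String[]{"dn1", "dn2"});
        List<String> out = logRecoveryWork(log, work, 3, 1);
        System.out.println(out.size()); // 2: both messages under one check
    }
}
```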
          
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 7m 31s trunk passed
          +1 compile 0m 38s trunk passed with JDK v1.8.0_66
          +1 compile 0m 40s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 17s trunk passed
          +1 mvnsite 0m 51s trunk passed
          +1 mvneclipse 0m 13s trunk passed
          +1 findbugs 1m 51s trunk passed
          +1 javadoc 1m 5s trunk passed with JDK v1.8.0_66
          +1 javadoc 1m 45s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 44s the patch passed
          +1 compile 0m 36s the patch passed with JDK v1.8.0_66
          +1 javac 0m 36s the patch passed
          +1 compile 0m 38s the patch passed with JDK v1.7.0_91
          +1 javac 0m 38s the patch passed
          +1 checkstyle 0m 17s the patch passed
          +1 mvnsite 0m 48s the patch passed
          +1 mvneclipse 0m 11s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 2m 5s the patch passed
          +1 javadoc 1m 1s the patch passed with JDK v1.8.0_66
          +1 javadoc 1m 42s the patch passed with JDK v1.7.0_91
          +1 unit 51m 18s hadoop-hdfs in the patch passed with JDK v1.8.0_66.
          +1 unit 49m 24s hadoop-hdfs in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 19s Patch does not generate ASF License warnings.
          126m 8s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12783729/HDFS-9618.002.patch
          JIRA Issue HDFS-9618
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 2be68cb2e391 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / f3427d3
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/14196/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Max memory used 77MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14196/console

          This message was automatically generated.

          iwasakims Masatake Iwasaki added a comment -

          Committed this to trunk and branch-2. Thanks, Kai, Mingliang and Akira.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #9160 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9160/)
          HDFS-9618. Fix mismatch between log level and guard in (iwasakims: rev ae9c61ff0a90b070a5b7b2e7160d726e92c8eacf)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          MehranHassani Mehran Hassani added a comment -

          Correct me if I am wrong, but I think slf4j checks the log level before formatting the string, so calling isLevelEnabled() is unnecessary when you use it.
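
          That is true for the format string itself, but slf4j's "{}" placeholders only defer expanding the message; the arguments are still evaluated at the call site, so a guard still pays off when an argument is expensive to build (as targetList is here). A minimal sketch with a fake logger (not the real slf4j API):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of why parameterized logging does not remove every need for a
// level guard: arguments are evaluated before the call even when the
// level is disabled. FakeLogger mimics the slf4j call shape only.
public class Slf4jDemo {
    static final AtomicInteger evaluations = new AtomicInteger();

    static class FakeLogger {
        boolean debugEnabled = false;
        int formatted = 0;
        void debug(String fmt, Object... args) {
            if (debugEnabled) formatted++; // expansion deferred, like slf4j
        }
    }

    // Stands in for building targetList: costly, with an observable count.
    static String expensiveList() {
        evaluations.incrementAndGet();
        return "datanode(s) dn1 dn2";
    }

    public static void main(String[] args) {
        FakeLogger log = new FakeLogger();

        // Unguarded: expensiveList() runs even though DEBUG is off.
        log.debug("BLOCK* ask {} to replicate", expensiveList());
        System.out.println(evaluations.get()); // 1

        // Guarded: the argument is never built.
        if (log.debugEnabled) {
            log.debug("BLOCK* ask {} to replicate", expensiveList());
        }
        System.out.println(evaluations.get()); // still 1
    }
}
```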

          ajisakaa Akira Ajisaka added a comment -

          In the following code, isDebugEnabled is necessary because we can avoid creating targetList when the log level is not debug or trace.

          BlockManager.java
              if (blockLog.isDebugEnabled()) {
                // log which blocks have been scheduled for reconstruction
                for(BlockReconstructionWork rw : reconWork){
                  DatanodeStorageInfo[] targets = rw.getTargets();
                  if (targets != null && targets.length != 0) {
                    StringBuilder targetList = new StringBuilder("datanode(s)");
                    for (DatanodeStorageInfo target : targets) {
                      targetList.append(' ');
                      targetList.append(target.getDatanodeDescriptor());
                    }
                    blockLog.debug("BLOCK* ask {} to replicate {} to {}", rw.getSrcNodes(),
                        rw.getBlock(), targetList);
                  }
                }
          
          MehranHassani Mehran Hassani added a comment -

          That makes sense, thanks!


            People

            • Assignee:
              iwasakims Masatake Iwasaki
              Reporter:
              iwasakims Masatake Iwasaki
            • Votes:
              0 Vote for this issue
              Watchers:
              6 Start watching this issue

              Dates

              • Created:
                Updated:
                Resolved:

                Development