Hadoop HDFS / HDFS-1320

Add LOG.isDebugEnabled() guard for each LOG.debug("...")

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.22.0
    • Fix Version/s: 0.22.0
    • Component/s: None
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      Each LOG.debug("...") should be executed only if LOG.isDebugEnabled() is true; in some cases it is expensive to construct the string that is being printed to the log. It is simpler to always use LOG.isDebugEnabled(), because that is easier to check than reasoning in each case about whether the guard is necessary.
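
      For illustration, a minimal sketch of the unguarded vs. guarded pattern using the commons-logging API (the GuardExample class and its expensiveState() helper are hypothetical, not taken from the patch):

      import org.apache.commons.logging.Log;
      import org.apache.commons.logging.LogFactory;

      // Hypothetical example; not part of the HDFS-1320 patch.
      public class GuardExample {
        private static final Log LOG = LogFactory.getLog(GuardExample.class);

        void unguarded(Object block) {
          // The message string is built on every call, even when debug logging is off.
          LOG.debug("Processing block " + block + " state=" + expensiveState());
        }

        void guarded(Object block) {
          // The guard skips the concatenation (and expensiveState()) when debug is disabled.
          if (LOG.isDebugEnabled()) {
            LOG.debug("Processing block " + block + " state=" + expensiveState());
          }
        }

        private String expensiveState() {
          return "...";  // stand-in for an expensive computation
        }
      }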

        Attachments

      1. HDFS-1320-0.22.patch
        74 kB
        Erik Steffl
      2. HDFS-1320-0.22-1.patch
        99 kB
        Erik Steffl
      3. HDFS-1320-0.22-2.patch
        99 kB
        Erik Steffl
      4. HDFS-1320-0.22-3.patch
        99 kB
        Erik Steffl

          Activity

          ryan rawson added a comment -

          does the JVM not optimize for this case in the fast-path?

          Erik Steffl added a comment -

          Re fast-path optimization: we don't know, and JVM optimizations depend on the user's setup (the user determines which java command-line options are used). Optimizations also vary across platforms and JVM versions.

          Just to make sure, I did try code like the following:

          LOG.debug("debug log" + a.logSomething());

          a.logSomething() is always called, even though debug is not enabled (and the debug log message does not end up in the log).

          It's possible that a different runtime optimizes it away somehow, but I don't think we can rely on that.

          Do you have any specific thoughts on how it would be optimized away?
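
          For reference, a self-contained sketch of the eager-evaluation behaviour described above (the EagerEvaluationDemo class and its counter are hypothetical, used only to make the evaluation visible):

          import org.apache.commons.logging.Log;
          import org.apache.commons.logging.LogFactory;

          // Hypothetical demo; not code from the patch.
          public class EagerEvaluationDemo {
            private static final Log LOG = LogFactory.getLog(EagerEvaluationDemo.class);
            private static int calls = 0;

            static String logSomething() {
              calls++;                    // side effect proves the argument was evaluated
              return "expensive text";
            }

            public static void main(String[] args) {
              LOG.debug("debug log " + logSomething());      // argument built unconditionally
              System.out.println("calls = " + calls);        // prints 1 even with debug disabled

              if (LOG.isDebugEnabled()) {                    // guarded form
                LOG.debug("debug log " + logSomething());    // skipped when debug is off
              }
              System.out.println("calls = " + calls);        // still 1 when debug is disabled
            }
          }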

          Konstantin Shvachko added a comment -
          1. You missed NameNode.stateChangeLog. debug() is called on it in many places:
            NameNode, UnderReplicatedBlocks, FSDirectory, BlockManager, INodeDirectory.
            (See the guarded-call sketch after this comment.)
          2. BlockPlacementPolicyDefault.isGoodTarget() calls debug() without using isDebugEnabled().
            I'd also prefer if the local variable logr were replaced explicitly by FSNamesystem.LOG.
          3. Could you please remove the unused import of DFSUtil (introduced by somebody else) in NameNode.java and DataNode.java.
          4. In DFSClient, could you please remove the unused import of BlockTokenIdentifier.
          5. The same in DFSOutputStream for FileStatus.
          6. I would not bother adding isDebugEnabled() to tests. Performance is not so important there. Besides, tests are supposed to run in debug mode, so it just adds more code in this case.
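
          For illustration only, a sketch of the guarded stateChangeLog pattern referred to in point 1 (the class, method, and message below are hypothetical; the real call sites are in the classes listed above):

          import org.apache.commons.logging.Log;
          import org.apache.commons.logging.LogFactory;

          // Hypothetical illustration of the guarded pattern; not code from the patch.
          public class StateChangeLogExample {
            // Stands in for NameNode.stateChangeLog.
            static final Log stateChangeLog = LogFactory.getLog("org.apache.hadoop.hdfs.StateChange");

            void reportBlockAdded(String block, String node) {
              // Guard so the message string is built only when debug logging is enabled.
              if (stateChangeLog.isDebugEnabled()) {
                stateChangeLog.debug("addStoredBlock: " + block + " added to " + node);
              }
            }
          }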
          Erik Steffl added a comment -

          Patch HDFS-1320-0.22-2.patch fixes the problems mentioned in the review:

          1. All files you mentioned and a few others are now patched (fixed my script that searches for calls to .debug() with no isDebugEnabled(); a naive sketch of such a check follows this comment).

          2. BlockPlacementPolicyDefault.java: logr replaced by FSNamesystem.LOG

          3. The unused DFSUtil import is already removed from both NameNode.java and DataNode.java

          4. The unused BlockTokenIdentifier import is already removed from DFSClient

          5. Removed FileStatus import from DFSOutputStream

          6. The guards are already in the tests; I think I'll leave them there for consistency
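
          A naive, hypothetical sketch of the kind of check such a script could perform: flag ".debug(" calls with no "isDebugEnabled()" in the few preceding lines. This is a line-based heuristic for illustration only, not the script actually used for the patch.

          import java.io.IOException;
          import java.nio.file.Files;
          import java.nio.file.Path;
          import java.nio.file.Paths;
          import java.util.List;
          import java.util.stream.Stream;

          public class FindUnguardedDebug {
            public static void main(String[] args) throws IOException {
              Path root = Paths.get(args.length > 0 ? args[0] : "src/java");
              try (Stream<Path> files = Files.walk(root)) {
                files.filter(p -> p.toString().endsWith(".java"))
                     .forEach(FindUnguardedDebug::scan);
              }
            }

            static void scan(Path file) {
              try {
                List<String> lines = Files.readAllLines(file);
                for (int i = 0; i < lines.size(); i++) {
                  if (!lines.get(i).contains(".debug(")) {
                    continue;
                  }
                  boolean guarded = false;
                  // Look a few lines back for an isDebugEnabled() guard.
                  for (int j = Math.max(0, i - 3); j <= i; j++) {
                    if (lines.get(j).contains("isDebugEnabled()")) {
                      guarded = true;
                      break;
                    }
                  }
                  if (!guarded) {
                    System.out.println(file + ":" + (i + 1) + ": " + lines.get(i).trim());
                  }
                }
              } catch (IOException e) {
                System.err.println("Could not read " + file + ": " + e);
              }
            }
          }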

          Konstantin Shvachko added a comment -

          +1 the patch looks good.

          Jakob Homan added a comment -

          There's quite a lot of discussion as to the merit of this approach going on in HADOOP-6884. This patch shouldn't be committed until consensus is reached there.

          Tsz Wo Nicholas Sze added a comment -

          > does the JVM not optimize for this case in the fast-path?

          Hi Ryan, from the benchmark results here, it does not seem that the JVM optimizes this. I think the JVM cannot do anything in general, since parameter evaluation may have side effects. It is hard for the JVM to determine whether it is safe to skip those instructions.

          Erik Steffl added a comment -

          HDFS-1320-0.22-3.patch is an update after some conflicting changes on trunk.

          Given that Hudson is not working at the moment, I ran 'ant test-patch' and 'ant test' myself; the results are below.

          ant test-patch results:

          [exec] There appear to be 97 release audit warnings before the patch and 97 release audit warnings after applying the patch.
          [exec] -1 overall.
          [exec] +1 @author. The patch does not contain any @author tags.
          [exec] +1 tests included. The patch appears to include 28 new or modified tests.
          [exec] -1 javadoc. The javadoc tool appears to have generated 1 warning messages.
          [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings.
          [exec] +1 findbugs. The patch does not introduce any new Findbugs warnings.
          [exec] +1 release audit. The applied patch does not increase the total number of release audit warnings.
          The javadoc warning is unrelated to the patch: [exec] [javadoc] /home/steffl/work/svn.isDebugEnabled/hdfs-trunk/src/java/org/apache/hadoop/hdfs/server/datanode/metrics/FSDatasetMBean.java:40: warning - Tag @see: reference not found: org.apache.hadoop.hdfs.server.datanode.metrics.DataNodeStatisticsMBean

          ant test results:

          BUILD FAILED
          /home/steffl/work/svn.isDebugEnabled/hdfs-trunk/build.xml:709: The following error occurred while executing this line:
          /home/steffl/work/svn.isDebugEnabled/hdfs-trunk/build.xml:477: The following error occurred while executing this line:
          /home/steffl/work/svn.isDebugEnabled/hdfs-trunk/src/test/aop/build/aop.xml:229: The following error occurred while executing this line:
          /home/steffl/work/svn.isDebugEnabled/hdfs-trunk/build.xml:667: The following error occurred while executing this line:
          /home/steffl/work/svn.isDebugEnabled/hdfs-trunk/build.xml:624: The following error occurred while executing this line:
          /home/steffl/work/svn.isDebugEnabled/hdfs-trunk/build.xml:692: Tests failed!

          The failures are unrelated to this patch (two failures and one error).

          Error:

          Build log:

          [junit] Running org.apache.hadoop.hdfs.security.token.block.TestBlockToken
          [junit] at org.junit.Assert.fail(Assert.java:91)
          [junit] at org.junit.Assert.failNotEquals(Assert.java:645)
          [junit] at org.junit.Assert.assertEquals(Assert.java:126)
          [junit] at org.junit.Assert.assertEquals(Assert.java:470)
          [junit] at org.apache.hadoop.hdfs.security.token.block.TestBlockToken$getLengthAnswer.answer(TestBlockToken.java:105)
          [junit] at org.apache.hadoop.hdfs.security.token.block.TestBlockToken$getLengthAnswer.answer(TestBlockToken.java:88)
          [junit] at org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:29)
          [junit] at org.mockito.internal.MockHandler.handle(MockHandler.java:95)
          [junit] at org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
          [junit] at org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol$$EnhancerByMockitoWithCGLIB$$4e50a34e.getReplicaVisibleLength(<generated>)
          [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          [junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          [junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          [junit] at java.lang.reflect.Method.invoke(Method.java:597)
          [junit] at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:346)
          [junit] at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1378)
          [junit] at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1374)
          [junit] at java.security.AccessController.doPrivileged(Native Method)
          [junit] at javax.security.auth.Subject.doAs(Subject.java:396)
          [junit] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
          [junit] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1372)
          [junit] )
          [junit] Tests run: 4, Failures: 0, Errors: 1, Time elapsed: 1.286 sec

          Details in build/test/TEST-org.apache.hadoop.hdfs.security.token.block.TestBlockToken.txt:

          2010-08-27 16:43:39,167 INFO ipc.Server (Server.java:run(1386)) - IPC Server handler 1 on 58724, call getReplicaVisibleLength(blk_-108_0) from 127.0.1.1:47663: error: java.io.IOException: java.lang.AssertionError: Only one BlockTokenId
          java.io.IOException: java.lang.AssertionError: Only one BlockTokenIdentifier expected expected:<1> but was:<0>
          at org.junit.Assert.fail(Assert.java:91)
          at org.junit.Assert.failNotEquals(Assert.java:645)
          at org.junit.Assert.assertEquals(Assert.java:126)
          at org.junit.Assert.assertEquals(Assert.java:470)
          at org.apache.hadoop.hdfs.security.token.block.TestBlockToken$getLengthAnswer.answer(TestBlockToken.java:105)
          at org.apache.hadoop.hdfs.security.token.block.TestBlockToken$getLengthAnswer.answer(TestBlockToken.java:88)
          at org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:29)
          at org.mockito.internal.MockHandler.handle(MockHandler.java:95)
          at org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
          at org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol$$EnhancerByMockitoWithCGLIB$$4e50a34e.getReplicaVisibleLength(<generated>)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:346)
          at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1378)
          at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1374)
          at java.security.AccessController.doPrivileged(Native Method)
          at javax.security.auth.Subject.doAs(Subject.java:396)
          at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
          at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1372)

          Two failures:

          Build log:

          [junit] Running org.apache.hadoop.hdfs.TestFiHFlush
          [junit] Tests run: 9, Failures: 2, Errors: 0, Time elapsed: 40.849 sec
          [junit] Test org.apache.hadoop.hdfs.TestFiHFlush FAILED

          Details in build-fi/test/TEST-org.apache.hadoop.hdfs.TestFiHFlush.txt

          2010-08-27 17:31:59,606 INFO datanode.DataNode (FSDataset.java:registerMBean(1757)) - Registered FSDatasetStatusMBean
          2010-08-27 17:31:59,608 WARN datanode.DataNode (DataNode.java:registerMXBean(503)) - Failed to register NameNode MXBean
          javax.management.InstanceAlreadyExistsException: HadoopInfo:type=DataNodeInfo
          at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
          at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
          at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
          at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
          at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
          ...

          2010-08-27 17:32:00,390 INFO datanode.DataNode (FSDataset.java:registerMBean(1757)) - Registered FSDatasetStatusMBean
          2010-08-27 17:32:00,391 WARN datanode.DataNode (DataNode.java:registerMXBean(503)) - Failed to register NameNode MXBean
          javax.management.InstanceAlreadyExistsException: HadoopInfo:type=DataNodeInfo
          at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
          at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
          at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
          at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
          at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
          at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
          ...

          Tsz Wo Nicholas Sze added a comment -

          +1 the new patch looks good.

          I have committed this. Thanks, Erik!

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #376 (See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/376/)
          HDFS-1320. Add LOG.isDebugEnabled() guard for each LOG.debug(..). Contributed by Erik Steffl

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #389 (See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/389/)
          HDFS-1320. Improve the error messages when using hftp://.

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #390 (See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/390/)
          Fix a typo for my last commit: HDFS-1320 should be HDFS-1383 in CHANGES.txt


            People

            • Assignee: Erik Steffl
            • Reporter: Erik Steffl
            • Votes: 0
            • Watchers: 3
