HBase / HBASE-10029

Proxy created by HFileSystem#createReorderingProxy() should properly close when connecting to HA namenode

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.96.0
    • Fix Version/s: 0.98.0
    • Component/s: hadoop2
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      Proxy to HA namenode with QJM created from org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider should close properly.

      Mail Archive

      13/11/26 09:55:55 ERROR ipc.RPC: RPC.stopProxy called on non proxy.
      java.lang.IllegalArgumentException: object is not an instance of declaring class
              at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
              at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
              at java.lang.reflect.Method.invoke(Method.java:597)
              at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
              at $Proxy16.close(Unknown Source)
              at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:621)
              at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
              at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
              at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
              at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
              at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
              at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
      13/11/26 09:55:55 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy16
      org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy16
              at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:639)
              at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
              at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
              at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
              at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
              at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
              at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
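The shutdown-hook failure above can be reduced to a small, self-contained sketch (hypothetical names, not the actual HFileSystem code): a dynamic proxy that lists Closeable among its interfaces, but whose InvocationHandler forwards every call reflectively to a delegate that does not implement Closeable. The cast to Closeable succeeds, yet Method.invoke() then throws the "object is not an instance of declaring class" IllegalArgumentException seen in the log.

```java
import java.io.Closeable;
import java.io.IOException;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Reduced sketch of the failure mode (hypothetical names): the proxy lists
// Closeable among its interfaces, but the handler forwards every call to a
// delegate that does not implement Closeable.
public class ProxyCloseBug {
  public interface Protocol { String ping(); }

  public static class Delegate implements Protocol {   // note: no Closeable
    public String ping() { return "pong"; }
  }

  public static Protocol wrap(final Protocol delegate) {
    return (Protocol) Proxy.newProxyInstance(
        Protocol.class.getClassLoader(),
        new Class<?>[]{Protocol.class, Closeable.class},
        new InvocationHandler() {
          public Object invoke(Object p, Method m, Object[] args) throws Throwable {
            // Fine for ping(); fails for close(), whose declaring class
            // (java.io.Closeable) is not implemented by the delegate.
            return m.invoke(delegate, args);
          }
        });
  }

  public static String callClose(Protocol p) {
    try {
      ((Closeable) p).close();   // cast succeeds: the proxy class lists Closeable
      return "closed";
    } catch (IllegalArgumentException e) {
      return "IllegalArgumentException";
    } catch (IOException e) {
      return "IOException";
    }
  }

  public static void main(String[] args) {
    Protocol p = wrap(new Delegate());
    System.out.println(p.ping());       // forwards fine
    System.out.println(callClose(p));   // IllegalArgumentException inside invoke
  }
}
```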
      
      Attachments

      1. 10029-v1.txt
        2 kB
        Ted Yu
      2. 10029-hbase-hadoop-master-fphd9.out
        45 kB
        Ted Yu
      3. 10029-v2.txt
        2 kB
        Ted Yu
      4. 10029-v3.txt
        2 kB
        Ted Yu
      5. 10029.addendum
        0.8 kB
        Ted Yu

        Issue Links

          Activity

          Ted Yu added a comment -

          Looking at FailoverProxyProvider doesn't reveal the whole picture.

          We should consider whether the proxy it returns implements Closeable.

          Jimmy Xiang added a comment -

          Closeable in branch-1 has one version: https://github.com/apache/hadoop-common/blob/branch-1/src/core/org/apache/hadoop/io/Closeable.java. If we support Hadoop 1.0.4 and later only, it is safe to switch from Closeable to java.io.Closeable. The problem in this issue is that ConfiguredFailoverProxyProvider (https://github.com/apache/hadoop-common/blob/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/FailoverProxyProvider.java) implements java.io.Closeable instead of org.apache.hadoop.io.Closeable, which has been deprecated for quite some time.

          Ted Yu added a comment (edited) -

          Just saw the new comments.

          > Did this get committed w/o consideration

          Integration happened at 11:42; Jimmy's comment came at 12:01.

          I did consider his suggestion after seeing it.
          There are two aspects:
          1. org.apache.hadoop.io.Closeable may not be deprecated in all Hadoop releases.
          2. The following check (line 620 in stopProxy()) should pass, since org.apache.hadoop.io.Closeable extends java.io.Closeable:

                if (proxy instanceof Closeable) {
          

          So I am not clear why changing Closeable to java.io.Closeable would solve the problem.
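The instanceof point can be verified in isolation: a proxy created against a sub-interface of java.io.Closeable (a stand-in below for org.apache.hadoop.io.Closeable, which extends java.io.Closeable as noted above) does satisfy the `proxy instanceof Closeable` check that stopProxy() performs. A minimal sketch with hypothetical names:

```java
import java.io.Closeable;
import java.lang.reflect.Proxy;

public class SubInterfaceDemo {
  // Stand-in for org.apache.hadoop.io.Closeable, which extends
  // java.io.Closeable in the Hadoop releases under discussion.
  public interface HadoopCloseable extends Closeable {}

  public static boolean passesStopProxyCheck() {
    Object p = Proxy.newProxyInstance(
        SubInterfaceDemo.class.getClassLoader(),
        new Class<?>[]{HadoopCloseable.class},
        (proxy, method, args) -> null);   // handler is irrelevant to instanceof
    return p instanceof Closeable;        // the check stopProxy() performs
  }

  public static void main(String[] args) {
    System.out.println(passesStopProxyCheck());   // true
  }
}
```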

          stack added a comment -

          I will revert tomorrow if the less-hacky alternative suggested above remains unaddressed.

          stack added a comment -

          Did this get committed w/o consideration of Jimmy Xiang's reasonable suggestion, especially since it would let us avoid yet more reflection hackery?

          Andrew Purtell added a comment -

          Shouldn't this issue be resolved?

          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #853 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/853/)
          HBASE-10029 Addendum checks for args against null (tedyu: rev 1545796)

          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java
            HBASE-10029 Proxy created by HFileSystem#createReorderingProxy() should properly close when connecting to HA namenode (tedyu: rev 1545792)
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java
          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK #4699 (See https://builds.apache.org/job/HBase-TRUNK/4699/)
          HBASE-10029 Addendum checks for args against null (tedyu: rev 1545796)

          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java
            HBASE-10029 Proxy created by HFileSystem#createReorderingProxy() should properly close when connecting to HA namenode (tedyu: rev 1545792)
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java
          Ted Yu added a comment -

          @Nicolas:
          The statement w.r.t. ConfiguredFailoverProxyProvider was not accurate. I changed it in the description.

          Jimmy Xiang added a comment -

          Since we now support hdfs-1 and nothing older, I was wondering whether we can use java.io.Closeable instead of org.apache.hadoop.io.Closeable in HFileSystem. If this works, it would be a much cleaner fix. Just a suggestion.

          Tsz Wo Nicholas Sze added a comment -

          I cannot quite follow this problem:

          > ... ConfiguredFailoverProxyProvider should implement Closeable.

          ConfiguredFailoverProxyProvider<T> implements FailoverProxyProvider<T>, where FailoverProxyProvider<T> extends Closeable. So doesn't ConfiguredFailoverProxyProvider actually implement Closeable?

          For the "RPC.stopProxy called on non proxy" error, could you apply the following to see what the proxy class is?

          Index: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
          ===================================================================
          --- hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java	(revision 1545796)
          +++ hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java	(working copy)
          @@ -634,7 +634,7 @@
               } catch (IOException e) {
                 LOG.error("Closing proxy or invocation handler caused exception", e);
               } catch (IllegalArgumentException e) {
          -      LOG.error("RPC.stopProxy called on non proxy.", e);
          +      LOG.error("RPC.stopProxy called on non proxy: class=" + proxy.getClass(), e);
               }
               
               // If you see this error on a mock object in a unit test you're
          
          Ted Yu added a comment -

          Thanks for the reminder.
          Here is the addendum.

          Jimmy Xiang added a comment -

          Could args be null?

          Ted Yu added a comment -

          I ran TestHFileOutputFormat once locally and it passed.
          The test failure appeared in build #7999 as well, so it is unrelated to my patch.

          I plan to integrate to trunk first. Once build is green, I will integrate to 0.96 as well.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12615842/10029-v2.txt
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 2 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 site. The patch appears to cause mvn site goal to fail.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7998//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7998//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7998//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7998//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7998//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7998//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7998//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7998//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7998//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7998//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7998//console

          This message is automatically generated.

          Nicolas Liochon added a comment -

          > I suggest we add test case for HA Namenode in separate issue.

          I agree.

          + if ("close".equals(method.getName()) && args.length == 0) {

          Nit: reversing the test to if (args.length == 0 && "close".equals(method.getName())) {
          should be more efficient (it saves a dereference).

          +1
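There is a further wrinkle with the args check, and presumably why the addendum guards args against null: for a zero-argument method the JDK passes args == null (not an empty array) to InvocationHandler.invoke, so args.length alone would throw a NullPointerException. A minimal sketch (hypothetical names):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class NullArgsDemo {
  // Returns true if the runtime passed args == null for a zero-arg method.
  public static boolean zeroArgCallPassesNull() {
    final boolean[] sawNull = {false};
    Runnable r = (Runnable) Proxy.newProxyInstance(
        NullArgsDemo.class.getClassLoader(),
        new Class<?>[]{Runnable.class},
        new InvocationHandler() {
          public Object invoke(Object p, Method m, Object[] args) {
            sawNull[0] = (args == null);   // null, not new Object[0]
            return null;
          }
        });
    r.run();   // zero-argument call through the proxy
    return sawNull[0];
  }

  public static void main(String[] args) {
    // Here "close".equals(m.getName()) && args.length == 0 would NPE;
    // a null check on args must come first.
    System.out.println(zeroArgCallPassesNull());   // true
  }
}
```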

          Ted Yu added a comment -

          > We don't know how to run the mini cluster with the namenode ha, right?

          The above requires non-trivial effort.
          The MiniDFSCluster in hadoop-2 depends on a MiniDFSNNTopology parameter to tell whether the cluster has HA.
          However, MiniDFSNNTopology doesn't exist in hadoop-1.

          The first step would be to enrich the shim layer, possibly through hbase-hadoop-compat/src/test/java/org/apache/hadoop/hbase/HadoopShims.java, so that an HA config can be specified.

          I suggest we add test case for HA Namenode in separate issue.

          Ted Yu added a comment -

          Patch v2 adds a check for the args length.

          Ted Yu added a comment -

          Master log from Henry, where the last line shows that RPC.stopProxy() was called and there was no exception.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12615795/10029-v1.txt
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 2 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 site. The patch appears to cause mvn site goal to fail.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7994//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7994//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7994//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7994//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7994//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7994//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7994//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7994//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7994//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7994//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7994//console

          This message is automatically generated.

          Nicolas Liochon added a comment -

          I think that should work. We should check for the number of arguments as well (and that will save some String comparisons).
          It would be great to have a test case... We don't know how to run the mini cluster with the namenode ha, right?

          Ted Yu added a comment -

          Reopen.

          If my suggestion is not accepted, we can close again.

          Ted Yu added a comment (edited) -

          I wish there were something we could do on the HBase side ...
          We should check whether the return value from Proxy.newProxyInstance() really implements Closeable.
          If we call close() on the return value from Proxy.newProxyInstance() and catch the exception, we should be able to tell.
          If the check fails, we can either log a big warning or bail out.

          This would help people track down the root cause faster than seeing the exception at server shutdown.

          Henry Hung added a comment -

          Ted Yu, is it because I'm using JDK 1.6.0_37?

          Liang Xie added a comment -

          Ted Yu, do you think it's an HBase issue rather than an HDFS issue? If so, you can roll back this JIRA's "Status".

          Ted Yu added a comment (edited) -

          From createReorderingProxy():

              return (ClientProtocol) Proxy.newProxyInstance
                  (cp.getClass().getClassLoader(),
                      new Class[]{ClientProtocol.class, Closeable.class},
          

          What I don't understand is why IllegalArgumentException wasn't thrown from Proxy.newProxyInstance() in createReorderingProxy() if the proxy doesn't implement Closeable.
          See line 369 here:
          http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/lang/reflect/Proxy.java#Proxy.getProxyClass%28java.lang.ClassLoader%2Cjava.lang.Class%5B%5D%29
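One possible resolution of this puzzle: the IllegalArgumentException in Proxy.getProxyClass() guards the interface array itself (each element must be a visible interface, with no duplicates); it says nothing about what the handler will later do with close(). The generated proxy class genuinely implements every listed interface, so creation succeeds and instanceof passes; the failure only surfaces when close() is dispatched. A minimal sketch (hypothetical names):

```java
import java.io.Closeable;
import java.lang.reflect.Proxy;

public class ProxyCreationDemo {
  public interface Protocol {}

  // newProxyInstance() validates only the interface array; the generated
  // proxy class implements every listed interface, so no
  // IllegalArgumentException is thrown at creation time regardless of what
  // the handler later does with close().
  public static boolean proxyImplementsCloseable() {
    Object p = Proxy.newProxyInstance(
        ProxyCreationDemo.class.getClassLoader(),
        new Class<?>[]{Protocol.class, Closeable.class},
        (proxy, method, args) -> null);   // handler never dispatches anywhere
    return p instanceof Closeable;
  }

  public static void main(String[] args) {
    System.out.println(proxyImplementsCloseable());   // true
  }
}
```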

          Liang Xie added a comment -

          let me try

          Henry Hung added a comment -

          Sorry, but how do I close this issue?

          Henry Hung added a comment -

          OK, I already created new issue HDFS-5566.

          Liang Xie added a comment -

          Hmmm... it seems this is not an HBase issue but is related to the HDFS project. Please close this JIRA and create a new HDFS issue if you'd like.
          PS: please give more detailed info, e.g. HDFS version and related config.


            People

            • Assignee:
              Ted Yu
              Reporter:
              Henry Hung
            • Votes:
              0 Vote for this issue
              Watchers:
              12 Start watching this issue

              Dates

              • Created:
                Updated:
                Resolved:

                Development