Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.94.2
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed
      Attachments

    1. 6505-0.94.txt (14 kB, Lars Hofhansl)
    2. 6505-trunk.txt (15 kB, Lars Hofhansl)
    3. 6505-v4.txt (15 kB, Lars Hofhansl)
    4. 6505-v3.txt (13 kB, Lars Hofhansl)
    5. 6505-v2.txt (7 kB, Lars Hofhansl)
    6. 6505.txt (5 kB, Lars Hofhansl)

      Issue Links

        Activity

        stack added a comment -

        Fix-up after a bulk move overwrote some 0.94.2 fix versions with 0.95.0 (noticed by Lars Hofhansl).

        Hudson added a comment -

        Integrated in HBase-0.94-security-on-Hadoop-23 #7 (See https://builds.apache.org/job/HBase-0.94-security-on-Hadoop-23/7/)
        HBASE-6505 Allow shared RegionObserver state (Revision 1369515)

        Result = FAILURE
        larsh :
        Files :

        • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
        • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionCoprocessorEnvironment.java
        • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
        • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorInterface.java
        Hudson added a comment -

        Integrated in HBase-0.94-security #48 (See https://builds.apache.org/job/HBase-0.94-security/48/)
        HBASE-6505 Allow shared RegionObserver state (Revision 1369515)

        Result = FAILURE
        larsh :
        Files :

        • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
        • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionCoprocessorEnvironment.java
        • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
        • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorInterface.java
        Hudson added a comment -

        Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #122 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/122/)
        HBASE-6505 Allow shared RegionObserver state (Revision 1369516)

        Result = FAILURE
        larsh :
        Files :

        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionCoprocessorEnvironment.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorInterface.java
        Hudson added a comment -

        Integrated in HBase-TRUNK #3197 (See https://builds.apache.org/job/HBase-TRUNK/3197/)
        HBASE-6505 Allow shared RegionObserver state (Revision 1369516)

        Result = SUCCESS
        larsh :
        Files :

        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionCoprocessorEnvironment.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
        • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorInterface.java
        Hudson added a comment -

        Integrated in HBase-0.94 #383 (See https://builds.apache.org/job/HBase-0.94/383/)
        HBASE-6505 Allow shared RegionObserver state (Revision 1369515)

        Result = SUCCESS
        larsh :
        Files :

        • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java
        • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionCoprocessorEnvironment.java
        • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
        • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorInterface.java
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12539171/6505-trunk.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified tests.

        +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

        -1 findbugs. The patch appears to introduce 9 new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in .

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2509//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2509//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2509//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2509//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2509//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2509//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2509//console

        This message is automatically generated.

        Lars Hofhansl added a comment -

        Thanks for the reviews.

        Lars Hofhansl added a comment -

        Committed to 0.94 and 0.96

        Lars Hofhansl added a comment -

        0.94 patch

        Lars Hofhansl added a comment -

        Final trunk patch.
        (also added a missing JarFile.close())

        Lars Hofhansl added a comment -

        I ran the 3 failed tests from the last HadoopQA run locally. They all pass.
        Going to commit soon.

        Lars Hofhansl added a comment -

        Thanks Ted. A previous patch had specific cleanup logic in subclasses of CoprocessorHost. The latest patch doesn't (it's deferred to the GC now). I left it in because I thought it useful to have a removeEnvironment method that mirrors createEnvironment and can be overridden in subclasses.

        You're right, though, it's not needed... will remove before commit.

        Ted Yu added a comment -

        Looks good overall.

        +  protected boolean removeEnvironment(final CoprocessorEnvironment env) {
        +    return coprocessors.remove(env);
        +  }
        

        The above method is only called once. Can the method be inlined?

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12539161/6505-v4.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified tests.

        +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

        -1 findbugs. The patch appears to introduce 10 new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.client.TestAdmin
        org.apache.hadoop.hbase.regionserver.TestSplitLogWorker
        org.apache.hadoop.hbase.master.TestAssignmentManager

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2508//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2508//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2508//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2508//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2508//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2508//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2508//console

        This message is automatically generated.

        Lars Hofhansl added a comment -

        Same patch, better test.
        It also verifies that the class-specific portion of the sharedDataMap indeed becomes eligible for collection once all Environments for this class have been blacklisted.
        (The test needed to be changed slightly so that it itself would not hold on to the shared data).

        I ran TestClassLoading locally, passes fine.
        Now I think this is good to go... Seriously
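
        A standalone sketch of the kind of GC-eligibility check described in the comment above, assuming the per-class shared structure is a plain ConcurrentMap; the names and the retry loop are illustrative and this is not the actual test from the patch:

        import java.lang.ref.WeakReference;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class SharedDataGcSketch {
          public static void main(String[] args) throws InterruptedException {
            // Stand-in for the per-class shared map; in the real test the last strong
            // reference goes away once every environment of the class is blacklisted/closed.
            ConcurrentMap<String, Object> shared = new ConcurrentHashMap<String, Object>();
            WeakReference<ConcurrentMap<String, Object>> ref =
                new WeakReference<ConcurrentMap<String, Object>>(shared);
            shared = null;                      // drop the last strong reference
            for (int i = 0; i < 10 && ref.get() != null; i++) {
              System.gc();                      // only a hint to the JVM, hence the retry loop
              Thread.sleep(100);
            }
            if (ref.get() != null) {
              throw new AssertionError("shared data map was not collected");
            }
            System.out.println("shared data map became eligible for collection");
          }
        }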

        Lars Hofhansl added a comment -

        Argghhh... Forgot that string constants are interned (and hence are identical); that's why my test did not work correctly.

        Lars Hofhansl added a comment -

        I'm having a hard time proving in a test that all references are indeed correctly released upon blacklisting and the entry is removed from the sharedDataMap. Will investigate a bit more with an additional test.

        Andrew Purtell added a comment -

        If you think that part is overkill and just makes it harder to understand - which might very well be the case, I can easily revert back to the model I had before.

        No objections to trying a bit harder. The code is not more difficult to understand IMO. TaskMonitor uses weak references also so it's not novel for the code base either.

        +1

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12539141/6505-v3.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified tests.

        +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

        -1 findbugs. The patch appears to introduce 10 new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.coprocessor.TestClassLoading

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2505//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2505//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2505//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2505//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2505//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2505//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2505//console

        This message is automatically generated.

        Lars Hofhansl added a comment -

        New patch:

        • uses a static strong-key/weak-value ReferenceMap as the sharedDataMap
        • uses class names as keys
        • has a test that verifies that the sharedData maps are in fact identical

        Lars Hofhansl added a comment -

        Thanks Andy.
        I do have a new patch that uses a ReferenceMap (from the Commons Collections library) with hard keys and weak values, where the key is the class name and the value is the class's map.
        As long as at least one RegionEnvironment holds a reference to its map, the entry will not be removed from the ReferenceMap (and can thus be shared by other RegionEnvironments of the same class), which is exactly what we want here (and much better than reference counting). Once all Envs for a specific class are blacklisted, they will eventually be garbage collected, which then allows the map entry to be collected as well.

        Patch with test coming momentarily.

        If you think that part is overkill and just makes it harder to understand - which might very well be the case, I can easily revert back to the model I had before.
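
        A minimal sketch of this lookup scheme, assuming Commons Collections 3.x; the class and method names are illustrative and not the exact code from the patch:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        import org.apache.commons.collections.map.ReferenceMap;

        public class SharedDataMapSketch {
          // Class name -> per-class shared map. Keys are held strongly, values weakly:
          // once no environment references a class's map any more, its entry can be collected.
          private static final ReferenceMap sharedDataMap =
              new ReferenceMap(ReferenceMap.HARD, ReferenceMap.WEAK);

          // Look up (or lazily create) the shared map for a coprocessor class. Each
          // environment must keep a strong reference to the returned map to keep it alive.
          @SuppressWarnings("unchecked")
          static ConcurrentMap<String, Object> sharedDataFor(String className) {
            synchronized (sharedDataMap) {      // ReferenceMap itself is not thread-safe
              ConcurrentMap<String, Object> data =
                  (ConcurrentMap<String, Object>) sharedDataMap.get(className);
              if (data == null) {
                data = new ConcurrentHashMap<String, Object>();
                sharedDataMap.put(className, data);
              }
              return data;
            }
          }
        }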

        Andrew Purtell added a comment -

        And maybe we'll not get around to making CPs unloadable. I don't want to accidentally put that out there as a design priority. I don't think we have a use case for that. Instead I'd prefer the external coprocessor host route: HBASE-4047. I haven't started implementing due to having other priorities, but that is IMO the way to go for practical and effective resource management in a CP architecture. Unloading is (handwave) a kill -9, with reaping of all resources by the OS.

        Andrew Purtell added a comment - edited

        So I should probably key that map with a String (which would be Class.getName()).

        Makes sense.

        When an Environment is blacklisted, it is only that environment; is it then safe to remove the class's part from the sharedDataMap (since there might be other instances of the same class, but loaded from a different class loader)?

        Thinking on this, perhaps we can defer that until CPs can actually be unloaded. Blacklisting is only a half measure. If I've seen in an RS log that a CP has been blacklisted, my thinking is that it's time to cycle the RS. Who knows what damage has been done. And attempts to limit heap usage of the map don't make much sense IMO for the same reason: the CP could leak objects directly.

        Edit: But if you agree, it would be good to include a comment to this effect near the map.

        Lars Hofhansl added a comment -

        Is it safe to key the sharedDataMap with Class<?>? It's possible that the same class is loaded with different class loaders in different regions, right?
        So I should probably key that map with a String (which would be Class.getName()).

        Then the next interesting thing: when an Environment is blacklisted, it is only that environment; is it then safe to remove the class's part from the sharedDataMap (since there might be other instances of the same class, but loaded from a different class loader)?

        Lars Hofhansl added a comment -

        NP, and yes a singleton map is better than RegionServerServices.

        Andrew Purtell added a comment -

        Yes, that is my fault; I should have thought of that. But rather than RegionServerServices, how about the CoprocessorHosts sharing a singleton map?

        Lars Hofhansl added a comment -

        Actually while trying to write a test for this to ensure that the maps for multiple instances of the same CP class are identical, I found that they are in fact not.

        A new RegionCoprocessorHost is created for each region (see HRegion.<init>), not per RegionServer as I had assumed, defeating the purpose of what I had in mind.

        Back to the drawing board. The sharedDataMap could be made part of RegionServerServices, instead (and thus reside with the RegionServer).

        Lars Hofhansl added a comment -

        Thanks Gary and Andy.
        Was thinking about Object keys too, but came to the same conclusion.
        Will add the braces.

        Andrew Purtell added a comment -

        +1 looks good Lars.

        Gary Helmling added a comment -

        minor nit: in RegionCoprocessorHost.createEnvironment()

        +      if (tmp != null) 
        +        classData = tmp;
        

        should have braces.

        Otherwise +1 from me.

        I was pondering the constraint of having the shared data map be keyed by String and whether it would be better to allow Object keys, but that could lead to attempts to key data by byte[] with unexpected results, so constraining to Strings might be good.
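
        A quick illustration of the byte[]-key pitfall mentioned above (plain Java, not from the patch): arrays inherit identity-based hashCode()/equals(), so two byte[] keys with equal content do not find each other in a map.

        import java.util.HashMap;
        import java.util.Map;

        public class ByteArrayKeySketch {
          public static void main(String[] args) {
            Map<Object, String> shared = new HashMap<Object, String>();
            shared.put("row1".getBytes(), "some shared state");

            // A second array with the same content is a different key, so the lookup misses.
            System.out.println(shared.get("row1".getBytes()));   // null

            // String keys compare by content, which is the safer constraint.
            shared.put("row1", "some shared state");
            System.out.println(shared.get("row1"));              // some shared state
          }
        }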

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12539090/6505-v2.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        -1 javac. The applied patch generated 5 javac compiler warnings (more than the trunk's current 4 warnings).

        -1 findbugs. The patch appears to introduce 10 new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.hbase.master.TestAssignmentManager
        org.apache.hadoop.hbase.security.access.TestAccessController

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2499//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2499//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2499//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2499//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2499//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/2499//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2499//console

        This message is automatically generated.

        Lars Hofhansl added a comment -

        The sharedData map resides per CP class per RegionServer (not per region).

        I would not be able to come up with any good limit here, other than saying: "It depends on what the coprocessor is trying to do".
        It should be up to the coprocessor to be a good citizen (this goes back to the discussion that coprocessors are an HBase extension mechanism).

        Ted Yu added a comment -

        Since the sharedData map resides per region, can we introduce some mechanism to limit the amount of heap these maps consume?

        Lars Hofhansl added a comment -

        Looking for some reviewers

        Lars Hofhansl added a comment -

        Updated patch.

        • performs cleanup when a coprocessor is blacklisted
        • changed shared map name to "sharedData"
        • changed the sharedData API to ConcurrentMap (rather than just Map) to set the right expectation for a CP using it (i.e. no extra synchronization needed, etc.); a usage sketch follows below

        Should be good to go.
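
        As a usage illustration (not part of the patch): a hypothetical RegionObserver sharing one counter among all of its per-region instances on a RegionServer, assuming the accessor exposed by RegionCoprocessorEnvironment is getSharedData() and 0.94-era observer signatures:

        import java.io.IOException;
        import java.util.concurrent.ConcurrentMap;
        import java.util.concurrent.atomic.AtomicLong;

        import org.apache.hadoop.hbase.client.Put;
        import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
        import org.apache.hadoop.hbase.coprocessor.CoprocessorEnvironment;
        import org.apache.hadoop.hbase.coprocessor.ObserverContext;
        import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
        import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

        public class SharedCounterObserver extends BaseRegionObserver {
          private static final String KEY = "putCount";   // illustrative key name

          @Override
          public void start(CoprocessorEnvironment e) throws IOException {
            ConcurrentMap<String, Object> shared =
                ((RegionCoprocessorEnvironment) e).getSharedData();
            // putIfAbsent guards against concurrent initialization by several region instances.
            shared.putIfAbsent(KEY, new AtomicLong(0));
          }

          @Override
          public void prePut(ObserverContext<RegionCoprocessorEnvironment> c, Put put,
              WALEdit edit, boolean writeToWAL) throws IOException {
            ConcurrentMap<String, Object> shared = c.getEnvironment().getSharedData();
            ((AtomicLong) shared.get(KEY)).incrementAndGet();   // visible to all regions
          }
        }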

        Lars Hofhansl added a comment -

        Happy landings

        In HBASE-6496 you mention that the CoprocessorHost should clean up when the CP can't (because it threw an exception and is blacklisted). I'll see if I can add this here.

        I'm open to a better name than "classData" for the shared data structure.

        Andrew Purtell added a comment -

        Patch looks good to me but I could only skim because I have to run to board a flight now.

        Andrew Purtell added a comment -

        Generally, what can one do with statics in coprocessors?

        I think that should be discouraged.

        Table CPs are loaded by as many classloaders as there are different jar files to load them from, right?

        When HBASE-6308 goes in we can build on it as needed. I guess I should add a test for HBASE-6308 and get it in.

        Lars Hofhansl added a comment -

        Might just be as simple as this.
        Is the putIfAbsent I did necessary to guard against concurrent initialization by multiple instances of the same class?

        Generally, what can one do with statics in coprocessors? Table CPs are loaded by as many classloaders as there are different jar files to load them from, right? So statics would not work.
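
        For context, the putIfAbsent idiom referred to here, as a standalone sketch (names are illustrative): when two coprocessor instances race to seed the same key, exactly one value wins and both callers end up using it.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;
        import java.util.concurrent.atomic.AtomicLong;

        public class PutIfAbsentGuardSketch {
          // Returns the one shared counter for the key, creating it only if no other
          // instance got there first.
          static AtomicLong counterFor(ConcurrentMap<String, Object> shared, String key) {
            AtomicLong fresh = new AtomicLong(0);
            Object existing = shared.putIfAbsent(key, fresh);
            return existing != null ? (AtomicLong) existing : fresh;
          }

          public static void main(String[] args) {
            ConcurrentMap<String, Object> shared = new ConcurrentHashMap<String, Object>();
            AtomicLong a = counterFor(shared, "putCount");
            AtomicLong b = counterFor(shared, "putCount");
            System.out.println(a == b);   // true: both callers share the same counter
          }
        }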


          People

          • Assignee:
            Lars Hofhansl
          • Reporter:
            Lars Hofhansl
          • Votes:
            0
          • Watchers:
            8

            Dates

            • Created:
              Updated:
              Resolved:

              Development