HBASE-11550

Custom value for BUCKET_CACHE_BUCKETS_KEY should be sorted

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Trivial
    • Resolution: Fixed
    • Affects Version/s: 0.99.0, 0.98.4, 0.98.5
    • Fix Version/s: 0.99.0, 2.0.0, 0.98.6
    • Component/s: None
    • Labels: None

      Description

      Users can pass bucket sizes through the "hbase.bucketcache.bucket.sizes" config entry.
      The sizes are supposed to be in increasing order. Validation should be added in CacheConfig#getL2().
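      As an illustration only (not the committed patch), a minimal sketch of parsing the custom sizes and sorting them ascending, assuming the standard Hadoop Configuration API; the method name is hypothetical:

          import java.util.Arrays;
          import org.apache.hadoop.conf.Configuration;

          // Hypothetical sketch: parse "hbase.bucketcache.bucket.sizes" and sort it
          // ascending so the allocator always picks the smallest bucket that fits.
          static int[] parseAndSortBucketSizes(Configuration conf) {
            String[] raw = conf.getStrings("hbase.bucketcache.bucket.sizes");
            if (raw == null) {
              return null; // caller falls back to the allocator's built-in defaults
            }
            int[] sizes = new int[raw.length];
            for (int i = 0; i < raw.length; i++) {
              sizes[i] = Integer.parseInt(raw[i].trim());
            }
            Arrays.sort(sizes); // increasing order
            return sizes;
          }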

      Attachments

      1. HBASE-11550-v4-0.98.patch
        5 kB
        Nick Dimiduk
      2. HBASE-11550-v4.patch
        6 kB
        Gustavo Anatoly
      3. HBASE-11550-v3.patch
        15 kB
        Gustavo Anatoly
      4. HBASE-11550-v2.patch
        15 kB
        Gustavo Anatoly
      5. HBASE-11550-v1.patch
        14 kB
        Gustavo Anatoly
      6. HBASE-11550.patch
        6 kB
        Gustavo Anatoly


          Activity

          Enis Soztutar added a comment -

          Closing this issue after 0.99.0 release.

          stack added a comment -

          I don't think it fair to the new contributor to have their contributions get caught up in the project politicking.

           Gustavo Anatoly To be clear, ill-defined JIRAs consume the attention of those who are trying to follow along. Lack of clarity in the definition means we need to keep an active eye out. When the issue is trivial, this is particularly irksome.

           This issue is a good example. It starts out without provenance – does the issue come from code-reading, from testing, from a user-reported issue, from an attempt at setting bucket sizes in configs? – and it has 'shoulds' and 'supposed to' in the subject and original description but no justification as to why. Nick, a third party altogether, has to do detective work to elicit that there is an actual problem here.

           For another example, see the follow-on, filed again by Ted and assigned to you, HBASE-11743. Look at it. It says this issue, HBASE-11550, makes it "...such that there is no wastage in bucket allocation". But Nick resolves this issue with the comment that your patch ensures the default and user-supplied configs align, punting the wastage question... to, he 'guesses', HBASE-11743. There is a lack of alignment here. The mess that is this issue looks likely to repeat over in HBASE-11743.

           To avoid any crossfire in the future, I'd suggest filing your own issues, especially if you are trying to build yourself a bit of a track record. Also, work on non-trivial issues, as said before. You will find it easier to get reviewers if the issue is non-trivial.

           Andrew Purtell added a comment - edited

          I don't think it fair to the new contributor to have their contributions get caught up in the project politicking.

           Hmm, I'm sure there is universal agreement on that point, but maybe not agreement that the only thing happening here on this issue is political in nature. There is a certain amount of churn on our JIRA caused by the filing of (nearly) content-free issues, or considerations of trivial import. Some are more responsible for that churn than others. Some are more annoyed by that churn than others. A report filtering for issues closed as Invalid or Cannot Reproduce will provide an indication of both. It is a problem occasionally worth a complaint. When it happens it is indeed a bad example, and it is valid to discuss whether the JIRA is helpful or just a distraction. Disagreement on that point is fine; that is not politics.

          Nick Dimiduk added a comment -

           stack Trivial is fine. I think it's important that the semantics of the allocator placement policy be identical, but the change itself is trivial. I didn't communicate my shift in opinion after researching allocation strategies. I believe Ted's point about roundUpToBucketSizeInfo is quite valid; it makes it clear that this allocator implementation does indeed depend on the order of target sizes.

          I don't think it fair to the new contributor to have their contributions get caught up in the project politicking. I'm happy to help see a valuable patch through, even if the circumstances around it are not ideal.

           More valid tests are better than fewer. I don't know what the new issue's test might look like. For a cache of configured size and shape and a given sequence of blocks, assert that they're placed such that they all fit – something like this? I'm much more interested in thinking through what constitutes a "better" or "worse" criterion for HBase and whether there's a different allocation strategy that fits HBase access patterns "better".
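
           For instance, a rough sketch of such a check (pure Java, names hypothetical, not an existing test): place each block with the same first-fit scan roundUpToBucketSizeInfo uses and total the slack.

               // Hypothetical helper: mirrors roundUpToBucketSizeInfo's first-fit scan;
               // with an ascending array this is the smallest bucket that fits.
               static long wastedBytes(int[] bucketSizes, int[] blockSizes) {
                 long wasted = 0;
                 for (int block : blockSizes) {
                   for (int bucket : bucketSizes) {
                     if (block <= bucket) {
                       wasted += bucket - block; // bytes left unused in the chosen bucket
                       break;
                     }
                   }
                 }
                 return wasted;
               }

           A test could then assert that wastedBytes for the sorted configuration is no larger than for the same sizes left unsorted, where the first-fit scan can land on an oversized bucket.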

          This looks like a decent starting point http://www.memorymanagement.org/mmref/index.html

          stack added a comment -

           Marking trivial again. I thought we were going to get some questions answered in this JIRA ('wastage'), but it turns out not. Fix the priority if you think I have it wrong.

           Filing an issue to add a test for a patch already committed is an elision that complicates accounting, wastes a JIRA, and teaches a new committer dodgy practice.

          Nick Dimiduk added a comment -

          I committed v4 of this patch because it ensures the placement semantics are identical for the default setting and user-provided value, something I inadvertently missed in HBASE-10641.

          Yes, I think a test would be good. Wastage will depend on the size of buckets, the placement strategy used, and the sequence of block sizes inserted. The difficulty with a test is that memory allocation strategies are not black-and-white. It could well be that there's a better strategy for most of our users than the current greedy approach. I guess there's now HBASE-11743 that will look at this.

          stack added a comment -

           Nick Dimiduk Why commit without the test? You start out with "... I'm not convinced the ticket is valid." – something a test would clear up – and then "[c]an you add a test exercising the lack of sort as a problem?" There is an assertion that wastage is being fixed by this patch. I'd be interested in assertions in code that this is actually the case. I asked, "Add tests to assert wastage." I'd also like to know by how much.

           I have been looking at this stuff of late and notice that bucketcache has a 'cost'. I'm interested in figuring out where it comes from. This committed patch, as is, gives no insight when it seems like it could have some bearing.

          Hudson added a comment -

          FAILURE: Integrated in HBase-TRUNK #5396 (See https://builds.apache.org/job/HBase-TRUNK/5396/)
          HBASE-11550 Custom value for BUCKET_CACHE_BUCKETS_KEY should be sorted (Gustavo Anatoly) (ndimiduk: rev c60cfbc999e6ae2222064632e297fd361c8c16e4)

          • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java
          • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
          • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
          Hudson added a comment -

          SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #421 (See https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/421/)
          HBASE-11550 Custom value for BUCKET_CACHE_BUCKETS_KEY should be sorted (Gustavo Anatoly) (ndimiduk: rev 7e1e818bbc0bed59c12fbd8c003eaed702cf16ad)

          • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java
          • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
          • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
          Hudson added a comment -

          FAILURE: Integrated in HBase-1.0 #100 (See https://builds.apache.org/job/HBase-1.0/100/)
          HBASE-11550 Custom value for BUCKET_CACHE_BUCKETS_KEY should be sorted (Gustavo Anatoly) (ndimiduk: rev cd59a023c4f84ba458b9665e1a12add2f702ad96)

          • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java
          • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
          • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
          Hudson added a comment -

          SUCCESS: Integrated in HBase-0.98 #447 (See https://builds.apache.org/job/HBase-0.98/447/)
          HBASE-11550 Custom value for BUCKET_CACHE_BUCKETS_KEY should be sorted (Gustavo Anatoly) (ndimiduk: rev 7e1e818bbc0bed59c12fbd8c003eaed702cf16ad)

          • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java
          • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
          • hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
          Nick Dimiduk added a comment -

          Thanks for the contribution Gustavo Anatoly!

          Nick Dimiduk added a comment -

          Pushed to master. Cherry-picked to branch-1 with minimal conflict. Applied manually to 0.98, attaching patch.

          Gustavo Anatoly added a comment -

           I don't have objections, Nick Dimiduk.
           Thanks, everyone. I'm happy to have helped a little bit.

          Nick Dimiduk added a comment -

          Patch v4 is great, +1. I'll commit this in a couple hours unless there's an objection. Thanks Gustavo Anatoly.

          We can leave tests for detecting wasted space as a separate ticket.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12661511/HBASE-11550-v4.patch
          against trunk revision .
          ATTACHMENT ID: 12661511

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          +1 site. The mvn site goal succeeds with this patch.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.master.TestRollingRestart
          org.apache.hadoop.hbase.TestRegionRebalancing

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/10415//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10415//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10415//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10415//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10415//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10415//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10415//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10415//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10415//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10415//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10415//console

          This message is automatically generated.

          Gustavo Anatoly added a comment -

          Gentlemen,

          New patch (v4) has been attached.

          Thanks.

          Gustavo Anatoly added a comment -

          Hello Nick,

          I will convert again to int[].

          Thanks.

          Nick Dimiduk added a comment -

          Most of patch v3 is converting the int[] to List<Integer>; I see no value in this change. Please undo this part of the refactor. Otherwise, the refactor to remove bucketSizes local variable from BucketCache and sorting the custom config are good changes. I'm happy to commit such a patch.

          One more time Gustavo Anatoly? Thanks a lot for sticking with it!

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12661445/HBASE-11550-v3.patch
          against trunk revision .
          ATTACHMENT ID: 12661445

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified tests.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          +1 site. The mvn site goal succeeds with this patch.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.TestRegionRebalancing

          -1 core zombie tests. There are 1 zombie test(s):

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/10412//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10412//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10412//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10412//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10412//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10412//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10412//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10412//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10412//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10412//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10412//console

          This message is automatically generated.

          Ted Yu added a comment -
          -    void reconfigure(int sizeIndex, int[] bucketSizes, long bucketCapacity) {
          -      Preconditions.checkElementIndex(sizeIndex, bucketSizes.length);
          +    void reconfigure(int sizeIndex, List<Integer> bucketSizes, long bucketCapacity) {
          

          Consider using http://docs.oracle.com/javase/7/docs/api/java/util/Arrays.html#sort(int[]) so that the number of changes in the patch comes down.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12661428/HBASE-11550-v2.patch
          against trunk revision .
          ATTACHMENT ID: 12661428

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified tests.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          +1 site. The mvn site goal succeeds with this patch.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/10411//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10411//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10411//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10411//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10411//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10411//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10411//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10411//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10411//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10411//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10411//console

          This message is automatically generated.

          Gustavo Anatoly added a comment -

          Gentlemen,

           The new patch was attached with a sorted array instead of reverse order, to finish this issue. As for the wastage tests, I will need more time to complete them.

          Thanks for the patience.

          Gustavo Anatoly added a comment -

          Hello, stack

           Sorry for the delay in delivery; I have been a little busy lately. But I'm working on the patch at this moment, so I'll finish the task in the next few days.

          Thank you for your attention.

          stack added a comment -

          Gustavo Anatoly Any luck? Want a hand? Thanks.

          Gustavo Anatoly added a comment -

          Hello, stack.

           I will do a comparison both ways, before and after my patch.

          Thanks.

          stack added a comment -

           Changing priority. If there is wastage, the fix is important.

           Regarding a test: assert that we are giving out wrong-sized buckets before your patch and that we are not after your patch?
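
           As a sketch of that assertion (helper and values are illustrative only, mirroring roundUpToBucketSizeInfo's first-fit scan):

               // Hypothetical helper: the first configured size >= blockSize wins,
               // exactly as roundUpToBucketSizeInfo scans the array.
               static int chooseBucket(int[] bucketSizes, int blockSize) {
                 for (int size : bucketSizes) {
                   if (blockSize <= size) return size;
                 }
                 return -1; // no bucket large enough
               }

               // before the patch (unsorted config): chooseBucket(new int[]{66560, 33792, 9216}, 5000) == 66560
               // after the patch (sorted config):    chooseBucket(new int[]{9216, 33792, 66560}, 5000) == 9216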

          Gustavo Anatoly added a comment -

          Gentlemen, thanks for review.

          Hello, Nick Dimiduk about your question:

          Why switch to List everywhere?

          We have the follow code:

          public BucketSizeInfo roundUpToBucketSizeInfo(int blockSize) {
              for (int i = 0; i < bucketSizes.size(); ++i)
                if (blockSize <= bucketSizes.get(i))
                  return bucketSizeInfos[i];
              return null;
            }
          

           Thus, if we have a reversed bucketSizes, it will search the largest bucketSizes(i) at the top of the array first. This sounds obvious, but the intention is to increase the probability of finding the bucketSizeInfo in the largest bucketSizes first. It's a pretty simple optimization.

           But as you and stack suggest, I can add some tests to verify the sort problem and another test for wastage.

           Hello, Ted Yu, thanks for the comment. I will write a test to verify the condition you mentioned.

          Thanks, guys.

          stack added a comment -

          This would result in wasted storage.

          Prove it. Add tests to assert wastage.

          Ted Yu added a comment -

           Using Nick's sample change above, the first item is 64 * 1024 + 1024.
           In BucketAllocator#roundUpToBucketSizeInfo():

              for (int i = 0; i < bucketSizes.length; ++i)
                if (blockSize <= bucketSizes[i])
                  return bucketSizeInfos[i];
          

          If we search for a bucket which is supposed to fit 32 * 1024 + 1024, the bucket for 64 * 1024 + 1024 would be returned. This would result in wasted storage.

          +    Collections.sort(this.bucketSizes, Collections.reverseOrder());
          

          Order should not be reversed.
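
           To make the waste in this example concrete (illustration only):

               int blockSize  = 32 * 1024 + 1024;       // 33792 bytes requested
               int bucketSize = 64 * 1024 + 1024;       // 66560-byte bucket returned by the unsorted scan
               int wasted     = bucketSize - blockSize; // 32768 bytes, roughly 49% of the bucket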

          Nick Dimiduk added a comment -

          Patch is looking good, but I'm not convinced the ticket is valid.

          From the description

          The sizes are supposed to be in increasing order.

          and yet

          +    Collections.sort(this.bucketSizes, Collections.reverseOrder());
          

          Reverse is opposite the description. Which is correct?

          nit: Why switch to List everywhere?

          Can you add a test exercising the lack of sort as a problem? Something that fails on master but passes with your patch. I tried this naive change but nothing fails. Skimming the code, I would expect the cache to be underutilized – if this is a problem at all.

          diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java
          index c526834..38a9a43 100644
          --- a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java
          +++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java
          @@ -57,8 +57,8 @@ public class TestBucketCache {
               return Arrays.asList(new Object[][] {
                 { 8192, null }, // TODO: why is 8k the default blocksize for these tests?
                 { 16 * 1024, new int[] {
          -        2 * 1024 + 1024, 4 * 1024 + 1024, 8 * 1024 + 1024, 16 * 1024 + 1024,
          -        28 * 1024 + 1024, 32 * 1024 + 1024, 64 * 1024 + 1024, 96 * 1024 + 1024,
          +        64 * 1024 + 1024, 32 * 1024 + 1024, 8 * 1024 + 1024, 16 * 1024 + 1024,
          +        28 * 1024 + 1024, 4 * 1024 + 1024, 2 * 1024 + 1024, 96 * 1024 + 1024,
                   128 * 1024 + 1024 } }
               });
             }
          
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12657596/HBASE-11550-v1.patch
          against trunk revision .
          ATTACHMENT ID: 12657596

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          +1 site. The mvn site goal succeeds with this patch.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.TestRegionRebalancing

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/10177//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/10177//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/10177//console

          This message is automatically generated.

          Gustavo Anatoly added a comment -

          Hi, Nick Dimiduk.

          Could you please review the new patch?

          Thanks

          Gustavo Anatoly added a comment -

          Thanks Nick Dimiduk, for your attention.

           I will prepare a new draft patch to resubmit. I agree – it makes no sense to keep the variable; it's a minor point. I hadn't seen that possibility of refactoring, transferring the responsibility to BucketCache.retrieveFromFile.

          Thanks again.

          Nick Dimiduk added a comment -

          Thanks for the patch Gustavo Anatoly.

          If the list of bucket sizes is required to be of increasing capacity, I would prefer this be enforced by BucketAllocator in its constructor. That is, don't do validation, just sort the array before it's used. As far as I can tell, BucketAllocator is the only place where this configuration is consumed. The only reason BucketCache keeps around a local reference to this configuration value is to make the value available when hydrating a persisted cache. Looking at the code again, I think the local variable can be dropped entirely. Ie, pass it through as a parameter to BucketCache.retrieveFromFile.
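
           A minimal sketch of that shape (hypothetical names, not the actual BucketAllocator constructor):

               // Sort a defensive copy inside the allocator, so callers never need to
               // validate or pre-sort the configured sizes.
               private final int[] bucketSizes;

               AllocatorSketch(int[] configuredSizes, int[] defaultSizes) {
                 int[] sizes = (configuredSizes == null) ? defaultSizes : configuredSizes;
                 this.bucketSizes = java.util.Arrays.copyOf(sizes, sizes.length);
                 java.util.Arrays.sort(this.bucketSizes); // ascending: smallest fitting bucket wins
               }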

          Gustavo Anatoly added a comment -

          Gentlemen watchers,

           Could one of you please review the patch?

          Andrew Purtell added a comment -

          I agree with Stack. Resolve this as 'Not a Problem'?

          stack added a comment -

           Trivial. Few will notice these can be changed at all; rarer still will be those who actually do.

          Ted, how do you come up with such trivial stuff? Why not give Gustavo something substantial?


            People

            • Assignee: Gustavo Anatoly
            • Reporter: Ted Yu
            • Votes: 0
            • Watchers: 8
