
HBASE-10500: Some tools OOM when BucketCache is enabled

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.98.0, 0.96.0, 0.99.0
    • Fix Version/s: 0.96.2, 0.98.1, 0.99.0
    • Component/s: HFile
    • Labels:
      None

      Description

      Running hbck --repair or LoadIncrementalHFiles when BucketCache is enabled in offheap mode can cause an OutOfMemoryError. This is apparently because bin/hbase does not include $HBASE_REGIONSERVER_OPTS for these tools, so HRegion or HFileReader instances are initialized with a CacheConfig that doesn't have the necessary direct memory available.

      Possible solutions include:

      • disable blockcache in the config used by hbck when running its repairs
      • include HBASE_REGIONSERVER_OPTS in the HBaseFSCK startup arguments

      I'm leaning toward the former because it's possible that hbck is run on a host with a different hardware profile than the RS.
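      For illustration, here is a minimal sketch of the first option (assuming the usual Configuration, HBaseConfiguration and HConstants APIs of this era; the helper class and method names are made up, not part of any patch): setting hfile.block.cache.size to 0 in the tool's Configuration before any CacheConfig is built should make CacheConfig treat the block cache as disabled, so neither an on-heap LruBlockCache nor an off-heap BucketCache is allocated.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.HBaseConfiguration;
        import org.apache.hadoop.hbase.HConstants;

        /** Hypothetical helper: build a tool-side Configuration with the block cache disabled. */
        public final class ToolCacheConfig {
          private ToolCacheConfig() {}

          public static Configuration withBlockCacheDisabled(Configuration base) {
            // Copy so the caller's Configuration is left untouched.
            Configuration conf = HBaseConfiguration.create(base);
            // A block cache size of 0 should make CacheConfig skip instantiating any cache,
            // so no direct memory is reserved even when hbase.bucketcache.* is configured.
            conf.setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0f);
            return conf;
          }
        }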

      1. HBASE-10500.00.patch
        2 kB
        Nick Dimiduk
      2. HBASE-10500.01.patch
        6 kB
        Nick Dimiduk

        Activity

        Nick Dimiduk created issue -
        Nick Dimiduk added a comment -

        Here's the full stack trace:

        Exception in thread "main" java.io.IOException: java.lang.OutOfMemoryError: Direct buffer memory
          at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:731)
          at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:638)
          at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:609)
          at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:595)
          at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4195)
          at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4154)
          at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4127)
          at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4205)
          at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:4085)
          at org.apache.hadoop.hbase.util.HBaseFsckRepair.createHDFSRegionDir(HBaseFsckRepair.java:190)
          at org.apache.hadoop.hbase.util.HBaseFsck$TableInfo$HDFSIntegrityFixer.handleHoleInRegionChain(HBaseFsck.java:2312)
          at org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.checkRegionChain(HBaseFsck.java:2492)
          at org.apache.hadoop.hbase.util.HBaseFsck.checkHdfsIntegrity(HBaseFsck.java:1226)
          at org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:741)
          at org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:386)
          at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:475)
          at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:4029)
          at org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:3838)
          at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
          at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
          at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3826)
        Caused by: java.lang.OutOfMemoryError: Direct buffer memory
          at java.nio.Bits.reserveMemory(Bits.java:658)
          at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
          at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
          at org.apache.hadoop.hbase.util.ByteBufferArray.<init>(ByteBufferArray.java:65)
          at org.apache.hadoop.hbase.io.hfile.bucket.ByteBufferIOEngine.<init>(ByteBufferIOEngine.java:44)
          at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getIOEngineFromName(BucketCache.java:270)
          at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.<init>(BucketCache.java:210)
          at org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:399)
          at org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:143)
          at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:231)
          at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3309)
          at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:702)
          at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:699)
          at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
          at java.util.concurrent.FutureTask.run(FutureTask.java:166)
          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
          at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
          at java.util.concurrent.FutureTask.run(FutureTask.java:166)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
          at java.lang.Thread.run(Thread.java:722)
        
        Nick Dimiduk made changes -
        Field: Description
        Original Value:
        Running {{hbck --repair}} when BucketCache is enabled in offheap mode can cause OOM. This is apparently because {{bin/hbase}} does not include $HBASE_REGIONSERVER_OPTS for hbck. It instantiates an HRegion instance as part of HDFSIntegrityFixer.handleHoleInRegionChain. That HRegion initializes its CacheConfig, which doesn't have the necessary Direct Memory.

        Possible solutions include:
         - disable blockcache in the config used by hbck when running its repairs
         - include HBASE_REGIONSERVER_OPTS in the HBaseFSCK startup arguments

        I'm leaning toward the former because it's possible that hbck is run on a host with the same hardware profile as the RS.
        New Value:
        Running {{hbck --repair}} when BucketCache is enabled in offheap mode can cause OOM. This is apparently because {{bin/hbase}} does not include $HBASE_REGIONSERVER_OPTS for hbck. It instantiates an HRegion instance as part of HDFSIntegrityFixer.handleHoleInRegionChain. That HRegion initializes its CacheConfig, which doesn't have the necessary Direct Memory.

        Possible solutions include:
         - disable blockcache in the config used by hbck when running its repairs
         - include HBASE_REGIONSERVER_OPTS in the HBaseFSCK startup arguments

        I'm leaning toward the former because it's possible that hbck is run on a host with different hardware profile as the RS.
        Nick Dimiduk added a comment -

        Looks like the same kind of issue crops up with LoadIncrementalHFiles:

        2014-02-11 18:14:30,021 ERROR [main] mapreduce.LoadIncrementalHFiles: Unexpected execution exception during splitting
          java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Direct buffer memory
          at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
          at java.util.concurrent.FutureTask.get(FutureTask.java:111)
          at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplitPhase(LoadIncrementalHFiles.java:407)
          at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:288)
          at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:822)
          at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
          at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
          at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.main(LoadIncrementalHFiles.java:827)
        Caused by: java.lang.OutOfMemoryError: Direct buffer memory
          at java.nio.Bits.reserveMemory(Bits.java:658)
          at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
          at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
          at org.apache.hadoop.hbase.util.ByteBufferArray.<init>(ByteBufferArray.java:65)
          at org.apache.hadoop.hbase.io.hfile.bucket.ByteBufferIOEngine.<init>(ByteBufferIOEngine.java:44)
          at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getIOEngineFromName(BucketCache.java:270)
          at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.<init>(BucketCache.java:210)
          at org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:399)
          at org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:166)
          at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplit(LoadIncrementalHFiles.java:476)
          at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:397)
          at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:395)
          at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
          at java.util.concurrent.FutureTask.run(FutureTask.java:166)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
          at java.lang.Thread.run(Thread.java:722)
        
        Nick Dimiduk added a comment -

        Here's a simple patch for HadoopQA that disables the blockcache for these tools.
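        To make the intent concrete, here is a sketch of what disabling the cache at the command-line entry point could look like (illustrative only, under the same assumptions as the sketch in the description; BulkLoadWithoutBlockCache is a made-up wrapper, not the attached patch):

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.hbase.HBaseConfiguration;
          import org.apache.hadoop.hbase.HConstants;
          import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
          import org.apache.hadoop.util.ToolRunner;

          /** Hypothetical wrapper: run the bulk load tool with the block cache disabled. */
          public class BulkLoadWithoutBlockCache {
            public static void main(String[] args) throws Exception {
              Configuration conf = HBaseConfiguration.create();
              // An offline tool has no use for a block cache; disabling it avoids the
              // "Direct buffer memory" OOM when hbase.bucketcache.* points at off-heap storage.
              conf.setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0f);
              System.exit(ToolRunner.run(conf, new LoadIncrementalHFiles(conf), args));
            }
          }

        One drawback of fixing only the entry point is that programmatic callers which construct the tool themselves would still allocate the BucketCache; the follow-up patch below addresses that.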

        Nick Dimiduk made changes -
        Attachment HBASE-10500.00.patch [ 12628314 ]
        Nick Dimiduk made changes -
        Status: Open [ 1 ] → Patch Available [ 10002 ]
        Affects Version/s 0.96.0 [ 12324822 ]
        Affects Version/s 0.99.0 [ 12325675 ]
        Jonathan Hsieh added a comment -

        lgtm.. +1

        stack added a comment -

        +1

        Nick Dimiduk made changes -
        Field: Summary
        Original Value: hbck and OOM when BucketCache is enabled
        New Value: Some tools OOM when BucketCache is enabled
        Field: Description
        Original Value:
        Running {{hbck --repair}} when BucketCache is enabled in offheap mode can cause OOM. This is apparently because {{bin/hbase}} does not include $HBASE_REGIONSERVER_OPTS for hbck. It instantiates an HRegion instance as part of HDFSIntegrityFixer.handleHoleInRegionChain. That HRegion initializes its CacheConfig, which doesn't have the necessary Direct Memory.

        Possible solutions include:
         - disable blockcache in the config used by hbck when running its repairs
         - include HBASE_REGIONSERVER_OPTS in the HBaseFSCK startup arguments

        I'm leaning toward the former because it's possible that hbck is run on a host with different hardware profile as the RS.
        New Value:
        Running {{hbck --repair}} or {{LoadIncrementalHFiles}} when BucketCache is enabled in offheap mode can cause OOM. This is apparently because {{bin/hbase}} does not include $HBASE_REGIONSERVER_OPTS for these tools. This results in HRegion or HFileReaders initialized with a CacheConfig that doesn't have the necessary Direct Memory.

        Possible solutions include:
         - disable blockcache in the config used by hbck when running its repairs
         - include HBASE_REGIONSERVER_OPTS in the HBaseFSCK startup arguments

        I'm leaning toward the former because it's possible that hbck is run on a host with different hardware profile as the RS.
        Component/s HFile [ 12319660 ]
        Component/s hbck [ 12315702 ]
        Nick Dimiduk added a comment -

        The patch moves conf management into the constructors so that programmatic use is also corrected. Without it, IntegrationTestImportTsv and IntegrationTestBulkLoad fail.

        It also removes the apparently redundant config from LoadIncrementalHFiles. If you were kind enough to provide a +1 earlier, note that this patch is a little more invasive.
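        For reference, a rough sketch of that constructor-based approach (illustrative and hedged; the actual patch may differ in its details, and OfflineHFileTool is a made-up name): adjusting the tool's own Configuration copy in the constructor means integration tests and other programmatic callers get the same behavior as the CLI.

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.conf.Configured;
          import org.apache.hadoop.hbase.HBaseConfiguration;
          import org.apache.hadoop.hbase.HConstants;

          /** Hypothetical tool skeleton showing conf management in the constructor. */
          public abstract class OfflineHFileTool extends Configured {
            protected OfflineHFileTool(Configuration conf) {
              // Work on a fresh copy so the caller's Configuration is not mutated...
              super(HBaseConfiguration.create(conf));
              // ...and disable the block cache before any HFile reader builds a CacheConfig.
              getConf().setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0f);
            }
          }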

        Nick Dimiduk made changes -
        Attachment HBASE-10500.01.patch [ 12628364 ]
        stack added a comment -

        Yeah, probably better. I can't think of a case where a tool would need to go offheap. If one does, let's deal with it then. Meantime, get these tools usable again when offheap is enabled.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12628314/HBASE-10500.00.patch
        against trunk revision .
        ATTACHMENT ID: 12628314

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

        +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 lineLengths. The patch does not introduce lines longer than 100

        -1 site. The patch appears to cause mvn site goal to fail.

        -1 core tests. The patch failed these unit tests:

        -1 core zombie tests. There are 1 zombie test(s):

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8662//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8662//console

        This message is automatically generated.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12628364/HBASE-10500.01.patch
        against trunk revision .
        ATTACHMENT ID: 12628364

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

        +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 lineLengths. The patch does not introduce lines longer than 100

        -1 site. The patch appears to cause mvn site goal to fail.

        +1 core tests. The patch passed unit tests in .

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8664//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8664//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8664//console

        This message is automatically generated.

        Nick Dimiduk added a comment -

        IntegrationTestBulkLoad and IntegrationTestImportTsv both pass here. Lacking objection, will commit tomorrow.

        Nick Dimiduk added a comment -

        Committed to trunk, 0.98, and 0.96. Thanks for the reviews.

        Nick Dimiduk made changes -
        Status: Patch Available [ 10002 ] → Resolved [ 5 ]
        Fix Version/s 0.96.2 [ 12325658 ]
        Fix Version/s 0.98.1 [ 12325664 ]
        Fix Version/s 0.99.0 [ 12325675 ]
        Resolution Fixed [ 1 ]
        Hudson added a comment -

        FAILURE: Integrated in hbase-0.96 #292 (See https://builds.apache.org/job/hbase-0.96/292/)
        HBASE-10500 Some tools OOM when BucketCache is enabled (ndimiduk: rev 1567689)

        • /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
        • /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
        Hudson added a comment -

        SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #140 (See https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/140/)
        HBASE-10500 Some tools OOM when BucketCache is enabled (ndimiduk: rev 1567688)

        • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
        • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
        Hudson added a comment -

        FAILURE: Integrated in HBase-TRUNK #4913 (See https://builds.apache.org/job/HBase-TRUNK/4913/)
        HBASE-10500 Some tools OOM when BucketCache is enabled (ndimiduk: rev 1567687)

        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
        Hudson added a comment -

        FAILURE: Integrated in hbase-0.96-hadoop2 #202 (See https://builds.apache.org/job/hbase-0.96-hadoop2/202/)
        HBASE-10500 Some tools OOM when BucketCache is enabled (ndimiduk: rev 1567689)

        • /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
        • /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
        Hudson added a comment -

        FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #88 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/88/)
        HBASE-10500 Some tools OOM when BucketCache is enabled (ndimiduk: rev 1567687)

        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
        • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
        Hudson added a comment -

        FAILURE: Integrated in HBase-0.98 #154 (See https://builds.apache.org/job/HBase-0.98/154/)
        HBASE-10500 Some tools OOM when BucketCache is enabled (ndimiduk: rev 1567688)

        • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
        • /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
        Enis Soztutar added a comment -

        Closing this issue after 0.99.0 release.

        Enis Soztutar made changes -
        Status: Resolved [ 5 ] → Closed [ 6 ]
        Transition                    Time In Source Status   Execution Times   Last Executer   Last Execution Date
        Open → Patch Available        1h 51m                  1                 Nick Dimiduk    11/Feb/14 19:57
        Patch Available → Resolved    21h 19m                 1                 Nick Dimiduk    12/Feb/14 17:16
        Resolved → Closed             374d 6h 13m             1                 Enis Soztutar   21/Feb/15 23:29

          People

          • Assignee: Nick Dimiduk
          • Reporter: Nick Dimiduk
          • Votes: 0
          • Watchers: 6
