HBASE-11042

TestForceCacheImportantBlocks OOMs occasionally in 0.94

    Details

    • Type: Test
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.94.19
    • Component/s: None
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      This trace:

      Caused by: java.lang.OutOfMemoryError
      	at java.util.zip.Deflater.init(Native Method)
      	at java.util.zip.Deflater.<init>(Deflater.java:169)
      	at java.util.zip.GZIPOutputStream.<init>(GZIPOutputStream.java:91)
      	at java.util.zip.GZIPOutputStream.<init>(GZIPOutputStream.java:110)
      	at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec$ReusableGzipOutputStream$ResetableGZIPOutputStream.<init>(ReusableStreamGzipCodec.java:79)
      	at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec$ReusableGzipOutputStream.<init>(ReusableStreamGzipCodec.java:90)
      	at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec.createOutputStream(ReusableStreamGzipCodec.java:130)
      	at org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:101)
      	at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createPlainCompressionStream(Compression.java:299)
      	at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createCompressionStream(Compression.java:283)
      	at org.apache.hadoop.hbase.io.hfile.HFileWriterV1.getCompressingStream(HFileWriterV1.java:207)
      	at org.apache.hadoop.hbase.io.hfile.HFileWriterV1.close(HFileWriterV1.java:356)
      	at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.close(StoreFile.java:1330)
      	at org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:913)
      

      Note that this is caused specifically by HFileWriterV1 when compression is used. It looks like the compression resources are not released.
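For context (an editorial aside, not part of the original report): each java.util.zip.Deflater allocates native zlib memory that is only freed by end() or by finalization, so code that keeps creating GZIP streams without releasing them can exhaust native memory long before the Java heap fills. A minimal standalone illustration of the release pattern:

```java
import java.util.zip.Deflater;

public class DeflaterRelease {
    // Compress a buffer, making sure the native zlib state is freed.
    static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        try {
            deflater.setInput(input);
            deflater.finish();
            byte[] out = new byte[input.length * 2 + 64];
            int len = deflater.deflate(out);
            byte[] result = new byte[len];
            System.arraycopy(out, 0, result, 0, len);
            return result;
        } finally {
            // Without end(), the native buffers linger until finalization;
            // many short-lived Deflaters can then OOM before GC catches up.
            deflater.end();
        }
    }

    public static void main(String[] args) {
        byte[] compressed = compress("hello hello hello".getBytes());
        System.out.println(compressed.length);
    }
}
```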

      Not sure it's worth fixing this at this point. The test can be fixed either by not using compression (why are we using compression anyway?) or by not testing HFileV1.

      stack, it seems you know the code in HFileWriterV1. Do you want to have a look? Maybe there is a quick fix in HFileWriterV1.

      Attachments

      • 11042-0.94.txt (2 kB, Lars Hofhansl)

        Activity

        Hudson added a comment -

        FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #80 (See https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/80/)
        HBASE-11042 TestForceCacheImportantBlocks OOMs occasionally in 0.94. (larsh: rev 1588841)

        • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestForceCacheImportantBlocks.java
        Hudson added a comment -

        FAILURE: Integrated in HBase-0.94 #1362 (See https://builds.apache.org/job/HBase-0.94/1362/)
        HBASE-11042 TestForceCacheImportantBlocks OOMs occasionally in 0.94. (larsh: rev 1588841)

        • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestForceCacheImportantBlocks.java
        Hudson added a comment -

        FAILURE: Integrated in HBase-0.94-JDK7 #128 (See https://builds.apache.org/job/HBase-0.94-JDK7/128/)
        HBASE-11042 TestForceCacheImportantBlocks OOMs occasionally in 0.94. (larsh: rev 1588841)

        • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestForceCacheImportantBlocks.java
        Hudson added a comment -

        SUCCESS: Integrated in HBase-0.94-security #478 (See https://builds.apache.org/job/HBase-0.94-security/478/)
        HBASE-11042 TestForceCacheImportantBlocks OOMs occasionally in 0.94. (larsh: rev 1588841)

        • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestForceCacheImportantBlocks.java
        Lars Hofhansl added a comment -

        Alright. Committed to 0.94. Thanks stack.

        stack added a comment -

        stack, it seems you know the code in HFileWriterV1. Do you want to have a look?

        No. v1 is dead. Go ahead w/ your patch.

        Lars Hofhansl added a comment -

        This is in HFileWriterV1. stack, looks like a comment you would write:

          private DataOutputStream getCompressingStream() throws IOException {
            this.compressor = compressAlgo.getCompressor();
            // Get new DOS compression stream. In tfile, the DOS, is not closed,
            // just finished, and that seems to be fine over there. TODO: Check
            // no memory retention of the DOS. Should I disable the 'flush' on the
            // DOS as the BCFile over in tfile does? It wants to make it so flushes
            // don't go through to the underlying compressed stream. Flush on the
            // compressed downstream should be only when done. I was going to but
            // looks like when we call flush in here, its legitimate flush that
            // should go through to the compressor.
            OutputStream os = this.compressAlgo.createCompressionStream(
                this.outputStream, this.compressor, 0);
            return new DataOutputStream(os);
          }
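An editorial aside: the getCompressor()/createCompressionStream pair above implies a pooled compressor that should be returned once the stream is done. A minimal sketch of that borrow/return discipline using only java.util.zip (class and method names here are hypothetical, not HBase's actual pool):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayDeque;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;

public class CompressorPool {
    private final ArrayDeque<Deflater> pool = new ArrayDeque<>();

    synchronized Deflater borrow() {
        Deflater d = pool.poll();
        return d != null ? d : new Deflater();
    }

    synchronized void giveBack(Deflater d) {
        d.reset();     // clear state so the next borrower starts fresh
        pool.push(d);  // native buffers stay allocated but bounded
    }

    byte[] compress(byte[] input) throws IOException {
        Deflater d = borrow();
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            OutputStream os = new DeflaterOutputStream(bos, d);
            os.write(input);
            os.close();  // finishes the deflater but does not end() it
            return bos.toByteArray();
        } finally {
            giveBack(d); // the release step the OOM trace suggests was missing
        }
    }
}
```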
        
        Lars Hofhansl added a comment -

        Here's a test-only fix for 0.94 (note that this test is broken in 0.96 and later - it's a no-op there).

        • turns off compression for the HFileV1 test (that is what is using up all the heap)
        • adds tests so that HFileV2 is tested both with and without compression
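The resulting test matrix could be sketched as follows (an illustrative aside with hypothetical names; the actual change lives in TestForceCacheImportantBlocks.java):

```java
import java.util.ArrayList;
import java.util.List;

public class HFileTestMatrix {
    // One entry per test configuration: HFile version + compression setting.
    static List<String> configurations() {
        List<String> configs = new ArrayList<>();
        // HFileV1: compression disabled only, to avoid leaking native
        // Deflater memory through HFileWriterV1 (the OOM in this issue).
        configs.add("v1/NONE");
        // HFileV2 releases its compressor, so cover both settings.
        configs.add("v2/NONE");
        configs.add("v2/GZ");
        return configs;
    }

    public static void main(String[] args) {
        for (String c : configurations()) {
            System.out.println(c);
        }
    }
}
```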

          People

          • Assignee:
            Lars Hofhansl
          • Reporter:
            Lars Hofhansl
          • Votes:
            0
          • Watchers:
            3
