Hadoop Common
HADOOP-3821

SequenceFile's Reader.decompressorPool or Writer.decompressorPool gets into an inconsistent state when calling close() more than once

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.17.0, 0.17.1, 0.18.0, 0.19.0
    • Fix Version/s: 0.18.1
    • Component/s: io
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      SequenceFile.Reader uses a decompressorPool to reuse Decompressor instances. The Reader obtains such an instance from the pool on object creation and returns it to the pool when close() is called.

      SequenceFile.Reader implements the java.io.Closeable interface, and its spec for the close() method says:

      Closes this stream and releases any system resources associated
      with it. If the stream is already closed then invoking this
      method has no effect.

      The Reader implementation violates this spec, because calling close() multiple times has really bad implications.
      When you call close() twice, the same Decompressor instance is returned to the pool two times, and the pool then maintains duplicate references to it. When other Readers later request instances from the pool, it can happen that two Readers get the same Decompressor instance.

      The correct behavior would be to just ignore a second call to close().

      The exact same issue applies to the SequenceFile.Writer as well.
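
      To make the duplicated-reference failure mode concrete, here is a minimal toy pool (an illustration only, not Hadoop's actual CodecPool) showing how returning the same instance twice hands it out to two independent borrowers:

          import java.util.ArrayDeque;
          import java.util.Deque;

          class ToyPool {
              private final Deque<Object> pool = new ArrayDeque<Object>();

              // Hand out a pooled instance, or create a fresh one if the pool is empty.
              Object borrow() { return pool.isEmpty() ? new Object() : pool.pop(); }

              // No duplicate check on return, just like the pool described above.
              void giveBack(Object o) { pool.push(o); }

              public static void main(String[] args) {
                  ToyPool p = new ToyPool();
                  Object d = p.borrow();
                  p.giveBack(d);
                  p.giveBack(d);               // this is what the second close() effectively does
                  Object a = p.borrow();
                  Object b = p.borrow();
                  System.out.println(a == b);  // true: two "independent" borrowers share one instance
              }
          }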

      We were having big trouble with this, because we were observing sporadic exceptions from merge operations. The strange thing was that executing the same merge again usually succeeded, but sometimes it took multiple attempts to complete a merge successfully. It was very hard to trace the root cause back to duplicated Decompressor references in the decompressorPool.

      Exceptions that we observed in production looked like this (we were using Hadoop 0.17.0):

      java.io.IOException: unknown compression method
      at org.apache.hadoop.io.compress.zlib.BuiltInZlibInflater.decompress(BuiltInZlibInflater.java:47)
      at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:80)
      at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:74)
      at java.io.DataInputStream.readFully(DataInputStream.java:178)
      at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:56)
      at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:90)
      at org.apache.hadoop.io.SequenceFile$Reader.nextRawKey(SequenceFile.java:1995)
      at org.apache.hadoop.io.SequenceFile$Sorter$SegmentDescriptor.nextRawKey(SequenceFile.java:3002)
      at org.apache.hadoop.io.SequenceFile$Sorter$MergeQueue.next(SequenceFile.java:2760)
      at org.apache.hadoop.io.SequenceFile$Sorter.writeFile(SequenceFile.java:2625)
      at org.apache.hadoop.io.SequenceFile$Sorter.merge(SequenceFile.java:2644)
      

      or

      java.io.IOException: zero length keys not allowed
      at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.appendRaw(SequenceFile.java:1340)
      at org.apache.hadoop.io.SequenceFile$Sorter.writeFile(SequenceFile.java:2626)
      at org.apache.hadoop.io.SequenceFile$Sorter.merge(SequenceFile.java:2644)
      

      The following snippet reproduces the problem:

          // Imports assumed by this snippet (Hadoop 0.17/0.18-era API):
          //   org.apache.hadoop.conf.Configuration
          //   org.apache.hadoop.fs.LocalFileSystem, org.apache.hadoop.fs.Path
          //   org.apache.hadoop.io.SequenceFile, org.apache.hadoop.io.SequenceFile.CompressionType
          //   org.apache.hadoop.io.Text, org.apache.hadoop.io.NullWritable
          public void testCodecPool() throws IOException {
              Configuration conf = new Configuration();
              LocalFileSystem fs = new LocalFileSystem();
              fs.setConf(conf);
              fs.getRawFileSystem().setConf(conf);
      
              // create a sequence file
              Path path = new Path("target/seqFile");
              SequenceFile.Writer writer = SequenceFile.createWriter(fs, conf, path, Text.class, NullWritable.class, CompressionType.BLOCK);
              writer.append(new Text("key1"), NullWritable.get());
              writer.append(new Text("key2"), NullWritable.get());
              writer.close();
      
              // Create a reader which uses 4 BuiltInZlibInflater instances
              SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
              // Returns the 4 BuiltInZlibInflater instances to the CodecPool
              reader.close();
              // The second close erroneously returns the same 4 BuiltInZlibInflater instances to the CodecPool again
              reader.close();
      
              // The first reader gets 4 BuiltInZlibInflater instances from the CodecPool
              SequenceFile.Reader reader1 = new SequenceFile.Reader(fs, path, conf);
              // read first value from reader1
              Text text = new Text();
              reader1.next(text);
              assertEquals("key1", text.toString());
              // The second reader gets the same 4 BuiltInZlibInflater instances from the CodecPool as reader1
              SequenceFile.Reader reader2 = new SequenceFile.Reader(fs, path, conf);
              // read first value from reader2
              reader2.next(text);
              assertEquals("key1", text.toString());
              // read second value from reader1
              reader1.next(text);
              assertEquals("key2", text.toString());
              // read second value from reader2 (this throws an exception)
              reader2.next(text);
              assertEquals("key2", text.toString());
              
              assertFalse(reader1.next(text));
              assertFalse(reader2.next(text));
          }
      

      It fails with the exception:

      java.io.EOFException
      	at java.io.DataInputStream.readByte(DataInputStream.java:243)
      	at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:324)
      	at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:345)
      	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1835)
      	at CodecPoolTest.testCodecPool(CodecPoolTest.java:56)
      

      But this is just a very simple test that shows the problem. Much weirder things can happen when running in a complex production environment. Especially heavy concurrency makes the behavior much more exciting.

        Attachments

      1. HADOOP-3821_0_20080724.patch
        2 kB
        Arun C Murthy
      2. HADOOP-3821_1_20080821.patch
        5 kB
        Arun C Murthy
      3. HADOOP-3821_1_20080821.patch
        5 kB
        Arun C Murthy
      4. HADOOP-3821_test1_20080805.patch
        2 kB
        Peter Voss

        Activity

        Arun C Murthy added a comment -

        I'd propose a straight-forward fix of setting the compressors/decompressors to null in SequenceFile.{Writer|Reader}.close() ... the pool then automatically ignores null (de)compressors.
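
        As a hedged sketch of that proposal (the field names and pool internals below are assumptions for illustration, not code from the patch), close() would return the decompressor at most once, and the pool would treat a null return as a no-op:

            // Reader side: null out the field so a second close() cannot return it again.
            public synchronized void close() throws IOException {
                if (decompressor != null) {
                    CodecPool.returnDecompressor(decompressor);
                    decompressor = null;  // a second close() now skips the return
                }
                in.close();               // closing the underlying stream again is already harmless
            }

            // Pool side: ignore null, so redundant returns are no-ops.
            public static void returnDecompressor(Decompressor decompressor) {
                if (decompressor == null) {
                    return;
                }
                decompressor.reset();
                pool.add(decompressor);   // "pool" stands in for the pool's internal bookkeeping
            }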

        Arun C Murthy added a comment -

        Thoughts?

        Peter Voss added a comment -

        Thanks for the very quick response.

        Yes, that would be an appropriate solution and the patch solves the problem (from what I see). My simple test case works with this now.

        Arun C Murthy added a comment -

        Marking this patch PA (Patch Available).

        Peter Voss added a comment -

        Will this also go to the 0.17 branch, since this is the version that we are using? Or the 0.16 branch (I think 0.16.4 is the latest stable release)?

        Arun C Murthy added a comment (edited) -

        0.17.1 is the latest stable release.

        I'm not sure whether a 0.17.2 is planned; if not, there isn't any point in porting this to branch-0.17. Unfortunately, I'm pretty sure this patch won't apply there (not that rewriting it for branch-0.17 is a major exercise). Would it help if we put it in branch-0.18? The hadoop-0.18 release is very close now...

        Peter Voss added a comment -

        Right. The patch won't apply to branch-0.17.

        If the 0.18 release is close, that should be fine as well. Although I would still prefer a patch for 0.17, because we are going to ship a new release of our product that uses Hadoop 0.17 soon, and I would like to avoid any risks that come with a switch to 0.18 at this stage of the release process. Anyway, if our QA doesn't want a 0.18 upgrade for this release, I could also patch the 0.17 branch myself and we would switch to 0.18 for our next release.

        Interesting that 0.17.1 is the latest stable release. I don't see it at http://apache.cs.utah.edu/hadoop/core/stable/

        Thanks a lot for your really quick progress on this issue.

        dhruba borthakur added a comment -

        If it is not too much development overhead, we would like the patch to be made available for 0.17.2 too. Thanks!

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12386788/HADOOP-3821_0_20080724.patch
        against trunk revision 679601.

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no tests are needed for this patch.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed core unit tests.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2943/testReport/
        Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2943/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2943/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2943/console

        This message is automatically generated.

        Peter Voss added a comment -

        Have you come to any decision regarding the fix version? Will this issue also be fixed in the 0.18 branch or even the 0.17 branch? At least applying it to the 0.18 branch seems quite important to me, because 0.19.0 will probably not be released that soon, right?

        Arun C Murthy added a comment -

        Peter - I was trying to incorporate your test case, and it strangely seems to work on trunk without any problems - could you please recheck? Thanks!

        Peter Voss added a comment -

        The test case fails for me (without applying the patch of course). I am running this on a Mac with Java 1.5 and without the native-hadoop library. Maybe the actual Inflater implementation could influence the behavior of the test.

        What configuration are you using? I can try to find a more complex test that exposes the problem with other configurations as well.

        Peter Voss added a comment -

        I have attached a JUnit test that now uses two different files, with the two Readers reading from different files. Arun, does this reproduce the problem for you? At least it does for me.
        The JUnit test certainly needs some clean-up to follow your conventions; e.g., I am using "target" as the temp dir, which is certainly wrong.
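
        The shape of that test is roughly the following sketch (illustrative only; writeFile is a hypothetical helper, and the attached patch is the authoritative version):

            // Two files, two readers: any cross-talk must come through the shared pool.
            Path p1 = new Path("target/seqFile1");
            Path p2 = new Path("target/seqFile2");
            writeFile(fs, conf, p1, "a1", "a2");  // hypothetical helper writing two keys
            writeFile(fs, conf, p2, "b1", "b2");

            SequenceFile.Reader r = new SequenceFile.Reader(fs, p1, conf);
            r.close();
            r.close();  // the double close that corrupts the pool

            SequenceFile.Reader r1 = new SequenceFile.Reader(fs, p1, conf);
            SequenceFile.Reader r2 = new SequenceFile.Reader(fs, p2, conf);
            Text t = new Text();
            r1.next(t);  // interleaved reads on distinct files now interfere
            r2.next(t);  // through the shared Decompressor instances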

        Let me know, if you need more help to get a good test case.

        Peter Voss added a comment -

        Arun, can this issue be targeted for 0.18 or 0.17.2? It would be really bad for us if we had to wait until 0.19 for an official stable release that includes this fix; we'd wind up patching Hadoop ourselves. If you need help, let me know.

        Arun C Murthy added a comment -

        Apologies for being slow on this, I'm scheduling this for 0.18.1.

        Arun C Murthy added a comment -

        I figure there is a better way to set up the test case than introducing CodecPool.clear...

        Arun C Murthy added a comment -

        Updated patch.

        Chris Douglas added a comment -

        +1 Looks good

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12388709/HADOOP-3821_1_20080821.patch
        against trunk revision 688101.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 5 new or modified tests.

        -1 javadoc. The javadoc tool appears to have generated 1 warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed core unit tests.

        -1 contrib tests. The patch failed contrib unit tests.

        Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3091/testReport/
        Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3091/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3091/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3091/console

        This message is automatically generated.

        Chris Douglas added a comment -

        I just committed this. Thanks, Arun

        Hudson added a comment -

        Integrated in Hadoop-trunk #586 (See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/586/ )

          People

          • Assignee: Arun C Murthy
          • Reporter: Peter Voss
          • Votes: 0
          • Watchers: 2
