Hadoop Common / HADOOP-14376

Memory leak when reading a compressed file using the native library

    Details

    • Hadoop Flags:
      Reviewed

      Description

      Opening and closing a large number of bzip2-compressed input streams causes the process to be killed on OutOfMemory when using the native bzip2 library.

      Our initial analysis suggests that this can be caused by DecompressorStream overriding the close() method, and therefore skipping the line from its parent: CodecPool.returnDecompressor(trackedDecompressor). When the decompressor object is a Bzip2Decompressor, its native end() method is never called, and the allocated memory isn't freed.

      If this analysis is correct, the simplest way to fix this bug would be to replace in.close() with super.close() in DecompressorStream.
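The described failure mode and proposed fix can be sketched with stand-in classes (this is an illustrative simulation, not the actual Hadoop sources; `returnedToPool` plays the role of `CodecPool.returnDecompressor(trackedDecompressor)`):

```java
// Minimal simulation of the leak: the parent's close() is what returns the
// decompressor to the pool; the buggy override bypasses it entirely.
public class CloseLeakDemo {
    static int returnedToPool = 0;

    // Stand-in for CompressionInputStream: its close() returns the tracked
    // decompressor to the pool.
    static class ParentStream {
        public void close() {
            returnedToPool++;  // CodecPool.returnDecompressor(trackedDecompressor)
        }
    }

    // Buggy shape: the override closes only the raw stream (in.close()),
    // skipping the parent's pool-return line, so a Bzip2Decompressor's
    // native end() is never reached and native memory leaks.
    static class LeakyStream extends ParentStream {
        @Override public void close() {
            /* in.close() only -- parent logic skipped */
        }
    }

    // Proposed fix: delegate to super.close() instead of in.close().
    static class FixedStream extends ParentStream {
        @Override public void close() {
            super.close();
        }
    }

    public static void main(String[] args) {
        new LeakyStream().close();
        int afterLeaky = returnedToPool;   // unchanged: decompressor leaked
        new FixedStream().close();         // pool return happens
        System.out.println(afterLeaky + " " + returnedToPool);
    }
}
```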

      Attachments

      1. Bzip2MemoryTester.java
        0.8 kB
        Eli Acherkan
      2. HADOOP-14376.001.patch
        10 kB
        Eli Acherkan
      3. HADOOP-14376.002.patch
        12 kB
        Eli Acherkan
      4. HADOOP-14376.003.patch
        12 kB
        Eli Acherkan
      5. HADOOP-14376.004.patch
        12 kB
        Eli Acherkan
      6. log4j.properties
        0.3 kB
        Eli Acherkan

        Activity

        Hudson added a comment -

        SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11731 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11731/)
        HADOOP-14376. Memory leak when reading a compressed file using the (jlowe: rev 7bc217224891b7f7f0a2e35e37e46b36d8c5309d)

        • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionOutputStream.java
        • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/DecompressorStream.java
        • (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java
        • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionInputStream.java
        • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CodecPool.java
        • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressorStream.java
        • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/BZip2Codec.java
        Jason Lowe added a comment -

        Thanks, Eli! I committed this to trunk, branch-2, branch-2.8, and branch-2.7.

        Jason Lowe added a comment -

        +1, latest patch LGTM. I'll commit this later today if there are no objections.

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 15s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 12m 50s trunk passed
        +1 compile 15m 8s trunk passed
        +1 checkstyle 0m 31s trunk passed
        +1 mvnsite 0m 59s trunk passed
        +1 mvneclipse 0m 18s trunk passed
        -1 findbugs 1m 24s hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings.
        +1 javadoc 0m 45s trunk passed
        +1 mvninstall 0m 35s the patch passed
        +1 compile 13m 13s the patch passed
        +1 javac 13m 13s the patch passed
        +1 checkstyle 0m 32s hadoop-common-project/hadoop-common: The patch generated 0 new + 118 unchanged - 6 fixed = 118 total (was 124)
        +1 mvnsite 0m 58s the patch passed
        +1 mvneclipse 0m 16s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 28s the patch passed
        +1 javadoc 0m 45s the patch passed
        +1 unit 7m 25s hadoop-common in the patch passed.
        +1 asflicense 0m 30s The patch does not generate ASF License warnings.
        59m 41s



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:14b5c93
        JIRA Issue HADOOP-14376
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12867410/HADOOP-14376.004.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux ef6ec2450546 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / ad1e3e4
        Default Java 1.8.0_121
        findbugs v3.1.0-RC1
        findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/12290/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
        Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/12290/testReport/
        modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/12290/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Eli Acherkan added a comment -

        Great. Attaching patch 004, which includes adding output.close() to BZip2CompressionOutputStream.close(), and aligning DecompressorStream.close() to the same try/finally structure as CompressorStream.close().
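The aligned try/finally structure can be sketched as follows (stand-in classes, not the actual patch: `poolReturns` stands in for the `CodecPool` return, and the wrapped stream is made to fail on close to show why the `finally` matters):

```java
// Sketch of the try/finally close() shape: even if closing the wrapped
// stream throws, the closed flag is still set and the pooled decompressor
// is still returned.
public class AlignedCloseDemo {
    static int poolReturns = 0;

    // Wrapped stream whose close() always fails.
    static class ThrowingStream {
        void close() { throw new RuntimeException("close failed"); }
    }

    static class Wrapper {
        private final ThrowingStream in;
        private boolean closed = false;
        Wrapper(ThrowingStream in) { this.in = in; }

        void close() {
            if (!closed) {
                try {
                    in.close();        // may throw
                } finally {
                    closed = true;     // close logic never re-runs
                    poolReturns++;     // pool return still happens
                }
            }
        }
    }

    public static void main(String[] args) {
        Wrapper w = new Wrapper(new ThrowingStream());
        try {
            w.close();
        } catch (RuntimeException expected) {
            // the exception propagates, but the pooled resource was returned
        }
        w.close();   // redundant close: a no-op thanks to the closed flag
        System.out.println(poolReturns);
    }
}
```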

        Jason Lowe added a comment -

        Yeah, I see what you mean if derived classes are looking at the closed flag. Let's leave the closed flag logic as-is for now in CompressorStream, although I do think we should make the DecompressorStream logic consistent with how it's done in CompressorStream.

        Eli Acherkan added a comment -

        I see what you mean, Jason. Thanks for your comments!

        BZip2CompressionOutputStream:
        Putting back output.close() brings us to the following:

          @Override
          public void close() throws IOException {
            try {
              super.close();
            } finally {
              output.close();
            }
          }
        

        CompressorStream:
        I was attempting to change the current implementation as little as possible. Switching the order of closed = true and super.close() may affect subclasses, especially user-supplied ones (e.g. if they rely on the state of the closed flag in their finish() method). So what would be the best course of action here? Switch the order to simplify the method? Move the closed check logic into the parent (which also affects subclasses)? If so, should a separate "finished" flag be added to keep track of whether finish() was completed successfully? Similarly, should the closed check logic of DecompressorStream be moved to its parent? Also, in DecompressorStream the closed flag is set to true only if super.close() doesn't throw - which I also haven't changed so far.
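The ordering concern can be made concrete with a small sketch (hypothetical classes, not Hadoop code): a subclass whose finish() consults the closed flag observes different state depending on whether close() sets the flag before or after calling finish().

```java
// Sketch of the ordering hazard: a hypothetical user-supplied subclass
// keys its finish() behavior off the closed flag, so swapping the order of
// "closed = true" and the finish() call changes what it observes.
public class ClosedFlagOrderDemo {
    static class Stream {
        boolean closed = false;
        String observedInFinish = null;

        void finish() {
            // Hypothetical subclass logic that inspects the flag.
            observedInFinish = closed ? "already-closed" : "still-open";
        }

        // Current ordering: finish() runs while closed is still false.
        void closeOriginalOrder() {
            if (!closed) {
                try { finish(); } finally { closed = true; }
            }
        }

        // "Simplified" ordering: finish() observes closed == true.
        void closeReordered() {
            if (!closed) {
                closed = true;
                finish();
            }
        }
    }

    public static void main(String[] args) {
        Stream a = new Stream();
        a.closeOriginalOrder();
        Stream b = new Stream();
        b.closeReordered();
        System.out.println(a.observedInFinish + " " + b.observedInFinish);
    }
}
```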

        Jason Lowe added a comment -

        Thanks for updating the patch!

        I still think BZip2CompressionOutputStream.close() should be doing more than just calling super.close(). BZip2CompressionOutputStream has an "output" field that is private and instantiated by the class, yet it never calls the close() method on it. While it's true that today calling output.close() won't do anything useful because underlying resources are closed/freed by other entities, that may not always be the case in the future. Someone could come along later and update CBZip2OutputStream such that it becomes critical to call its close() method, and failure to do so means we start leaking at that point.

        The following:

          @Override
          public void close() throws IOException {
            if (!closed) {
              try {
                super.close();
              }
              finally {
                closed = true;
              }
            }
          }
        

        can be simplified to:

          @Override
          public void close() throws IOException {
            if (!closed) {
              closed = true;
              super.close();
            }
          }
        

        although even that has a code smell. Why are we protecting the parent's close method from being idempotent on redundant close? The parent's method should already be doing that, which precludes the need to have an override at all since there's nothing else to do in the close method other than call the parent's version. The closed check logic should be moved into the parent rather than having the child do it on behalf of the parent.
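The suggested refactoring can be sketched like this (illustrative names, not the real classes): the parent owns the idempotence check, so a subclass with nothing extra to release needs no close() override at all.

```java
// Sketch: the parent's close() is idempotent, so redundant closes are
// no-ops and children need not guard on the parent's behalf.
public class ParentOwnsClosedDemo {
    static class Parent {
        private boolean closed = false;
        int closeWork = 0;

        // Idempotent close: the guard lives here, in the parent.
        public void close() {
            if (closed) {
                return;
            }
            closed = true;
            closeWork++;   // resource release happens exactly once
        }
    }

    // No close() override needed: the parent already guards re-entry.
    static class Child extends Parent { }

    public static void main(String[] args) {
        Child c = new Child();
        c.close();
        c.close();   // redundant close is safely ignored
        System.out.println(c.closeWork);
    }
}
```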

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 19s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 14m 34s trunk passed
        +1 compile 16m 15s trunk passed
        +1 checkstyle 0m 37s trunk passed
        +1 mvnsite 1m 4s trunk passed
        +1 mvneclipse 0m 20s trunk passed
        -1 findbugs 1m 23s hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings.
        +1 javadoc 0m 50s trunk passed
        +1 mvninstall 0m 38s the patch passed
        +1 compile 14m 11s the patch passed
        +1 javac 14m 11s the patch passed
        +1 checkstyle 0m 37s hadoop-common-project/hadoop-common: The patch generated 0 new + 119 unchanged - 6 fixed = 119 total (was 125)
        +1 mvnsite 1m 2s the patch passed
        +1 mvneclipse 0m 20s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 1m 32s the patch passed
        +1 javadoc 0m 49s the patch passed
        +1 unit 8m 11s hadoop-common in the patch passed.
        +1 asflicense 0m 35s The patch does not generate ASF License warnings.
        65m 16s



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:14b5c93
        JIRA Issue HADOOP-14376
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12867034/HADOOP-14376.003.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux e28c1350b9d7 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 749e5c0
        Default Java 1.8.0_131
        findbugs v3.1.0-RC1
        findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/12275/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
        Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/12275/testReport/
        modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/12275/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Eli Acherkan added a comment -

        Patch 003 is the same as 002 with tabs converted to spaces.

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 18s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 13m 18s trunk passed
        +1 compile 14m 50s trunk passed
        +1 checkstyle 0m 32s trunk passed
        -1 mvnsite 0m 27s hadoop-common in trunk failed.
        +1 mvneclipse 0m 21s trunk passed
        -1 findbugs 1m 41s hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings.
        +1 javadoc 0m 44s trunk passed
        +1 mvninstall 0m 37s the patch passed
        +1 compile 12m 49s the patch passed
        +1 javac 12m 49s the patch passed
        -0 checkstyle 0m 34s hadoop-common-project/hadoop-common: The patch generated 1 new + 118 unchanged - 6 fixed = 119 total (was 124)
        +1 mvnsite 0m 57s the patch passed
        +1 mvneclipse 0m 16s the patch passed
        -1 whitespace 0m 0s The patch has 3 line(s) with tabs.
        +1 findbugs 1m 26s the patch passed
        +1 javadoc 0m 42s the patch passed
        -1 unit 6m 50s hadoop-common in the patch failed.
        +1 asflicense 0m 25s The patch does not generate ASF License warnings.
        58m 34s



        Reason Tests
        Failed junit tests hadoop.net.TestDNS



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:14b5c93
        JIRA Issue HADOOP-14376
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12867001/HADOOP-14376.002.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux e35f1d93bd5c 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 1769b12
        Default Java 1.8.0_121
        mvnsite https://builds.apache.org/job/PreCommit-HADOOP-Build/12272/artifact/patchprocess/branch-mvnsite-hadoop-common-project_hadoop-common.txt
        findbugs v3.1.0-RC1
        findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/12272/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
        checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/12272/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
        whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/12272/artifact/patchprocess/whitespace-tabs.txt
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/12272/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
        Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/12272/testReport/
        modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/12272/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Eli Acherkan added a comment -

        Thanks, Jason Lowe. Those are excellent points, and I completely agree that the patch introduced subtle differences if some of the streams throw exceptions upon close(). My previous reasoning was that in this case something's probably gone horribly and irrevocably wrong anyway. But following your comments, I prepared a more defensive patch, in which even if some of the close() or finish() methods throw exceptions we still try to close/recover what we can. The price of this is assuming that it's okay to call the close() method of a stream multiple times.

        BZip2CompressionOutputStream:

        Other maintainers will come along and see that BZip2CompressionOutputStream never calls close() on one of its private member streams, which is usually a bug. Even if the close() ends up being redundant today, that doesn't mean it always will. The root cause for this JIRA is a great example.

        In patch 002 I put back the BZip2CompressionOutputStream.close() method with a call to super.close() and some explanatory documentation. It still seems to me that calling super.close() should be sufficient, let me try to explain why.

        I'm not seeing how the superclass's finish() method ever ends up closing the out stream. I see it write some final bytes and set it to null, which in turn prevents the close() method from trying to call out.close(), so I'm wondering how the output stream normally gets closed.

        My understanding is that the output stream does get closed, thanks to the out.close() call in CompressionOutputStream.close(). The out data member of CBZip2OutputStream is indeed nullified, but the out data member of CompressionOutputStream should still reference the actual stream object.

        My reasoning was: the only difference between BZip2CompressionOutputStream.finish() and close() is that BZip2CompressionOutputStream.finish() calls output.finish(), whereas BZip2CompressionOutputStream.close() calls output.flush() and output.close(). Changing BZip2CompressionOutputStream.close() to super.close() will mean that we invoke finish() only instead of flush() and close(). Looking at CBZip2OutputStream (which can be the only class of the output data member in the current implementation), it seems to me that it's okay to invoke finish() instead of flush() + close(), because the only difference between them is calling out.flush() + out.close(). As I said above, out.close() will be called anyway by CompressionOutputStream.close(), and I'm assuming that any reasonable stream calls flush() internally on close().

        BZip2CompressionInputStream:
        In BZip2CompressionInputStream, patch 002 puts the call to super.close() in a finally block. This preserves the previous logic (set needsReset to true only if input.close() didn't throw) while ensuring that super.close() will unconditionally close the in stream and return the trackedDecompressor to the pool.

        CompressorStream/CompressionInput/OutputStream:

        This is subtly different than the previous code because finish() can throw. In the old code, finish() could throw and out.close() would still be called, but now we'll skip calling out.close() yet still set closed=true, so we can't retry the close. (...) Similarly the CompressionInputStream/CompressionOutputStream code won't return the codec to the pool if finish() throws or the underlying stream's close() throws.

        In patch 002 I wrapped each of CompressionInput/OutputStream.close()'s internal steps in try/finally. (For CompressionOutputStream.close() this leaves the corner case of both finish() and out.close() throwing an exception each, but I think it's reasonable that only one of them will be propagated since it's a doomed stream anyway.) This brings the behavior of the patched CompressorStream.close() to what it was before my changes: if finish() throws, out.close() is still called.
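The defensive shape can be sketched with stand-in methods (not the real CompressionOutputStream): each step of close() runs under try/finally, so a throwing finish() no longer skips closing the underlying stream, and if both steps throw, only one exception propagates, which is the corner case noted above.

```java
// Sketch: finish() throws, but the finally block still closes the
// underlying stream before the exception propagates to the caller.
public class DefensiveCloseDemo {
    static boolean outClosed = false;

    static void finish() {
        throw new RuntimeException("finish failed");
    }

    static void closeOut() {
        outClosed = true;   // out.close() equivalent
    }

    static void close() {
        try {
            finish();       // may throw
        } finally {
            closeOut();     // still runs even when finish() threw; if this
                            // also threw, it would mask finish()'s exception
        }
    }

    public static void main(String[] args) {
        try {
            close();
        } catch (RuntimeException expected) {
            // finish()'s exception propagates, but out was closed anyway
        }
        System.out.println(outClosed);
    }
}
```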

        Hide
        jlowe Jason Lowe added a comment -

        Thanks for the patch! The test failure is unrelated.

        Regarding BZip2Codec.BZip2CompressionOutputStream.close(), I removed the overriding method altogether, because the superclass's close() method invokes finish(). The finish() method handles internalReset() if needed, and also calls output.finish(), which eliminates the need to call output.flush() or output.close().

        I'm not sure this is a net good change. Other maintainers will come along and see that BZip2CompressionOutputStream never calls close() on one of its private member streams, which is usually a bug. Even if the close() ends up being redundant today, that doesn't mean it always will be. The root cause for this JIRA is a great example. Also, I'm not seeing how the superclass's finish() method ever ends up closing the out stream. I see it write some final bytes and set it to null, which in turn prevents the close() method from trying to call out.close(), so I'm wondering how the output stream normally gets closed.

        For the BZip2CompressionInputStream change, if input.close() throws then we won't call super.close() and we could leak some resources and won't return the codec to the pool.

        For the CompressorStream patch:

           public void close() throws IOException {
             if (!closed) {
               try {
        -        finish();
        +        super.close();
               }
               finally {
        -        out.close();
                 closed = true;
               }
             }
           }

        This is subtly different from the previous code because finish() can throw. In the old code, finish() could throw and out.close() would still be called, but now we'll skip calling out.close() yet still set closed=true, so we can't retry the close. This change was done by HADOOP-10526, but it looks like they missed the same change for DecompressorStream. Similarly, the CompressionInputStream/CompressionOutputStream code won't return the codec to the pool if finish() throws or the underlying stream's close() throws.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 25s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 17m 21s trunk passed
        +1 compile 19m 50s trunk passed
        +1 checkstyle 0m 42s trunk passed
        +1 mvnsite 1m 12s trunk passed
        +1 mvneclipse 0m 20s trunk passed
        -1 findbugs 1m 49s hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings.
        +1 javadoc 1m 2s trunk passed
        +1 mvninstall 0m 57s the patch passed
        +1 compile 16m 50s the patch passed
        +1 javac 16m 50s the patch passed
        +1 checkstyle 0m 42s hadoop-common-project/hadoop-common: The patch generated 0 new + 112 unchanged - 6 fixed = 112 total (was 118)
        +1 mvnsite 1m 19s the patch passed
        +1 mvneclipse 0m 23s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 findbugs 2m 2s the patch passed
        +1 javadoc 1m 1s the patch passed
        -1 unit 8m 1s hadoop-common in the patch failed.
        +1 asflicense 0m 34s The patch does not generate ASF License warnings.
        76m 45s



        Reason Tests
        Failed junit tests hadoop.net.TestDNS



        Subsystem Report/Notes
        Docker Image: yetus/hadoop:14b5c93
        JIRA Issue HADOOP-14376
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12866684/HADOOP-14376.001.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 642da560dc51 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / cef2815
        Default Java 1.8.0_131
        findbugs v3.1.0-RC1
        findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/12267/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
        unit https://builds.apache.org/job/PreCommit-HADOOP-Build/12267/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
        Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/12267/testReport/
        modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/12267/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        eliac Eli Acherkan added a comment -

        Patch attached. First time contributor, I hope I followed the guidelines correctly.

        For testing, I enhanced an existing unit test - TestCodec.codecTest(), since it's already invoked for different types of native and pure-Java codecs. I added an assertion that the number of leased decompressors after the test equals the one before it. This exposed a similar bug in BZip2Codec.BZip2CompressionInputStream.close(), which also doesn't call its super.close() method, and thus doesn't return the decompressor to the pool.

        Adding an assertion for compressors as well as decompressors uncovered a similar issue in CompressorStream.close(), GzipCodec.GzipOutputStream.close(), and BZip2Codec.BZip2CompressionOutputStream.close(), which I attempted to fix as well.
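        The before/after leased-count assertion described above can be illustrated with a minimal standalone pool (toy names, not the Hadoop CodecPool API — the real check uses CodecPool.getLeasedDecompressorsCount()):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Standalone sketch (not Hadoop's CodecPool) of the leak check described
// above: the number of leased objects after the exercise must equal the
// number before it; a close() path that fails to return its decompressor
// would leave the counter elevated.
public class LeasedCountCheck {
    static final Deque<Object> pool = new ArrayDeque<>();
    static int leasedCount = 0;

    static Object getDecompressor() {
        leasedCount++;
        return pool.isEmpty() ? new Object() : pool.pop();
    }

    static void returnDecompressor(Object d) {
        leasedCount--;
        pool.push(d);
    }

    public static void main(String[] args) {
        int before = leasedCount;
        for (int i = 0; i < 1000; i++) {
            Object d = getDecompressor(); // leased when the stream is created
            returnDecompressor(d);        // the buggy close() skipped this step
        }
        System.out.println(leasedCount == before); // prints true
    }
}
```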

        Regarding BZip2Codec.BZip2CompressionOutputStream.close(), I removed the overriding method altogether, because the superclass's close() method invokes finish(). The finish() method handles internalReset() if needed, and also calls output.finish(), which eliminates the need to call output.flush() or output.close().

        Testing GzipCodec without native libraries showed that CodecPool erroneously calls updateLeaseCounts even for compressors/decompressors that are null, or ones with the @DoNotPool annotation. I added a condition that checks for that.

        The memory leak only manifests when using the native libraries. In Eclipse I achieved this by setting java.library.path in the unit test launcher. Seeing the usage of assumeTrue(isNative*Loaded()), I understand that native-related tests are covered in Maven builds as well.

        Looking forward to a code review.

        eliac Eli Acherkan added a comment -

        Thanks Jason Lowe! Absolutely, I'll prepare a patch. I wasn't sure how to write a unit test that checks off-heap memory for a leak, but using CodecPool.getLeasedDecompressorsCount is much simpler.

        jlowe Jason Lowe added a comment -

        Thanks for the report, Eli Acherkan! This problem isn't specific to bzip2, as I was able to reproduce the problem with both the gzip and zstandard codecs. I updated the summary accordingly.

        This looks like it may have been an accidental oversight when HADOOP-10591 was added. Before that change the DecompressorStream close method was a superset of what CompressionInputStream did.

        It looks like LineRecordReader and some other users of codecs aren't susceptible to this because they explicitly get the decompressor from the codec pool, create the input stream, then explicitly return the decompressor to the pool afterwards. I believe it's safe to try to return the same decompressor to the pool multiple times, so we should be able to safely update the DecompressorStream to call super.close() rather than in.close(). Also should be straightforward to write a unit test, using CodecPool.getLeasedDecompressorsCount to verify the codec is not being returned to the pool before the change and is afterwards.
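        Why double-returning is harmless can be sketched with a membership-tracking pool (a standalone toy, not the actual CodecPool implementation): when the pool checks membership on return, giving back the same decompressor a second time is simply ignored rather than creating a duplicate entry.

```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

// Standalone sketch (not Hadoop's CodecPool) of why returning the same
// decompressor twice is safe when the pool tracks membership: the second
// return is a no-op instead of a duplicate pool entry.
public class DoubleReturn {
    static final Set<Object> pool =
        Collections.newSetFromMap(new IdentityHashMap<>());

    static boolean returnDecompressor(Object d) {
        return pool.add(d); // false (ignored) if it is already pooled
    }

    public static void main(String[] args) {
        Object decompressor = new Object();
        boolean first = returnDecompressor(decompressor);  // e.g. from stream.close()
        boolean second = returnDecompressor(decompressor); // e.g. from the caller
        System.out.println(first + " " + second + " " + pool.size());
        // prints: true false 1
    }
}
```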

        Eli Acherkan are you interested in taking a crack at the patch? If not then I should be able to put up something later this week.

        eliac Eli Acherkan added a comment -

        Test case

        eliac Eli Acherkan added a comment -

        Attached a test case class that opens and closes a stream in a loop:

        Bzip2MemoryTester.java
        for (int i = 0; i < iterations; i++) {
        	try (InputStream stream = codec.createInputStream(fileSystem.open(inputFile))) {
        		System.out.println(stream.read());
        	}
        }
        

        Running the loop 100000 times causes the process to be killed by the OS on my machine before reaching 100000 lines of output. Monitoring the process's RSS shows that it grows significantly.

        After placing the attached Bzip2MemoryTester.java and log4j.properties files in an arbitrary folder and setting the HADOOP_HOME environment variable, the following can be used to run the test case:

        echo 'a' > test && bzip2 test
        
        javac -cp $HADOOP_HOME/share/hadoop/common/*:$HADOOP_HOME/share/hadoop/common/lib/* Bzip2MemoryTester.java
        
        java -Xmx128m -cp .:$HADOOP_HOME/share/hadoop/common/*:$HADOOP_HOME/share/hadoop/common/lib/* -Djava.library.path=$HADOOP_HOME/lib/native Bzip2MemoryTester test.bz2 100000 > out.txt 2> err.txt &
        
        export PID=$(jps | grep Bzip2MemoryTester | cut -d' ' -f1); while [ -a /proc/${PID} ]; do grep VmRSS /proc/${PID}/status; sleep 2; done
        
        grep '^97$' out.txt | wc -l
        

          People

          • Assignee:
            eliac Eli Acherkan
            Reporter:
            eliac Eli Acherkan
          • Votes:
            0
            Watchers:
            8

            Dates

            • Created:
              Updated:
              Resolved:

              Development