Details

      Description

      The GzipCodec will NPE upon reset after finish when the native zlib codec is not loaded. When native zlib is loaded, the codec creates a CompressorOutputStream that doesn't have this problem; otherwise, GzipCodec uses a GZIPOutputStream that is extended to provide the resetState method. Since IBM JDK 6 SR9 FP2, up to and including the current JDK 6 SR10, GZIPOutputStream#finish releases the underlying deflater, which causes an NPE upon reset. This appears to be an IBM JDK quirk, as the Sun JDK and OpenJDK don't have this issue.
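The failure mode can be illustrated with a small sketch (the class and method names here are mine, not from Hadoop): calling reset() on a Deflater after end() throws a NullPointerException on any JDK, and the affected IBM JDK 6 builds effectively call end() inside GZIPOutputStream.finish(), so the codec's later reset hits exactly this path.

```java
import java.util.zip.Deflater;

// Hypothetical illustration of the bug described above: once end() has been
// called on a Deflater (which the affected IBM JDK 6 builds do inside
// GZIPOutputStream.finish()), a subsequent reset() throws NullPointerException.
class DeflaterResetNpeDemo {
    static boolean resetAfterEndThrowsNpe() {
        Deflater def = new Deflater();
        def.finish();
        def.end();        // what the affected finish() effectively does
        try {
            def.reset();  // the codec's resetState() path
            return false;
        } catch (NullPointerException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(resetAfterEndThrowsNpe());
    }
}
```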

      1. HADOOP-8419-branch-1.patch
        9 kB
        Yu Li
      2. HADOOP-8419-branch1-v2.patch
        9 kB
        Yu Li
      3. HADOOP-8419-trunk.patch
        9 kB
        Yu Li
      4. HADOOP-8419-trunk-v2.patch
        10 kB
        Yu Li

        Issue Links

          Activity


          Yu Li added a comment -

          More details about this issue:
          In the IBM JDK, GZIPOutputStream calls the deflater's end method as part of GZIPOutputStream.finish(), so the deflater's reset can't be called afterwards; the Sun JDK and OpenJDK implementations don't call end there.

          To work around this issue, we need to override the finish method of the corresponding classes that extend GZIPOutputStream, so we no longer depend on the implementation details of different JDKs. And since the needed writeTrailer, writeInt and writeShort all became private methods in JDK 6 (Sun/IBM/OpenJDK), we also need to add these 3 methods in the patch.
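The workaround described above can be sketched roughly as follows. This is an illustrative reconstruction, not the exact code in the patch: finish() is overridden so the gzip trailer is written by our own copies of writeTrailer and friends, and Deflater.end() is never called, leaving the deflater usable for a later reset.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Sketch of the workaround (names are illustrative): override finish() so the
// trailer is written by hand and the deflater is never released.
class ResettableGZIPOutputStream extends GZIPOutputStream {
    ResettableGZIPOutputStream(OutputStream out) throws IOException {
        super(out);
    }

    @Override
    public void finish() throws IOException {
        if (!def.finished()) {
            def.finish();
            while (!def.finished()) {
                int len = def.deflate(buf, 0, buf.length);
                if (len > 0) {
                    out.write(buf, 0, len);
                }
            }
            writeTrailer();  // our copy; the JDK's version is private in JDK 6
        }
    }

    // Gzip trailer: CRC-32 of the uncompressed data, then its length (ISIZE),
    // both as little-endian 32-bit values (per RFC 1952).
    private void writeTrailer() throws IOException {
        writeUInt((int) crc.getValue());
        writeUInt(def.getTotalIn());
    }

    private void writeUInt(int v) throws IOException {
        out.write(v & 0xff);
        out.write((v >>> 8) & 0xff);
        out.write((v >>> 16) & 0xff);
        out.write((v >>> 24) & 0xff);
    }

    // Analogue of the codec's resetState(): safe because finish() above
    // never called Deflater.end().
    void resetState() {
        def.reset();
        crc.reset();
    }
}

class GzipWorkaroundDemo {
    // Compress a string, finish, reset, then decompress to verify the
    // hand-written trailer is valid.
    static String roundTrip(String text) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ResettableGZIPOutputStream gz = new ResettableGZIPOutputStream(bos);
            gz.write(text.getBytes("UTF-8"));
            gz.finish();
            gz.resetState();  // would NPE on the affected IBM JDKs without the override

            GZIPInputStream in = new GZIPInputStream(
                    new ByteArrayInputStream(bos.toByteArray()));
            ByteArrayOutputStream plain = new ByteArrayOutputStream();
            byte[] b = new byte[4096];
            int n;
            while ((n = in.read(b)) > 0) {
                plain.write(b, 0, n);
            }
            return plain.toString("UTF-8");
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("reset after finish"));
    }
}
```

GZIPOutputStream's protected crc, def and buf fields make this override possible without reflection, which is presumably why the patch only needs to carry the three private helper methods.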

          Yu Li added a comment -

          Attached the patch for branch-1

          Luke Lu added a comment -

          The patch lgtm. Thanks Yu! Can you post a patch for trunk as well? That way we can leverage the trunk unit tests. Also please post ant test and test-patch results for branch-1 for Sun JDK as well.

          Yu Li added a comment -

          The result of test-patch:
          ========================================================
          +1 overall.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.1) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.
          ========================================================

          Yu Li added a comment -

          Test result on branch-1:

          Both with and without my patch, the UT cases below failed. I'm not sure whether it's an environment issue, but from the error messages it should be unrelated to compression:
          ========================================================
          [junit] Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
          [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 2.067 sec
          [junit] Running org.apache.hadoop.hdfs.TestRestartDFS
          [junit] Tests run: 2, Failures: 0, Errors: 2, Time elapsed: 16.016 sec
          [junit] Running org.apache.hadoop.hdfs.TestSafeMode
          [junit] Tests run: 3, Failures: 0, Errors: 2, Time elapsed: 64.601 sec
          [junit] Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
          [junit] Tests run: 3, Failures: 0, Errors: 3, Time elapsed: 41.901 sec
          [junit] Running org.apache.hadoop.hdfs.server.namenode.TestStartup
          [junit] Tests run: 3, Failures: 2, Errors: 0, Time elapsed: 44.583 sec
          ========================================================

          All cases with errors have error messages like:
          =======================================================
          Edit log corruption detected: corruption length = 9748 > toleration length = 0; the corruption is intolerable.
          java.io.IOException: Edit log corruption detected: corruption length = 9748 > toleration length = 0; the corruption is intolerable.
          at org.apache.hadoop.hdfs.server.namenode.FSEditLog.checkEndOfLog(FSEditLog.java:608)
          =======================================================

          The cases with failures have error messages like:
          =======================================================
          java.io.IOException: Failed to parse edit log (/home/biadmin/hadoop/build/test/data/dfs/chkpt/current/edits) at position 555, edit log length is 690, opcode=0, isTolerationEnabled=false, Recent opcode offsets=[65 124 244 388]
          at org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:84)
          =======================================================

          Yu Li added a comment -

          Test result on trunk:

          Both with and without my patch, the UT case below failed, but from the error message it should be unrelated to compression:
          =======================================================
          Tests in error:
          testRDNS(org.apache.hadoop.net.TestDNS): DNS server failure [response code 2]

          Tests run: 1784, Failures: 0, Errors: 1, Skipped: 18

          [INFO] ------------------------------------------------------------------------
          [INFO] Reactor Summary:
          [INFO]
          [INFO] Apache Hadoop Main ................................ SUCCESS [1.702s]
          [INFO] Apache Hadoop Project POM ......................... SUCCESS [3.812s]
          [INFO] Apache Hadoop Annotations ......................... SUCCESS [1.312s]
          [INFO] Apache Hadoop Project Dist POM .................... SUCCESS [0.245s]
          [INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.335s]
          [INFO] Apache Hadoop Auth ................................ SUCCESS [6.754s]
          [INFO] Apache Hadoop Auth Examples ....................... SUCCESS [0.322s]
          [INFO] Apache Hadoop Common .............................. FAILURE [16:42.921s]
          [INFO] Apache Hadoop Common Project ...................... SKIPPED
          =======================================================

          From the UT log we can see the following error message:
          =======================================================
          Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 15.459 sec <<< FAILURE!
          testRDNS(org.apache.hadoop.net.TestDNS) Time elapsed: 15233 sec <<< ERROR!
          javax.naming.ServiceUnavailableException: DNS server failure [response code 2]; remaining name '81.122.30.9.in-addr.arpa'
          at com.sun.jndi.dns.DnsClient.checkResponseCode(DnsClient.java:594)
          at com.sun.jndi.dns.DnsClient.isMatchResponse(DnsClient.java:553)
          =======================================================

          Luke Lu added a comment -

          Yu, please click Submit Patch to let Jenkins review the trunk patch.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12552831/HADOOP-8419-trunk.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1729//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1729//console

          This message is automatically generated.

          Luke Lu added a comment -

          Since this is restricted to some IBM JDK 6 releases, we should restrict the override to those releases as well, a la the changes in HBASE-7145.
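A vendor/version gate of the kind suggested here could look like the following. This is a simplified, hypothetical sketch (the real check would also need to account for the affected service-release range, SR9 FP2 and later): the workaround path is taken only on IBM JDK 6, and other JDKs keep using the stock GZIPOutputStream.finish().

```java
// Hypothetical gate in the spirit of the HBASE-7145 changes: decide from the
// java.vendor and java.version system properties whether the finish() override
// is needed. Simplified: it does not model the SR9 FP2 .. SR10 range.
class JdkGate {
    static boolean needsFinishOverride(String vendor, String version) {
        return vendor != null && vendor.contains("IBM")
                && version != null && version.startsWith("1.6.");
    }

    public static void main(String[] args) {
        // At codec-initialization time the running JVM's properties would be used.
        System.out.println(needsFinishOverride(
                System.getProperty("java.vendor"),
                System.getProperty("java.version")));
    }
}
```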

          Yu Li added a comment -

          Thanks Luke for the comments; I have updated and resubmitted the patch.

          For the updated patch, all test-commit UTs passed in my env, and below is the result of test-patch:

          +1 overall.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.1) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12562314/HADOOP-8419-trunk-v2.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1924//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1924//console

          This message is automatically generated.

          Eric Yang added a comment -

          +1, I just committed this, thank you Yu.

          Hudson added a comment -

          Integrated in Hadoop-trunk-Commit #3215 (See https://builds.apache.org/job/Hadoop-trunk-Commit/3215/)
          HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) (Revision 1431740)
          HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) (Revision 1431739)

          Result = SUCCESS
          eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431740
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressionStreamReuse.java

          eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431739
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java
          Hudson added a comment -

          Integrated in Hadoop-Yarn-trunk #93 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/93/)
          HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) (Revision 1431740)
          HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) (Revision 1431739)

          Result = SUCCESS
          eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431740
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressionStreamReuse.java

          eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431739
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #1282 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1282/)
          HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) (Revision 1431740)
          HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) (Revision 1431739)

          Result = FAILURE
          eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431740
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressionStreamReuse.java

          eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431739
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #1310 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1310/)
          HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) (Revision 1431740)
          HADOOP-8419. Fixed GzipCode NPE reset for IBM JDK. (Yu Li via eyang) (Revision 1431739)

          Result = FAILURE
          eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431740
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCompressionStreamReuse.java

          eyang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431739
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/GzipCodec.java
          Matt Foley added a comment -

          I think when Eric committed it, he intended to set "fixVersion" to 1.1.2 and 3.0.0, not just "targetVersions".
          I'm marking it fixed in 1.1.2. However, I'm concerned about the integration failures noted above for Hdfs-trunk and Mapreduce-trunk,
          so I'm not yet marking it fixed in 3.0.0. Can you please check into this? Thanks.

          Amir Sanjar added a comment -

          Matt, please do not add this patch to the next release. It causes a regression on IBM S390x running zLinux.

          Yu Li added a comment -

          Hi Matt, from the Hudson auto-comment I cannot locate the reason why the integration with hdfs-trunk and mapred-trunk failed; can you give me some hints to resolve the issue? Thanks.

          Hi Amir, can you tell me more about the regression, with detailed error/exception messages? I think we should improve the patch rather than leave a known issue open, thanks.

          Eric Yang added a comment -

          Matt, the failures in the HDFS trunk case are not related to this patch. The HDFS test case is failing due to HADOOP-9067. I also reverted this patch completely and reran the test case to be sure that this patch did not trigger any strange hidden behavior.

          Amir, the test case has been tested on the POWER platform on the ppc Jenkins server. We did not see the regression issue that you mentioned. Could you provide the IBM JDK version number that has this regression fixed on zLinux? This will help us make sure the logic only targets the specific version ranges of the IBM JDK that have this bug. Since you are using the same ppc Jenkins server to do the test, I don't think you will see any regression. I am inclined to close this issue barring any objections.

          Eric Yang added a comment -

          The Hadoop Common and HDFS trunk builds have stabilized. Marking this as fixed.

          Matt Foley added a comment -

          Closed upon successful release of 1.1.2.

          Suresh Srinivas added a comment -

          Eric Yang, is this fix committed to trunk? If so, can you please mark the fixed version as such? If not, why is this in the BUG FIXES section of CHANGES.txt in trunk?

          Eric Yang added a comment -

          Yes this is committed to trunk.


            People

            • Assignee:
              Yu Li
              Reporter:
              Luke Lu
            • Votes:
              0
              Watchers:
              11

              Dates

              • Created:
                Updated:
                Resolved:
