Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Cannot Reproduce
- Affects Version: 2.8.0
- Fix Version: None
- Component: None
Description
Logging for the sake of completeness: a transient failure of ITestS3AInputStreamPerformance.testDecompressionSequential128K during a parallel test run over long-haul links.

What I suspect happened is that a network failure caused the connection to be closed and reopened. The test merely asserts that the number of opens == 1, ignoring the error count. If this failure keeps happening, maybe the error count should be folded into the assertion: assert that opencount - errorcount == 1. That said, because the error count covers more than just read-and-reopen errors, it could be unreliable.
testDecompressionSequential128K(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)  Time elapsed: 24.583 sec  <<< FAILURE!
java.lang.AssertionError: open operations in null expected:<1> but was:<2>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.assertOpenOperationCount(ITestS3AInputStreamPerformance.java:188)
	at org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.assertStreamOpenedExactlyOnce(ITestS3AInputStreamPerformance.java:180)
	at org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testDecompressionSequential128K(ITestS3AInputStreamPerformance.java:283)

testRandomReadOverBuffer(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)  Time elapsed: 7.428 sec  <<< FAILURE!
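The relaxed check suggested above (opencount - errorcount == 1) could look roughly like the sketch below. This is a minimal illustration, not the actual Hadoop test code: the Counters class and its field names are hypothetical stand-ins for the stream statistics that ITestS3AInputStreamPerformance reads.

```java
// Hedged sketch of the proposed assertion change. The Counters type and
// its fields are hypothetical stand-ins for the real stream statistics.
public class OpenCountCheck {

    /** Hypothetical snapshot of input-stream statistics. */
    static final class Counters {
        final long openOperations;   // number of GET/open calls on the stream
        final long readExceptions;   // errors that forced a close + reopen
        Counters(long openOperations, long readExceptions) {
            this.openOperations = openOperations;
            this.readExceptions = readExceptions;
        }
    }

    /**
     * The relaxed assertion: discount every reopen that was triggered by a
     * read error, so a transient network failure no longer fails the test.
     */
    static boolean openedExactlyOnceIgnoringErrors(Counters c) {
        return c.openOperations - c.readExceptions == 1;
    }

    public static void main(String[] args) {
        // Clean run: one open, no errors -> passes.
        System.out.println(openedExactlyOnceIgnoringErrors(new Counters(1, 0)));
        // Transient failure: connection dropped once, reopened -> still passes.
        System.out.println(openedExactlyOnceIgnoringErrors(new Counters(2, 1)));
        // Genuine unexpected double open with no error -> fails, as it should.
        System.out.println(openedExactlyOnceIgnoringErrors(new Counters(2, 0)));
    }
}
```

As the report notes, this only works if the error counter is scoped to read-and-reopen errors; a broader counter would make the subtraction unreliable.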
Issue Links
- is depended upon by HADOOP-11694 (Über-jira: S3a phase II: robustness, scale and performance) - Resolved