Apache Ozone / HDDS-763 Ozone S3 gateway (phase two) / HDDS-894

Content-length should be set for ozone s3 ranged download


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.4.0
    • Component/s: S3
    • Labels: None

    Description

      Some of the seek-related s3a unit tests fail when using ozone s3g as the destination endpoint.

      For example, ITestS3ContractSeek.testRandomSeeks fails with:

      org.apache.hadoop.fs.s3a.AWSClientIOException: read on s3a://buckettest/test/testrandomseeks.bin: com.amazonaws.SdkClientException: Data read has a different length than the expected: dataLength=9411; expectedLength=0; includeSkipped=true; in.getClass()=class com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; resetSinceLastMarked=false; markCount=0; resetCount=0: Data read has a different length than the expected: dataLength=9411; expectedLength=0; includeSkipped=true; in.getClass()=class com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; resetSinceLastMarked=false; markCount=0; resetCount=0
      
      	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:189)
      	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
      	at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
      	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
      	at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
      	at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
      	at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:446)
      	at java.io.DataInputStream.readFully(DataInputStream.java:195)
      	at java.io.DataInputStream.readFully(DataInputStream.java:169)
      	at org.apache.hadoop.fs.contract.ContractTestUtils.verifyRead(ContractTestUtils.java:256)
      	at org.apache.hadoop.fs.contract.AbstractContractSeekTest.testRandomSeeks(AbstractContractSeekTest.java:357)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:498)
      	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
      	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
      	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
      	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
      	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
      	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
      	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
      	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
      	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at java.lang.Thread.run(Thread.java:745)
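
      For reference, the failing check boils down to a ranged read through the AWS SDK. Below is a minimal, hypothetical reproduction sketch against a local s3g endpoint; the endpoint URL, region, dummy credentials and class name are assumptions for illustration, while the bucket, key and range match the failure above and the captured response below.

      import com.amazonaws.auth.AWSStaticCredentialsProvider;
      import com.amazonaws.auth.BasicAWSCredentials;
      import com.amazonaws.client.builder.AwsClientBuilder;
      import com.amazonaws.services.s3.AmazonS3;
      import com.amazonaws.services.s3.AmazonS3ClientBuilder;
      import com.amazonaws.services.s3.model.GetObjectRequest;
      import com.amazonaws.services.s3.model.S3Object;
      import com.amazonaws.util.IOUtils;

      public class RangedReadSketch {
        public static void main(String[] args) throws Exception {
          // Endpoint, region and credentials are placeholders for a local,
          // unsecured ozone s3g instance.
          AmazonS3 s3 = AmazonS3ClientBuilder.standard()
              .withEndpointConfiguration(
                  new AwsClientBuilder.EndpointConfiguration(
                      "http://localhost:9878", "us-east-1"))
              .withCredentials(new AWSStaticCredentialsProvider(
                  new BasicAWSCredentials("any", "any")))
              .withPathStyleAccessEnabled(true)
              .build();

          // Same bucket/key as in the test, same range as in the captured
          // response below.
          GetObjectRequest request =
              new GetObjectRequest("buckettest", "test/testrandomseeks.bin")
                  .withRange(208, 10239);

          try (S3Object object = s3.getObject(request)) {
            byte[] data = IOUtils.toByteArray(object.getObjectContent());
            // The SDK verifies the number of bytes read against the expected
            // content length; with no Content-Length header the expected
            // length is 0, which is the expectedLength=0 mismatch in the
            // stack trace above.
            System.out.println("read " + data.length + " bytes");
          }
        }
      }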
      

      Checking the requests and responses with a mitm proxy, I found that it works well under a given range length.

      But if the response is bigger than a specific size, it is chunked by the Jetty server, which could be the problem.
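
      This matches standard servlet container behaviour: once the body outgrows the response buffer and no length has been declared, the container has to commit the response with Transfer-Encoding: chunked. A small, hypothetical servlet to illustrate the difference (the class name and the withLength parameter are made up):

      import java.io.IOException;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      public class ChunkingDemoServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
          byte[] body = new byte[64 * 1024];   // larger than the default buffer

          if (req.getParameter("withLength") != null) {
            // Declaring the length up front lets the response carry Content-Length.
            resp.setContentLengthLong(body.length);
          }
          // Without the call above, Jetty switches to chunked transfer encoding
          // as soon as the output buffer overflows.
          resp.getOutputStream().write(body);
        }
      }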

      Response for the problematic request:

      Date:                    Mon, 03 Dec 2018 11:27:55 GMT                                      
      Cache-Control:           no-cache                                                           
      Expires:                 Mon, 03 Dec 2018 11:27:55 GMT                                      
      Date:                    Mon, 03 Dec 2018 11:27:55 GMT                                      
      Pragma:                  no-cache                                                           
      X-Content-Type-Options:  nosniff                                                            
      X-FRAME-OPTIONS:         SAMEORIGIN                                                         
      X-XSS-Protection:        1; mode=block                                                      
      Content-Range:           bytes 208-10239/10240                                              
      Accept-Ranges:           bytes                                                              
      Content-Type:            application/octet-stream                                           
      Last-Modified:           Mon, 03 Dec 2018 11:27:54 GMT                                      
      Server:                  Ozone                                                              
      x-amz-id-2:              gk2CRdkmri0mc1                                                     
      x-amz-request-id:        eb60ee7f-55df-4439-b22a-7d92076f6eee                               
      Transfer-Encoding:       chunked 
      

      As you can see, the Content-Length header is missing and the response is sent with Transfer-Encoding: chunked instead. Based on the Content-Range header (bytes 208-10239/10240), the Content-Length should be 10239 - 208 + 1 = 10032.

      Based on this comment, the solution is to explicitly add the Content-Length to the response.
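
      As an illustration only (not the actual HDDS-894 patch), here is a minimal JAX-RS sketch of a ranged GET handler that derives the length from the requested range and sets Content-Length explicitly, so the container sends Content-Length instead of switching to chunked transfer encoding. The parseRange(), keyLength() and openKey() helpers and the class name are placeholders, and a Range header in the "bytes=start-end" form is assumed.

      import java.io.ByteArrayInputStream;
      import java.io.InputStream;
      import javax.ws.rs.GET;
      import javax.ws.rs.HeaderParam;
      import javax.ws.rs.Path;
      import javax.ws.rs.PathParam;
      import javax.ws.rs.core.HttpHeaders;
      import javax.ws.rs.core.Response;

      // Hypothetical endpoint; not the real Ozone ObjectEndpoint code.
      @Path("/{bucket}/{path:.+}")
      public class RangedGetSketch {

        @GET
        public Response get(@PathParam("bucket") String bucket,
                            @PathParam("path") String key,
                            @HeaderParam("Range") String rangeHeader) {
          long total = keyLength(bucket, key);
          long[] range = parseRange(rangeHeader, total); // {start, end}, inclusive
          long start = range[0];
          long end = range[1];
          long contentLength = end - start + 1;          // 10239 - 208 + 1 = 10032

          InputStream body = openKey(bucket, key, start, contentLength);

          return Response.status(Response.Status.PARTIAL_CONTENT)
              .entity(body)
              // The explicit Content-Length keeps the container from committing
              // the response with Transfer-Encoding: chunked.
              .header(HttpHeaders.CONTENT_LENGTH, contentLength)
              .header("Content-Range", "bytes " + start + "-" + end + "/" + total)
              .header("Accept-Ranges", "bytes")
              .build();
        }

        // Tiny parser for "bytes=start-end"; a real implementation must also
        // handle open-ended (bytes=N-) and suffix (bytes=-N) ranges.
        private long[] parseRange(String rangeHeader, long total) {
          String[] parts = rangeHeader.substring("bytes=".length()).split("-");
          long start = Long.parseLong(parts[0]);
          long end = parts.length > 1 && !parts[1].isEmpty()
              ? Long.parseLong(parts[1]) : total - 1;
          return new long[]{start, end};
        }

        // Placeholders standing in for reads from the object store.
        private long keyLength(String bucket, String key) {
          return 10240L;
        }

        private InputStream openKey(String bucket, String key, long offset,
            long length) {
          return new ByteArrayInputStream(new byte[(int) length]);
        }
      }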

      Attachments

        1. HDDS-894.001.patch
          3 kB
          Marton Elek


          People

            Assignee: Marton Elek (elek)
            Reporter: Marton Elek (elek)
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved: