Apache Ozone / HDDS-5005

Multipart Upload fails due to partName mismatch


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.0.0
    • Fix Version/s: 1.2.0
    • Component/s: OM, S3

    Description

      We're running the official Ozone 1.0.0 release and facing S3 Multipart Upload failures with large files. The error message looks similar to the one reported in HDDS-3554, but we'd like to report what we've found so far to help further investigation of this issue.

      The error message recorded in the OM log

      Please find below the error message excerpted from our OM log. Forgive us, we have redacted some sensitive information, such as the username and key name, which would reveal our project's topic.

      2021-03-14 07:48:41,947 [IPC Server handler 88 on default port 9862] ERROR org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCompleteRequest: MultipartUpload Complete request failed for Key: <REDACTED_KEYNAME> in Volume/Bucket s3v/<BUCKETNAME>
      INVALID_PART org.apache.hadoop.ozone.om.exceptions.OMException: Complete Multipart Upload Failed: volume: s3v bucket: <BUCKETNAME> key: <REDACTED_KEYNAME>. Provided Part info is { /s3v/<BUCKETNAME>/<REDACTED_KEYNAME>105884795658268282, 4}, whereas OM has partName /s3v/<BUCKETNAME>/<REDACTED_KEYNAME>105884791629180406
              at org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCompleteRequest.validateAndUpdateCache(S3MultipartUploadCompleteRequest.java:199)
              at org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
              at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.java:224)
              at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:145)
              at org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:74)
              at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:113)
              at org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
              at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
              at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
              at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:999)
              at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
              at java.security.AccessController.doPrivileged(Native Method)
              at javax.security.auth.Subject.doAs(Subject.java:422)
              at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
              at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2915)

      Anyway, OM thinks the partName for partNumber 4 is /s3v/<BUCKETNAME>/<REDACTED_KEYNAME>105884791629180406, but the COMPLETE_MULTIPART_UPLOAD request thinks it must be /s3v/<BUCKETNAME>/<REDACTED_KEYNAME>105884795658268282. This discrepancy is the immediate cause of this error.
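
      For clarity, here is our understanding of the check that fails, sketched in plain Java (paraphrased from the error text and the stack trace, not the actual S3MultipartUploadCompleteRequest source): OM keeps one recorded partName per partNumber and rejects the complete request with INVALID_PART whenever the partName supplied by the client differs from the recorded one.

           // Hypothetical sketch of the INVALID_PART check; variable names are ours.
           // omPartNames:     partNumber -> partName recorded by OM
           // clientPartNames: partNumber -> partName sent in COMPLETE_MULTIPART_UPLOAD
           for (Map.Entry<Integer, String> e : clientPartNames.entrySet()) {
             String recorded = omPartNames.get(e.getKey());
             if (recorded == null || !recorded.equals(e.getValue())) {
               throw new OMException("Provided Part info is { " + e.getValue() + ", "
                   + e.getKey() + "}, whereas OM has partName " + recorded,
                   OMException.ResultCodes.INVALID_PART);
             }
           }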

      OM audit log says both are correct

      Please find attached om-audit-HOSTNAME-2021-03-14-19-53-09-1.log.gz (also redacted, sorry). It contains filtered output of our OM audit log; only the lines which include <REDACTED_KEYNAME> and the multipartList entries remain.

      Interestingly, according to the OM audit log, there are two COMMIT_MULTIPART_UPLOAD_PARTKEY operations for partNumber=4, and both of them succeeded:

       

      % zgrep partNumber=4, om-audit-HOSTNAME-2021-03-14-19-53-09-1.log.gz
      2021-03-14 07:16:04,992 | INFO  | OMAudit | user=<REDACTED_UPN> | ip=10.192.17.172 | op=COMMIT_MULTIPART_UPLOAD_PARTKEY {volume=s3v, bucket=<BUCKETNAME>, key=<REDACTED_KEYNAME>, dataSize=8388608, replicationType=RATIS, replicationFactor=ONE, partNumber=4, partName=/s3v/<BUCKETNAME>/<REDACTED_KEYNAME>105884795658268282} | ret=SUCCESS | 
      2021-03-14 07:18:11,828 | INFO  | OMAudit | user=<REDACTED_UPN> | ip=10.192.17.172 | op=COMMIT_MULTIPART_UPLOAD_PARTKEY {volume=s3v, bucket=<BUCKETNAME>, key=<REDACTED_KEYNAME>, dataSize=8388608, replicationType=RATIS, replicationFactor=ONE, partNumber=4, partName=/s3v/<BUCKETNAME>/<REDACTED_KEYNAME>105884791629180406} | ret=SUCCESS | 
      %
      

       

      OM seems to have accepted both the partName ending with 105884795658268282 and the one ending with 105884791629180406 for partNumber 4. The COMPLETE_MULTIPART_UPLOAD operation was called with the former partName, but OM believed it had the latter partName for partNumber 4.

       

      2021-03-14 07:48:41,947 | ERROR | OMAudit | user=<REDACTED_UPN> | ip=10.192.17.172 | op=COMPLETE_MULTIPART_UPLOAD {volume=s3v, bucket=<BUCKETNAME>, key=<REDACTED_KEYNAME>, dataSize=0, replicationType=
      RATIS, replicationFactor=ONE, multipartList=[partNumber: 1
      partName: "/s3v/<BUCKETNAME>/<REDACTED_KEYNAME>105884791631605244"
      , partNumber: 2
      partName: "/s3v/<BUCKETNAME>/<REDACTED_KEYNAME>105884791631539707"
      , partNumber: 3
      partName: "/s3v/<BUCKETNAME>/<REDACTED_KEYNAME>105884791628262900"
      , partNumber: 4
      partName: "/s3v/<BUCKETNAME>/<REDACTED_KEYNAME>105884795658268282"
      , partNumber: 5
      partName: "/s3v/<BUCKETNAME>/<REDACTED_KEYNAME>105884791629245944"
      , partNumber: 6
      partName: "/s3v/<BUCKETNAME>/<REDACTED_KEYNAME>105884791629245943"
      

      We can also see that there are multiple COMMIT_MULTIPART_UPLOAD_PARTKEY operations for several partNumbers, such as partNumber 4, 13, 20, 45, 57, 67, 73, ... and some partNumbers like 172 have more than three COMMIT_MULTIPART_UPLOAD_PARTKEY operations; all of them succeeded.
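
      If OM keeps exactly one partName per partNumber, then a second successful COMMIT_MULTIPART_UPLOAD_PARTKEY for the same partNumber would simply replace the earlier entry, which would explain why only the later partName survives. A tiny illustration in plain Java (our assumption about the bookkeeping, not the actual OM data structure):

           // Hypothetical sketch: one entry per partNumber, so the last commit wins.
           Map<Integer, String> partNames = new HashMap<>();
           partNames.put(4, "/s3v/<BUCKETNAME>/<REDACTED_KEYNAME>105884795658268282"); // commit at 07:16:04
           partNames.put(4, "/s3v/<BUCKETNAME>/<REDACTED_KEYNAME>105884791629180406"); // commit at 07:18:11
           // partNames.get(4) now returns ...105884791629180406, while the client still
           // holds ...105884795658268282 from the UploadPart response it saw.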

       

      How to solve this issue?

      At first we thought this issue was caused by a race condition, but we noticed that there is enough time between the COMMIT_MULTIPART_UPLOAD_PARTKEY operations. We're not sure, but we also noticed that write operations to OmMetadataManager are isolated with omMetadataManager.getLock().acquireWriteLock(BUCKET_LOCK, volumeName, bucketName):

       

           multipartKey = omMetadataManager.getMultipartKey(volumeName,
               bucketName, keyName, uploadID);
      
           // TODO to support S3 ACL later.
      
           acquiredLock = omMetadataManager.getLock().acquireWriteLock(BUCKET_LOCK,
               volumeName, bucketName);
      
           validateBucketAndVolume(omMetadataManager, volumeName, bucketName);
      
           String ozoneKey = omMetadataManager.getOzoneKey(
               volumeName, bucketName, keyName);
      
           OmMultipartKeyInfo multipartKeyInfo = omMetadataManager
               .getMultipartInfoTable().get(multipartKey);
      

       

      So our question is: is it normal to have multiple COMMIT_MULTIPART_UPLOAD_PARTKEY operations for one partNumber, with different partNames?
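
      For reference, this is how we understand the client side behaves (a minimal AWS SDK for Java sketch, not our actual tooling, which is the aws CLI; s3, bucket, key, uploadId and chunkFile are placeholders): the client completes the upload with the ETags returned by its own successful UploadPart responses, and those values are what end up as partNames in the COMPLETE_MULTIPART_UPLOAD request, so if OM has since recorded a different partName for that partNumber the complete request cannot succeed.

           // Illustration only: a client that uploads part 4 and completes the upload
           // with the ETag it received, which is the value OM later compares against.
           List<PartETag> partETags = new ArrayList<>();

           UploadPartResult part4 = s3.uploadPart(new UploadPartRequest()
               .withBucketName(bucket).withKey(key)
               .withUploadId(uploadId).withPartNumber(4)
               .withFile(chunkFile).withPartSize(8L * 1024 * 1024));
           partETags.add(part4.getPartETag()); // the ETag/partName the client will remember

           // If another COMMIT_MULTIPART_UPLOAD_PARTKEY for partNumber 4 lands on OM after
           // this, OM's recorded partName no longer matches and this fails with INVALID_PART.
           s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
               bucket, key, uploadId, partETags));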

      Other findings

      This issue occurs less frequently with aws configure set default.s3.multipart_chunksize 256MB. It almost always fails with a multipart_chunksize of 8MB or 1GB in our environment.

       

       

Attachments

    om-audit-HOSTNAME-2021-03-14-19-53-09-1.log.gz

People

    Assignee: Bharat Viswanadham (bharat)
    Reporter: Kiyoshi Mizumaru (kmizumar)
    Votes: 1
    Watchers: 6
