
Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 3.3.1
    • Fix Version/s: 3.3.5
    • Component/s: fs/s3

    Description

       

      Error:

      During a rename, S3A deletes the source objects with a single multi-object delete request. When the rename covers more than 1000 objects, that request exceeds the S3 DeleteObjects limit of 1000 keys per call, and S3 rejects it with the MalformedXML error below.

       

      org.apache.hadoop.fs.s3a.AWSBadRequestException: rename  com.amazonaws.services.s3.model.AmazonS3Exception
      : The XML you provided was not well-formed or did not validate against our published schema (Service: Amazon S3; Status Code: 400; Error Code: MalformedXML; Request ID: XZ8PGAQHP0FGHPYS; S3 Extended Request ID: vTG8c+koukzQ8yMRGd9BvWfmRwkCZ3fAs/EOiAV5S9E
      JjLqFTNCgDOKokuus5W600Z5iOa/iQBI=; Proxy: null), S3 Extended Request ID: vTG8c+koukzQ8yMRGd9BvWfmRwkCZ3fAs/EOiAV5S9EJjLqFTNCgDOKokuus5W600Z5iOa/iQBI=:MalformedXML: The XML you provided was not well-formed or did not validate against our published schema 
      (Service: Amazon S3; Status Code: 400; Error Code: MalformedXML; Request ID: XZ8PGAQHP0FGHPYS; S3 Extended Request ID: vTG8c+koukzQ8yMRGd9BvWfmRwkCZ3fAs/EOiAV5S9EJjLqFTNCgDOKokuus5W600Z5iOa/iQBI=; Proxy: null)
              at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:247)
              at org.apache.hadoop.fs.s3a.s3guard.RenameTracker.convertToIOException(RenameTracker.java:267)
              at org.apache.hadoop.fs.s3a.s3guard.RenameTracker.deleteFailed(RenameTracker.java:198)
              at org.apache.hadoop.fs.s3a.impl.RenameOperation.removeSourceObjects(RenameOperation.java:706)
              at org.apache.hadoop.fs.s3a.impl.RenameOperation.completeActiveCopiesAndDeleteSources(RenameOperation.java:274)
              at org.apache.hadoop.fs.s3a.impl.RenameOperation.recursiveDirectoryRename(RenameOperation.java:484)
              at org.apache.hadoop.fs.s3a.impl.RenameOperation.execute(RenameOperation.java:312)
              at org.apache.hadoop.fs.s3a.S3AFileSystem.innerRename(S3AFileSystem.java:1912)
              at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$rename$7(S3AFileSystem.java:1759)
              at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:499)
              at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:444)
              at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2250)
              at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:1757)
              at org.apache.hadoop.fs.FileSystem.rename(FileSystem.java:1605)
              at org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:186)
              at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:110)
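
      The failure is easy to reproduce outside of rename. Below is a minimal sketch, assuming the AWS SDK for Java v1 (the SDK shown in the stack trace above); the bucket and key names are hypothetical:

        import java.util.ArrayList;
        import java.util.List;

        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3ClientBuilder;
        import com.amazonaws.services.s3.model.DeleteObjectsRequest;
        import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;

        public class OversizedBulkDelete {
          public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

            // Hypothetical source directory holding more than 1000 objects.
            List<KeyVersion> keys = new ArrayList<>();
            for (int i = 0; i < 1500; i++) {
              keys.add(new KeyVersion("src/part-" + i));
            }

            // S3's DeleteObjects API accepts at most 1000 keys per request.
            // Sending all 1500 in one call fails with the 400 MalformedXML
            // error shown in the trace above.
            s3.deleteObjects(new DeleteObjectsRequest("example-bucket").withKeys(keys));
          }
        }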

       

      Solution:

      Implement paging of bulk delete requests so that no single request exceeds the per-request key limit. The page size is configurable via "fs.s3a.bulk.delete.page.size"; a sketch of the approach follows.

            People

              Assignee: Mukund Thakur (mthakur)
              Reporter: Mukund Thakur (mthakur)


            Time Tracking

              Original Estimate: Not Specified
              Remaining Estimate: 0h
              Time Spent: 4h 10m