Hadoop Common / HADOOP-15220 Über-jira: S3a phase V: Hadoop 3.2 features / HADOOP-15469

S3A directory committer commit job fails if _temporary directory created under dest

    Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.1.0
    • Fix Version/s: 3.1.1
    • Component/s: fs/s3
    • Labels:
      None
    • Environment:

      spark test runs

    • Target Version/s:

      Description

      The directory staging committer fails in commitJob() if any temporary files or directories have been created under the destination. Spark can create such a directory under the destination when it writes files to absolute paths.

      This is because commitJob() checks whether the destination directory exists at all, rather than whether it contains any non-hidden files.
      As the code comment says, "its kind of superfluous". More specifically, it means that jobs which would commit successfully with the classic committer and overwrite=false will fail here.

      Proposed fix: remove the check
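
      A minimal sketch (in Java, not the actual committer code) of the two checking strategies discussed above; the helper names failIfDestExists and failIfDestHasVisibleFiles are hypothetical. The first rejects the commit as soon as the destination path exists, so a _temporary directory under it is enough to fail the job; the second rejects only when the destination holds non-hidden entries.

          import java.io.IOException;
          import org.apache.hadoop.fs.FileStatus;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;

          final class DestDirChecks {

            // Behaviour described in this issue, roughly: any existing
            // destination path fails the commit, even if it only holds
            // hidden entries such as _temporary.
            static void failIfDestExists(FileSystem fs, Path dest)
                throws IOException {
              if (fs.exists(dest)) {
                throw new IOException("Destination exists: " + dest);
              }
            }

            // A more lenient variant: only fail when the destination contains
            // visible entries; names starting with '_' or '.' are treated as
            // hidden, matching MapReduce output conventions.
            static void failIfDestHasVisibleFiles(FileSystem fs, Path dest)
                throws IOException {
              if (!fs.exists(dest)) {
                return; // nothing there: safe to commit
              }
              for (FileStatus st : fs.listStatus(dest)) {
                String name = st.getPath().getName();
                if (!name.startsWith("_") && !name.startsWith(".")) {
                  throw new IOException(
                      "Destination contains visible entry: " + st.getPath());
                }
              }
            }
          }

      Note that the proposed fix above is simply to remove the check; the lenient variant is shown only to illustrate what a "non-hidden files" check would look like.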

    People

    • Assignee: stevel@apache.org Steve Loughran
    • Reporter: stevel@apache.org Steve Loughran
    • Votes: 0
    • Watchers: 3
