Project: Hadoop Common
Parent: HADOOP-15220 Über-jira: S3a phase V: Hadoop 3.2 features
Key: HADOOP-15469

S3A directory committer commit job fails if _temporary directory created under dest


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.1.0
    • Fix Version/s: 3.1.1
    • Component/s: fs/s3
    • Labels: None
    • Environment: spark test runs

    Description

      The directory staging committer fails in commitJob() if any temporary files or directories have already been created under the destination. Spark can create such a directory for the placement of absolute-path files.

      This happens because commitJob() checks whether the destination directory exists at all, rather than whether it contains any non-hidden files. As the comment says, "its kind of superfluous". More specifically, it means that jobs which would commit with the classic committer and overwrite=false will fail here.

      Proposed fix: remove the check.
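
      For illustration, here is a minimal sketch of the two styles of precondition, using the Hadoop FileSystem API. The class and method names are hypothetical and are not the committer's actual code; the sketch only contrasts an "exists at all" check with a "contains non-hidden files" check that would tolerate a _temporary directory.

          import java.io.IOException;

          import org.apache.hadoop.fs.FileStatus;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;

          public class DestDirCheckSketch {

            // The problematic style of check: any existing destination directory
            // fails the job, even if it only holds a _temporary dir created by
            // Spark for absolute-path output.
            static void failIfDestExists(FileSystem fs, Path dest) throws IOException {
              if (fs.exists(dest)) {
                throw new IOException("Destination already exists: " + dest);
              }
            }

            // A more tolerant check: only reject the destination if it contains
            // visible entries; names starting with "_" or "." (e.g. _temporary)
            // are skipped by the PathFilter passed to listStatus().
            static void failIfDestHasVisibleFiles(FileSystem fs, Path dest)
                throws IOException {
              if (!fs.exists(dest)) {
                return;
              }
              FileStatus[] visible = fs.listStatus(dest,
                  path -> !path.getName().startsWith("_")
                       && !path.getName().startsWith("."));
              if (visible.length > 0) {
                throw new IOException("Destination has existing output: " + dest);
              }
            }
          }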

      Attachments

        1. HADOOP-15469-001.patch (3 kB, Steve Loughran)

      People

        Assignee: Steve Loughran (stevel@apache.org)
        Reporter: Steve Loughran (stevel@apache.org)
        Votes: 0
        Watchers: 3
