Hadoop Common / HADOOP-9577

Actual data loss using s3n (against US Standard region)


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Won't Fix
    • Affects Version/s: 1.0.3
    • Fix Version/s: None
    • Component/s: fs/s3
    • Labels: None

    Description

      The implementation of needsTaskCommit() assumes that the FileSystem used for writing temporary outputs is consistent. That happens not to be the case when using the S3 native filesystem (s3n) against the US Standard region, which is only eventually consistent. In larger jobs it is actually quite common for the exists() call to return false even though the task attempt wrote its output minutes earlier, which silently cancels the commit with no error. That's real-life data loss right there, folks.
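      For context, here is the check in question, condensed from the Hadoop 1.x FileOutputCommitter (old mapred API); this is a paraphrase, not a verbatim copy of the 1.0.3 source:

{code:java}
// Condensed from org.apache.hadoop.mapred.FileOutputCommitter (Hadoop 1.x);
// paraphrased, not quoted verbatim.
public boolean needsTaskCommit(TaskAttemptContext context) throws IOException {
  Path taskOutputPath = getTempTaskOutputPath(context);
  if (taskOutputPath != null) {
    FileSystem fs = taskOutputPath.getFileSystem(context.getJobConf());
    // The entire commit decision hinges on this exists() call. Against an
    // eventually consistent store (s3n, US Standard), it can return false
    // even though the attempt's output was written minutes earlier, so
    // commitTask() is skipped and the output silently disappears.
    return fs.exists(taskOutputPath);
  }
  return false;
}
{code}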

      The saddest part is that the Hadoop APIs do not seem to provide any legitimate means for the various RecordWriters to communicate with the OutputCommitter. In my projects I have created a static map of semaphores keyed by TaskAttemptID, which every custom RecordWriter has to be aware of (sketched below). That's pretty lame.
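      A minimal sketch of that workaround follows. The names (TaskOutputTracker, markWroteOutput, ConsistentCommitter) are hypothetical, not from my actual project, and a concurrent map of boolean flags stands in for the semaphores, since a presence flag is all the committer needs here:

{code:java}
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.mapred.FileOutputCommitter;
import org.apache.hadoop.mapred.TaskAttemptContext;
import org.apache.hadoop.mapred.TaskAttemptID;

// Hypothetical sketch: a static map shared between the RecordWriters and
// the committer, keyed by TaskAttemptID, as described above.
public final class TaskOutputTracker {
  private static final ConcurrentHashMap<TaskAttemptID, Boolean> WROTE =
      new ConcurrentHashMap<TaskAttemptID, Boolean>();

  private TaskOutputTracker() {}

  // Each custom RecordWriter calls this once it has written output.
  public static void markWroteOutput(TaskAttemptID id) {
    WROTE.put(id, Boolean.TRUE);
  }

  public static boolean wroteOutput(TaskAttemptID id) {
    return WROTE.containsKey(id);
  }
}

// A committer that consults the tracker instead of trusting exists()
// alone on an eventually consistent store.
class ConsistentCommitter extends FileOutputCommitter {
  @Override
  public boolean needsTaskCommit(TaskAttemptContext context) throws IOException {
    return super.needsTaskCommit(context)
        || TaskOutputTracker.wroteOutput(context.getTaskAttemptID());
  }
}
{code}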

People

    Assignee: Unassigned
    Reporter: Joshua Caplan (j_caplan)
    Votes: 1
    Watchers: 10
