Hadoop Map/Reduce / MAPREDUCE-7337

Task fails while deleting spill files on slow disk


Details

    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Labels: performance

    Description

      We sometimes see tasks fail while deleting spill files in this loop (line 2005 of MapTask.java):

      for(int i = 0; i < numSpills; i++) {
        rfs.delete(filename[i],true);
      }

      During this loop there is no communication back to the ApplicationMaster, so if the loop takes too long, the ApplicationMaster assumes the child has timed out and tells the NodeManager to kill the YARN child.

      Typically this is linked to storage issues; we've seen it most often due to an underlying filesystem bug where there is contention in the filesystem delete path when deleting several files. But even though there are usually underlying issues, it still wouldn't hurt to periodically mark progress in the task during this loop.
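      The suggestion above could be sketched roughly as follows. `Reporter` here is a hypothetical stand-in for Hadoop's Progressable/TaskReporter hook, the batch interval is an arbitrary illustrative choice, and the delete call is simulated; this is not the actual MapTask code:

      ```java
      import java.util.List;

      public class SpillCleanup {
          // Hypothetical stand-in for Hadoop's Progressable/TaskReporter hook;
          // not the actual MapTask API.
          interface Reporter {
              void progress();
          }

          // Report progress every PROGRESS_INTERVAL deletions so the loop
          // cannot run silently past the task timeout. The interval here is
          // an illustrative choice, not a value from MapTask.java.
          static final int PROGRESS_INTERVAL = 64;

          // Returns the number of progress() calls made, so the batching
          // behaviour is easy to check.
          static int deleteSpills(List<String> filenames, Reporter reporter) {
              int progressCalls = 0;
              for (int i = 0; i < filenames.size(); i++) {
                  // In MapTask.java this would be: rfs.delete(filename[i], true);
                  // simulated as a no-op in this sketch.
                  if ((i + 1) % PROGRESS_INTERVAL == 0) {
                      reporter.progress();
                      progressCalls++;
                  }
              }
              return progressCalls;
          }
      }
      ```

      Even if the underlying delete is slow, a call like this keeps the task's progress timestamp fresh, so the timeout only fires when the task is genuinely stuck rather than merely slow.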


People

    • Assignee: Unassigned
    • Reporter: Scott Oaks (scott.oaks@oracle.com)
    • Votes: 0
    • Watchers: 1
