MAPREDUCE-134: TaskTracker startup fails if any mapred.local.dir entries don't exist


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None
    • Environment: ~30 node cluster, various size/number of disks, CPUs, memory

    Description

    This appears to have been introduced with the "check for enough free space" before startup.

    It's debatable how best to fix this bug. I will submit a patch that ignores directories for which the DF utility fails. This lets me continue operating my cluster (where the number of drives varies, so mapred.local.dir contains entries for drives that aren't present on all nodes), but a cleaner solution is probably better. I'd lean towards "check for existence" and ignore the dir if it doesn't exist - but don't depend on DF failing, since DF can fail for other reasons that don't mean you're out of disk space. I argue that a TaskTracker should start up as long as the writable directories in the list have enough space; otherwise, a single failed drive per cluster machine means no work ever gets done.
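
    For illustration, here is a minimal sketch of the "check for existence" approach (this is not the attached patch; LocalDirCheck, checkLocalDirs, and minSpaceBytes are names invented for this example, and File.getUsableSpace stands in for the DF-based free-space check the TaskTracker used):

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;

        // Illustrative sketch only: keep the mapred.local.dir entries that
        // exist and are writable, then apply the free-space check to those.
        // A drive missing from one node is skipped instead of aborting startup.
        public class LocalDirCheck {

          static List<String> checkLocalDirs(String[] localDirs, long minSpaceBytes) {
            List<String> usable = new ArrayList<String>();
            for (String dirName : localDirs) {
              File dir = new File(dirName);
              // Ignore entries for drives that aren't present on this node,
              // rather than depending on DF to fail for them.
              if (!dir.exists() || !dir.isDirectory() || !dir.canWrite()) {
                continue;
              }
              // Free-space check (the original code shelled out to DF).
              if (dir.getUsableSpace() >= minSpaceBytes) {
                usable.add(dirName);
              }
            }
            return usable;
          }

          public static void main(String[] args) {
            String[] configured = { "/disk1/mapred/local", "/disk2/mapred/local" };
            List<String> dirs = checkLocalDirs(configured, 1024L * 1024 * 1024);
            if (dirs.isEmpty()) {
              // Refuse to start only when *no* configured directory is usable.
              throw new RuntimeException("no usable entries in mapred.local.dir");
            }
            System.out.println("Starting TaskTracker with local dirs: " + dirs);
          }
        }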

    Attachments

    1. fix.tasktracker.localdirs.patch.txt (3 kB) - Bryan Pendleton
    2. fix-freespace-tasktracker-failure.txt (0.8 kB) - Bryan Pendleton

    People

    • Assignee: Ravi Gummadi (ravidotg)
    • Reporter: Bryan Pendleton (bpendleton)
    • Votes: 0
    • Watchers: 2
