Hadoop Common
HADOOP-442

slaves file should include an 'exclude' section, to prevent "bad" datanodes and tasktrackers from disrupting a cluster


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.12.0
    • Component/s: conf
    • Labels: None

    Description

      I recently had a few nodes go bad, such that they were inaccessible over ssh but were still running their java processes.
      Tasks that executed on them were failing, causing jobs to fail.
      I couldn't stop the java processes because of the ssh issue, so I was helpless until I could actually power down those nodes.
      Restarting the cluster doesn't help, even after removing the bad nodes from the slaves file - they simply reconnect and are accepted.
      While we plan to avoid launching tasks on the same failing nodes over and over, what I'd like is a way to prevent rogue processes from connecting to the masters at all.
      Ideally, the slaves file would contain an 'exclude' section listing nodes that shouldn't be contacted and that should be ignored if they try to connect. That would also help in configuring the slaves file for a large cluster - I'd list the full range of machines in the cluster, then list the ones that are down in the 'exclude' section.
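      A minimal sketch of what such an exclusion could look like on the configuration side, assuming the dfs.hosts.exclude-style property that later Hadoop releases use for this purpose; the property name, file path, and hostnames here are illustrative only, not necessarily what the attached patches introduce:

        <!-- hadoop-site.xml: point the namenode at a file of hosts to refuse -->
        <property>
          <name>dfs.hosts.exclude</name>
          <!-- illustrative path; any file readable by the namenode would do -->
          <value>/home/hadoop/conf/excludes</value>
          <description>Nodes listed in this file are not allowed to register
          with the namenode.</description>
        </property>

        # conf/excludes - one hostname per line, same format as the slaves file
        badnode17.example.com
        badnode23.example.com

      A matching mapred.hosts.exclude property could cover rogue tasktrackers in the same way.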

      Attachments

        1. hadoop-442-10.patch
          45 kB
          Wendy Chien
        2. hadoop-442-11.patch
          43 kB
          Wendy Chien
        3. hadoop-442-8.patch
          36 kB
          Wendy Chien

    People

      Assignee: Wendy Chien (wchien)
      Reporter: Yoram Arnon (yarnon)
      Votes: 0
      Watchers: 0
