Hadoop Map/Reduce / MAPREDUCE-205

Add ability to send "signals" to jobs and tasks


Details

    • Type: New Feature
    • Status: Reopened
    • Priority: Major
    • Resolution: Unresolved

    Description

      In some cases it would be useful to be able to "signal" a job and its tasks about some external condition, or to broadcast a specific message to all tasks in a job. Currently we can send only a single pseudo-signal, namely killing a job.

      Example 1: Some jobs can be terminated gracefully even if they haven't completed all their work. For example, the Fetcher in Nutch may run for a very long time if it blocks on the relatively few sites left over from the fetchlist. In such a case it would be very useful to send it a message requesting that it discard the rest of its input and gracefully complete its map tasks.

      Example 2: The bandwidth available for fetching may differ at different times of day (e.g. daytime vs. nighttime) or depending on the external link usage of other applications. Fetcher jobs often run for several hours, so it would be useful to be able to send a "signal" to the Fetcher to throttle or un-throttle its bandwidth usage according to external conditions.

      Job implementations could react to these messages either by implementing a method or by registering a listener, whichever seems more natural.
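      A minimal sketch of the listener approach on the task side, assuming the old org.apache.hadoop.mapred API; the JobSignalListener interface, the "finish-early" signal name and the FetcherMapper class are hypothetical and do not exist in Hadoop today:

{code:java}
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical listener interface -- not part of Hadoop today.
interface JobSignalListener {
  /** Called when an externally submitted signal reaches this task. */
  void signalReceived(String signalName, String payload);
}

// A fetcher-like map task that, after receiving a "finish-early" signal,
// silently drains its remaining input so the map completes gracefully.
public class FetcherMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text>, JobSignalListener {

  private volatile boolean drainInput = false;

  public void signalReceived(String signalName, String payload) {
    if ("finish-early".equals(signalName)) {
      drainInput = true;
    }
  }

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    if (drainInput) {
      return;               // discard the rest of the input
    }
    // ... normal fetching/processing logic ...
  }
}
{code}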

      I'm not quite sure how to go about implementing this; I guess it would have to be part of TaskUmbilicalProtocol, but my knowledge here is a bit fuzzy. Comments are welcome.
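      Purely as an assumption about one possible design (not the actual protocol), signal delivery could piggyback on the task's regular pings to its TaskTracker; the interface and method below are illustrative only:

{code:java}
import java.io.IOException;

// Hypothetical sketch -- this method does not exist in the current
// TaskUmbilicalProtocol. Each task would poll its TaskTracker for queued
// signals along with its periodic progress updates, and then dispatch
// them to any registered listeners.
interface SignalAwareUmbilical /* would extend TaskUmbilicalProtocol */ {

  /** Returns signals queued for the given task since the last poll (possibly empty). */
  String[] pollSignals(String taskId) throws IOException;
}
{code}

      On the submitting side, the natural counterpart would be a client call along the lines of a hypothetical JobClient.signalJob(jobId, signalName, payload).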


            People

              Assignee: Unassigned
              Reporter: Andrzej Bialecki
              Votes: 1
              Watchers: 5
