Hadoop Map/Reduce / MAPREDUCE-3

Set mapred.child.ulimit automatically to the value of the RAM limits for a job, if they are set

    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

    Description

    Memory-based monitoring and scheduling allow users to set memory limits for the tasks of their jobs. This limit covers the total memory taken by the task and any children it may launch (e.g., in the case of streaming). A related parameter is mapred.child.ulimit, which is a hard limit on the memory used by any single process in the task tree. For user convenience, it would be sensible for the system to set the ulimit to at least the memory required by the task, if the user has specified the latter.
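    A minimal sketch of the proposed defaulting, assuming a Configuration-style conf object is available at task-launch time. mapred.job.map.memory.mb and mapred.child.ulimit are real properties of this era (the ulimit is expressed in KB of virtual memory), but the deriveChildUlimit helper and where it would be called from are hypothetical:

        import org.apache.hadoop.conf.Configuration;

        public class ChildUlimitDefaulter {
          // If the user set a per-task memory limit (in MB) but no explicit
          // mapred.child.ulimit, default the ulimit (in KB) to at least that limit.
          static void deriveChildUlimit(Configuration conf) {
            long taskMemMb = conf.getLong("mapred.job.map.memory.mb", -1L);
            if (taskMemMb > 0 && conf.get("mapred.child.ulimit") == null) {
              conf.setLong("mapred.child.ulimit", taskMemMb * 1024L);  // MB -> KB
            }
          }
        }

    An explicitly configured mapred.child.ulimit would win over the derived value, so the defaulting only helps users who set the task memory limit alone.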

    Activity

    Aaron Kimball added a comment -

    A caution here is that mapred.child.ulimit needs to account for the memory overhead of the JVM itself. Merely setting mapred.child.ulimit to the same value as the -Xmx<size>m setting in mapred.child.java.opts will fail to launch child tasks. You'll need some overhead room; I don't know exactly how much.
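    One way to honor that caution, extending the hypothetical sketch above: pad the derived limit before setting it. The 25% / 256 MB headroom figures here are illustrative assumptions, not measured values:

        // Pad a heap-sized limit so the JVM's own overhead (native heap,
        // thread stacks, code cache) fits under the ulimit alongside -Xmx.
        static long padForJvmOverhead(long heapKb) {
          long overheadKb = Math.max(heapKb / 4, 256L * 1024L);  // assumed: 25% or a 256 MB floor
          return heapKb + overheadKb;
        }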

    People

    • Assignee: Unassigned
    • Reporter: Hemanth Yamijala
    • Votes: 0
    • Watchers: 2
