Hadoop Map/Reduce
MAPREDUCE-3954

Clean up passing HEAPSIZE to yarn and mapred commands.

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.23.2
    • Fix Version/s: 0.23.2
    • Component/s: mrv2
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    • Release Note:
      Added new environment variables to set separate heap sizes for the different daemons started via the bin scripts.
    • Target Version/s:

      Description

      Currently the heap size for all of these daemons is set in yarn-env.sh: JAVA_HEAP_MAX is set to -Xmx1000m unless YARN_HEAPSIZE is set, in which case YARN_HEAPSIZE overrides JAVA_HEAP_MAX. However, we do not always want the RM, NM, and HistoryServer to have exactly the same heap size. It would be logical for the yarn and mapred scripts to set JAVA_HEAP_MAX from YARN_RESOURCEMANAGER_HEAPSIZE, YARN_NODEMANAGER_HEAPSIZE, or HADOOP_JOB_HISTORYSERVER_HEAPSIZE, respectively, when those are set. This is a bug because it is easy to configure the history server to store more entries than the heap can hold. It is also a performance issue if we do not allow the history server to cache many entries on a large cluster.
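
      The sketch below illustrates the kind of per-daemon override the issue asks for, assuming the existing JAVA_HEAP_MAX / YARN_HEAPSIZE handling and the $COMMAND dispatch used by bin/yarn; it is illustrative only and the committed patch may differ (bin/mapred would do the equivalent for HADOOP_JOB_HISTORYSERVER_HEAPSIZE).

        # Existing behavior: one shared default heap for every daemon.
        JAVA_HEAP_MAX=-Xmx1000m
        if [ "$YARN_HEAPSIZE" != "" ]; then
          JAVA_HEAP_MAX="-Xmx""$YARN_HEAPSIZE""m"
        fi

        # Proposed addition (sketch): let a daemon-specific env var win when it
        # is set, inside the per-command dispatch of bin/yarn.
        if [ "$COMMAND" = "resourcemanager" ]; then
          if [ "$YARN_RESOURCEMANAGER_HEAPSIZE" != "" ]; then
            JAVA_HEAP_MAX="-Xmx""$YARN_RESOURCEMANAGER_HEAPSIZE""m"
          fi
        elif [ "$COMMAND" = "nodemanager" ]; then
          if [ "$YARN_NODEMANAGER_HEAPSIZE" != "" ]; then
            JAVA_HEAP_MAX="-Xmx""$YARN_NODEMANAGER_HEAPSIZE""m"
          fi
        fi

      With something like this in place, setting YARN_RESOURCEMANAGER_HEAPSIZE=2000 in yarn-env.sh would give only the ResourceManager a 2000 MB heap while the NodeManager keeps the shared default.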

      Attachments

      1. MAPREDUCE-3954-20120305.txt
        6 kB
        Vinod Kumar Vavilapalli
      2. MR-3954.txt
        6 kB
        Robert Joseph Evans
      3. MR-3954.txt
        5 kB
        Robert Joseph Evans
      4. MR-3954.txt
        2 kB
        Robert Joseph Evans

      People

      • Assignee:
        Robert Joseph Evans
      • Reporter:
        Robert Joseph Evans
      • Votes:
        0
      • Watchers:
        1
