Bigtop / BIGTOP-2663

puppet hadoop module: Consolidate memory resource settings

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.1.0
    • Fix Version/s: 1.2.0
    • Component/s: None
    • Labels: None

      Description

      The memory resource settings for Hadoop are outdated.

      The following settings in mapred-site.xml should now be used:

      mapreduce.map.java.opts
      mapreduce.reduce.java.opts
      

      These are now set to -Xmx1024m (this value was hardcoded before).
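
      A minimal sketch of the resulting mapred-site.xml entries (the exact
      layout of the generated file may differ):

      <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1024m</value>
      </property>
      <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx1024m</value>
      </property>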

      Additionally, one can now optionally set the maximum (resident) memory
      for map and reduce tasks:

      mapreduce.map.memory.mb
      mapreduce.reduce.memory.mb
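
      For example (the 2048 MB values below are purely illustrative; the
      module does not ship them as defaults):

      <property>
        <name>mapreduce.map.memory.mb</name>
        <value>2048</value>
      </property>
      <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
      </property>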
      

      And last but not least, this change sets yarn.nodemanager.vmem-pmem-ratio to 100:
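
      In yarn-site.xml terms this corresponds to (a sketch of the resulting entry):

      <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>100</value>
      </property>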

      There is a common misconception that virtual memory is a limiting
      resource. That was only true for 32-bit address spaces; it no longer
      applies on 64-bit systems.

      See, for instance, http://stackoverflow.com/questions/561245/virtual-memory-usage-from-java-under-linux-too-much-memory-used
      for a fairly up-to-date and detailed explanation of why vmem doesn't matter.

      So we allow it to be tremendously large. Why does this matter at all? Java 8 seems to use memory-mapped I/O aggressively now, and the virtual memory of the Hadoop MapReduce container became exhausted while resident memory was only 15% used.
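
      A back-of-the-envelope illustration (assuming YARN's stock default
      ratio of 2.1, which is not part of this issue, and assuming for
      illustration a 1024 MB container size matching the -Xmx1024m default
      above):

      vmem limit    = mapreduce.map.memory.mb x vmem-pmem-ratio
      default ratio: 1024 MB x 2.1 = ~2150 MB  (easily exhausted by mmap'd files)
      ratio of 100:  1024 MB x 100 = 102400 MB (vmem effectively unconstrained)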

            People

            • Assignee: Olaf Flebbe (oflebbe)
            • Reporter: Olaf Flebbe (oflebbe)