Hadoop HDFS / HDFS-383

Modify datanode configs to specify minimum JVM heapsize


    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels:
      None

      Description

      >Y! 1524346

      Currently the Hadoop DataNodes are running with the option -Xmx1000m. They
      should also (or instead) be running with the option -Xms1000m (assuming
      1000m is the right value; it seems high).

      This turns out to be a sticky request. Hadoop DFS takes that 1000m value
      from the hadoop-env file. Here is the relevant code from bin/hadoop, which
      is used to start all Hadoop processes:

        JAVA_HEAP_MAX=-Xmx1000m

        # check envvars which might override default args
        if [ "$HADOOP_HEAPSIZE" != "" ]; then
          #echo "run with heapsize $HADOOP_HEAPSIZE"
          JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"
          #echo $JAVA_HEAP_MAX
        fi

      And here's the entry from hadoop-env.sh:

        # The maximum amount of heap to use, in MB. Default is 1000.
        export HADOOP_HEAPSIZE=1000

      The problem is that I believe we want to specify -Xms for datanodes ONLY,
      but the same script is used to start datanodes, tasktrackers, etc. This
      isn't trivially a matter of distributing different config files: the
      options are hard-coded into the bin/hadoop script. So this is an
      enhancement request.

              People

              • Assignee: Unassigned
              • Reporter: Robert Chansler (chansler)
              • Votes: 0
              • Watchers: 3
