Hadoop HDFS / HDFS-94

The "Heap Size" in HDFS web ui may not be accurate

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.21.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      It seems that the Heap Size shown in the HDFS web UI is not accurate. It keeps showing 100% usage, e.g.

      Heap Size is 10.01 GB / 10.01 GB (100%) 
      
      Attachments

      1. HDFS-94.patch (1 kB) - Dmytro Molkov


          Activity

          Tsz Wo Nicholas Sze added a comment -

           The code uses Runtime.getRuntime().totalMemory() to obtain the first number. However, totalMemory() does not represent heap usage, as described in the javadoc.
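
           A minimal sketch, assuming a standalone Java program (the class name HeapNumbers is illustrative), of what the javadoc semantics imply: totalMemory() is the heap currently committed to the JVM, not the amount in use, so the heap actually used is roughly totalMemory() minus freeMemory().

           public class HeapNumbers {
             public static void main(String[] args) {
               Runtime rt = Runtime.getRuntime();
               long total = rt.totalMemory();  // heap currently committed to the JVM
               long max = rt.maxMemory();      // upper bound (roughly -Xmx minus overhead)
               long free = rt.freeMemory();    // unused portion of the committed heap
               long used = total - free;       // approximate heap actually in use
               System.out.printf("committed=%d, max=%d, used~=%d (%.0f%% of max)%n",
                   total, max, used, 100.0 * used / max);
             }
           }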

          dhruba borthakur added a comment -

          What is the most appropriate call then?

          Tsz Wo Nicholas Sze added a comment -

           I think this is a presentation problem. The text "10.01 GB / 10.01 GB (100%)" seems to say that 100% of memory is being used, but that may not be the case. Unfortunately, there is no easy way to obtain accurate memory usage. How about we change it to "maxMemory = 10.01 GB, totalMemory = 10.01 GB, freeMemory = xx"? Then it will be less confusing.
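
           A rough sketch of building that alternative label from the same Runtime calls (the variable names and formatting are illustrative only, not the actual UI code):

           Runtime rt = Runtime.getRuntime();
           double gb = 1L << 30;
           String heapLabel = String.format(
               "maxMemory = %.2f GB, totalMemory = %.2f GB, freeMemory = %.2f GB",
               rt.maxMemory() / gb, rt.totalMemory() / gb, rt.freeMemory() / gb);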

          dhruba borthakur added a comment -

           I have a machine on which the namenode is running with -Xmx20480m, but the namenode UI shows:

          xxx files and directories, yyy blocks = zzz total. Heap Size is 15.27 GB / 17.78 GB (85%)

           I wonder why it shows a total of 17.78 GB instead of 20 GB.

          Tsz Wo Nicholas Sze added a comment -

          > I wonder why it shows a total of 17.78GB instead of 20GB

           Would it be the case that you have hit some limit? The following is quoted from the java man page:

          On Solaris 7 and Solaris 8 SPARC platforms, the upper limit for this value is approximately 4000m minus overhead amounts. On Solaris 2.6 and x86 platforms, the upper limit is approximately 2000m minus overhead amounts. On Linux platforms, the upper limit is approximately 2000m minus overhead amounts.

          dhruba borthakur added a comment -

          I agree. Do you plan to do anything to this JIRA to make the reporting more accurate?

          dhruba borthakur added a comment -

          Currently, the code uses

          long totalMemory = Runtime.getRuntime().totalMemory();
          long maxMemory = Runtime.getRuntime().maxMemory();
          long used = (totalMemory * 100)/maxMemory;

           Is it better to use:

          MemoryMXBean memoryMXBean = ManagementFactory.getMemoryMXBean();
          MemoryUsage status = memoryMXBean.getHeapMemoryUsage();
          usedMemory = status.getUsed();
          maxMemory = status.getMax();
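
           For comparison, a minimal self-contained sketch of that MemoryMXBean approach (the class name and output format are illustrative, not the actual patch):

           import java.lang.management.ManagementFactory;
           import java.lang.management.MemoryMXBean;
           import java.lang.management.MemoryUsage;

           public class HeapUsageSketch {
             public static void main(String[] args) {
               MemoryMXBean memoryMXBean = ManagementFactory.getMemoryMXBean();
               MemoryUsage heap = memoryMXBean.getHeapMemoryUsage();
               long used = heap.getUsed();   // bytes of heap actually in use
               long max = heap.getMax();     // -Xmx limit in bytes, or -1 if undefined
               System.out.printf("Heap Size is %d / %d (%d%%)%n",
                   used, max, max > 0 ? (used * 100) / max : 0);
             }
           }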

          Tsz Wo Nicholas Sze added a comment -

           It makes sense to replace the current code with MemoryMXBean since it provides more information. I think it is better to show more numbers like non-heap usage, init, used, committed, max, etc.

          Dmytro Molkov added a comment -

           This patch uses the JMX memory beans instead of Runtime.
           I was talking to Dhruba; he mentioned we might want to do this JIRA as a bug fix, so that it can be pulled into 21, and displaying more information can be a separate 'improvement' JIRA.

          dhruba borthakur added a comment -

          Thanks Dmytro for the patch.

           @Nicholas: please let me know if you agree that the additional metrics (e.g. non-heap usage, used, committed) should be done via another JIRA. Also, I do not see the need for a unit test in this case.

          Tsz Wo Nicholas Sze added a comment -

          +1 patch looks good. Thanks, Dmytro.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12428685/HDFS-94.patch
          against trunk revision 893066.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/156/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/156/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/156/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/156/console

          This message is automatically generated.

          dhruba borthakur added a comment -

          The failed tests are
          org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_18
          org.apache.hadoop.hdfs.TestReadWhileWriting.pipeline_02_03

          and are not related to this patch.

           I will commit this patch soon.

          Hairong Kuang added a comment -

           I filed HDFS-849 against the first failure, org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_18.

          dhruba borthakur added a comment -

          I just committed this. Thanks Dmytro.

          Hudson added a comment -

          Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #159 (See http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/159/)
           HDFS-94. The Heap Size printed in the NameNode WebUI is accurate.
           (Dmytro Molkov via dhruba)

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #182 (See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/182/)

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #158 (See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/158/)

          Hudson added a comment -

          Integrated in Hdfs-Patch-h2.grid.sp2.yahoo.net #94 (See http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/94/)


            People

            • Assignee: Dmytro Molkov
            • Reporter: Tsz Wo Nicholas Sze
            • Votes: 0
            • Watchers: 3

              Dates

              • Created:
              • Updated:
              • Resolved:
