Hadoop Common / HADOOP-1163

Ganglia metrics reporting is misconfigured

Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.12.1
    • Fix Version/s: 0.13.0
    • Component/s: metrics
    • Labels: None
    • Environment: ganglia-3.0.3

    Description

      In hadoop-metrics.properties, I set mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext.

      If I then get the gmond xml feed from the gmond server, I get this:

      <METRIC NAME="load_one" VAL="1.04" TYPE="float" UNITS="" TN="28" TMAX="70" DMAX="0" SLOPE="both" SOURCE="gmond"/>
      ...
      <METRIC NAME="datanode.myhostname.bytes_read" VAL="657927" TYPE="int32" UNITS="" TN="5696" TMAX="60" DMAX="0" SLOPE="both" SOURCE="gmetric"/>

      Because the bytes_read metric has the datanode.hostname prefix, it will not aggregate properly with the same metric from other hosts.
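
      For illustration, the per-host name in the XML feed above comes from emitRecord() building a prefix out of the record name plus every tag value (including the hostname tag) before emitting each metric. A minimal sketch of that pre-patch naming logic, using a plain list of tag values instead of the real OutputRecord API (namePrefix and MetricNameSketch are hypothetical names, not Hadoop code):

      ```java
      import java.util.Arrays;
      import java.util.Iterator;
      import java.util.List;

      // Sketch only: mirrors the StringBuffer/Iterator loop the patch below
      // removes from GangliaContext.emitRecord().
      public class MetricNameSketch {

          // Build "recordName.tag1.tag2. ... ." by appending each tag value.
          static String namePrefix(String recordName, List<String> tagValues) {
              StringBuffer nameBuf = new StringBuffer(recordName);
              Iterator<String> tagIt = tagValues.iterator();
              while (tagIt.hasNext()) {
                  nameBuf.append('.');
                  nameBuf.append(tagIt.next());
              }
              nameBuf.append('.');
              return nameBuf.toString();
          }

          public static void main(String[] args) {
              // A record tagged with the local hostname yields a per-host
              // metric name, which gmond cannot aggregate across the cluster.
              String name = namePrefix("datanode", Arrays.asList("myhostname"))
                      + "bytes_read";
              System.out.println(name);  // datanode.myhostname.bytes_read
          }
      }
      ```

      Dropping the prefix (as the attached patch does) makes every host report the bare metric name bytes_read, so Ganglia can sum it cluster-wide.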

      Attachments

        1. hostname-not-part-of-ganglia-record.patch
          1 kB
          Michael Bieniosek
        2. hostname-not-part-of-ganglia-record-2.patch
          1 kB
          Michael Bieniosek

        Activity

          bien Michael Bieniosek added a comment -

          My patch (I can't attach in the standard way since this issue is still open):

--- src/java/org/apache/hadoop/metrics/ganglia/GangliaContext.java (revision 522712)
+++ src/java/org/apache/hadoop/metrics/ganglia/GangliaContext.java (working copy)
@@ -122,19 +122,6 @@
   public void emitRecord(String contextName, String recordName, OutputRecord outRec)
     throws IOException
   {
-    // metric name formed from record name and tag values
-    StringBuffer nameBuf = new StringBuffer(recordName);
-    // for (String tagName : outRec.getTagNames()) {
-    Iterator tagIt = outRec.getTagNames().iterator();
-    while (tagIt.hasNext()) {
-      String tagName = (String) tagIt.next();
-      nameBuf.append('.');
-      nameBuf.append(outRec.getTag(tagName));
-    }
-    nameBuf.append('.');
-    String namePrefix = new String(nameBuf);
     // emit each metric in turn
     // for (String metricName : outRec.getMetricNames())
     Iterator metricIt = outRec.getMetricNames().iterator();
@@ -142,9 +129,8 @@
       String metricName = (String) metricIt.next();
       Object metric = outRec.getMetric(metricName);
       String type = (String) typeTable.get(metric.getClass());
-      emitMetric(namePrefix + metricName, type, metric.toString());
+      emitMetric(metricName, type, metric.toString());
     }
   }

   private void emitMetric(String name, String type, String value)

          dbowen David Bowen added a comment -

          +1 code reviewed. Looks good.


          bien Michael Bieniosek added a comment -

          Attach my patch the jira way
          hadoopqa Hadoop QA added a comment -

          -1, because the patch command could not apply the latest attachment http://issues.apache.org as a patch to trunk revision http://svn.apache.org/repos/asf/lucene/hadoop/trunk/524929. Please note that this message is automatically generated and may represent a problem with the automation system and not the patch. Results are at http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch

          hadoopqa Hadoop QA added a comment -

          +1, because http://issues.apache.org/jira/secure/attachment/12354789/hostname-not-part-of-ganglia-record.patch applied and successfully tested against trunk revision http://svn.apache.org/repos/asf/lucene/hadoop/trunk/524929. Results are at http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch
          tomwhite Thomas White added a comment -

          Michael,
          The patch no longer applies cleanly due to a conflict with HADOOP-1190 (which has been committed) - would you be able to regenerate it please? Thanks.


          bien Michael Bieniosek added a comment -

          Attaching updated patch
          tomwhite Thomas White added a comment -

          I've just committed this. Thanks Michael!

          hadoopqa Hadoop QA added a comment -

          Integrated in Hadoop-Nightly #48 (See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/48/ )

          People

            Assignee: Unassigned
            Reporter: bien Michael Bieniosek
            Votes: 0
            Watchers: 2

            Dates

              Created:
              Updated:
              Resolved: