Hadoop Common: HADOOP-7630

hadoop-metrics2.properties should have a property *.period set to a default value for metrics

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.20.205.0, 0.23.0
    • Component/s: conf
    • Labels: None

      Description

      Currently the hadoop-metrics2.properties file does not set a value for *.period.

      This property controls how often metrics are sampled and published. We should set it to a default of 60.
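      The proposed fix amounts to adding the default to the template file; a minimal excerpt along those lines (illustrative only — the actual template contents are in the attached patch):

      ```properties
      # hadoop-metrics2.properties (illustrative excerpt)
      # Default snapshot/publish period, in seconds, applied to all metric prefixes.
      *.period=60
      ```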

      1. HADOOP-7630-trunk.patch
        3 kB
        Eric Yang
      2. HADOOP-7630.patch
        2 kB
        Eric Yang

        Activity

        Eric Yang added a comment -

        Added hadoop-metrics2.properties template, and copy it to the destination if the file does not already exist.

        Eric Yang added a comment -

        Added *.period=60 to the hadoop-metrics2.properties template. The template is copied to HADOOP_CONF_DIR if it does not already exist.
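        The copy-if-absent step described here can be sketched as follows; this is a hypothetical illustration of the logic only (the real logic lives in hadoop-setup-conf.sh, and `install_template` is an invented name):

        ```python
        # Hypothetical sketch of the "copy template unless present" behavior;
        # not the actual hadoop-setup-conf.sh code.
        import shutil
        from pathlib import Path

        def install_template(template: Path, conf_dir: Path) -> bool:
            """Copy the template into conf_dir unless the target already exists.

            Returns True if the file was copied, False if it was left alone."""
            target = conf_dir / template.name
            if target.exists():
                # Never clobber a config an operator may have edited.
                return False
            conf_dir.mkdir(parents=True, exist_ok=True)
            shutil.copyfile(template, target)
            return True
        ```

        The exists-check is what preserves operator edits across re-runs of the setup script.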

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12494300/HADOOP-7630-trunk.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        -1 release audit. The applied patch generated 9 release audit warnings (more than the trunk's current 0 warnings).

        +1 core tests. The patch passed unit tests in .

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/172//testReport/
        Release audit warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/172//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
        Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/172//console

        This message is automatically generated.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12494302/HADOOP-7630-trunk.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        -1 release audit. The applied patch generated 9 release audit warnings (more than the trunk's current 0 warnings).

        +1 core tests. The patch passed unit tests in .

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/175//testReport/
        Release audit warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/175//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
        Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/175//console

        This message is automatically generated.

        Matt Foley added a comment -

        +1 for trunk and branch. Release audit results are unavailable currently, but unlikely to be due to this patch.

        Devaraj Das added a comment -

        Committed this. Thanks, Eric!

        Luke Lu added a comment -

        The default period is 10 seconds, which has been in production for over a year now. 60 seconds is too long. Note, this has nothing to do with graphs, just the snapshot/publish period. A typical metrics aggregator (Simon is configured at 60 seconds for graphing resolution) needs a few samples in the aggregation period for derived metrics.

        Eric Yang added a comment -

        Luke, it used to be a 5-second blurb rate (Simon plugin) and a 60-second reporting rate (Simon aggregator). The Simon aggregator calculates the wavelet transformation of the output samples. Unfortunately, the Simon plugin's UDP datagrams do not scale well with a large number of emission sources. Rather than lose blurbs (UDP datagrams), I changed the blurb rate to 60 seconds around August 2008 on Y! clusters. The Simon plugin only does addition and averaging of samples; it does not have a discrete wavelet transformation algorithm built in. Consequently, it is better for the publish/subscribe rate to be the same as the blurb emission rate so the result is more accurate. The more averaging is computed at the MetricsSource, the less precision is retained at the source.

        Are you concerned that the metrics might overflow if the publish rate is 60 seconds? In 2008, I also audited all metrics to ensure they use double, to prevent overflow within a short fraction of time. A few metrics have been added since, but the new metrics do not look like they would overflow either. As a side benefit, by reducing the publish frequency, fewer cycles are spent on metrics monitoring, which makes the system more efficient.

        Luke Lu added a comment -

        I changed the blurb rate to 60 seconds around August 2008 on Y! clusters.

        The blurb period (for metrics; config blurbs are on another period) was actually still 5 seconds in metrics1 when we were deploying metrics2 (where we use the default blurb period of 10 seconds) in 2010 on Y! clusters. Rajiv can confirm this. Are you saying the Simon aggregator could not process less than 1k UDP packets per second? In any case, the throughput I saw (a few months ago) on the Simon aggregator is way more than that. Rajiv said that the limiting factor is not the UDP packet processing at the aggregator level but the iops to store the data.

        The Simon plugin only does addition and averaging of samples.

        I'm sure you meant the Simon aggregator. It also does user-defined calculations (defined in the Simon config file); if you lose the sole UDP packet in the reporting period, the derived metrics will not be correct, so you need at least a couple of samples in the reporting period. While MetricVaryingRate in metrics1 and MutableRate in metrics2 do averaging and compute throughput, which are used mostly in rpc-related metrics, most metrics in mapred are counters and gauges, and almost all the mapred throughput metrics (*PerSec) are actually derived metrics from the Simon config. This approach halves the packet size vs using the *Rate metrics in metrics sources. Simon sinks send one packet per update, unlike ganglia, which sends one packet per metric per update.

        Are you concerned that the metrics might overflow if the publish rate is 60 seconds?

        No. Even if some of them do, it's easy to see and explain on the graphs. Any metrics backend built on rrdtool should handle counter wraps automatically.
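        The wrap handling mentioned here can be sketched as follows; this is a simplified illustration assuming at most one wrap of a 32-bit counter (rrdtool's real logic also considers 64-bit counters and rejects implausible deltas):

        ```python
        # Simplified sketch of counter-wrap compensation as an RRD-style
        # backend might do it; not rrdtool's actual implementation.

        WRAP32 = 2 ** 32

        def counter_delta(last, cur, wrap=WRAP32):
            """Increase of a monotonic counter between two samples, wrap-compensated."""
            if cur >= last:
                return cur - last
            # Counter went "backwards": assume it wrapped past its maximum.
            return cur + wrap - last

        print(counter_delta(2 ** 32 - 6, 4))  # wrapped: prints 10
        ```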

        As a side benefit, by reducing the publish frequency, fewer cycles are spent on metrics monitoring, which makes the system more efficient.

        At least with metrics2, which is more efficient than metrics1, even a 1-second period has no noticeable impact on system performance, last time I checked, as the few hundred additional objects per second in the timer thread are mostly noise compared with overall gc and context-switching throughput on busy servers.

        My point is that you should not change the current default that has potential impact on production monitoring without actually testing it at scale.

        Eric Yang added a comment -

        Are you saying simon aggregator could not process less than 1k udp packets per second?

        No, that is not what I was saying. On all production clusters, the status page shows 93% packet loss for disk metrics. Disk metrics are emitted per disk. On a typical 2000-node cluster, there used to be 4 disks per node, which works out to 8k metrics per 5 seconds. A single Simon aggregator has problems handling the aggregation load at this scale. Hadoop metrics volume is supposedly smaller than system metrics, but multiplied by the types of metrics (jvm, rpc, mapred, hdfs), the number of output UDP packets would reach the same scale as the disk metrics if something is not done to reduce the repeated noise.

        I'm sure you meant simon aggregator.

        No, I mean the Simon plugin: we want the gauge-like metrics to be in sync at the source (MetricsContext) as well as at the plugins. Internally, the Simon aggregator will use the last known value, or calculate the missing gap, if a packet is lost. I wrote the code to handle missing UDP packets for the Simon aggregator per management's request.

        My point is that you should not change the current default that has potential impact on production monitoring without actually testing it at scale.

        This configuration has been verified to work at 40-node scale. I am sure it would not cause any harm, and it reduces the potential breaking point.

        Luke Lu added a comment -

        A single Simon aggregator has problems handling the aggregation load at this scale.

        That's why we use multiple aggregators for different groups/contexts of metrics. Hadoop metrics have always been sent at a 5 or 10 second period with no problems at scale.

        No, I mean the Simon plugin: we want the gauge-like metrics to be in sync at the source (MetricsContext) as well as at the plugins.

        Please look at the title of the jira. This is for metrics2; there is no MetricsContext. A metrics2 plugin is a MetricsSink implementation, and it only pushes metrics out to aggregators. It doesn't do addition or averaging, unless I misunderstood your sentence: "The Simon plugin is only doing add and average of samples".

        This configuration has been verified to work at 40-node scale. I am sure it would not cause any harm, and it reduces the potential breaking point.

        A 10-second period has been verified to work at 4000-node scale. With the current change, you're relying on zero UDP packet loss, which is OK for small clusters. To give an example of why this is a problem: derived throughput metrics are calculated as (counter-current - counter-last)/period, so if you are missing a few packets, you will see zero throughput in 60-second windows, which is clearly wrong for many metrics.
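        This example can be sketched numerically; the following is a toy model of a derived throughput metric (the function name and the lost-packet representation are invented), not actual Simon aggregator code:

        ```python
        # Toy model: derived throughput = (counter_now - counter_last) / period.
        # A lost packet is modeled as None; the aggregator then sees no update
        # for that window, and the derived rate reads as zero.

        def derived_throughput(samples, period):
            """Per-window rates; a window with no sample (lost packet) reads as 0."""
            rates = []
            last = samples[0]
            for cur in samples[1:]:
                if cur is None:
                    rates.append(0.0)      # no update arrived: graph shows zero
                else:
                    rates.append((cur - last) / period)
                    last = cur
            return rates

        # Counter grows steadily, but the second report is lost in transit.
        print(derived_throughput([0, 600, None, 1800], 60))  # [10.0, 0.0, 20.0]
        ```

        With a 60-second period, a single lost packet blanks a full minute of the graph; with a 10-second period, the other samples in that minute still carry signal.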

        There is simply no need to change the period.

        In any case, make sure Rajiv knows about this (I just added Rajiv to the watchers).

        Hudson added a comment -

        Integrated in Hadoop-Common-trunk-Commit #921 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/921/)
        HADOOP-7630. hadoop-metrics2.properties should have a property *.period set to a default value for metrics. Contributed by Eric Yang.

        mattf : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1173402
        Files :

        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/hadoop-setup-conf.sh
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/hadoop-metrics2.properties
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk-Commit #998 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/998/)
        HADOOP-7630. hadoop-metrics2.properties should have a property *.period set to a default value for metrics. Contributed by Eric Yang.

        mattf : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1173402
        Files :

        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/hadoop-setup-conf.sh
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/hadoop-metrics2.properties
        Matt Foley added a comment -

        Two items. First, this bug is marked for trunk/v0.23 as well as security/v0.20.205, but it wasn't previously committed to trunk. I have now committed it to trunk and v0.23.

        Second, in order to make the Release Notes generation work right for the next release candidate for 0.20.205, I need to close this bug. No disrespect is intended regarding the on-going conversation. May I suggest opening a new bug in which to resolve the concern? Alternatively, feel free to re-open this bug in a few days after 0.20.205-rc1 is created. Thank you.

        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk-Commit #938 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/938/)
        HADOOP-7630. hadoop-metrics2.properties should have a property *.period set to a default value for metrics. Contributed by Eric Yang.

        mattf : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1173402
        Files :

        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/hadoop-setup-conf.sh
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/hadoop-metrics2.properties
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-0.23-Build #15 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/15/)
        HADOOP-7630. merge to v0.23

        mattf : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1173405
        Files :

        • /hadoop/common/branches/branch-0.23
        • /hadoop/common/branches/branch-0.23/hadoop-common-project
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/packages/hadoop-setup-conf.sh
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/hadoop-metrics2.properties
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf/capacity-scheduler.xml.template
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/capacity-scheduler
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/dynamic-scheduler
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/fairscheduler
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/streaming
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/FileBench.java
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/security/authorize/TestServiceLevelAuthorization.java
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/test/MapredTestDriver.java
        • /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk #837 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/837/)
        HADOOP-7630. hadoop-metrics2.properties should have a property *.period set to a default value for metrics. Contributed by Eric Yang.

        mattf : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1173402
        Files :

        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/hadoop-setup-conf.sh
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/hadoop-metrics2.properties
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk #807 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/807/)
        HADOOP-7630. hadoop-metrics2.properties should have a property *.period set to a default value for metrics. Contributed by Eric Yang.

        mattf : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1173402
        Files :

        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/hadoop-setup-conf.sh
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/hadoop-metrics2.properties
        Rajiv Chittajallu added a comment -

        We are discussing an internal tool that has not yet been released to the public. That said, the drop rate Eric mentions occurs on simonweb, which writes RRDs to local drives; the drives used to be SATA. There was never a drop rate on the aggregator.

        60s seems to be reasonable though.

        Rajiv Chittajallu added a comment -

        60s seems to be reasonable though.

        ...for general usage. Each site would probably have its own settings and its own way of gathering metrics.

        Luke, thanks for adding me to the thread.

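        For readers configuring this by hand, the resulting hadoop-metrics2.properties template looks roughly like the following. This is a minimal sketch: only the `*.period=60` line reflects this patch; the commented-out sink lines are illustrative examples of how the period is consumed, not part of the change.

        ```properties
        # hadoop-metrics2.properties -- illustrative sketch
        # Default sampling period, in seconds. The "*" prefix applies the
        # setting to all metrics sinks unless a sink overrides it.
        *.period=60

        # A sink definition would look like this (example only; adapt as needed):
        # namenode.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
        # namenode.sink.file.filename=namenode-metrics.out
        ```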
        Matt Foley added a comment -

        Closed upon release of 0.20.205.0



          People

          • Assignee: Eric Yang
          • Reporter: Arpit Gupta
          • Votes: 0
          • Watchers: 3