Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.21.0
    • Component/s: metrics
    • Labels: None
    • Hadoop Flags: Reviewed
    • Release Note: New server web page .../metrics allows convenient access to metrics data via JSON and text.

Description

Implement a "/metrics" URL on the HTTP server of Hadoop daemons, to expose metrics data to users via their web browsers, in plain-text and JSON.
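
For illustration only, here is one way such an endpoint could be read programmatically. This is a sketch, not code from the patch: the host, the port, and the format=json query parameter are assumptions.

{noformat}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class MetricsFetch {
  public static void main(String[] args) throws Exception {
    // Hypothetical JobTracker web address; format=json is assumed here to
    // select the JSON view over the default plain-text one.
    URL url = new URL("http://localhost:50030/metrics?format=json");
    BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
    String line;
    while ((line = in.readLine()) != null) {
      System.out.println(line);
    }
    in.close();
  }
}
{noformat}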

Attachments

    1. HADOOP-5469.patch (24 kB) by Philip Zeyliger
    2. HADOOP-5469.patch (14 kB) by Philip Zeyliger

Activity

        Philip Zeyliger created issue -
        Philip Zeyliger added a comment -

        Attaching patch. Hudson will complain about missing unit tests, and it will be right. The patch is small enough that I hope having it around will help discussion.

        Philip Zeyliger made changes -
        Field Original Value New Value
        Attachment HADOOP-5469.patch [ 12402000 ]
        Philip Zeyliger made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12402000/HADOOP-5469.patch
        against trunk revision 752984.

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no tests are needed for this patch.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        -1 findbugs. The patch appears to introduce 1 new Findbugs warnings.

        +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

        -1 release audit. The applied patch generated 647 release audit warnings (more than the trunk's current 645 warnings).

        -1 core tests. The patch failed core unit tests.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/78/testReport/
        Release audit warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/78/artifact/trunk/current/releaseAuditDiffWarnings.txt
        Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/78/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/78/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/78/console

        This message is automatically generated.

        Steve Loughran added a comment -

        HtmlUnit would be the JAR to use to write tests for this.
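
        For context, a minimal HtmlUnit fetch of the page might look like the sketch below; the host and port are assumptions, and this is not a test from the patch:

        {noformat}
        import com.gargoylesoftware.htmlunit.TextPage;
        import com.gargoylesoftware.htmlunit.WebClient;

        public class MetricsPageCheck {
          public static void main(String[] args) throws Exception {
            WebClient client = new WebClient();
            // Hypothetical daemon address; /metrics serves plain text by default.
            TextPage page = client.getPage("http://localhost:50030/metrics");
            System.out.println(page.getContent());
            client.closeAllWindows();
          }
        }
        {noformat}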

        Philip Zeyliger added a comment -

        I've been schooled that descriptions ought to be short, and comments lengthy. The original description follows, and the description has been shortened.

        I'd like to be able to query Hadoop's metrics via HTTP, e.g., by going to "/metrics" on any Hadoop daemon that has an HttpServer. My motivation is pretty simple--if you're running on a lot of machines, tracking down the relevant metrics files is pretty time-consuming; this would be a useful debugging utility. I'd also like the output to be parseable, so I could write a quick web app to query the metrics dynamically.

        This is similar in spirit to, but different from, just using JMX. (See also HADOOP-4756.) JMX requires a client, and, more annoyingly, JMX requires setting up authentication. If you just disable authentication, someone can do Bad Things, and if you enable it, you have to worry about yet another password. This approach is also more complete--JMX requires separate instrumentation, so, for example, the JobTracker's metrics aren't exposed via JMX.

        To start the discussion going, I've attached a patch. I had to add a method to ContextFactory to get all the active MetricsContexts, implement a do-little MetricsContext that simply inherits from AbstractMetricsContext, add a method to MetricsContext to get all the records, expose copy methods for the maps in OutputRecord, and implement a simple servlet. I ended up removing some common code from all MetricsContexts, for setting the period; I'm open to taking that out if it muddies the patch significantly.

        I'd love to hear your suggestions. There's a bug in the JSON representation, and there's some gross type-handling.

        The patch is missing tests. I wanted to post to gather feedback before I got too far, but tests are forthcoming.
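
        (A rough sketch of the shape such a servlet could take follows; this is not the attached patch, and the accessor names -- getAllContexts(), getAllRecords(), getTagsCopy(), getMetricsCopy() -- are assumptions based on the description above.)

        {noformat}
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.util.Collection;
        import java.util.Map;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.apache.hadoop.metrics.ContextFactory;
        import org.apache.hadoop.metrics.MetricsContext;
        import org.apache.hadoop.metrics.spi.OutputRecord;

        public class SimpleMetricsServlet extends HttpServlet {
          @Override
          public void doGet(HttpServletRequest request, HttpServletResponse response)
              throws IOException {
            PrintWriter out = response.getWriter();
            // Walk every active context, then every record of each context,
            // printing tags and metric values as indented plain text.
            for (MetricsContext context : ContextFactory.getFactory().getAllContexts()) {
              out.println(context.getContextName());
              Map<String, Collection<OutputRecord>> all = context.getAllRecords();
              for (Map.Entry<String, Collection<OutputRecord>> entry : all.entrySet()) {
                out.println("  " + entry.getKey());
                for (OutputRecord record : entry.getValue()) {
                  out.println("    " + record.getTagsCopy());
                  out.println("      " + record.getMetricsCopy());
                }
              }
            }
          }
        }
        {noformat}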

        Here's a sample output for a job tracker, while it was running a "pi" job:

        jvm
          metrics
            {hostName=doorstop.local, processName=JobTracker, sessionId=}
              gcCount=22
              gcTimeMillis=68
              logError=0
              logFatal=0
              logInfo=52
              logWarn=0
              memHeapCommittedM=7.4375
              memHeapUsedM=4.2150116
              memNonHeapCommittedM=23.1875
              memNonHeapUsedM=18.438614
              threadsBlocked=0
              threadsNew=0
              threadsRunnable=7
              threadsTerminated=0
              threadsTimedWaiting=8
              threadsWaiting=15
        mapred
          job
            {counter=Map input records, group=Map-Reduce Framework, hostName=doorstop.local, jobId=job_200903101702_0001, jobName=test-mini-mr, sessionId=, user=philip}
              value=2.0
            {counter=Map output records, group=Map-Reduce Framework, hostName=doorstop.local, jobId=job_200903101702_0001, jobName=test-mini-mr, sessionId=, user=philip}
              value=4.0
            {counter=Data-local map tasks, group=Job Counters , hostName=doorstop.local, jobId=job_200903101702_0001, jobName=test-mini-mr, sessionId=, user=philip}
              value=4.0
            {counter=Map input bytes, group=Map-Reduce Framework, hostName=doorstop.local, jobId=job_200903101702_0001, jobName=test-mini-mr, sessionId=, user=philip}
              value=48.0
            {counter=FILE_BYTES_WRITTEN, group=FileSystemCounters, hostName=doorstop.local, jobId=job_200903101702_0001, jobName=test-mini-mr, sessionId=, user=philip}
              value=148.0
            {counter=Combine output records, group=Map-Reduce Framework, hostName=doorstop.local, jobId=job_200903101702_0001, jobName=test-mini-mr, sessionId=, user=philip}
              value=0.0
            {counter=Launched map tasks, group=Job Counters , hostName=doorstop.local, jobId=job_200903101702_0001, jobName=test-mini-mr, sessionId=, user=philip}
              value=4.0
            {counter=HDFS_BYTES_READ, group=FileSystemCounters, hostName=doorstop.local, jobId=job_200903101702_0001, jobName=test-mini-mr, sessionId=, user=philip}
              value=236.0
            {counter=Map output bytes, group=Map-Reduce Framework, hostName=doorstop.local, jobId=job_200903101702_0001, jobName=test-mini-mr, sessionId=, user=philip}
              value=64.0
            {counter=Launched reduce tasks, group=Job Counters , hostName=doorstop.local, jobId=job_200903101702_0001, jobName=test-mini-mr, sessionId=, user=philip}
              value=1.0
            {counter=Spilled Records, group=Map-Reduce Framework, hostName=doorstop.local, jobId=job_200903101702_0001, jobName=test-mini-mr, sessionId=, user=philip}
              value=4.0
            {counter=Combine input records, group=Map-Reduce Framework, hostName=doorstop.local, jobId=job_200903101702_0001, jobName=test-mini-mr, sessionId=, user=philip}
              value=0.0
          jobtracker
            {hostName=doorstop.local, sessionId=}
              jobs_completed=0
              jobs_submitted=1
              maps_completed=2
              maps_launched=5
              reduces_completed=0
              reduces_launched=1
        rpc
          metrics
            {hostName=doorstop.local, port=50030}
              NumOpenConnections=2
              RpcProcessingTime_avg_time=0
              RpcProcessingTime_num_ops=84
              RpcQueueTime_avg_time=1
              RpcQueueTime_num_ops=84
              callQueueLen=0
              getBuildVersion_avg_time=0
              getBuildVersion_num_ops=1
              getJobProfile_avg_time=0
              getJobProfile_num_ops=17
              getJobStatus_avg_time=0
              getJobStatus_num_ops=32
              getNewJobId_avg_time=0
              getNewJobId_num_ops=1
              getProtocolVersion_avg_time=0
              getProtocolVersion_num_ops=2
              getSystemDir_avg_time=0
              getSystemDir_num_ops=2
              getTaskCompletionEvents_avg_time=0
              getTaskCompletionEvents_num_ops=19
              heartbeat_avg_time=5
              heartbeat_num_ops=9
              submitJob_avg_time=0
              submitJob_num_ops=1
        
        Philip Zeyliger made changes -
        Description [original description, reproduced verbatim in the comment above] Implement a "/metrics" URL on the HTTP server of Hadoop daemons, to expose metrics data to users via their web browsers, in plain-text and JSON.
        Philip Zeyliger added a comment -

        Uploading a new patch.

        This one fixes the JSON generation, and includes tests for MetricsServlet, as well as the new functionality I added to OutputRecord.

        Reviews appreciated!

        Philip Zeyliger made changes -
        Attachment HADOOP-5469.patch [ 12404254 ]
        Philip Zeyliger made changes -
        Status Patch Available [ 10002 ] Open [ 1 ]
        Philip Zeyliger made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Philip Zeyliger logged work - 31/Mar/09 21:18
        • Time Spent: 2h
          I've thrown this up on the Hadoop JIRA. Now I'm blocked until someone reviews it, which might be Tom.

          For my future reference, here's how you run the "test-patch" ant task. It takes a long time and spits out esoteric error messages. (Turns out you need a block of the Apache license at the top of every file; who knew!)
          {noformat}
          ANT_HOME=/usr/share/ant ant -Dpatch.file=../hadoop-trunk2/HADOOP-5469.patch -Dforrest.home=$HOME/pub/apache-forrest-0.8 -Dfindbugs.home=$HOME/pub/findbugs-1.3.8 -Djava5.home=/System/Library/Frameworks/JavaVM.framework/Versions/1.5/Home -Dscratch.dir=/tmp/philip test-patch
          {noformat}

          I expect this will take some more time after the review.
        Philip Zeyliger made changes -
        Remaining Estimate 1.5h [ 5400 ]
        Time Spent 2h [ 7200 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12404254/HADOOP-5469.patch
        against trunk revision 760651.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 6 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs warnings.

        +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed core unit tests.

        -1 contrib tests. The patch failed contrib unit tests.

        Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/85/testReport/
        Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/85/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/85/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-minerva.apache.org/85/console

        This message is automatically generated.

        Philip Zeyliger added a comment -

        The same two tests seem to be failing in trunk, according to http://hudson.zones.apache.org/hudson/view/Hadoop/job/Hadoop-trunk/792/testReport/ . The tests are as follows, and don't relate to this patch.

        • org.apache.hadoop.mapred.TestCapacityScheduler.testHighMemoryJobWithInvalidRequirements
        • org.apache.hadoop.mapred.TestCapacityScheduler.testClusterBlockingForLackOfMemory
        Owen O'Malley made changes -
        Assignee Philip Zeyliger [ philip ]
        Doug Cutting added a comment -

        I just committed this. Thanks, Philip!

        Doug Cutting made changes -
        Status Patch Available [ 10002 ] Resolved [ 5 ]
        Hadoop Flags [Reviewed]
        Fix Version/s 0.21.0 [ 12313563 ]
        Resolution Fixed [ 1 ]
        Chris Douglas added a comment -

        I reverted this because trunk no longer compiled.

        Chris Douglas made changes -
        Resolution Fixed [ 1 ]
        Status Resolved [ 5 ] Reopened [ 4 ]
        Chris Douglas added a comment -

        The changes to src/saveVersion.sh and VersionInfo seem unrelated to this issue...

        Doug Cutting added a comment -

        Oops. Forgot to add the new files. Fixed.

        Doug Cutting made changes -
        Status Reopened [ 4 ] Resolved [ 5 ]
        Resolution Fixed [ 1 ]
        Allen Wittenauer added a comment -

        So how do we protect this new interface from prying eyes?

        Philip Zeyliger added a comment -

        The same way we protect the various status pages, the RPC ports, the sockets that the data nodes will happily send you blocks over? (Namely, not at all, until Hadoop has a security framework.)

        Marco Nicosia added a comment -

        Opened HADOOP-5722 to make this a configurable feature. And yes, we continue to lobby for better protection of all of Hadoop's ports. In the meantime, we prefer not to open additional holes if possible.

        Robert Chansler added a comment -

        Editorial pass over all release notes prior to publication of 0.21.

        Robert Chansler made changes -
        Release Note New server web page .../metrics allows convenient access to metrics data via JSON and text.
        Tom White made changes -
        Status Resolved [ 5 ] Closed [ 6 ]
        Yevgen Yampolskiy added a comment -

        Is it available with hadoop-metrics2? It looks like the /metrics page is missing in hadoop-1.0.4.

        Luke Lu added a comment -

        Everything is available at /jmx (including generic JVM properties and Hadoop metrics) now, as all metrics in metrics2 are published to JMX.

        Transition | Time In Source Status | Execution Times | Last Executer | Last Execution Date
        Patch Available → Open | 19d 1h 18m | 1 | Philip Zeyliger | 31/Mar/09 19:04
        Open → Patch Available | 15h 8m | 2 | Philip Zeyliger | 31/Mar/09 19:04
        Patch Available → Resolved | 7d 2h 35m | 1 | Doug Cutting | 07/Apr/09 21:40
        Resolved → Reopened | 14m 23s | 1 | Chris Douglas | 07/Apr/09 21:54
        Reopened → Resolved | 1h 6m | 1 | Doug Cutting | 07/Apr/09 23:00
        Resolved → Closed | 503d 22h 35m | 1 | Tom White | 24/Aug/10 21:36

People

    • Assignee: Philip Zeyliger
    • Reporter: Philip Zeyliger
    • Votes: 0
    • Watchers: 8

Dates

    • Created:
    • Updated:
    • Resolved:

Time Tracking

    • Original Estimate: Not Specified
    • Remaining Estimate: 1.5h
    • Time Spent: 2h
