Details

      Description

      The NodeManager should provide a way for an AM to tell it that a container's logs should not be aggregated, should be aggregated with high priority, or should be aggregated with lower priority. The AM should be able to set this in the ContainerLaunchContext as a default value, and should also be able to update the value when the container is released.

      This would allow the NM to skip log aggregation entirely in some cases and avoid any connection to the NameNode (NN) at all.
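      For illustration, here is a minimal sketch of how an AM might request one of the per-container log aggregation policies added by this change when submitting an application. The policy class names are taken from the commit file lists in the Activity section below; the LogAggregationContext setter name (setLogAggregationPolicyClassName) is assumed to match the API introduced by this patch, so treat it as a sketch rather than the definitive usage.

          import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
          import org.apache.hadoop.yarn.api.records.LogAggregationContext;
          import org.apache.hadoop.yarn.util.Records;

          public class LogAggregationPolicyExample {

            /**
             * Ask the NMs to aggregate only the AM container's logs and the logs
             * of failed containers; logs of other containers are skipped, so the
             * NM never needs to contact the NameNode for them.
             */
            static void configure(ApplicationSubmissionContext appContext) {
              LogAggregationContext logCtx =
                  Records.newRecord(LogAggregationContext.class);
              // Assumed setter added by this patch; the policy class is one of
              // the implementations listed in the commits below.
              logCtx.setLogAggregationPolicyClassName(
                  "org.apache.hadoop.yarn.server.nodemanager.containermanager."
                  + "logaggregation.AMOrFailedContainerLogAggregationPolicy");
              appContext.setLogAggregationContext(logCtx);
            }
          }

      Choosing NoneContainerLogAggregationPolicy instead would disable aggregation for the application entirely, which is what lets the NM avoid the NameNode connection altogether.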

      Attachments

      1. YARN-221-trunk-v5.patch
        89 kB
        Ming Ma
      2. YARN-221-trunk-v4.patch
        88 kB
        Ming Ma
      3. YARN-221-trunk-v3.patch
        88 kB
        Ming Ma
      4. YARN-221-trunk-v2.patch
        77 kB
        Ming Ma
      5. YARN-221-trunk-v1.patch
        51 kB
        Chris Trezzo
      6. YARN-221-addendum.1.patch
        4 kB
        Xuan Gong
      7. YARN-221-9.patch
        106 kB
        Ming Ma
      8. YARN-221-8.patch
        102 kB
        Ming Ma
      9. YARN-221-7.patch
        101 kB
        Ming Ma
      10. YARN-221-6.patch
        88 kB
        Ming Ma

        Issue Links

          Activity

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2227 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2227/)
          YARN-221. Addendum patch to compilation issue which is caused by missing (xgong: rev b71c6006f579ac6f0755975a9b908b0062618b46)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AllContainerLogAggregationPolicy.java

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #289 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/289/)
          YARN-221. Addendum patch to compilation issue which is caused by missing (xgong: rev b71c6006f579ac6f0755975a9b908b0062618b46)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AllContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.java

          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2246 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2246/)
          YARN-221. Addendum patch to compilation issue which is caused by missing (xgong: rev b71c6006f579ac6f0755975a9b908b0062618b46)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AllContainerLogAggregationPolicy.java

          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #297 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/297/)
          YARN-221. Addendum patch to compilation issue which is caused by missing (xgong: rev b71c6006f579ac6f0755975a9b908b0062618b46)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AllContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.java

          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #301 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/301/)
          YARN-221. Addendum patch to compilation issue which is caused by missing (xgong: rev b71c6006f579ac6f0755975a9b908b0062618b46)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AllContainerLogAggregationPolicy.java

          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk #1030 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/1030/)
          YARN-221. Addendum patch to compilation issue which is caused by missing (xgong: rev b71c6006f579ac6f0755975a9b908b0062618b46)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AllContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.java

          Xuan Gong added a comment -

          Committed the addendum patch into trunk/branch-2. Thanks for the review, Ming.


          Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #8341 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8341/)
          YARN-221. Addendum patch to compilation issue which is caused by missing (xgong: rev b71c6006f579ac6f0755975a9b908b0062618b46)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AllContainerLogAggregationPolicy.java

          Ming Ma added a comment -

          +1 on the addendum patch.


          Xuan Gong added a comment -

          Yes, reopening this and attaching an addendum patch to fix the compilation issue.


          Arun Suresh added a comment -

          Looks like trunk does not compile correctly after this.


          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #288 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/288/)
          YARN-221. NM should provide a way for AM to tell it not to aggregate (xgong: rev 37e1c3d82a96d781e1c9982988b7de4aa5242d0c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ContainerLogContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/LogAggregationContextPBImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AbstractContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/FailedContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AMOnlyLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/SampleContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AMOrFailedContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/FailedOrKilledContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/TestNonAggregatingLogHandler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/NoneContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LogAggregationContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/event/LogHandlerAppStartedEvent.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregator.java
          • hadoop-yarn-project/CHANGES.txt

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #2245 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2245/)
          YARN-221. NM should provide a way for AM to tell it not to aggregate (xgong: rev 37e1c3d82a96d781e1c9982988b7de4aa5242d0c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/SampleContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AMOrFailedContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AbstractContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/TestNonAggregatingLogHandler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/LogAggregationContextPBImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/FailedContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/FailedOrKilledContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ContainerLogContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/event/LogHandlerAppStartedEvent.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/NoneContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LogAggregationContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AMOnlyLogAggregationPolicy.java

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2226 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2226/)
          YARN-221. NM should provide a way for AM to tell it not to aggregate (xgong: rev 37e1c3d82a96d781e1c9982988b7de4aa5242d0c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/event/LogHandlerAppStartedEvent.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ContainerLogContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/LogAggregationContextPBImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AMOnlyLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/TestNonAggregatingLogHandler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/FailedOrKilledContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AMOrFailedContainerLogAggregationPolicy.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/FailedContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LogAggregationContext.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/SampleContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AbstractContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/NoneContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ContainerLogAggregationPolicy.java

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #296 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/296/)
          YARN-221. NM should provide a way for AM to tell it not to aggregate (xgong: rev 37e1c3d82a96d781e1c9982988b7de4aa5242d0c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/SampleContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/event/LogHandlerAppStartedEvent.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/FailedContainerLogAggregationPolicy.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LogAggregationContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/FailedOrKilledContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/LogAggregationContextPBImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AMOrFailedContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AbstractContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/NoneContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ContainerLogContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AMOnlyLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/TestNonAggregatingLogHandler.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #1029 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/1029/)
          YARN-221. NM should provide a way for AM to tell it not to aggregate (xgong: rev 37e1c3d82a96d781e1c9982988b7de4aa5242d0c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/SampleContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AMOrFailedContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ContainerLogContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/NoneContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/FailedContainerLogAggregationPolicy.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AMOnlyLogAggregationPolicy.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LogAggregationContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/TestNonAggregatingLogHandler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AbstractContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/FailedOrKilledContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/event/LogHandlerAppStartedEvent.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/LogAggregationContextPBImpl.java

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #300 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/300/)
          YARN-221. NM should provide a way for AM to tell it not to aggregate (xgong: rev 37e1c3d82a96d781e1c9982988b7de4aa5242d0c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/NoneContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/LogAggregationContextPBImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AbstractContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/TestNonAggregatingLogHandler.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/event/LogHandlerAppStartedEvent.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ContainerLogAggregationPolicy.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AMOnlyLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ContainerLogContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/FailedContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LogAggregationContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/FailedOrKilledContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AMOrFailedContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/SampleContainerLogAggregationPolicy.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #8340 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8340/)
          YARN-221. NM should provide a way for AM to tell it not to aggregate (xgong: rev 37e1c3d82a96d781e1c9982988b7de4aa5242d0c)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/SampleContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/LogAggregationContextPBImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AMOnlyLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ContainerLogContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregator.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/FailedOrKilledContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestAuxServices.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AbstractContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LogAggregationContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/NoneContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/event/LogHandlerAppStartedEvent.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/FailedContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AMOrFailedContainerLogAggregationPolicy.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/TestNonAggregatingLogHandler.java
          xgong Xuan Gong added a comment -

          Committed into trunk/branch-2. Thanks, Ming.

          xgong Xuan Gong added a comment -

          Okay. Thanks. Checking this in

          mingma Ming Ma added a comment -

          Thanks Xuan. I have linked the newly created MR jira.

          xgong Xuan Gong added a comment -

          +1. The last patch looks good to me. Let us wait a few days; if there are no other comments, I will commit this over the weekend.

          Ming Ma, in the meantime, could you open a related MR ticket and link it here, please?

          mingma Ming Ma added a comment -

          The unit test failures aren't related. The tests pass on the local machine.

          Another thing Xuan and I discussed is how other frameworks on YARN, such as MR and Tez, can use this feature; for example, whether they need to make config and/or code changes to let applications specify the policy on a per-application basis. There are several approaches.

          • Have MR define its own configurations for these policies. Make a code change in YarnRunner to retrieve these configurations and set the values in the ASC. That means Tez needs to do the same thing.
          • Define some common YARN configurations such as yarn.logaggregation.policy.class. YarnRunner still needs to retrieve these configurations and set the values in the ASC, but at least MR and Tez can share the same configuration names.
          • Define some common YARN configurations such as yarn.logaggregation.policy.class, and have YarnClientImpl take care of fixing up the ASC based on them. That way, no code change is required at the MR or Tez layer.

          We prefer to go with the first approach, which is the pattern used by other existing MR properties (a sketch of it follows). If we want to define common YARN properties shared by different YARN applications, we can open a separate jira for it.

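          As a rough illustration of the first approach, here is a minimal sketch (assumptions: the MR property names and the surrounding client code are hypothetical; the LogAggregationContext setters are the ones this patch adds):

            // Hypothetical client-side wiring, e.g. in YarnRunner: read
            // framework-level properties and copy them into the
            // ApplicationSubmissionContext (asc).
            LogAggregationContext logAggCtx =
                LogAggregationContext.newInstance(null, null);
            logAggCtx.setLogAggregationPolicyClassName(
                conf.get("mapreduce.job.log-aggregation.policy.class"));      // hypothetical property
            logAggCtx.setLogAggregationPolicyParameters(
                conf.get("mapreduce.job.log-aggregation.policy.parameters")); // hypothetical property
            asc.setLogAggregationContext(logAggCtx);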
          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 21m 44s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 4 new or modified test files.
          +1 javac 7m 38s There were no new javac warning messages.
          +1 javadoc 9m 37s There were no new javadoc warning messages.
          +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 3m 20s The applied patch generated 1 new checkstyle issues (total was 212, now 212).
          +1 whitespace 2m 27s The patch has no lines that end in whitespace.
          +1 install 1m 24s mvn install still works.
          +1 eclipse:eclipse 0m 33s The patch built with eclipse:eclipse.
          +1 findbugs 7m 44s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          -1 common tests 22m 19s Tests failed in hadoop-common.
          +1 yarn tests 0m 23s Tests passed in hadoop-yarn-api.
          -1 yarn tests 1m 56s Tests failed in hadoop-yarn-common.
          +1 yarn tests 7m 35s Tests passed in hadoop-yarn-server-nodemanager.
          -1 yarn tests 53m 19s Tests failed in hadoop-yarn-server-resourcemanager.
          Total 141m 10s



          Reason Tests
          Failed unit tests hadoop.ha.TestZKFailoverController
            hadoop.net.TestNetUtils
            hadoop.yarn.util.TestRackResolver
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12750361/YARN-221-9.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / b73181f
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/8845/artifact/patchprocess/diffcheckstylehadoop-yarn-api.txt
          hadoop-common test log https://builds.apache.org/job/PreCommit-YARN-Build/8845/artifact/patchprocess/testrun_hadoop-common.txt
          hadoop-yarn-api test log https://builds.apache.org/job/PreCommit-YARN-Build/8845/artifact/patchprocess/testrun_hadoop-yarn-api.txt
          hadoop-yarn-common test log https://builds.apache.org/job/PreCommit-YARN-Build/8845/artifact/patchprocess/testrun_hadoop-yarn-common.txt
          hadoop-yarn-server-nodemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/8845/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
          hadoop-yarn-server-resourcemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/8845/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/8845/testReport/
          Java 1.7.0_55
          uname Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/8845/console

          This message was automatically generated.

          mingma Ming Ma added a comment -

          I had an offline discussion with Xuan about the API. To support this as a public interface, just like AuxiliaryService, so that YARN framework developers can develop customized policies, it might be better for it to have its own ContainerLogContext (see the sketch after the list below).

          The latest patch has the following updates.

          • Use ContainerLogContext.
          • Move ContainerLogAggregationPolicy to the yarn.server.api package.
          • Fix the documentation in AppLogAggregatorImpl.
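          For illustration, a minimal custom policy against this interface could look like the sketch below (assumptions: the class name is invented; it relies on the ContainerLogContext accessors added by this patch, e.g. getExitCode(), and on the AbstractContainerLogAggregationPolicy base class from the file list above):

            // Hypothetical custom policy: aggregate logs only for containers
            // that exited with a non-zero exit code.
            public class NonZeroExitContainerLogAggregationPolicy
                extends AbstractContainerLogAggregationPolicy {
              @Override
              public boolean shouldDoLogAggregation(ContainerLogContext logContext) {
                return logContext.getExitCode() != 0;
              }
            }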
          mingma Ming Ma added a comment -

          My main motivation for reusing ContainerTerminationContext is to simplify the YARN API if possible. The context around a container could be abstracted into a common interface for both AuxiliaryService and ContainerLogAggregationPolicy. It is possible some YARN application might want to create its own ContainerLogAggregationPolicy, in which case it just needs to know about ContainerTerminationContext.

          Sometimes there is no right or wrong answer when it comes to API design. In this case, ContainerTokenIdentifier, ContainerTerminationContext, or a to-be-defined ContainerLogContext would all work for the current scenarios. Given this isn't a public interface, we can use ContainerTokenIdentifier until new scenarios come up. Thoughts?

          xgong Xuan Gong added a comment -

          It looks like ContainerTerminationContext is meant for AuxiliaryService. That might be confusing. It may be better to create a new API.

          mingma Ming Ma added a comment -

          That sounds like a good idea. How about using the existing ContainerTerminationContext? We can extend it to include exitCode. That way, we don't need to introduce another, somewhat similar Context class.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 21m 47s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 4 new or modified test files.
          +1 javac 7m 42s There were no new javac warning messages.
          +1 javadoc 9m 41s There were no new javadoc warning messages.
          +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 3m 21s The applied patch generated 1 new checkstyle issues (total was 212, now 212).
          +1 whitespace 1m 49s The patch has no lines that end in whitespace.
          +1 install 1m 22s mvn install still works.
          +1 eclipse:eclipse 0m 34s The patch built with eclipse:eclipse.
          +1 findbugs 7m 39s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          +1 common tests 22m 29s Tests passed in hadoop-common.
          +1 yarn tests 0m 23s Tests passed in hadoop-yarn-api.
          +1 yarn tests 1m 55s Tests passed in hadoop-yarn-common.
          +1 yarn tests 7m 35s Tests passed in hadoop-yarn-server-nodemanager.
          +1 yarn tests 52m 48s Tests passed in hadoop-yarn-server-resourcemanager.
          Total 140m 16s



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12748921/YARN-221-8.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / ba2313d
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/8776/artifact/patchprocess/diffcheckstylehadoop-yarn-api.txt
          hadoop-common test log https://builds.apache.org/job/PreCommit-YARN-Build/8776/artifact/patchprocess/testrun_hadoop-common.txt
          hadoop-yarn-api test log https://builds.apache.org/job/PreCommit-YARN-Build/8776/artifact/patchprocess/testrun_hadoop-yarn-api.txt
          hadoop-yarn-common test log https://builds.apache.org/job/PreCommit-YARN-Build/8776/artifact/patchprocess/testrun_hadoop-yarn-common.txt
          hadoop-yarn-server-nodemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/8776/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
          hadoop-yarn-server-resourcemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/8776/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/8776/testReport/
          Java 1.7.0_55
          uname Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/8776/console

          This message was automatically generated.

          xgong Xuan Gong added a comment -

          Thanks for the latest patch. I think that we are close. The patch looks good overall. One nit:

          • Could we modify this doc comment in AppLogAggregatorImpl, too?
                // Create a set of Containers whose logs will be uploaded in this cycle.
                // It includes:
                // a) all containers in pendingContainers: those containers are finished
                //    and satisfy the retentionPolicy.
                // b) some set of running containers: For all the Running containers,
                // we have ContainerLogsRetentionPolicy.AM_AND_FAILED_CONTAINERS_ONLY,
                // so simply set wasContainerSuccessful as true to
                // bypass FAILED_CONTAINERS check and find the running containers 
                // which satisfy the retentionPolicy.
            

          Also, I realized that ContainerTokenIdentifier is used here:

          boolean shouldDoLogAggregation(ContainerTokenIdentifier containerToken, int exitCode);

          Currently, it is fine. But in the future, we might need other information that ContainerTokenIdentifier cannot provide. So perhaps we could have our own ContainerLogContext instead of using ContainerTokenIdentifier? That way, if we need other information later, we can add it.

          Thoughts?

          mingma Ming Ma added a comment -

          The javac warning isn't related to this patch; it is due to TestAuxServices casting Object to ArrayList&lt;Integer&gt;. I updated the patch to take care of that anyway. The new patch also addresses the checkstyle and whitespace issues.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 21m 44s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 3 new or modified test files.
          -1 javac 7m 43s The applied patch generated 1 additional warning messages.
          +1 javadoc 9m 45s There were no new javadoc warning messages.
          +1 release audit 0m 21s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 3m 19s The applied patch generated 2 new checkstyle issues (total was 212, now 213).
          -1 whitespace 1m 46s The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 install 1m 21s mvn install still works.
          +1 eclipse:eclipse 0m 34s The patch built with eclipse:eclipse.
          +1 findbugs 7m 31s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          +1 common tests 22m 21s Tests passed in hadoop-common.
          +1 yarn tests 0m 21s Tests passed in hadoop-yarn-api.
          +1 yarn tests 1m 54s Tests passed in hadoop-yarn-common.
          +1 yarn tests 7m 23s Tests passed in hadoop-yarn-server-nodemanager.
          +1 yarn tests 52m 27s Tests passed in hadoop-yarn-server-resourcemanager.
          Total 139m 15s



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12748767/YARN-221-7.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / d540374
          javac https://builds.apache.org/job/PreCommit-YARN-Build/8770/artifact/patchprocess/diffJavacWarnings.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/8770/artifact/patchprocess/diffcheckstylehadoop-yarn-api.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/8770/artifact/patchprocess/whitespace.txt
          hadoop-common test log https://builds.apache.org/job/PreCommit-YARN-Build/8770/artifact/patchprocess/testrun_hadoop-common.txt
          hadoop-yarn-api test log https://builds.apache.org/job/PreCommit-YARN-Build/8770/artifact/patchprocess/testrun_hadoop-yarn-api.txt
          hadoop-yarn-common test log https://builds.apache.org/job/PreCommit-YARN-Build/8770/artifact/patchprocess/testrun_hadoop-yarn-common.txt
          hadoop-yarn-server-nodemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/8770/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
          hadoop-yarn-server-resourcemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/8770/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/8770/testReport/
          Java 1.7.0_55
          uname Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/8770/console

          This message was automatically generated.

          mingma Ming Ma added a comment -

          Thanks Xuan Gong! Here is the updated patch with your suggestions. ContainerLogAggregationPolicy is changed to use ContainerTokenIdentifier so that the policy can get the ContainerType of the container.

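          For reference, a sketch of the resulting AM check (assumptions: the helper name is invented; it relies on ContainerTokenIdentifier.getContainerType() and the ContainerType enum introduced by YARN-3116):

            // Hypothetical helper: identify the AM container via its
            // ContainerType instead of the container-id bitmask discussed
            // earlier in the review.
            static boolean isApplicationMaster(ContainerTokenIdentifier token) {
              return token.getContainerType() == ContainerType.APPLICATION_MASTER;
            }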
          xgong Xuan Gong added a comment -

          Thanks for the patch, Ming Ma. This patch looks good overall. Two comments:

          • Can we avoid using the container ID bitmask to check whether this container is the AM?
            return (containerId.getContainerId() & ContainerId.CONTAINER_ID_BITMASK) == 1;

          I think in this jira: https://issues.apache.org/jira/browse/YARN-3116, we have a way to determine which container is the AM. Could we use that?

          • Documentation.
            • I think we might need documentation for the new log aggregation policy classes. Maybe in LogAggregationContext we could add more documentation, such as listing which log aggregation policy classes we currently provide?
            • For these two newly added configurations.
              public static final String NM_LOG_AGG_POLICY_CLASS =
                  NM_PREFIX + "log-aggregation.policy.class";
              public static final String NM_LOG_AGG_POLICY_CLASS_PARAMETERS =
                  NM_PREFIX + "log-aggregation.policy.parameters";
              

              Can we explain them more clearly? For example, users may be confused about why we need these two configurations in yarn-site.xml when, at the same time, they can set the log aggregation policy in the ASC (see the sketch after this comment).

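          As a sketch of how the two layers relate (assumptions: the property name comes from the NM_LOG_AGG_POLICY_CLASS constant quoted above; using AllContainerLogAggregationPolicy, from the file list, as the fallback default is an assumption):

            // Hypothetical resolution order: prefer the policy class named in
            // the application's LogAggregationContext (set via the ASC);
            // otherwise fall back to the cluster-wide default in yarn-site.xml.
            String policyClassName =
                logAggregationContext.getLogAggregationPolicyClassName();
            if (policyClassName == null) {
              policyClassName = conf.get(
                  "yarn.nodemanager.log-aggregation.policy.class",
                  AllContainerLogAggregationPolicy.class.getName());
            }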
          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 21m 39s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 3 new or modified test files.
          -1 javac 7m 38s The applied patch generated 1 additional warning messages.
          +1 javadoc 9m 34s There were no new javadoc warning messages.
          +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
          -1 checkstyle 3m 15s The applied patch generated 1 new checkstyle issues (total was 212, now 212).
          -1 whitespace 1m 21s The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 install 1m 24s mvn install still works.
          +1 eclipse:eclipse 0m 33s The patch built with eclipse:eclipse.
          +1 findbugs 7m 40s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
          +1 common tests 22m 22s Tests passed in hadoop-common.
          +1 yarn tests 0m 23s Tests passed in hadoop-yarn-api.
          +1 yarn tests 1m 55s Tests passed in hadoop-yarn-common.
          +1 yarn tests 7m 17s Tests passed in hadoop-yarn-server-nodemanager.
          +1 yarn tests 52m 23s Tests passed in hadoop-yarn-server-resourcemanager.
          Total 138m 35s



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12748108/YARN-221-6.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 88d8736
          javac https://builds.apache.org/job/PreCommit-YARN-Build/8728/artifact/patchprocess/diffJavacWarnings.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/8728/artifact/patchprocess/diffcheckstylehadoop-yarn-api.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/8728/artifact/patchprocess/whitespace.txt
          hadoop-common test log https://builds.apache.org/job/PreCommit-YARN-Build/8728/artifact/patchprocess/testrun_hadoop-common.txt
          hadoop-yarn-api test log https://builds.apache.org/job/PreCommit-YARN-Build/8728/artifact/patchprocess/testrun_hadoop-yarn-api.txt
          hadoop-yarn-common test log https://builds.apache.org/job/PreCommit-YARN-Build/8728/artifact/patchprocess/testrun_hadoop-yarn-common.txt
          hadoop-yarn-server-nodemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/8728/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
          hadoop-yarn-server-resourcemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/8728/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/8728/testReport/
          Java 1.7.0_55
          uname Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/8728/console

          This message was automatically generated.

          mingma Ming Ma added a comment -

          Xuan Gong and others, here is the draft patch based on the new design. Beyond the discussions above, a couple of notes:

          • If the application specifies an invalid log aggregation policy class, the current implementation will fall back to the default policy instead of failing the application (a minimal sketch of this fallback follows this list). An alternative approach is to have the NM fail the application instead.
          • For each new application, a new policy object will be created and used only by that application. This should be OK from both a memory footprint and a runtime performance point of view. An alternative approach is to have applications share the same policy object if they use the same policy class and the same policy parameters.
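          For the first bullet, the fallback might look roughly like the sketch below. This is illustrative only, not the actual patch code; ContainerLogAggregationPolicy and AllContainerLogAggregationPolicy are the policy types discussed in this thread, and LOG is an assumed NM-side logger.

          private ContainerLogAggregationPolicy loadPolicy(String policyClassName) {
              try {
                  return (ContainerLogAggregationPolicy)
                      Class.forName(policyClassName).newInstance();
              } catch (Exception e) {
                  // Invalid or unloadable policy class: fall back to the default
                  // policy instead of failing the application.
                  LOG.warn("Invalid log aggregation policy " + policyClassName
                      + "; falling back to the default policy", e);
                  return new AllContainerLogAggregationPolicy();
              }
          }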
          mingma Ming Ma added a comment -

          Thanks. Vinod Kumar Vavilapalli and others, any additional suggestions for the design?

          xgong Xuan Gong added a comment -

          Here is the scenario. a) No applications want to override the default. b) Administrators of the cluster want to make a cluster-wide global change from a sample rate of 20 percent to 50 percent.

          OK. This makes sense. Thanks for the explanation.

          mingma Ming Ma added a comment -

          Here is the scenario. a) No applications want to override the default. b) Administrators of the cluster want to make a cluster-wide global change from a sample rate of 20 percent to 50 percent.

          xgong Xuan Gong added a comment -

          we want to be able to configure the sample rate without a code change. If it isn't in yarn-site.xml, where should we store the value?

          If the default policy is SampleRateContainerLogAggregationPolicy, we already have the default value. If users want to change the value (the sample rate), they could set it through ASC#logAggregationContext#setParameters().

          If we set the parameter in yarn-site.xml, all applications will be affected. Since this setting is per-application, I think the ASC route will probably be more suitable. Thoughts?
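          For illustration, the per-application override might look like the following sketch. Note that setContainerLogPolicyClass and setParameters are the methods proposed in this thread, not released YARN API; Records and ApplicationSubmissionContext are the existing YARN classes.

          // Sketch using the *proposed* setters from this discussion.
          void applyPolicyOverride(ApplicationSubmissionContext asc) {
              LogAggregationContext logCtx = Records.newRecord(LogAggregationContext.class);
              logCtx.setContainerLogPolicyClass(SampleRateContainerLogAggregationPolicy.class);
              logCtx.setParameters("SR:0.5");   // override the cluster default sample rate
              asc.setLogAggregationContext(logCtx);
          }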

          mingma Ming Ma added a comment -

          Thanks Xuan! Regarding the default value for the policy, we want to be able to configure the sample rate without a code change. If it isn't in yarn-site.xml, where should we store the value? I agree with you that we also need to have ContainerLogAggregationPolicy.parseParameters.

          xgong Xuan Gong added a comment -

          I think that we could have this configuration:

          <property>
              <name>yarn.container-log-aggregation-policy.class</name>
              <value>org.apache.hadoop.yarn.container-log-aggregation-policy.SampleRateContainerLogAggregationPolicy</value>
          </property>
          

          which can be used as the default log aggregation policy. If users do not specify the policy class in the ASC, the default policy will be used.

          But maybe we do not need this one to specify the policy parameters:

          <property>
              <name>yarn.container-log-aggregation-policy.class.SampleRateContainerLogAggregationPolicy</name>
              <value>SR:0.2</value>
          </property>
          

          Instead, we could set the default value for the policy.

          Also, in AppLogAggregator.java (in the NM), after we parse the policy from the ASC, we should call ContainerLogAggregationPolicy.parseParameters(ASC.logAggregationContext.getParameters()).
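          In other words, the NM-side flow might be roughly this sketch, where getContainerLogPolicyClass and getParameters are the proposed accessors and ReflectionUtils is the existing Hadoop utility:

          // Instantiate the policy named in the ASC's LogAggregationContext, then
          // hand it the application-supplied parameters before making any
          // per-container aggregation decisions.
          ContainerLogAggregationPolicy policy = ReflectionUtils.newInstance(
              logAggregationContext.getContainerLogPolicyClass(), conf);
          policy.parseParameters(logAggregationContext.getParameters());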

          The rest looks fine to me.

          mingma Ming Ma added a comment -

          Thanks Xuan Gong. How about the following?

          • Allow applications to specify the policy parameter via LogAggregationContext along with the policy class.
          public abstract class LogAggregationContext {
              public abstract void setContainerLogPolicyClass(Class<? extends ContainerLogAggregationPolicy> logPolicy);
              public abstract Class<? extends ContainerLogAggregationPolicy> getContainerLogPolicyClass();
              public abstract void setParameters(String parameters);
              public abstract String getParameters();
          }
          
          • The NM uses default cluster-wide settings via the following configurations. MR can override these configurations on a per-application basis.
          <property>
              <name>yarn.container-log-aggregation-policy.class</name>
              <value>org.apache.hadoop.yarn.container-log-aggregation-policy.SampleRateContainerLogAggregationPolicy</value>
          </property>
          <property>
              <name>yarn.container-log-aggregation-policy.class.SampleRateContainerLogAggregationPolicy</name>
              <value>SR:0.2</value>
          </property>
          
          • To support a per-application policy, modify MR YarnRunner. We can also modify YarnClientImpl to read these configurations and set the ApplicationSubmissionContext accordingly.
          • The log aggregation policy object loaded in the NM can be shared among different applications as long as they use the same policy class with the same parameters (a possible shape for this sharing is sketched below).
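          A possible shape for that sharing, keyed on the policy class plus its parameters (purely illustrative, not part of any attached patch):

          // Cache policy instances keyed by (policy class name, parameters) so
          // applications with identical settings reuse a single object.
          private final ConcurrentMap<String, ContainerLogAggregationPolicy> policyCache =
              new ConcurrentHashMap<String, ContainerLogAggregationPolicy>();

          ContainerLogAggregationPolicy getSharedPolicy(
              Class<? extends ContainerLogAggregationPolicy> clazz, String parameters) {
              String key = clazz.getName() + "#" + parameters;
              ContainerLogAggregationPolicy policy = policyCache.get(key);
              if (policy == null) {
                  ContainerLogAggregationPolicy fresh = ReflectionUtils.newInstance(clazz, null);
                  fresh.parseParameters(parameters);
                  ContainerLogAggregationPolicy raced = policyCache.putIfAbsent(key, fresh);
                  policy = (raced == null) ? fresh : raced;
              }
              return policy;
          }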
          xgong Xuan Gong added a comment -

          All the known policies will be part of YARN, including SampleRateContainerLogAggregationPolicy. So we still need to configure the sample rate for that policy. If we don't put it in YarnConfiguration, where can we put it? It seems we already have a bunch of configuration properties in YarnConfiguration that are specific to the plugin implementation, such as the container executor properties.

          I thought about this. How about adding a new protocol field, String ContainerLogAggregationPolicyParameter, along with ContainerLogAggregationPolicy in logAggregationContext? In ContainerLogAggregationPolicyParameter, users can define any parameter format their ContainerLogAggregationPolicy can understand. For example, we could define ContainerLogAggregationPolicyParameter as "SR:0.2", and in SampleRateContainerLogAggregationPolicy, we could add an implementation to understand and parse the parameter.
          Also, we could change to

          public interface ContainerLogAggregationPolicy {
              public boolean shouldDoLogAggregation(ContainerId containerId,  int exitCode);
              public void parseParameters(String parameters);
          }
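          For concreteness, a hypothetical policy implementing this interface with the "SR:<rate>" parameter format suggested above might look like the sketch below; the class body is an assumption, not code from any attached patch.

          public class SampleRateContainerLogAggregationPolicy
              implements ContainerLogAggregationPolicy {

              private float sampleRate = 1.0f;   // default: aggregate everything

              @Override
              public void parseParameters(String parameters) {
                  // Expected format: "SR:0.2" -> aggregate roughly 20% of containers.
                  if (parameters != null && parameters.startsWith("SR:")) {
                      sampleRate = Float.parseFloat(parameters.substring(3));
                  }
              }

              @Override
              public boolean shouldDoLogAggregation(ContainerId containerId, int exitCode) {
                  if (exitCode != 0) {
                      return true;   // always keep logs of failed containers
                  }
                  // Hash the container id so the decision is stable per container.
                  return ((containerId.hashCode() & Integer.MAX_VALUE) % 100) < sampleRate * 100;
              }
          }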
          

          How MR overrides the default policy. Maybe we can have YarnRunner at the MR level honor the yarn property "yarn.container-log-aggregation-policy.class" at the per-job level when it creates the ApplicationSubmissionContext with the proper LogAggregationContext. That way we don't have to create extra log aggregation properties specific to the MR layer.

          Good question. Another possible solution could be "parsing them from the command line" if users use ToolRunner.run to launch their MR application.
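          A minimal sketch of the command-line route: ToolRunner already parses generic -D options into the Configuration, so a job class (MyJob here is hypothetical) could pick up the proposed property without any new MR-level plumbing.

          import org.apache.hadoop.conf.Configured;
          import org.apache.hadoop.util.Tool;
          import org.apache.hadoop.util.ToolRunner;

          // Launched as: hadoop jar myjob.jar MyJob \
          //     -Dyarn.container-log-aggregation-policy.class=<policy class>
          public class MyJob extends Configured implements Tool {
              @Override
              public int run(String[] args) throws Exception {
                  String policy = getConf().get("yarn.container-log-aggregation-policy.class");
                  System.out.println("Requested log aggregation policy: " + policy);
                  // ... build and submit the job using this value ...
                  return 0;
              }

              public static void main(String[] args) throws Exception {
                  System.exit(ToolRunner.run(new MyJob(), args));
              }
          }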

          mingma Ming Ma added a comment -

          Thanks Xuan Gong. You raise some valid points about abstraction. Here is my take on this.

          It appears the main requirements are:

          • There needs to be a cluster-wide default log aggregation policy at the YARN layer. It should be extensible. To change it or add a new policy, it is OK to require an NM restart, given that the NM needs to load the policy object.
          • Any YARN application can override the default YARN policy with its own log aggregation policy. This application-specific policy can come from the list of available policies provided at the YARN layer. There is no need for the application to be able to submit a new policy implementation on the fly.

          Given these:

          • Abstraction via an interface seems like a good idea. A ContainerLogAggregationPolicy interface can include the following method to address all the policies we know of so far. However, it seems we might end up with many policies given the possible permutations, e.g., AMContainerLogAndFailWorkerContainerOnlyLogAggregationPolicy, AMContainerLogAndFailOrKilledWorkerContainerOnlyLogAggregationPolicy, etc.
          public interface ContainerLogAggregationPolicy {
              public boolean shouldDoLogAggregation(ContainerId containerId,  int exitCode);
          }
          
          • The cluster-wide default policy at the YARN layer is configurable.
          <property>
              <name>yarn.nodemanager.container-log-aggregation-policy.class</name>
              <value>org.apache.hadoop.yarn.server.nodemanager.container-log-aggregation-policy.AllContainerLogAggregationPolicy</value>
          </property>
          
          • All the known policies will be part of YARN, including SampleRateContainerLogAggregationPolicy. So we still need to configure the sample rate for that policy. If we don't put it in YarnConfiguration, where can we put it? It seems we already have a bunch of configuration properties in YarnConfiguration that are specific to the plugin implementation, such as the container executor properties.
          • Should ContainerLogAggregationPolicy be part of ContainerLaunchContext or LogAggregationContext? It seems LogAggregationContext is a better fit. That also means ContainerLogAggregationPolicy will be specified as part of the ApplicationSubmissionContext. For an application to specify a log policy, the policy class needs to be loadable by the NM. So LogAggregationContext will have new methods like:
          public abstract class LogAggregationContext {
              public abstract void setContainerLogPolicyClass(Class<? extends ContainerLogAggregationPolicy> logPolicy);
              public abstract Class<? extends ContainerLogAggregationPolicy> getContainerLogPolicyClass();
          }
          
          • How MR overrides the default policy. Maybe we can have YarnRunner at the MR level honor the yarn property "yarn.container-log-aggregation-policy.class" at the per-job level when it creates the ApplicationSubmissionContext with the proper LogAggregationContext. That way we don't have to create extra log aggregation properties specific to the MR layer. (A sketch follows this list.)
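          A sketch of what that YarnRunner-side override might look like, using the proposed setContainerLogPolicyClass method and the proposed property name (nothing here is released API):

          // Read the yarn property from the job configuration and reflect it into
          // the proposed LogAggregationContext API when building the
          // ApplicationSubmissionContext.
          @SuppressWarnings("unchecked")
          LogAggregationContext createLogAggregationContext(Configuration conf)
              throws ClassNotFoundException {
              LogAggregationContext ctx = Records.newRecord(LogAggregationContext.class);
              String policyName = conf.get("yarn.container-log-aggregation-policy.class");
              if (policyName != null) {
                  ctx.setContainerLogPolicyClass((Class<? extends ContainerLogAggregationPolicy>)
                      Class.forName(policyName));
              }
              return ctx;
          }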
          xgong Xuan Gong added a comment -

          Ming Ma, thanks for working on this. I have some general comments I want to discuss with you.
          We could have a common interface called ContainerLogAggregationPolicy which can include at least this function:

          • doLogAggregationForContainer (you might need a better name). This function will be called by AppLogAggregator to check whether the logs for this container need to be aggregated.

          So, instead of creating an enum type ContainerLogAggregationPolicy:

          AGGREGATE, DO_NOT_AGGREGATE, AGGREGATE_FAILED, AGGREGATE_FAILED_OR_KILLED
          

          We could create some basic policies which implement the common interface ContainerLogAggregationPolicy, such as AllContainerLogAggregationPolicy, NonContainerLogAggregationPolicy, AMContainerOnlyLogAggregationPolicy, FailContainerOnlyLogAggregationPolicy, SampleRateContainerLogAggregationPolicy, etc. (The simplest of these is sketched at the end of this comment.)
          I think this way might be more extensible. In the future, clients can implement their own ContainerLogAggregationPolicy, which can be more complex.
          With this, we do not need to add any new configurations on the service side.

          +  public static final String LOG_AGGREGATION_SAMPLE_PERCENT = NM_PREFIX
          +      + "log-aggregation.worker-sample-percent";
          +  public static final float DEFAULT_LOG_AGGREGATION_SAMPLE_PERCENT = 1.0f;
          +
          +  public static final String LOG_AGGREGATION_AM_LOGS = NM_PREFIX
          +      + "log-aggregation.am-enable";
          +  public static final boolean DEFAULT_LOG_AGGREGATION_AM_LOGS = true;
          

          can be removed

          Also, instead of adding ContainerLogAggregationPolicy into the CLC, we could add ContainerLogAggregationPolicy into LogAggregationContext, which can already be accessed by the NM.

          Thoughts?
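          To make the suggested basic policies concrete, the simplest one might look like this sketch; the method signature is an assumption, since it was still under discussion at this point.

          public class AllContainerLogAggregationPolicy
              implements ContainerLogAggregationPolicy {
              @Override
              public boolean doLogAggregationForContainer(ContainerId containerId) {
                  return true;   // aggregate logs for every container
              }
          }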

          xgong Xuan Gong added a comment -

          Canceling the patch for discussion.

          xgong Xuan Gong added a comment -

          Thanks for working on this, Ming Ma. I will take a look at this one shortly.

          hadoopqa Hadoop QA added a comment -



          +1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 14m 48s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 2 new or modified test files.
          +1 javac 7m 35s There were no new javac warning messages.
          +1 javadoc 9m 38s There were no new javadoc warning messages.
          +1 release audit 0m 22s The applied patch does not increase the total number of release audit warnings.
          +1 checkstyle 2m 14s There were no new checkstyle issues.
          +1 whitespace 0m 49s The patch has no lines that end in whitespace.
          +1 install 1m 40s mvn install still works.
          +1 eclipse:eclipse 0m 33s The patch built with eclipse:eclipse.
          +1 findbugs 3m 45s The patch does not introduce any new Findbugs (version 2.0.3) warnings.
          +1 yarn tests 0m 25s Tests passed in hadoop-yarn-api.
          +1 yarn tests 1m 56s Tests passed in hadoop-yarn-common.
          +1 yarn tests 7m 56s Tests passed in hadoop-yarn-server-nodemanager.
              51m 46s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12732060/YARN-221-trunk-v5.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 444836b
          hadoop-yarn-api test log https://builds.apache.org/job/PreCommit-YARN-Build/7869/artifact/patchprocess/testrun_hadoop-yarn-api.txt
          hadoop-yarn-common test log https://builds.apache.org/job/PreCommit-YARN-Build/7869/artifact/patchprocess/testrun_hadoop-yarn-common.txt
          hadoop-yarn-server-nodemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/7869/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/7869/testReport/
          Java 1.7.0_55
          uname Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/7869/console

          This message was automatically generated.

          mingma Ming Ma added a comment -

          Here is the new patch with updated unit tests.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 14m 38s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 2 new or modified test files.
          +1 javac 7m 34s There were no new javac warning messages.
          +1 javadoc 9m 34s There were no new javadoc warning messages.
          +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
          +1 checkstyle 2m 10s There were no new checkstyle issues.
          +1 whitespace 0m 47s The patch has no lines that end in whitespace.
          +1 install 1m 38s mvn install still works.
          +1 eclipse:eclipse 0m 33s The patch built with eclipse:eclipse.
          +1 findbugs 3m 46s The patch does not introduce any new Findbugs (version 2.0.3) warnings.
          +1 yarn tests 0m 25s Tests passed in hadoop-yarn-api.
          +1 yarn tests 1m 56s Tests passed in hadoop-yarn-common.
          -1 yarn tests 5m 57s Tests failed in hadoop-yarn-server-nodemanager.
              49m 26s  



          Reason Tests
          Failed unit tests hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12731684/YARN-221-trunk-v4.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 02a4a22
          hadoop-yarn-api test log https://builds.apache.org/job/PreCommit-YARN-Build/7846/artifact/patchprocess/testrun_hadoop-yarn-api.txt
          hadoop-yarn-common test log https://builds.apache.org/job/PreCommit-YARN-Build/7846/artifact/patchprocess/testrun_hadoop-yarn-common.txt
          hadoop-yarn-server-nodemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/7846/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/7846/testReport/
          Java 1.7.0_55
          uname Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/7846/console

          This message was automatically generated.

          mingma Ming Ma added a comment -

          Updated patch to fix warnings.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 14m 38s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 tests included 0m 0s The patch appears to include 2 new or modified test files.
          -1 javac 7m 31s The applied patch generated 122 additional warning messages.
          +1 javadoc 9m 37s There were no new javadoc warning messages.
          +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
          +1 checkstyle 1m 58s There were no new checkstyle issues.
          -1 whitespace 1m 2s The patch has 5 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 install 1m 37s mvn install still works.
          +1 eclipse:eclipse 0m 32s The patch built with eclipse:eclipse.
          +1 findbugs 3m 46s The patch does not introduce any new Findbugs (version 2.0.3) warnings.
          +1 yarn tests 0m 22s Tests passed in hadoop-yarn-api.
          -1 yarn tests 1m 55s Tests failed in hadoop-yarn-common.
          -1 yarn tests 5m 59s Tests failed in hadoop-yarn-server-nodemanager.
              49m 26s  



          Reason Tests
          Failed unit tests hadoop.yarn.conf.TestYarnConfigurationFields
            hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12731667/YARN-221-trunk-v3.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / 6471d18
          javac https://builds.apache.org/job/PreCommit-YARN-Build/7841/artifact/patchprocess/diffJavacWarnings.txt
          whitespace https://builds.apache.org/job/PreCommit-YARN-Build/7841/artifact/patchprocess/whitespace.txt
          hadoop-yarn-api test log https://builds.apache.org/job/PreCommit-YARN-Build/7841/artifact/patchprocess/testrun_hadoop-yarn-api.txt
          hadoop-yarn-common test log https://builds.apache.org/job/PreCommit-YARN-Build/7841/artifact/patchprocess/testrun_hadoop-yarn-common.txt
          hadoop-yarn-server-nodemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/7841/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/7841/testReport/
          Java 1.7.0_55
          uname Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/7841/console

          This message was automatically generated.

          mingma Ming Ma added a comment -

          Thanks Li Lu. Here is the rebased patch.

          gtCarrera9 Li Lu added a comment -

          I verified that the latest patch does not apply on current trunk. Canceling this patch for now. Ming Ma, would you mind updating it? Thanks!

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          -1 patch 0m 0s The patch command could not apply the patch during dryrun.



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12637905/YARN-221-trunk-v2.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / f1a152c
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/7654/console

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          -1 patch 0m 0s The patch command could not apply the patch during dryrun.



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12637905/YARN-221-trunk-v2.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision trunk / f1a152c
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/7640/console

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12637905/YARN-221-trunk-v2.patch
          against trunk revision a655973.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5940//console

          This message is automatically generated.

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12637905/YARN-221-trunk-v2.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/3493//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3493//console

          This message is automatically generated.

          mingma Ming Ma added a comment -

          Here is the patch to support log aggregation sampling at the YARN layer. YARN applications can choose to override the default behavior. Without any change at the MR layer to specify a per-container log aggregation policy, the cluster-level YARN log aggregation sampling policy will be applied.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12613251/YARN-221-trunk-v1.patch
          against trunk revision .

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3224//console

          This message is automatically generated.

          mingma Ming Ma added a comment -

          Thanks, Jason.

          To fix the race between the container exiting by itself and MRAppMaster's stopContainer, I will upload the patch to https://issues.apache.org/jira/browse/MAPREDUCE-5465.

          To support the feature of aggregating X% of container logs, we can do it at the YARN layer instead of at the MR layer. That way, other applications can get it for free.

          If the AM doesn't specify any log aggregation policy as part of the ContainerLaunchContext, YARN's default log aggregation policy will be applied. The default policies could be:

          For worker containers,
          1. Always aggregate logs of failed or killed containers.
          2. Aggregate a subset of container logs. The sample rate is configurable and specific to the application.

          For AM containers,
          1. Always aggregate logs of failed or killed containers.
          2. By default, the AM log will be aggregated regardless of status. This can be disabled via configuration, which will only impact succeeded containers. (A combined sketch follows this list.)
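          Combined, the proposed defaults might behave like the sketch below; the method shape and the sampling scheme are illustrative assumptions.

          // Failed or killed containers are always aggregated; AM containers follow
          // a config switch; successful worker containers are sampled.
          boolean shouldAggregate(boolean isAmContainer, boolean failedOrKilled,
              ContainerId id, float workerSampleRate, boolean aggregateAmLogs) {
              if (failedOrKilled) {
                  return true;                 // rule 1 for both container types
              }
              if (isAmContainer) {
                  return aggregateAmLogs;      // AM rule 2: on by default, configurable
              }
              // Worker rule 2: sample a stable subset of successful containers.
              return ((id.hashCode() & Integer.MAX_VALUE) % 100) < workerSampleRate * 100;
          }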

          Comments?

          jlowe Jason Lowe added a comment -

          We can have the MR AM wait for notification, as in container exit -> NM notifies RM -> RM notifies AM. That will create some delay for the AM to declare the job done. With the NM -> RM heartbeat value used in big clusters, it could add a couple seconds of delay to the job. That might not be a big deal for regular MR jobs.

          The NM does out-of-band heartbeats when containers exit, so the turnaround time can be shorter than a full NM heartbeat interval.

          If we're really concerned about any additional time added for graceful task exit, we can also have the AM unregister when the job succeeds/fails but before all tasks exit; the RM will kill all remaining containers of the application when the AM eventually exits (or times out waiting). In that sense it would not add any time from the job client's perspective, as the job could report completion at the same time it did before. However, it would add some time from the YARN perspective, as the application lingers on the cluster a few extra seconds in the FINISHING state.

          One thing to add: we need a definition and policy for how to handle tasks that are in the finishing state, where the MR AM ends up stopping them because they don't exit by themselves.

          I don't think we need to get too tricky here. The NM will see the container return a non-zero exit code and assume that's a failure. If tasks are succeeding but returning non-zero exit codes, then that's probably a bug, and it's arguably a good thing we're grabbing the logs to show what went wrong during teardown. IMHO we should fix what's causing the non-zero exit code rather than add a mechanism to prevent logs from being aggregated in what should be a rare and abnormal case.

          mingma Ming Ma added a comment -

          One thing to add: we need a definition and policy for how to handle tasks that are in the finishing state, where the MR AM ends up stopping them because they don't exit by themselves. From the customer's point of view, the task is considered successful. From a log aggregation point of view, if we want to aggregate only failed tasks, then the MR AM still needs to tell the NM not to do log aggregation for such tasks. Maybe this isn't important if we believe most tasks will exit by themselves. But it is still useful to provide a proper definition and policy for it.

          mingma Ming Ma added a comment -

          Jason, that is a good point. I wondered about the reason behind the design of the MR AM trying to stopContainer while task containers exit by themselves. The jiras you mentioned provide good background info.

          We can have the MR AM wait for notification, as in container exit -> NM notifies RM -> RM notifies AM. That will create some delay for the AM to declare the job done. With the NM -> RM heartbeat value used in big clusters, it could add a couple seconds of delay to the job. That might not be a big deal for regular MR jobs.

          Another thing is that maybe the MR AM doesn't need to call stopContainer on completed containers notified by the RM.

          We still have a scenario where we want to sample X% of successful tasks. We can't specify that up front in the ContainerLaunchContext, given we don't know the status of the tasks at that point. Somehow the AM needs to adjust the log aggregation policy at runtime based on the number of successful tasks so far. For that, we need something like updateContainer.

          jlowe Jason Lowe added a comment -

          Personally I think the AM racing to kill tasks that have indicated they are done is a bug. It causes all sorts of problems:

          • Occasional "Container killed by ApplicationMaster" messages on otherwise normal tasks confuse users into thinking something went wrong with some of their tasks
          • Trying to take a java profile for a task can fail if the profile dump takes too long or the kill arrives too quickly (see MAPREDUCE-5465)
          • Killing a task that should otherwise be exiting on its own creates a constant race-condition scenario that has caused problems in similar setups (see MAPREDUCE-4157, where the RM was killing AMs too early and causing problems).

          I think we should fix these races by implementing a reasonable delay between a task reporting a terminal state and a kill being issued by the AM. That allows the task to complete on its own with an appropriate exit code, eliminating the need to specify log states on stop as a workaround.
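          One way such a delay could be implemented in the AM, sketched with plain JDK scheduling; containerHasExited and stopContainer are hypothetical helpers, and the grace period is illustrative.

          // When a task reports a terminal state, schedule the container kill after
          // a grace period instead of issuing it immediately, so the JVM can
          // normally exit on its own first.
          private final ScheduledExecutorService killScheduler =
              Executors.newSingleThreadScheduledExecutor();

          void onTaskReportedDone(final ContainerId containerId) {
              killScheduler.schedule(new Runnable() {
                  @Override
                  public void run() {
                      if (!containerHasExited(containerId)) {   // hypothetical helper
                          stopContainer(containerId);           // hypothetical helper
                      }
                  }
              }, 5, TimeUnit.SECONDS);   // illustrative grace period
          }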

          mingma Ming Ma added a comment -

          Chris Trezzo, Gera Shegalov, and I discussed this further. We would like to give an update and get feedback from others. Similar to what Robert suggested originally, we need to provide a way for the AM to update the log aggregation policy when it stops the container.

          One likely log aggregation policy for MRAppMaster is to aggregate the logs of all failed tasks and sample the logs of some successful tasks. What we found is that the container exit code isn't a reliable indication of whether an MR task finished successfully, because MRAppMaster calls stopContainer while the YarnChild JVM is exiting on its own; depending on the timing, you might get a non-zero exit code for a successful task. So specifying the log aggregation policy up front in the ContainerLaunchContext isn't enough.

          The mechanism for the AM to pass the log aggregation policy to YARN needs to address several different scenarios.

          1. Containers exit by themselves. DistributedShell belongs to this category.
          2. AM has to explicitly stop the containers. MR belongs to this category.
          3. The AM might want to inform the NM to do on-demand log aggregation without stopping the container. This might be useful for some long-running applications.

          To support #1, we have to specify the log aggregation policy as part of the startContainer call. Chris' patch handles that.

          To support #2, the AM has to indicate to the NM whether log aggregation is needed during the stopContainer call. The AM can use different types of policies, such as sampling of successful tasks. For that, the AM will specify the log aggregation policy as part of StopContainerRequest.

          StopContainerRequest.java
          ...
          
            /**
             * Get the <code>ContainerLogAggregationPolicy</code> for the container.
             *
             * @return The <code>ContainerLogAggregationPolicy</code> for the container.
             */  
            @Public
            @Stable
            public ContainerLogAggregationPolicy getLogAggregationPolicy();
          
            /**
             * Set the <code>ContainerLogAggregationPolicy</code> for the container.
             *
             * @param policy The <code>ContainerLogAggregationPolicy</code> for the container.
             */
            @Public
            @Stable
            public void setLogAggregationPolicy(ContainerLogAggregationPolicy policy);
          

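          If this option were adopted, an AM's stop path might look roughly like this; setLogAggregationPolicy is only the proposal above, and the ALL/NONE enum values are assumed for illustration:

            // Sketch against the proposed API; nothing here is committed.
            void applyStopPolicy(StopContainerRequest request,
                boolean aggregateLogs) {
              // Keep the logs for failed or sampled containers; skip the
              // rest so the NM never touches the NN for them.
              request.setLogAggregationPolicy(aggregateLogs
                  ? ContainerLogAggregationPolicy.ALL
                  : ContainerLogAggregationPolicy.NONE);
            }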
          Alternatively, we can define a new interface called ContainerStopContext to capture the log aggregation policy and any other information we may want to include later.

          StopContainerRequest.java
            @Public
            @Stable
            public abstract ContainerStopContext getContainerStopContext();
          
            @Public
            @Stable
            public abstract void setContainerStopContext(ContainerStopContext context);
          
          

          To support #3, we need a new API such as updateContainer so that the AM can ask the NM to roll the container logs, update the log aggregation policy, and so on. (A placeholder sketch follows.)
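          For illustration, such an update API might carry something like the following; none of these names exist in YARN today, and they are placeholders only.

          UpdateContainerLogContext.java (placeholder sketch only)
            public abstract class UpdateContainerLogContext {

              /** Whether the NM should aggregate the logs written so far. */
              public abstract boolean getRollCurrentLogs();
              public abstract void setRollCurrentLogs(boolean roll);

              /** Policy for this container's future log aggregation. */
              public abstract ContainerLogAggregationPolicy
                  getLogAggregationPolicy();
              public abstract void setLogAggregationPolicy(
                  ContainerLogAggregationPolicy policy);
            }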

          acmurthy Arun C Murthy added a comment -

          Chris Trezzo - I assigned this to you, thanks for working on this!

          hadoopqa Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12613251/YARN-221-trunk-v1.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2419//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2419//console

          This message is automatically generated.

          ctrezzo Chris Trezzo added a comment -

          Submitting patch for a HadoopQA run.

          ctrezzo Chris Trezzo added a comment -

          Attached is a patch that provides the ability to configure log aggregation on a per-container basis. All changes are at the YARN level. The main changes this patch makes are as follows:
          1. Addition of a new set of methods to the ContainerLaunchContext that lets a YARN client set the log aggregation policy for that container (see the sketch after this list).
          2. A new set of log aggregation policies listed in the ContainerLogAggregationPolicy enum.
          3. Modifications to the LogAggregationService and associated code paths to allow for per container configuration.
          4. Addition of new unit tests and modification to existing tests to incorporate changes.
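          As a quick sketch of how a YARN client might use change #1, assuming the patch exposes a setter named setLogAggregationPolicy and an ALL policy value (our reading of the description above, not guaranteed signatures):

            // Sketch against the v1 patch's proposed API; the setter and the
            // enum constant are assumptions drawn from the list above.
            ContainerLaunchContext buildContext(
                Map<String, LocalResource> localResources,
                Map<String, String> environment, List<String> commands) {
              ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
                  localResources, environment, commands, null, null, null);
              // Ask the NM to aggregate all of this container's logs.
              ctx.setLogAggregationPolicy(ContainerLogAggregationPolicy.ALL);
              return ctx;
            }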

          I am going to follow this patch with another patch in YARN-85 that will make the necessary changes at the MapReduce level to provide per-job configuration of log aggregation.

          Also, which repository should I list for yarn patches on reviews.apache.org? I see hdfs/common/mapreduce, but no yarn.

          Feedback/+1's would be much appreciated. Robert Joseph Evans Sandy Ryza

          Thanks!

          ctrezzo Chris Trezzo added a comment -

          I have started looking at this and will hopefully have a patch in the next few days. Would someone mind adding me as a contributor so I can assign the JIRA to myself? Thanks!

          sseth Siddharth Seth added a comment -

          This is related to container re-use as well. Depending on how log files will be generated in the case of re-use, it may be useful to provide a list of files to be aggregated.
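          One purely hypothetical way to express that would be to let the aggregation request name the files to include; nothing like this exists on this JIRA, and the names below are placeholders.

            // Placeholder shape: a context that also names which files in
            // the container's log directory should be aggregated.
            public abstract class ContainerLogFileSelection {
              /** Glob of log file names to include, e.g. "attempt_*.log". */
              public abstract String getIncludePattern();
              public abstract void setIncludePattern(String pattern);
            }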


            People

            • Assignee: Ming Ma
            • Reporter: Robert Joseph Evans
            • Votes: 0
            • Watchers: 22
