Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.9.0, 3.0.0-alpha1
    • Component/s: timelineserver
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    • Release Note:
      We are introducing an early preview (alpha 1) of a major revision of YARN Timeline Service: v.2. YARN Timeline Service v.2 addresses two major challenges: improving scalability and reliability of Timeline Service, and enhancing usability by introducing flows and aggregation.

      YARN Timeline Service v.2 alpha 1 is provided so that users and developers can test it and provide feedback and suggestions for making it a ready replacement for Timeline Service v.1.x. It should be used only in a test capacity. Most importantly, security is not enabled: if security is a critical requirement, do not set up or use Timeline Service v.2 until security is implemented.

      More details are available in the [YARN Timeline Service v.2](./hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html) documentation.
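      For test deployments, enabling the v.2 service amounts to a small yarn-site.xml change. A minimal sketch, assuming the property names described in the TimelineServiceV2 documentation (verify them there before use):

```xml
<!-- yarn-site.xml: minimal test-only Timeline Service v.2 setup (sketch) -->
<property>
  <name>yarn.timeline-service.version</name>
  <value>2.0f</value>
</property>
<property>
  <name>yarn.timeline-service.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- Let the RM publish system metrics to the timeline service -->
  <name>yarn.system-metrics-publisher.enabled</name>
  <value>true</value>
</property>
```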

      Description

      We have the application timeline server implemented in yarn per YARN-1530 and YARN-321. Although it is a great feature, we have recognized several critical issues and features that need to be addressed.

      This JIRA proposes the design and implementation changes to address those. This is phase 1 of this effort.

      1. ATSv2.rev1.pdf (249 kB, Sangjin Lee)
      2. ATSv2.rev2.pdf (252 kB, Sangjin Lee)
      3. Data model proposal v1.pdf (89 kB, Zhijie Shen)
      4. Timeline Service Next Gen - Planning - ppt.pptx (345 kB, Vinod Kumar Vavilapalli)
      5. TimelineServiceStoragePerformanceTestSummaryYARN-2928.pdf (179 kB, Vrushali C)
      6. ATSv2BackendHBaseSchemaproposal.pdf (259 kB, Sangjin Lee)
      7. timeline_service_v2_next_milestones.pdf (129 kB, Sangjin Lee)
      8. The YARN Timeline Service v.2 Documentation.pdf (743 kB, Sangjin Lee)
      9. YARN-2928.01.patch (2.36 MB, Sangjin Lee)
      10. YARN-2928.02.patch (2.37 MB, Sangjin Lee)
      11. YARN-2928.03.patch (2.37 MB, Sangjin Lee)

        Issue Links

        1.
        [Collector wireup] Set up timeline collector with basic request serving structure and lifecycle Sub-task Resolved Sangjin Lee
         
        2.
        [Data Model] create overall data objects of TS next gen Sub-task Resolved Zhijie Shen
         
        3.
        [Collector wireup] Implement RM starting its timeline collector Sub-task Resolved Naganarasimha G R
         
        4.
        [Storage abstraction] Create backing storage write interface for timeline collectors Sub-task Resolved Vrushali C
         
        5.
        [Storage implementation] Create a test-only backing storage implementation for ATS writes Sub-task Resolved Sangjin Lee
         
        6.
        [Storage implementation] Create standalone HBase backing storage implementation for ATS writes Sub-task Resolved Zhijie Shen
         
        7.
        [Storage implementation] Create HBase cluster backing storage implementation for ATS writes Sub-task Resolved Vrushali C
         
        8.
        [Collector wireup] Implement timeline app-level collector service discovery Sub-task Resolved Junping Du
         
        9.
        [Data Model] Make putEntities operation be aware of the app's context Sub-task Resolved Zhijie Shen
         
        10.
        [Data Model] Create ATS metrics API Sub-task Resolved Unassigned
         
        11.
        [Data Model] Create ATS configuration, metadata, etc. as part of entities Sub-task Resolved Unassigned
         
        12.
        [Event producers] Implement RM writing app lifecycle events to ATS Sub-task Resolved Naganarasimha G R
         
        13.
        [Event producers] Implement NM writing container lifecycle events to ATS Sub-task Resolved Naganarasimha G R
         
        14.
        [Data Serving] Set up ATS reader with basic request serving structure and lifecycle Sub-task Resolved Varun Saxena
         
        15.
        [Data Serving] Handle how to set up and start/stop ATS reader instances Sub-task Resolved Varun Saxena
         
        16.
        [Storage Implementation] Implement storage reader interface to fetch raw data from HBase backend Sub-task Resolved Zhijie Shen
         
        17.
        [Storage abstraction] Create backing storage read interface for ATS readers Sub-task Resolved Varun Saxena
         
        18.
        [Data Serving] Provide a very simple POC html ATS UI Sub-task Resolved Sangjin Lee
         
        19.
        Bootstrap TimelineServer Next Gen Module Sub-task Resolved Zhijie Shen
         
        20.
        [Collector implementation] the REST server (web server) for per-node collector does not work if it runs inside node manager Sub-task Resolved Li Lu
         
        21.
        [Collector wireup] We need an assured way to determine if a container is an AM container on NM Sub-task Resolved Giovanni Matteo Fumarola
         
        22.
        [Event producers] Change distributed shell to use new timeline service Sub-task Resolved Junping Du
         
        23.
        [Storage implementation] Exploiting the option of using Phoenix to access HBase backend Sub-task Resolved Li Lu
         
        24.
        [Documentation] Documenting the timeline service v2 Sub-task Resolved Sangjin Lee
         
        25.
        [Collector implementation] Implement the core functionality of the timeline collector Sub-task Resolved Vrushali C
         
        26.
        [Source organization] Refactor timeline collector according to new code organization Sub-task Resolved Li Lu
         
        27.
        [Data Mode] Implement client API to put generic entities Sub-task Resolved Zhijie Shen
         
        28.
        [Storage implementation] Create backing storage write interface and a POC only file based storage implementation Sub-task Resolved Vrushali C
         
        29.
        Refactor and fix null casting in some map casts for TimelineEntity (old and new) and fix findbugs warnings Sub-task Resolved Junping Du
         
        30.
        rename TimelineAggregator etc. to TimelineCollector Sub-task Resolved Sangjin Lee
         
        31.
        [Event Producers] NM TimelineClient container metrics posting to new timeline service. Sub-task Resolved Junping Du
         
        32.
        Replace starting a separate thread for post entity with event loop in TimelineClient Sub-task Resolved Naganarasimha G R
         
        33.
        Collector's web server should randomly bind an available port Sub-task Resolved Zhijie Shen
         
        34.
        TestTimelineServiceClientIntegration fails Sub-task Resolved Sangjin Lee
         
        35.
        Reuse TimelineCollectorManager for RM Sub-task Resolved Zhijie Shen
         
        36.
        Clearly define flow ID/ flow run / flow version in API and storage Sub-task Resolved Zhijie Shen
         
        37.
        Security support for new timeline service. Sub-task Resolved Unassigned
         
        38.
        [Storage implementation] explore & create the native HBase schema for writes Sub-task Resolved Vrushali C
         
        39.
        Sub resources of timeline entity needs to be passed to a separate endpoint. Sub-task Resolved Zhijie Shen
         
        40.
        Cache runningApps in RMNode for getting running apps on given NodeId Sub-task Resolved Junping Du
         
        41.
        Consolidate flow name/version/run defaults Sub-task Resolved Sangjin Lee
         
        42.
        Add miniHBase cluster and Phoenix support to ATS v2 unit tests Sub-task Resolved Li Lu
         
        43.
        Consolidate data model change according to the backend implementation Sub-task Resolved Zhijie Shen
         
        44.
        unit tests failures and issues found from findbug from earlier ATS checkins Sub-task Resolved Naganarasimha G R
         
        45.
        HttpServer2 Max threads in TimelineCollectorManager should be more than 10 Sub-task Resolved Varun Saxena
         
        46.
        RM only gets back addresses of Collectors that NM needs to know. Sub-task Resolved Junping Du
         
        47.
        Performance optimization using connection cache of Phoenix timeline writer Sub-task Resolved Li Lu
         
        48.
        TestMRTimelineEventHandling and TestApplication are broken Sub-task Resolved Sangjin Lee
         
        49.
        Decide if flow version should be part of row key or column Sub-task Resolved Unassigned
         
        50.
        Generalize native HBase writer for additional tables Sub-task Resolved Joep Rottinghuis
         
        51.
        build is broken on YARN-2928 branch due to possible dependency cycle Sub-task Resolved Li Lu
         
        52.
        Fix TestHBaseTimelineWriterImpl unit test failure by fixing its test data Sub-task Resolved Vrushali C
         
        53.
        Test case failures in TestDistributedShell and some issue fixes related to ATSV2 Sub-task Resolved Naganarasimha G R
         
        54.
        [JDK-8][YARN-2928] Exclude jdk.tools from hbase-client and hbase-testing-util Sub-task Resolved Tsuyoshi Ozawa
         
        55.
        REST API implementation for getting raw entities in TimelineReader Sub-task Resolved Varun Saxena
         
        56.
        [Aggregation] App-level aggregation and accumulation for YARN system metrics Sub-task Resolved Li Lu
         
        57.
        add equals and hashCode to TimelineEntity and other classes in the data model Sub-task Resolved Li Lu
         
        58.
        Support for fetching specific configs and metrics based on prefixes Sub-task Resolved Varun Saxena
         
        59.
        Support complex filters in TimelineReader Sub-task Resolved Varun Saxena
         
        60.
        Implement support for querying single app and all apps for a flow run Sub-task Resolved Varun Saxena
         
        61.
        Add equals and hashCode to TimelineEntity Sub-task Resolved Li Lu
         
        62.
        Populate flow run data in the flow_run & flow activity tables Sub-task Resolved Vrushali C
         
        63.
        Refactor timelineservice.storage to add support to online and offline aggregation writers Sub-task Resolved Li Lu
         
        64.
        split the application table from the entity table Sub-task Resolved Sangjin Lee
         
        65.
        Bugs in HBaseTimelineWriterImpl Sub-task Resolved Vrushali C
         
        66.
        ensure timely flush of timeline writes Sub-task Resolved Sangjin Lee
         
        67.
        Fix new findbugs warnings in resourcemanager in YARN-2928 branch Sub-task Resolved Varun Saxena
         
        68.
        Rethink event column key issue Sub-task Resolved Vrushali C
         
        69.
        Change to use the AM flag in ContainerContext determine AM container Sub-task Resolved Sunil G
         
        70.
        Some of the NM events are not getting published due to a race condition when the AM container finishes in NM Sub-task Resolved Naganarasimha G R
         
        71.
        Change the way metric values are stored in HBase Storage Sub-task Resolved Varun Saxena
         
        72.
        Publisher V2 should write the unmanaged AM flag and application priority Sub-task Resolved Sunil G
         
        73.
        Deal with byte representations of Longs in writer code Sub-task Resolved Sangjin Lee
         
        74.
        Miscellaneous issues in NodeManager project Sub-task Resolved Naganarasimha G R
         
        75.
        Add the flush and compaction functionality via coprocessors and scanners for flow run table Sub-task Resolved Vrushali C
         
        76.
        Populate the flow activity table Sub-task Resolved Vrushali C
         
        77.
        build is broken at TestHBaseTimelineWriterImpl.java Sub-task Resolved Sangjin Lee
         
        78.
        Support appUpdated event in TimelineV2 to publish details for movetoqueue, change in priority Sub-task Resolved Sunil G
         
        79.
        [timeline reader] implement support for querying for flows and flow runs Sub-task Resolved Sangjin Lee
         
        80.
        [reader REST API] implement support for querying for flows and flow runs Sub-task Resolved Varun Saxena
         
        81.
        Add a "skip existing table" mode for timeline schema creator Sub-task Resolved Li Lu
         
        82.
        Refactor the SystemMetricPublisher in RM to better support newer events Sub-task Resolved Naganarasimha G R
         
        83.
        Fix javadoc warnings floating up from hbase Sub-task Resolved Sangjin Lee
         
        84.
        [storage implementation] app id as string in row keys can cause incorrect ordering Sub-task Resolved Varun Saxena
         
        85.
        [reader implementation] support flow activity queries based on time Sub-task Resolved Varun Saxena
         
        86.
        Refactor reader classes in storage to nest under hbase specific package name Sub-task Resolved Li Lu
         
        87.
        Add request/response logging & timing for each REST endpoint call Sub-task Resolved Varun Saxena
         
        88.
        HBase reader throws NPE if Get returns no rows Sub-task Resolved Varun Saxena
         
        89.
        Store user in app to flow table Sub-task Resolved Varun Saxena
         
        90.
        Support fetching entities by UID and change the REST interface to conform to current REST APIs' in YARN Sub-task Resolved Varun Saxena
         
        91.
        Support additional queries for ATSv2 Web UI Sub-task Resolved Varun Saxena
         
        92.
        correctly set createdTime and remove modifiedTime when publishing entities Sub-task Resolved Varun Saxena
         
        93.
        TestJobHistoryEventHandler and TestRMContainerAllocator failing on YARN-2928 branch Sub-task Resolved Varun Saxena
         
        94.
        TestDistributedShell fails for V2 scenarios Sub-task Resolved Naganarasimha G R
         
        95.
        ensure the timeline service v.2 is disabled cleanly and has no impact when it's turned off Sub-task Resolved Sangjin Lee
         
        96.
        Fix javadoc and checkstyle issues in timelineservice code Sub-task Resolved Varun Saxena
         
        97.
        Unify the term flowId and flowName in timeline v2 codebase Sub-task Resolved Zhan Zhang
         
        98.
        Refactor reader API for better extensibility Sub-task Resolved Varun Saxena
         
        99.
        Provide a mechanism to represent complex filters and parse them at the REST layer Sub-task Resolved Varun Saxena
         
        100.
        TestTimelineAuthenticationFilter and TestYarnConfigurationFields fail Sub-task Resolved Sangjin Lee
         
        101.
        [Bug fix] RM fails to start when SMP is enabled Sub-task Resolved Li Lu
         
        102.
        TestDistributedShell fails for v2 test cases after modifications for 1.5 Sub-task Resolved Naganarasimha G R
         
        103.
        TestRMRestart fails and findbugs issue in YARN-2928 branch Sub-task Resolved Varun Saxena
         
        104.
        New findbugs warning in resourcemanager in YARN-2928 branch Sub-task Resolved Varun Saxena
         
        105.
        ATS storage has one extra record each time the RM got restarted Sub-task Resolved Naganarasimha G R
         
        106.
        NM is going down with NPE's due to single thread processing of events by Timeline client Sub-task Resolved Naganarasimha G R
         
        107.
        CPU Usage Metric is not captured properly in YARN-2928 Sub-task Resolved Naganarasimha G R
         
        108.
        Add a check in the coprocessor for the table to be operated on Sub-task Resolved Vrushali C
         
        109.
        Ensure non-metric values are returned as is for flow run table from the coprocessor Sub-task Resolved Vrushali C
         
        110.
        Online aggregation logic should not run immediately after collectors got started Sub-task Resolved Li Lu
         
        111.
        hbase unit tests fail due to dependency issues Sub-task Resolved Sangjin Lee
         
        112.
        Code cleanup for TestDistributedShell Sub-task Resolved Li Lu
         
        113.
        [Documentation] Update timeline service v2 documentation to capture information about filters Sub-task Resolved Varun Saxena
         
        114.
        upgrade HBase version for first merge Sub-task Resolved Vrushali C
         
        115.
        created time shows 0 in most REST output Sub-task Resolved Varun Saxena
         
        116.
        flow activities and flow runs are populated with wrong timestamp when RM restarts w/ recovery enabled Sub-task Resolved Varun Saxena
         
        117.
        timelinereader has a lot of logging that's not useful Sub-task Resolved Sangjin Lee
         
        118.
        NPE in Separator.joinEncoded() Sub-task Resolved Vrushali C
         
        119.
        timeline service build fails with java 8 Sub-task Resolved Sangjin Lee
         
        120.
        entire time series is returned for YARN container system metrics (CPU and memory) Sub-task Resolved Varun Saxena
         
        121.
        timestamps are stored unencoded causing parse errors Sub-task Resolved Varun Saxena
         
        122.
        YARN container system metrics are not aggregated to application Sub-task Resolved Naganarasimha G R
         
        123.
        fix "no findbugs output file" error for hadoop-yarn-server-timelineservice-hbase-tests Sub-task Resolved Vrushali C
         
        124.
        fix findbugs warnings/errors for hadoop-yarn-server-timelineservice-hbase-tests Sub-task Resolved Vrushali C
         
        125.
        Escaping occurrences of encodedValues Sub-task Resolved Sangjin Lee
         
        126.
        Eliminate singleton converters and static method access Sub-task Resolved Joep Rottinghuis
         
        127.
        [documentation] several updates/corrections to timeline service documentation Sub-task Resolved Sangjin Lee
         
        128.
        Make HBaseTimeline[Reader|Writer]Impl default and move FileSystemTimeline*Impl Sub-task Resolved Joep Rottinghuis
         
        129.
        NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues Sub-task Resolved Varun Saxena
         
        130.
        Avoid re-creation of EventColumnNameConverter in HBaseTimelineWriterImpl#storeEvents Sub-task Resolved Joep Rottinghuis
         
        131.
        Eliminate unused imports checkstyle warnings Sub-task Resolved Joep Rottinghuis
         
        132.
        fix several rebase and other miscellaneous issues before merge Sub-task Resolved Sangjin Lee
         
        133.
        Store node information for finished containers in timeline v2 Sub-task Resolved Unassigned
         
        134.
        fix hadoop-aws pom not to do the exclusion Sub-task Resolved Sangjin Lee
         

          Activity

          sjlee0 Sangjin Lee added a comment -

          I added the release notes contents mostly along the lines of other major features in the release notes. I kept it brief since most of the details are available in the main documentation.

          Andrew Wang, could you double check the hyperlink there to ensure it is correct? That's how it's referenced in the main site html, but I just want to make sure that's correct.

          andrew.wang Andrew Wang added a comment -

          I think the content can be the same for the release notes here and at HADOOP-13383. I can handle modifying HADOOP-13383 if you provide the content. Thanks Sangjin!

          sjlee0 Sangjin Lee added a comment -

          Andrew Wang, thanks for the reminder. I'll take that. I also saw your JIRA at HADOOP-13383. Is that satisfied by the release notes here, or do you want me to add the same/similar contents there in addition to this?

          andrew.wang Andrew Wang added a comment -

          Hi, could someone add release notes for this feature? Thanks!

          sjlee0 Sangjin Lee added a comment -

          YARN-5354 and MAPREDUCE-6731 filed for the unit tests.

          sjlee0 Sangjin Lee added a comment -

          Absolutely. I am going to cut the next phase umbrella JIRAs and move tasks over to them shortly.

          djp Junping Du added a comment -

          That's a great milestone! Thanks Sangjin Lee and all for outstanding work here.
          Given that we are resolving the issue for phase 1, maybe the next step is to create a second-phase umbrella JIRA and move all unresolved issues there for tracking?

          sjlee0 Sangjin Lee added a comment -

          It's been merged to trunk. Huge thanks to the contributors who worked on this feature (Joep Rottinghuis, Junping Du, Li Lu, Naganarasimha G R, Varun Saxena, Vinod Kumar Vavilapalli, Vrushali C, and Zhijie Shen) and everyone who participated in reviews and feedback!

          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-trunk-Commit #10074 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10074/)
          YARN-3721. build is broken on YARN-2928 branch due to possible (sjlee: rev f6682125297bfb2da0f72fe6c3c1812716800b91)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml
          • hadoop-project/pom.xml
            YARN-4644. TestRMRestart fails and findbugs issue in YARN-2928 branch (sjlee: rev 06f0b50a284455ffd5857cb42f386e92d121d0e6)
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
            YARN-4712. CPU Usage Metric is not captured properly in YARN-2928. (sjlee: rev 6f6cc647d6e77f6cc4c66e0534f8c73bc1612a1b)
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/timelineservice/TestNMTimelinePublisher.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/timelineservice/NMTimelinePublisher.java
          sjlee0 Sangjin Lee added a comment -

          I think we're good.

          I think the test failure with TestMRTimelineEventHandling was caused by concurrent builds. The test uses a fixed timeline service data location, which doesn't bode well. Furthermore, it uses "/" as the separator, which needs to be fixed too. TestDistributedShell has the same problem. I'll file a JIRA to fix those tests after merging this.

          I'll merge it shortly.

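          The test hygiene issue described in the comment above (a fixed storage location and a hard-coded "/" separator) can be sketched as follows. This is an illustrative fix, not the actual patch filed in YARN-5354/MAPREDUCE-6731; `TimelineTestDirExample` and `uniqueStorageRoot` are hypothetical names.

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;

public class TimelineTestDirExample {

    // Hypothetical helper: give each test run its own storage root instead of
    // a fixed location, so concurrent builds cannot clobber each other's data.
    static Path uniqueStorageRoot() throws Exception {
        return Files.createTempDirectory("timeline-service-test-");
    }

    public static void main(String[] args) throws Exception {
        Path root = uniqueStorageRoot();

        // Join path segments with File.separator rather than a literal "/",
        // so the test also behaves on platforms with a different separator.
        File entityDir = new File(root.toFile(),
                "entities" + File.separator + "app_0001");

        System.out.println(entityDir.mkdirs()); // prints "true" on first creation
    }
}
```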
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 36s Docker mode activated.
          0 shelldocs 0m 0s Shelldocs was not available.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 84 new or modified test files.
          0 mvndep 0m 24s Maven dependency ordering for branch
          +1 mvninstall 6m 23s trunk passed
          +1 compile 6m 35s trunk passed
          +1 checkstyle 2m 25s trunk passed
          +1 mvnsite 10m 43s trunk passed
          +1 mvneclipse 4m 30s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          +1 findbugs 8m 56s trunk passed
          +1 javadoc 6m 43s trunk passed
          0 mvndep 0m 18s Maven dependency ordering for patch
          +1 mvninstall 10m 15s the patch passed
          +1 compile 7m 0s the patch passed
          +1 cc 7m 0s the patch passed
          -1 javac 7m 0s root generated 2 new + 708 unchanged - 0 fixed = 710 total (was 708)
          -1 checkstyle 2m 30s root: The patch generated 103 new + 3267 unchanged - 128 fixed = 3370 total (was 3395)
          +1 mvnsite 14m 51s the patch passed
          +1 mvneclipse 6m 21s the patch passed
          +1 shellcheck 0m 12s There were no new shellcheck issues.
          +1 whitespace 0m 2s The patch has no whitespace issues.
          +1 xml 0m 17s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          +1 findbugs 12m 34s the patch passed
          +1 javadoc 0m 14s hadoop-project in the patch passed.
          +1 javadoc 0m 52s hadoop-common in the patch passed.
          +1 javadoc 1m 27s hadoop-yarn-project_hadoop-yarn generated 0 new + 6621 unchanged - 1 fixed = 6621 total (was 6622)
          +1 javadoc 0m 20s hadoop-yarn-api in the patch passed.
          +1 javadoc 0m 31s hadoop-yarn-common in the patch passed.
          +1 javadoc 0m 52s hadoop-yarn-server in the patch passed.
          +1 javadoc 0m 17s hadoop-yarn-server-common in the patch passed.
          +1 javadoc 0m 20s hadoop-yarn-server-nodemanager in the patch passed.
          +1 javadoc 0m 20s hadoop-yarn-server-timelineservice in the patch passed.
          +1 javadoc 0m 24s hadoop-yarn-server-resourcemanager in the patch passed.
          +1 javadoc 0m 14s hadoop-yarn-server-tests in the patch passed.
          +1 javadoc 0m 18s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client generated 0 new + 155 unchanged - 1 fixed = 155 total (was 156)
          +1 javadoc 0m 14s hadoop-yarn-server-timelineservice-hbase-tests in the patch passed.
          +1 javadoc 0m 16s hadoop-yarn-applications-distributedshell in the patch passed.
          +1 javadoc 0m 12s hadoop-yarn-site in the patch passed.
          +1 javadoc 0m 26s hadoop-mapreduce-client-core in the patch passed.
          +1 javadoc 0m 20s hadoop-mapreduce-client-app in the patch passed.
          +1 javadoc 0m 16s hadoop-mapreduce-client-jobclient in the patch passed.
          +1 unit 0m 12s hadoop-project in the patch passed.
          +1 unit 8m 17s hadoop-common in the patch passed.
          -1 unit 65m 59s hadoop-yarn in the patch failed.
          +1 unit 0m 29s hadoop-yarn-api in the patch passed.
          +1 unit 2m 19s hadoop-yarn-common in the patch passed.
          -1 unit 64m 26s hadoop-yarn-server in the patch failed.
          +1 unit 0m 30s hadoop-yarn-server-common in the patch passed.
          +1 unit 13m 9s hadoop-yarn-server-nodemanager in the patch passed.
          +1 unit 0m 51s hadoop-yarn-server-timelineservice in the patch passed.
          +1 unit 36m 39s hadoop-yarn-server-resourcemanager in the patch passed.
          -1 unit 4m 31s hadoop-yarn-server-tests in the patch failed.
          -1 unit 8m 27s hadoop-yarn-client in the patch failed.
          +1 unit 5m 5s hadoop-yarn-server-timelineservice-hbase-tests in the patch passed.
          +1 unit 10m 8s hadoop-yarn-applications-distributedshell in the patch passed.
          +1 unit 0m 13s hadoop-yarn-site in the patch passed.
          +1 unit 2m 5s hadoop-mapreduce-client-core in the patch passed.
          +1 unit 8m 54s hadoop-mapreduce-client-app in the patch passed.
          -1 unit 116m 56s hadoop-mapreduce-client-jobclient in the patch failed.
          -1 asflicense 0m 38s The patch generated 2 ASF License warnings.
          462m 51s



          Reason Tests
          Failed junit tests hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
            hadoop.yarn.client.api.impl.TestYarnClient
            hadoop.yarn.client.cli.TestLogsCLI
            hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
            hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
            hadoop.yarn.client.api.impl.TestYarnClient
            hadoop.yarn.client.cli.TestLogsCLI
            hadoop.mapred.TestMRCJCFileOutputCommitter



          Subsystem Report/Notes
          Docker Image: yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12817009/YARN-2928.03.patch
          JIRA Issue YARN-2928
          Optional Tests asflicense mvnsite compile javac javadoc mvninstall unit findbugs checkstyle xml shellcheck shelldocs cc
          uname Linux bddafbf96c67 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9bdb5be
          Default Java 1.8.0_91
          shellcheck v0.4.4
          findbugs v3.0.0
          javac https://builds.apache.org/job/PreCommit-YARN-Build/12262/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/12262/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12262/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12262/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12262/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12262/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12262/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/12262/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt https://builds.apache.org/job/PreCommit-YARN-Build/12262/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt https://builds.apache.org/job/PreCommit-YARN-Build/12262/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt https://builds.apache.org/job/PreCommit-YARN-Build/12262/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt https://builds.apache.org/job/PreCommit-YARN-Build/12262/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/12262/testReport/
          asflicense https://builds.apache.org/job/PreCommit-YARN-Build/12262/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-project hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient U: .
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/12262/console
          Powered by Apache Yetus 0.3.0 http://yetus.apache.org

          This message was automatically generated.

          sjlee0 Sangjin Lee added a comment -

          The latest run still produces a number of known unit test failures. TestMRTimelineEventHandling also failed, and I suspect it might have something to do with multiple builds running at the same time. I just kicked off another Jenkins run to see if we can get a cleaner run. At any rate, I think we should go ahead and merge soon unless there is a clear indication of an issue.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 36s Docker mode activated.
          0 shelldocs 0m 0s Shelldocs was not available.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 1s The patch appears to include 84 new or modified test files.
          0 mvndep 0m 13s Maven dependency ordering for branch
          +1 mvninstall 7m 51s trunk passed
          +1 compile 8m 2s trunk passed
          +1 checkstyle 2m 25s trunk passed
          +1 mvnsite 12m 6s trunk passed
          +1 mvneclipse 4m 55s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          +1 findbugs 10m 39s trunk passed
          +1 javadoc 7m 29s trunk passed
          0 mvndep 0m 54s Maven dependency ordering for patch
          -1 mvninstall 0m 21s hadoop-mapreduce-client-app in the patch failed.
          +1 compile 8m 20s the patch passed
          +1 cc 8m 20s the patch passed
          -1 javac 8m 20s root generated 2 new + 708 unchanged - 0 fixed = 710 total (was 708)
          -1 checkstyle 2m 30s root: The patch generated 103 new + 3267 unchanged - 128 fixed = 3370 total (was 3395)
          +1 mvnsite 15m 5s the patch passed
          +1 mvneclipse 6m 37s the patch passed
          +1 shellcheck 0m 13s There were no new shellcheck issues.
          +1 whitespace 0m 2s The patch has no whitespace issues.
          +1 xml 0m 20s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          +1 findbugs 14m 33s the patch passed
          +1 javadoc 0m 17s hadoop-project in the patch passed.
          +1 javadoc 1m 0s hadoop-common in the patch passed.
          +1 javadoc 1m 55s hadoop-yarn-project_hadoop-yarn generated 0 new + 6621 unchanged - 1 fixed = 6621 total (was 6622)
          +1 javadoc 0m 24s hadoop-yarn-api in the patch passed.
          +1 javadoc 0m 41s hadoop-yarn-common in the patch passed.
          +1 javadoc 1m 5s hadoop-yarn-server in the patch passed.
          +1 javadoc 0m 22s hadoop-yarn-server-common in the patch passed.
          +1 javadoc 0m 23s hadoop-yarn-server-nodemanager in the patch passed.
          +1 javadoc 0m 19s hadoop-yarn-server-timelineservice in the patch passed.
          +1 javadoc 0m 22s hadoop-mapreduce-client-app in the patch passed.
          +1 javadoc 0m 28s hadoop-mapreduce-client-core in the patch passed.
          +1 javadoc 0m 17s hadoop-mapreduce-client-jobclient in the patch passed.
          +1 javadoc 0m 17s hadoop-yarn-applications-distributedshell in the patch passed.
          +1 javadoc 0m 21s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client generated 0 new + 155 unchanged - 1 fixed = 155 total (was 156)
          +1 javadoc 0m 31s hadoop-yarn-server-resourcemanager in the patch passed.
          +1 javadoc 0m 20s hadoop-yarn-server-tests in the patch passed.
          +1 javadoc 0m 19s hadoop-yarn-server-timelineservice-hbase-tests in the patch passed.
          +1 javadoc 0m 15s hadoop-yarn-site in the patch passed.
          +1 unit 0m 14s hadoop-project in the patch passed.
          +1 unit 10m 20s hadoop-common in the patch passed.
          -1 unit 63m 28s hadoop-yarn in the patch failed.
          +1 unit 0m 33s hadoop-yarn-api in the patch passed.
          +1 unit 2m 23s hadoop-yarn-common in the patch passed.
          -1 unit 66m 20s hadoop-yarn-server in the patch failed.
          +1 unit 0m 31s hadoop-yarn-server-common in the patch passed.
          -1 unit 13m 16s hadoop-yarn-server-nodemanager in the patch failed.
          +1 unit 1m 3s hadoop-yarn-server-timelineservice in the patch passed.
          +1 unit 9m 16s hadoop-mapreduce-client-app in the patch passed.
          +1 unit 2m 24s hadoop-mapreduce-client-core in the patch passed.
          -1 unit 111m 10s hadoop-mapreduce-client-jobclient in the patch failed.
          +1 unit 9m 19s hadoop-yarn-applications-distributedshell in the patch passed.
          -1 unit 8m 37s hadoop-yarn-client in the patch failed.
          -1 unit 37m 5s hadoop-yarn-server-resourcemanager in the patch failed.
          -1 unit 4m 43s hadoop-yarn-server-tests in the patch failed.
          +1 unit 5m 3s hadoop-yarn-server-timelineservice-hbase-tests in the patch passed.
          +1 unit 0m 24s hadoop-yarn-site in the patch passed.
          -1 asflicense 0m 37s The patch generated 4 ASF License warnings.
          475m 17s



          Reason Tests
          Failed junit tests hadoop.yarn.server.resourcemanager.TestRMRestart
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
            hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
            hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
            hadoop.yarn.client.api.impl.TestYarnClient
            hadoop.yarn.client.cli.TestLogsCLI
            hadoop.yarn.server.resourcemanager.TestRMRestart
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
            hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
            hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
            hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
            hadoop.mapred.TestMRCJCFileOutputCommitter
            hadoop.mapred.TestMRTimelineEventHandling
            hadoop.yarn.client.api.impl.TestYarnClient
            hadoop.yarn.client.cli.TestLogsCLI
            hadoop.yarn.server.resourcemanager.TestRMRestart
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
            hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization



          Subsystem Report/Notes
          Docker Image: yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12817009/YARN-2928.03.patch
          JIRA Issue YARN-2928
          Optional Tests asflicense mvnsite compile javac javadoc mvninstall unit findbugs checkstyle xml shellcheck shelldocs cc
          uname Linux 190f8f06f837 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9bdb5be
          Default Java 1.8.0_91
          shellcheck v0.4.4
          findbugs v3.0.0
          mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-mvninstall-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
          javac https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/12259/testReport/
          asflicense https://builds.apache.org/job/PreCommit-YARN-Build/12259/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-project hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: .
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/12259/console
          Powered by Apache Yetus 0.3.0 http://yetus.apache.org

          This message was automatically generated.

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: . Console output https://builds.apache.org/job/PreCommit-YARN-Build/12259/console Powered by Apache Yetus 0.3.0 http://yetus.apache.org This message was automatically generated.
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 34s Docker mode activated.
          0 shelldocs 0m 0s Shelldocs was not available.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 84 new or modified test files.
          0 mvndep 0m 12s Maven dependency ordering for branch
          +1 mvninstall 7m 29s trunk passed
          +1 compile 8m 17s trunk passed
          +1 checkstyle 2m 27s trunk passed
          +1 mvnsite 11m 59s trunk passed
          +1 mvneclipse 4m 54s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          +1 findbugs 10m 38s trunk passed
          +1 javadoc 7m 41s trunk passed
          0 mvndep 0m 53s Maven dependency ordering for patch
          -1 mvninstall 0m 20s hadoop-mapreduce-client-app in the patch failed.
          +1 compile 8m 41s the patch passed
          +1 cc 8m 41s the patch passed
          -1 javac 8m 41s root generated 2 new + 708 unchanged - 0 fixed = 710 total (was 708)
          -1 checkstyle 2m 33s root: The patch generated 103 new + 3267 unchanged - 128 fixed = 3370 total (was 3395)
          +1 mvnsite 14m 59s the patch passed
          +1 mvneclipse 6m 40s the patch passed
          +1 shellcheck 0m 12s There were no new shellcheck issues.
          +1 whitespace 0m 2s The patch has no whitespace issues.
          +1 xml 0m 18s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          +1 findbugs 14m 42s the patch passed
          +1 javadoc 0m 17s hadoop-project in the patch passed.
          +1 javadoc 1m 1s hadoop-common in the patch passed.
          +1 javadoc 1m 45s hadoop-yarn-project_hadoop-yarn generated 0 new + 6621 unchanged - 1 fixed = 6621 total (was 6622)
          +1 javadoc 0m 25s hadoop-yarn-api in the patch passed.
          +1 javadoc 0m 41s hadoop-yarn-common in the patch passed.
          +1 javadoc 1m 7s hadoop-yarn-server in the patch passed.
          +1 javadoc 0m 21s hadoop-yarn-server-common in the patch passed.
          +1 javadoc 0m 31s hadoop-yarn-server-nodemanager in the patch passed.
          +1 javadoc 0m 20s hadoop-yarn-server-timelineservice in the patch passed.
          +1 javadoc 0m 25s hadoop-mapreduce-client-app in the patch passed.
          +1 javadoc 0m 31s hadoop-mapreduce-client-core in the patch passed.
          +1 javadoc 0m 20s hadoop-mapreduce-client-jobclient in the patch passed.
          +1 javadoc 0m 18s hadoop-yarn-applications-distributedshell in the patch passed.
          +1 javadoc 0m 20s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client generated 0 new + 155 unchanged - 1 fixed = 155 total (was 156)
          +1 javadoc 0m 27s hadoop-yarn-server-resourcemanager in the patch passed.
          +1 javadoc 0m 16s hadoop-yarn-server-tests in the patch passed.
          +1 javadoc 0m 15s hadoop-yarn-server-timelineservice-hbase-tests in the patch passed.
          +1 javadoc 0m 13s hadoop-yarn-site in the patch passed.
          +1 unit 0m 12s hadoop-project in the patch passed.
          +1 unit 11m 0s hadoop-common in the patch passed.
          -1 unit 52m 41s hadoop-yarn in the patch failed.
          -1 unit 0m 24s hadoop-yarn-api in the patch failed.
          -1 unit 0m 21s hadoop-yarn-common in the patch failed.
          -1 unit 39m 18s hadoop-yarn-server in the patch failed.
          -1 unit 0m 24s hadoop-yarn-server-common in the patch failed.
          -1 unit 3m 42s hadoop-yarn-server-nodemanager in the patch failed.
          -1 unit 3m 50s hadoop-yarn-server-timelineservice in the patch failed.
          -1 unit 0m 26s hadoop-mapreduce-client-app in the patch failed.
          +1 unit 2m 30s hadoop-mapreduce-client-core in the patch passed.
          -1 unit 0m 27s hadoop-mapreduce-client-jobclient in the patch failed.
          -1 unit 0m 35s hadoop-yarn-applications-distributedshell in the patch failed.
          -1 unit 0m 19s hadoop-yarn-client in the patch failed.
          +1 unit 35m 45s hadoop-yarn-server-resourcemanager in the patch passed.
          -1 unit 0m 14s hadoop-yarn-server-tests in the patch failed.
          -1 unit 88m 44s hadoop-yarn-server-timelineservice-hbase-tests in the patch failed.
          +1 unit 0m 12s hadoop-yarn-site in the patch passed.
          -1 asflicense 0m 24s The patch generated 1 ASF License warnings.
          370m 7s



          Reason Tests
          Failed junit tests hadoop.yarn.applications.distributedshell.TestDistributedShell
            hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels
            hadoop.yarn.client.api.impl.TestTimelineClient
            hadoop.yarn.client.api.impl.TestTimelineClient
            hadoop.hdfs.web.TestWebHDFS
            hadoop.hdfs.TestRollingUpgrade
            hadoop.yarn.applications.distributedshell.TestDistributedShell
            hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels
            hadoop.yarn.client.api.impl.TestTimelineClient
            hadoop.yarn.applications.distributedshell.TestDistributedShell
            hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels
            hadoop.hdfs.web.TestWebHDFS
            hadoop.hdfs.TestRollingUpgrade
            hadoop.yarn.applications.distributedshell.TestDistributedShell
            hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels
            hadoop.yarn.client.api.impl.TestTimelineClient
          Timed out junit tests org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12817009/YARN-2928.03.patch
          JIRA Issue YARN-2928
          Optional Tests asflicense mvnsite compile javac javadoc mvninstall unit findbugs checkstyle xml shellcheck shelldocs cc
          uname Linux 2df9227f313e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9bdb5be
          Default Java 1.8.0_91
          shellcheck v0.4.4
          findbugs v3.0.0
          mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-mvninstall-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
          javac https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/12258/testReport/
          asflicense https://builds.apache.org/job/PreCommit-YARN-Build/12258/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-project hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: .
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/12258/console
          Powered by Apache Yetus 0.3.0 http://yetus.apache.org

          This message was automatically generated.

          sjlee0 Sangjin Lee added a comment -

          More analysis:
          (1) mvninstall failure (hadoop-mapreduce-client-app)
          I cannot reproduce this locally. It looks as though the hadoop-mapreduce-client-app build picked up an older version of hadoop-mapreduce-client-core that does not contain this new code; it appears that Jenkins is somehow picking up the wrong jars.

          (2) javac errors
          Both are from files and code that we did not touch.

          (3) javadoc
          Fixed.

          (4) checkstyle
          Fixed 3 more. The remaining checkstyle violations are either unrelated to our code changes or impractical to fix as part of this JIRA (e.g. adding javadoc to a large number of existing classes). I think we have fixed as many as possible without burdening this JIRA with a lot of unrelated changes.

          (5) unit test failures

          • TestGangliaMetrics: known issue (HADOOP-12588)
          • TestZKFailoverController: appears that it encountered "connection refused" errors (environment?)
          • TestYarnClient: known issue (YARN-4202, YARN-4954)
          • TestLogsCLI: known issue (YARN-5313)
          • TestContainerManagerSecurity: known issue (YARN-4342)
          • TestMiniYarnClusterNodeUtilization: known issue (YARN-4453)

          That leaves TestMRTimelineEventHandling#testMRNewTimelineServiceEventHandling. This is a new test we added. I tried to reproduce the failure locally a number of times, but could not. I'm not sure how it could fail other than by failing to create the directory; the expected directory path is correct. I'll see whether this failure persists in the next run.
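          A minimal sketch of one way to chase a flaky test like this locally: rerun the single test in its module a few times with Maven's `-Dtest` filter. The module path and test name below are taken from the report above; the loop count is arbitrary, and the actual `mvn` invocation is left commented out so the sketch runs anywhere.

```shell
# Sketch: rerun one suspected-flaky test repeatedly via Maven Surefire.
# MODULE and TEST come from the QA report; loop count (3) is arbitrary.
MODULE=hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
TEST='TestMRTimelineEventHandling#testMRNewTimelineServiceEventHandling'
for i in 1 2 3; do
  echo "run $i: mvn -pl $MODULE test -Dtest=$TEST"
  # mvn -pl "$MODULE" test -Dtest="$TEST"   # uncomment to actually run the test
done
```

A failure that reproduces only on Jenkins but never locally often points at environment differences (stale artifacts in the local repository, directory permissions, or port conflicts) rather than the test itself.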

          sjlee0 Sangjin Lee added a comment -

          Posted patch v.3.

          Fixed a few more javadoc and checkstyle errors.

          sjlee0 Sangjin Lee added a comment -

          The latest run on patch v.2 seems more promising, although one result still seems strange. I will look into it.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 26s Docker mode activated.
          0 shelldocs 0m 1s Shelldocs was not available.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 84 new or modified test files.
          0 mvndep 1m 46s Maven dependency ordering for branch
          +1 mvninstall 11m 19s trunk passed
          +1 compile 11m 31s trunk passed
          +1 checkstyle 3m 0s trunk passed
          +1 mvnsite 16m 31s trunk passed
          +1 mvneclipse 6m 12s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          +1 findbugs 15m 18s trunk passed
          +1 javadoc 9m 43s trunk passed
          0 mvndep 0m 54s Maven dependency ordering for patch
          -1 mvninstall 0m 21s hadoop-mapreduce-client-app in the patch failed.
          +1 compile 10m 31s the patch passed
          +1 cc 10m 31s the patch passed
          -1 javac 10m 31s root generated 2 new + 709 unchanged - 0 fixed = 711 total (was 709)
          -1 checkstyle 2m 51s root: The patch generated 108 new + 3268 unchanged - 127 fixed = 3376 total (was 3395)
          +1 mvnsite 18m 55s the patch passed
          +1 mvneclipse 7m 4s the patch passed
          +1 shellcheck 0m 16s There were no new shellcheck issues.
          +1 whitespace 0m 1s The patch has no whitespace issues.
          +1 xml 0m 20s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          +1 findbugs 15m 33s the patch passed
          -1 javadoc 1m 36s hadoop-yarn-project_hadoop-yarn generated 1 new + 6621 unchanged - 1 fixed = 6622 total (was 6622)
          -1 javadoc 0m 20s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client generated 1 new + 155 unchanged - 1 fixed = 156 total (was 156)
          +1 unit 0m 11s hadoop-project in the patch passed.
          -1 unit 8m 48s hadoop-common in the patch failed.
          -1 unit 61m 47s hadoop-yarn in the patch failed.
          +1 unit 0m 31s hadoop-yarn-api in the patch passed.
          +1 unit 2m 23s hadoop-yarn-common in the patch passed.
          -1 unit 56m 12s hadoop-yarn-server in the patch failed.
          +1 unit 0m 32s hadoop-yarn-server-common in the patch passed.
          +1 unit 13m 7s hadoop-yarn-server-nodemanager in the patch passed.
          +1 unit 0m 54s hadoop-yarn-server-timelineservice in the patch passed.
          +1 unit 9m 6s hadoop-mapreduce-client-app in the patch passed.
          +1 unit 2m 12s hadoop-mapreduce-client-core in the patch passed.
          -1 unit 121m 32s hadoop-mapreduce-client-jobclient in the patch failed.
          +1 unit 10m 22s hadoop-yarn-applications-distributedshell in the patch passed.
          -1 unit 8m 51s hadoop-yarn-client in the patch failed.
          +1 unit 34m 12s hadoop-yarn-server-resourcemanager in the patch passed.
          -1 unit 4m 42s hadoop-yarn-server-tests in the patch failed.
          +1 unit 4m 19s hadoop-yarn-server-timelineservice-hbase-tests in the patch passed.
          +1 unit 0m 23s hadoop-yarn-site in the patch passed.
          -1 asflicense 0m 37s The patch generated 4 ASF License warnings.
          500m 13s



          Reason Tests
          Failed junit tests hadoop.metrics2.impl.TestGangliaMetrics
            hadoop.ha.TestZKFailoverController
            hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
            hadoop.yarn.client.cli.TestLogsCLI
            hadoop.yarn.client.api.impl.TestYarnClient
            hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
            hadoop.mapred.TestMRTimelineEventHandling
            hadoop.yarn.client.cli.TestLogsCLI
            hadoop.yarn.client.api.impl.TestYarnClient
            hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12816978/YARN-2928.02.patch
          JIRA Issue YARN-2928
          Optional Tests asflicense mvnsite compile javac javadoc mvninstall unit findbugs checkstyle xml shellcheck shelldocs cc
          uname Linux 305e33b49acc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / da6f1b8
          Default Java 1.8.0_91
          shellcheck v0.4.4
          findbugs v3.0.0
          mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/patch-mvninstall-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
          javac https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/diff-checkstyle-root.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/12252/testReport/
          asflicense https://builds.apache.org/job/PreCommit-YARN-Build/12252/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-project hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: .
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/12252/console
          Powered by Apache Yetus 0.3.0 http://yetus.apache.org

          This message was automatically generated.
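For readers triaging a Yetus vote table like the one above, the -1 lines can be tallied mechanically. A minimal sketch, assuming the report has been saved to a local text file (qa-report.txt is a hypothetical name; the sample lines are copied from the table above):

```shell
# Tally the failing (-1) subsystem checks in a saved Yetus report.
# qa-report.txt is a hypothetical file name used for illustration.
cat > qa-report.txt <<'EOF'
-1 mvninstall 0m 21s hadoop-mapreduce-client-app in the patch failed.
+1 compile 10m 31s the patch passed
-1 javac 10m 31s root generated 2 new + 709 unchanged - 0 fixed = 711 total (was 709)
EOF
# Count lines that record a -1 vote.
grep -c '^-1 ' qa-report.txt   # -> 2
```

The same pattern works on the full console output linked from the report, since every vote row starts with its vote value.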

          varun_saxena Varun Saxena added a comment - - edited

          The test result is weird; the javadoc count seems to be double the previous number of errors.
          We can probably just run it locally and go ahead.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 23s Docker mode activated.
          0 shelldocs 0m 0s Shelldocs was not available.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 84 new or modified test files.
          0 mvndep 0m 16s Maven dependency ordering for branch
          -1 mvninstall 0m 54s root in trunk failed.
          -1 compile 0m 57s root in trunk failed.
          +1 checkstyle 2m 32s trunk passed
          -1 mvnsite 0m 24s hadoop-yarn-api in trunk failed.
          -1 mvnsite 0m 42s hadoop-yarn-common in trunk failed.
          -1 mvnsite 0m 21s hadoop-yarn-server-nodemanager in trunk failed.
          -1 mvnsite 0m 54s hadoop-yarn-server-resourcemanager in trunk failed.
          -1 mvnsite 0m 35s hadoop-mapreduce-client-app in trunk failed.
          +1 mvneclipse 6m 50s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          -1 findbugs 2m 11s branch/hadoop-common-project/hadoop-common no findbugs output file (hadoop-common-project/hadoop-common/target/findbugsXml.xml)
          -1 findbugs 1m 29s branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager no findbugs output file (hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/target/findbugsXml.xml)
          -1 findbugs 0m 40s branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests no findbugs output file (hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
          -1 findbugs 0m 28s hadoop-mapreduce-client-app in trunk failed.
          +1 javadoc 10m 4s trunk passed
          0 mvndep 0m 27s Maven dependency ordering for patch
          +1 mvninstall 18m 46s the patch passed
          -1 compile 5m 24s root in the patch failed.
          -1 cc 5m 24s root in the patch failed.
          -1 javac 5m 24s root in the patch failed.
          -1 checkstyle 4m 11s root: The patch generated 6771 new + 3395 unchanged - 0 fixed = 10166 total (was 3395)
          -1 mvnsite 1m 2s hadoop-yarn in the patch failed.
          +1 mvneclipse 7m 5s the patch passed
          -1 shellcheck 0m 16s The patch generated 74 new + 75 unchanged - 0 fixed = 149 total (was 75)
          +1 whitespace 0m 2s The patch has no whitespace issues.
          +1 xml 0m 21s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          -1 findbugs 1m 17s patch/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core no findbugs output file (hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/target/findbugsXml.xml)
          -1 javadoc 0m 59s hadoop-common-project_hadoop-common generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1)
          -1 javadoc 2m 7s hadoop-yarn-project_hadoop-yarn generated 6622 new + 6622 unchanged - 0 fixed = 13244 total (was 6622)
          -1 javadoc 0m 25s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 156 new + 156 unchanged - 0 fixed = 312 total (was 156)
          -1 javadoc 0m 36s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 4579 new + 4579 unchanged - 0 fixed = 9158 total (was 4579)
          -1 javadoc 1m 19s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server generated 1664 new + 1664 unchanged - 0 fixed = 3328 total (was 1664)
          -1 javadoc 0m 28s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common generated 163 new + 163 unchanged - 0 fixed = 326 total (was 163)
          -1 javadoc 0m 20s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 279 new + 279 unchanged - 0 fixed = 558 total (was 279)
          -1 javadoc 0m 33s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 989 new + 989 unchanged - 0 fixed = 1978 total (was 989)
          -1 javadoc 0m 20s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client generated 156 new + 156 unchanged - 0 fixed = 312 total (was 156)
          -1 javadoc 0m 18s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell generated 13 new + 13 unchanged - 0 fixed = 26 total (was 13)
          -1 javadoc 0m 30s hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core generated 2508 new + 2508 unchanged - 0 fixed = 5016 total (was 2508)
          -1 javadoc 0m 23s hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app generated 215 new + 215 unchanged - 0 fixed = 430 total (was 215)
          +1 unit 0m 12s hadoop-project in the patch passed.
          -1 unit 10m 12s hadoop-common in the patch failed.
          -1 unit 64m 31s hadoop-yarn in the patch failed.
          +1 unit 0m 29s hadoop-yarn-api in the patch passed.
          +1 unit 2m 21s hadoop-yarn-common in the patch passed.
          -1 unit 56m 0s hadoop-yarn-server in the patch failed.
          +1 unit 0m 34s hadoop-yarn-server-common in the patch passed.
          +1 unit 13m 23s hadoop-yarn-server-nodemanager in the patch passed.
          +1 unit 0m 54s hadoop-yarn-server-timelineservice in the patch passed.
          -1 unit 34m 37s hadoop-yarn-server-resourcemanager in the patch failed.
          -1 unit 4m 36s hadoop-yarn-server-tests in the patch failed.
          -1 unit 8m 38s hadoop-yarn-client in the patch failed.
          +1 unit 4m 17s hadoop-yarn-server-timelineservice-hbase-tests in the patch passed.
          +1 unit 9m 30s hadoop-yarn-applications-distributedshell in the patch passed.
          +1 unit 0m 12s hadoop-yarn-site in the patch passed.
          +1 unit 2m 9s hadoop-mapreduce-client-core in the patch passed.
          +1 unit 9m 12s hadoop-mapreduce-client-app in the patch passed.
          +1 unit 121m 45s hadoop-mapreduce-client-jobclient in the patch passed.
          +1 asflicense 0m 34s The patch does not generate ASF License warnings.
          479m 52s



          Reason Tests
          Failed junit tests hadoop.ha.TestZKFailoverController
            hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority
            hadoop.yarn.client.cli.TestLogsCLI
            hadoop.yarn.client.api.impl.TestYarnClient
            hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority
            hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority
            hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
            hadoop.yarn.client.cli.TestLogsCLI
            hadoop.yarn.client.api.impl.TestYarnClient



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12816978/YARN-2928.02.patch
          JIRA Issue YARN-2928
          Optional Tests asflicense mvnsite compile javac javadoc mvninstall unit findbugs checkstyle xml shellcheck shelldocs cc
          uname Linux dd2a67d8122d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / da6f1b8
          Default Java 1.8.0_91
          mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/branch-mvninstall-root.txt
          compile https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/branch-compile-root.txt
          mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
          mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
          mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
          mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/branch-mvnsite-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
          shellcheck v0.4.4
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/branch-findbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
          compile https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-compile-root.txt
          cc https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-compile-root.txt
          javac https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-compile-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/diff-checkstyle-root.txt
          mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn.txt
          shellcheck https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/diff-patch-shellcheck.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/diff-javadoc-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/diff-javadoc-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt https://builds.apache.org/job/PreCommit-YARN-Build/12251/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/12251/testReport/
          modules C: hadoop-project hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient U: .
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/12251/console
          Powered by Apache Yetus 0.3.0 http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 23s Docker mode activated.
          0 shelldocs 0m 1s Shelldocs was not available.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 84 new or modified test files.
          0 mvndep 0m 13s Maven dependency ordering for branch
          +1 mvninstall 6m 50s trunk passed
          +1 compile 7m 6s trunk passed
          +1 checkstyle 2m 24s trunk passed
          +1 mvnsite 11m 18s trunk passed
          +1 mvneclipse 4m 42s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          +1 findbugs 9m 36s trunk passed
          +1 javadoc 7m 2s trunk passed
          0 mvndep 0m 53s Maven dependency ordering for patch
          -1 mvninstall 0m 18s hadoop-mapreduce-client-app in the patch failed.
          -1 mvninstall 0m 49s hadoop-yarn-server-timelineservice-hbase-tests in the patch failed.
          -1 compile 3m 24s root in the patch failed.
          -1 cc 3m 24s root in the patch failed.
          -1 javac 3m 24s root in the patch failed.
          +1 checkstyle 2m 50s the patch passed
          -1 mvnsite 2m 32s hadoop-yarn in the patch failed.
          -1 mvnsite 0m 39s hadoop-yarn-common in the patch failed.
          -1 mvnsite 1m 31s hadoop-yarn-server in the patch failed.
          -1 mvnsite 0m 39s hadoop-mapreduce-client-app in the patch failed.
          -1 mvnsite 5m 28s hadoop-yarn-server-timelineservice-hbase-tests in the patch failed.
          +1 mvneclipse 8m 52s the patch passed
          +1 shellcheck 0m 13s The patch generated 0 new + 74 unchanged - 1 fixed = 74 total (was 75)
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 19s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          -1 findbugs 2m 2s patch/hadoop-common-project/hadoop-common no findbugs output file (hadoop-common-project/hadoop-common/target/findbugsXml.xml)
          -1 findbugs 1m 47s patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api no findbugs output file (hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/target/findbugsXml.xml)
          -1 findbugs 1m 31s patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common no findbugs output file (hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/findbugsXml.xml)
          -1 findbugs 1m 4s patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common no findbugs output file (hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/target/findbugsXml.xml)
          -1 findbugs 1m 4s patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager no findbugs output file (hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/findbugsXml.xml)
          -1 findbugs 1m 44s patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager no findbugs output file (hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/target/findbugsXml.xml)
          -1 findbugs 0m 44s patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests no findbugs output file (hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
          -1 javadoc 2m 30s hadoop-yarn-project_hadoop-yarn generated 1 new + 6621 unchanged - 1 fixed = 6622 total (was 6622)
          -1 javadoc 0m 22s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client generated 1 new + 155 unchanged - 1 fixed = 156 total (was 156)
          +1 unit 0m 14s hadoop-project in the patch passed.
          -1 unit 3m 17s hadoop-common in the patch failed.
          -1 unit 0m 46s hadoop-yarn in the patch failed.
          -1 unit 0m 16s hadoop-yarn-api in the patch failed.
          +1 unit 3m 27s hadoop-yarn-common in the patch passed.
          -1 unit 1m 2s hadoop-yarn-server in the patch failed.
          +1 unit 0m 50s hadoop-yarn-server-common in the patch passed.
          +1 unit 14m 54s hadoop-yarn-server-nodemanager in the patch passed.
          +1 unit 1m 20s hadoop-yarn-server-timelineservice in the patch passed.
          +1 unit 12m 0s hadoop-mapreduce-client-app in the patch passed.
          +1 unit 3m 18s hadoop-mapreduce-client-core in the patch passed.
          -1 unit 11m 27s hadoop-mapreduce-client-jobclient in the patch failed.
          +1 unit 10m 15s hadoop-yarn-applications-distributedshell in the patch passed.
          -1 unit 9m 48s hadoop-yarn-client in the patch failed.
          -1 unit 13m 0s hadoop-yarn-server-resourcemanager in the patch failed.
          -1 unit 5m 5s hadoop-yarn-server-tests in the patch failed.
          +1 unit 4m 29s hadoop-yarn-server-timelineservice-hbase-tests in the patch passed.
          +1 unit 0m 13s hadoop-yarn-site in the patch passed.
          +1 asflicense 0m 25s The patch does not generate ASF License warnings.
          238m 48s



          Reason Tests
          Failed junit tests hadoop.ha.TestZKFailoverController
            hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
            hadoop.yarn.server.resourcemanager.rmapp.attempt.TestRMAppAttemptTransitions
            hadoop.yarn.client.cli.TestLogsCLI
            hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
            hadoop.yarn.server.resourcemanager.rmapp.attempt.TestRMAppAttemptTransitions
            hadoop.mapred.jobcontrol.TestLocalJobControl
            hadoop.yarn.client.cli.TestLogsCLI
            hadoop.yarn.server.resourcemanager.rmapp.attempt.TestRMAppAttemptTransitions
            hadoop.yarn.server.TestContainerManagerSecurity
            hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
          Timed out junit tests org.apache.hadoop.yarn.server.resourcemanager.TestLeaderElectorService



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12816870/YARN-2928.01.patch
          JIRA Issue YARN-2928
          Optional Tests asflicense mvnsite compile javac javadoc mvninstall unit findbugs checkstyle xml shellcheck shelldocs cc
          uname Linux 8c01603832f9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / da6f1b8
          Default Java 1.8.0_91
          shellcheck v0.4.4
          findbugs v3.0.0
          mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-mvninstall-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
          mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt
          compile https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-compile-root.txt
          cc https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-compile-root.txt
          javac https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-compile-root.txt
          mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn.txt
          mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
          mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
          mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-mvnsite-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
          mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/12249/testReport/
          modules C: hadoop-project hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: .
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/12249/console
          Powered by Apache Yetus 0.3.0 http://yetus.apache.org

          This message was automatically generated.

https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-compile-root.txt javac https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-compile-root.txt mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn.txt mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-mvnsite-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn.txt javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt unit 
https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt https://builds.apache.org/job/PreCommit-YARN-Build/12249/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt Test Results https://builds.apache.org/job/PreCommit-YARN-Build/12249/testReport/ modules C: hadoop-project hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: . Console output https://builds.apache.org/job/PreCommit-YARN-Build/12249/console Powered by Apache Yetus 0.3.0 http://yetus.apache.org This message was automatically generated.
          sjlee0 Sangjin Lee added a comment -

          Addressed javac warnings, javadoc warnings, and fixable checkstyle violations.

          I suspect the unit test failures are unrelated to this branch.

          sjlee0 Sangjin Lee added a comment -

          I'm fixing javac warnings, javadoc warnings, and checkstyle violations. I'll kick off another build when I'm done with that, which should be shortly.

          varun_saxena Varun Saxena added a comment - edited

          Should we invoke the build again?
          Tests like TestQueuingContainerManager really should not fail on trunk, even after applying the YARN-2928 changes.

          sjlee0 Sangjin Lee added a comment -

          There are also strange compilation failures that are reported as mvninstall failures, along with some "unit test failures". I would have thought that with Docker you would not get interference from concurrently running builds?

          I'm analyzing javac warnings, unit test failures, and javadoc errors. Findbugs is clean.

          jrottinghuis Joep Rottinghuis added a comment -

          The license part is bogus. The files reported without a license are:

           !????? /testptch/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/output/_SUCCESS
           !????? /testptch/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/output/part-00000
           !????? /testptch/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/input/part-0
           !????? /testptch/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/input/part-1
          

          Jenkins URLs don't seem to resolve, so I cannot check the other items right now.
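
          Those paths look like leftover MapReduce test job output (files with no license header) that the asflicense (Apache RAT) check then flags. The following self-contained sketch reproduces the situation in a throwaway git repository; all paths and names here are illustrative, not the actual Hadoop tree:

```shell
#!/bin/sh
# Illustrative only: recreate stray test-output files like the ones the
# asflicense check flagged, then surface them as untracked build leftovers.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "dev"
git commit -q --allow-empty -m "base"
# Simulate job output/input leftovers with no license headers
mkdir -p output input
touch output/_SUCCESS output/part-00000 input/part-0 input/part-1
# Lists each untracked leftover file, one per line (e.g. output/_SUCCESS)
git ls-files --others
# Dry run of cleaning them up before re-running the license check
git clean -xdn
# git clean -xdf   # would actually delete them once verified
```

          Cleaning such leftovers out of the working tree (or having the tests write them under target/, which is ignored) would make the ASF license warnings disappear.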

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 22s Docker mode activated.
          0 shelldocs 0m 1s Shelldocs was not available.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 84 new or modified test files.
          0 mvndep 1m 52s Maven dependency ordering for branch
          +1 mvninstall 6m 40s trunk passed
          +1 compile 6m 46s trunk passed
          +1 checkstyle 2m 22s trunk passed
          +1 mvnsite 10m 52s trunk passed
          +1 mvneclipse 4m 36s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          +1 findbugs 10m 16s trunk passed
          +1 javadoc 7m 5s trunk passed
          0 mvndep 0m 52s Maven dependency ordering for patch
          -1 mvninstall 0m 18s hadoop-mapreduce-client-app in the patch failed.
          +1 compile 7m 53s the patch passed
          +1 cc 7m 53s the patch passed
          -1 javac 7m 53s root generated 13 new + 708 unchanged - 0 fixed = 721 total (was 708)
          -1 checkstyle 2m 40s root: The patch generated 493 new + 3285 unchanged - 114 fixed = 3778 total (was 3399)
          +1 mvnsite 14m 39s the patch passed
          +1 mvneclipse 6m 12s the patch passed
          +1 shellcheck 0m 12s There were no new shellcheck issues.
          +1 whitespace 0m 2s The patch has no whitespace issues.
          +1 xml 0m 18s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          +1 findbugs 13m 5s the patch passed
          -1 javadoc 1m 35s hadoop-yarn-project_hadoop-yarn generated 19 new + 6622 unchanged - 0 fixed = 6641 total (was 6622)
          -1 javadoc 0m 20s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 2 new + 156 unchanged - 0 fixed = 158 total (was 156)
          -1 javadoc 0m 31s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 7 new + 4579 unchanged - 0 fixed = 4586 total (was 4579)
          -1 javadoc 0m 55s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server generated 6 new + 1664 unchanged - 0 fixed = 1670 total (was 1664)
          -1 javadoc 0m 18s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common generated 6 new + 163 unchanged - 0 fixed = 169 total (was 163)
          -1 javadoc 0m 26s hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core generated 2 new + 2508 unchanged - 0 fixed = 2510 total (was 2508)
          -1 javadoc 0m 18s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client generated 4 new + 156 unchanged - 0 fixed = 160 total (was 156)
          +1 unit 0m 11s hadoop-project in the patch passed.
          +1 unit 8m 58s hadoop-common in the patch passed.
          -1 unit 56m 38s hadoop-yarn in the patch failed.
          +1 unit 0m 28s hadoop-yarn-api in the patch passed.
          +1 unit 2m 22s hadoop-yarn-common in the patch passed.
          -1 unit 55m 3s hadoop-yarn-server in the patch failed.
          +1 unit 0m 30s hadoop-yarn-server-common in the patch passed.
          -1 unit 13m 1s hadoop-yarn-server-nodemanager in the patch failed.
          +1 unit 0m 51s hadoop-yarn-server-timelineservice in the patch passed.
          +1 unit 8m 52s hadoop-mapreduce-client-app in the patch passed.
          -1 unit 2m 5s hadoop-mapreduce-client-core in the patch failed.
          -1 unit 119m 0s hadoop-mapreduce-client-jobclient in the patch failed.
          -1 unit 1m 2s hadoop-yarn-applications-distributedshell in the patch failed.
          -1 unit 0m 31s hadoop-yarn-client in the patch failed.
          -1 unit 0m 39s hadoop-yarn-server-resourcemanager in the patch failed.
          -1 unit 0m 32s hadoop-yarn-server-tests in the patch failed.
          -1 unit 0m 51s hadoop-yarn-server-timelineservice-hbase-tests in the patch failed.
          +1 unit 0m 23s hadoop-yarn-site in the patch passed.
          -1 asflicense 0m 36s The patch generated 4 ASF License warnings.
          392m 10s



          Reason Tests
          Failed junit tests hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
            hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
            hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
            hadoop.mapreduce.tools.TestCLI
            hadoop.mapred.TestMRCJCFileInputFormat
            hadoop.mapred.TestMROpportunisticMaps
            hadoop.mapred.TestMRCJCFileOutputCommitter
            hadoop.mapred.TestMRTimelineEventHandling
          Timed out junit tests org.apache.hadoop.mapred.TestMRIntermediateDataEncryption



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12816870/YARN-2928.01.patch
          JIRA Issue YARN-2928
          Optional Tests asflicense mvnsite compile javac javadoc mvninstall unit findbugs checkstyle xml shellcheck shelldocs cc
          uname Linux a863adcac935 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 30ee57c
          Default Java 1.8.0_91
          shellcheck v0.4.4
          findbugs v3.0.0
          mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-mvninstall-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
          javac https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-checkstyle-root.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-javadoc-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
          javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt
          unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/12237/testReport/
          asflicense https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-project hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: .
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/12237/console
          Powered by Apache Yetus 0.3.0 http://yetus.apache.org

          This message was automatically generated.

          Show
          hadoopqa Hadoop QA added a comment - -1 overall Vote Subsystem Runtime Comment 0 reexec 0m 22s Docker mode activated. 0 shelldocs 0m 1s Shelldocs was not available. +1 @author 0m 0s The patch does not contain any @author tags. +1 test4tests 0m 0s The patch appears to include 84 new or modified test files. 0 mvndep 1m 52s Maven dependency ordering for branch +1 mvninstall 6m 40s trunk passed +1 compile 6m 46s trunk passed +1 checkstyle 2m 22s trunk passed +1 mvnsite 10m 52s trunk passed +1 mvneclipse 4m 36s trunk passed 0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site +1 findbugs 10m 16s trunk passed +1 javadoc 7m 5s trunk passed 0 mvndep 0m 52s Maven dependency ordering for patch -1 mvninstall 0m 18s hadoop-mapreduce-client-app in the patch failed. +1 compile 7m 53s the patch passed +1 cc 7m 53s the patch passed -1 javac 7m 53s root generated 13 new + 708 unchanged - 0 fixed = 721 total (was 708) -1 checkstyle 2m 40s root: The patch generated 493 new + 3285 unchanged - 114 fixed = 3778 total (was 3399) +1 mvnsite 14m 39s the patch passed +1 mvneclipse 6m 12s the patch passed +1 shellcheck 0m 12s There were no new shellcheck issues. +1 whitespace 0m 2s The patch has no whitespace issues. +1 xml 0m 18s The patch has no ill-formed XML file. 
0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site +1 findbugs 13m 5s the patch passed -1 javadoc 1m 35s hadoop-yarn-project_hadoop-yarn generated 19 new + 6622 unchanged - 0 fixed = 6641 total (was 6622) -1 javadoc 0m 20s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 2 new + 156 unchanged - 0 fixed = 158 total (was 156) -1 javadoc 0m 31s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 7 new + 4579 unchanged - 0 fixed = 4586 total (was 4579) -1 javadoc 0m 55s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server generated 6 new + 1664 unchanged - 0 fixed = 1670 total (was 1664) -1 javadoc 0m 18s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common generated 6 new + 163 unchanged - 0 fixed = 169 total (was 163) -1 javadoc 0m 26s hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core generated 2 new + 2508 unchanged - 0 fixed = 2510 total (was 2508) -1 javadoc 0m 18s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client generated 4 new + 156 unchanged - 0 fixed = 160 total (was 156) +1 unit 0m 11s hadoop-project in the patch passed. +1 unit 8m 58s hadoop-common in the patch passed. -1 unit 56m 38s hadoop-yarn in the patch failed. +1 unit 0m 28s hadoop-yarn-api in the patch passed. +1 unit 2m 22s hadoop-yarn-common in the patch passed. -1 unit 55m 3s hadoop-yarn-server in the patch failed. +1 unit 0m 30s hadoop-yarn-server-common in the patch passed. -1 unit 13m 1s hadoop-yarn-server-nodemanager in the patch failed. +1 unit 0m 51s hadoop-yarn-server-timelineservice in the patch passed. +1 unit 8m 52s hadoop-mapreduce-client-app in the patch passed. -1 unit 2m 5s hadoop-mapreduce-client-core in the patch failed. -1 unit 119m 0s hadoop-mapreduce-client-jobclient in the patch failed. 
-1 unit 1m 2s hadoop-yarn-applications-distributedshell in the patch failed. -1 unit 0m 31s hadoop-yarn-client in the patch failed. -1 unit 0m 39s hadoop-yarn-server-resourcemanager in the patch failed. -1 unit 0m 32s hadoop-yarn-server-tests in the patch failed. -1 unit 0m 51s hadoop-yarn-server-timelineservice-hbase-tests in the patch failed. +1 unit 0m 23s hadoop-yarn-site in the patch passed. -1 asflicense 0m 36s The patch generated 4 ASF License warnings. 392m 10s Reason Tests Failed junit tests hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager   hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager   hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager   hadoop.mapreduce.tools.TestCLI   hadoop.mapred.TestMRCJCFileInputFormat   hadoop.mapred.TestMROpportunisticMaps   hadoop.mapred.TestMRCJCFileOutputCommitter   hadoop.mapred.TestMRTimelineEventHandling Timed out junit tests org.apache.hadoop.mapred.TestMRIntermediateDataEncryption Subsystem Report/Notes Docker Image:yetus/hadoop:9560f25 JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12816870/YARN-2928.01.patch JIRA Issue YARN-2928 Optional Tests asflicense mvnsite compile javac javadoc mvninstall unit findbugs checkstyle xml shellcheck shelldocs cc uname Linux a863adcac935 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux Build tool maven Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh git revision trunk / 30ee57c Default Java 1.8.0_91 shellcheck v0.4.4 findbugs v3.0.0 mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-mvninstall-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt javac https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-compile-javac-root.txt checkstyle 
https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-checkstyle-root.txt javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn.txt javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-javadoc-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt javadoc https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt unit 
https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt unit https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt unit test logs https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt Test Results https://builds.apache.org/job/PreCommit-YARN-Build/12237/testReport/ asflicense 
https://builds.apache.org/job/PreCommit-YARN-Build/12237/artifact/patchprocess/patch-asflicense-problems.txt modules C: hadoop-project hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: . Console output https://builds.apache.org/job/PreCommit-YARN-Build/12237/console Powered by Apache Yetus 0.3.0 http://yetus.apache.org This message was automatically generated.
          sjlee0 Sangjin Lee added a comment -

          I did a rebase with the trunk just now.

          Attached the complete patch for YARN-2928 to get a Jenkins run. The actual merge will be done via git merge.

          jrottinghuis Joep Rottinghuis added a comment -

          Note the https://issues.apache.org/jira/secure/attachment/12811409/timeline_service_v2_next_milestones.pdf attachment on this JIRA, which captures the outcome of the initial discussion of what the next milestones could look like.

          jrottinghuis Joep Rottinghuis added a comment -

          email thread on yarn-dev "[DISCUSS] merging YARN-2928 (Timeline Service v.2) to trunk": http://markmail.org/thread/bnpwpjhkbs6wsn7z

          sjlee0 Sangjin Lee added a comment -

          For those who are following this ticket, we are nearing the first merge to trunk milestone: http://markmail.org/message/27uk4iwqvihs335e

          Please check out the WIP documentation on YARN-3150. Thanks!

          jamestaylor James Taylor added a comment -

          I'm wondering, when adding dynamic columns into a view, do I still need to explicitly declare those dynamic columns (I assume yes, but would like to double-check)?

          Yes - instead of building up the SQL string with the dynamic column name, you'd execute the following:

          ALTER VIEW my_view ADD IF NOT EXISTS <dynamic column name> <dynamic column type>
          

          Then, when you query, you no longer need to use dynamic columns, but instead can select all of them:

          SELECT * FROM my_view;
          

          As far as APIs go, we'll be happy to give you the ones you need, Li Lu. The higher up in the stack you hook in, the easier it'll be. For reading from HBase, you can always fall back to creating a read-only view over your HBase table. We should work through a couple of examples, though: if you're storing multiple pieces of information in your row key, we'll want to make sure it's compatible with the way Phoenix expects it to be structured.
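As a minimal sketch of that fallback (the table name "entity", column family "i", and column names below are purely illustrative, not the actual timeline service schema; the declared types must match how the bytes were actually serialized):

```sql
-- Hypothetical: map a read-only Phoenix view onto an existing HBase
-- table "entity" whose rows were written directly through the HBase API.
CREATE VIEW "entity" (
    pk VARBINARY PRIMARY KEY,
    "i"."created_time" UNSIGNED_LONG,
    "i"."modified_time" UNSIGNED_LONG
);

-- The directly written data is now queryable with ad hoc SQL:
SELECT "created_time" FROM "entity" LIMIT 10;
```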

          For writing to HBase, I think it'd be good to re-test the Phoenix write path once I finish PHOENIX-2028 (with your KeyPrefixRegionSplitPolicy installed on the table). If it's still not fast enough, then there are a number of options:

          • Use PDataType.toBytes() to get the KeyValue value bytes
          • Use PhoenixRuntime APIs to create the row key if they encapsulate multiple pieces of information
          • Create new APIs as needed
          gtCarrera9 Li Lu added a comment -

          Oh, one more thing James Taylor: are there any plans to make the PDataType APIs public and/or stable, or at least limited-public to YARN? I believe that would be very helpful for us. Thanks!

          gtCarrera9 Li Lu added a comment -

          Hi James Taylor, thank you very much for your great help! Some clarifications on my questions...

          For your configuration/metric key-value pair, how are they named? Do you know the possible set of key values in advance? Or are they known more-or-less on-the-fly?

          For our use case they're completely on-the-fly. For each timeline entity, we plan to store each of its configurations/metrics in one dynamic column. Different entities may have completely different configs/metrics; for example, a MapReduce job may have a completely different set of configs from a Tez job. Therefore, we need to generate all columns for configs/metrics dynamically. I'm wondering, when adding dynamic columns into a view, do I still need to explicitly declare those dynamic columns (I assume yes, but would like to double-check)?

          Are you thinking to have a secondary table that's a rollup aggregation of more raw data? Is that required, or is it more of a convenience for the user? If the raw data is Phoenix-queryable, then I think you have a lot of options. Can you point me to some more info on your design?

          Yes, we are considering having multiple levels of aggregation tables, each with a different granularity. For example, we're currently planning to do the first level (application-level) aggregation from an HBase table to a Phoenix table. Then we can aggregate flow-level information based on our application-level aggregation (since each application belongs to exactly one flow). In this way, we can temporarily work around the write-throughput limitation of Phoenix while still supporting SQL queries on aggregated data. If the Phoenix PDataTypes are stable, is it possible for us to do the following two things?

          1. Use HBase API and PDataTypes to read a Phoenix table, and read dynamic columns iteratively.
          2. Use HBase API and PDataTypes to write a Phoenix table, and write dynamic columns iteratively.
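The two-level aggregation described above could, for instance, look roughly like the following (the table and column names are hypothetical, invented only to illustrate the rollup; they are not the actual timeline service schema):

```sql
-- Hypothetical flow-level rollup over an application-level aggregation
-- table; it relies on each application mapping to exactly one flow.
SELECT cluster, user_name, flow_name,
       SUM(hdfs_bytes_read) AS total_hdfs_bytes_read
  FROM app_level_aggregates
 GROUP BY cluster, user_name, flow_name;
```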
          jamestaylor James Taylor added a comment -

          Happy to help, Li Lu. Thanks for the information.

          If I understand this correctly, in this case, Phoenix will inherit pre-split settings from HBase? Will this alter the existing HBase table, including its schema and/or data inside? In general, if one runs CREATE TABLE IF NOT EXISTS or simply CREATE TABLE commands over a pre-split existing HBase table, will Phoenix simply accept the existing table as-is?

          If you create a table in Phoenix and the table already exists in HBase, Phoenix will accept the existing table as-is, adding any metadata it needs (i.e., its coprocessors). If the table has existing data, Phoenix will add an empty KeyValue to each row in the first column family referenced in the CREATE TABLE statement (or the default column family if no column families are referenced). Phoenix needs this empty KeyValue for a variety of reasons. The onus is on the user to ensure that the types in the CREATE TABLE statement match the way the data was actually serialized.

          For your configuration/metric key-value pair, how are they named? Do you know the possible set of key values in advance? Or are they known more-or-less on-the-fly? One way you could model this with views is to dynamically add the column to the view when you need to. Adding a column to a view is a very lightweight operation, corresponding to a few Puts to the SYSTEM.CATALOG table. Then you'd have a way of looping through all metrics for a given view using the metadata APIs. Think of a view as a set of explicitly named dynamic columns. You'd still need to generate the SQL statement, though.
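A rough sketch of that pattern (the view and metric column names are made up for illustration):

```sql
-- Hypothetical: add a newly observed metric to the view on the fly;
-- IF NOT EXISTS makes this safe to repeat for metrics already added.
ALTER VIEW metric_view ADD IF NOT EXISTS "m"."MAP_OUTPUT_RECORDS" UNSIGNED_LONG;

-- ...then enumerate the view's columns, either through the JDBC
-- DatabaseMetaData.getColumns API or directly from SYSTEM.CATALOG:
SELECT COLUMN_NAME
  FROM SYSTEM.CATALOG
 WHERE TABLE_NAME = 'METRIC_VIEW' AND COLUMN_NAME IS NOT NULL;
```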

          One potential solution is to use HBase coprocessors to aggregate application data from the HBase storage, and then store them in a Phoenix aggregation table.

          I'm not following. Are you thinking to have a secondary table that's a rollup aggregation of more raw data? Is that required, or is it more of a convenience for the user? If the raw data is Phoenix-queryable, then I think you have a lot of options. Can you point me to some more info on your design?

          The stable APIs for Phoenix are the ones we expose through our public surface: JDBC and our various integration modules (e.g., MapReduce, Pig). I'd say that the serialization format produced by PDataType is stable (it needs to be for us to meet our backward-compatibility guarantees), and the PDataType APIs are more stable than others. Also, we're looking to integrate with Apache Calcite, so there may be other APIs that could be hooked into down the road.

          gtCarrera9 Li Lu added a comment -

          Hi James Taylor

          Thank you very much for your suggestions and PHOENIX-2028! I wrote the experimental Phoenix writer code and currently have some follow-up questions regarding your comments.

          The easiest is probably to create the HBase table the same way (through code or using the HBase shell) with the KeyPrefixRegionSplitPolicy specified at create time. Then, in Phoenix you can issue a CREATE TABLE statement against the existing HBase table and it'll just map to it. Then you'll have your split policy for your benchmark in both write paths.

          If I understand this correctly, in this case, Phoenix will inherit pre-split settings from HBase? Will this alter the existing HBase table, including its schema and/or data inside? In general, if one runs CREATE TABLE IF NOT EXISTS or simply CREATE TABLE commands over a pre-split existing HBase table, will Phoenix simply accept the existing table as-is?

          An alternative to dynamic columns is to define views over your Phoenix table (http://phoenix.apache.org/views.html).

          I once looked at views, but I'm not sure they fit our write-path use case well. Let me briefly describe our use case in YARN first. In general, we would like to dynamically store the configuration and metrics for each YARN timeline entity in a Phoenix database, so that our timeline reader apps or users can use SQL to query historical data. Phoenix views may be a perfect fit for the reader use cases. However, we are hitting problems on the writer side. We store each configuration/metric key-value pair in a dynamic column, which causes two main problems. First, we need to use a dynamically generated SQL statement to write to the Phoenix table, which is cumbersome and error-prone. Second, when performing aggregations, we need to aggregate over all available metrics for an application (or a user, or a flow), but we cannot simply iterate over those dynamic columns because there is no such API. I'm not sure how to resolve these two problems via Phoenix views, or via existing Phoenix APIs. Actually, I suspect that if we could fall back to HBase-style APIs, our writer path would be much simpler.

          If you do end up going with a direct HBase write path, I'd encourage you to use the Phoenix serialization format (through PDataType and derived classes) to ensure you can do adhoc querying on the data.

          We're currently looking into this method for the aggregation part. We're doing our best to support SQL on the aggregated data by using Phoenix. One potential solution is to use HBase coprocessors to aggregate application data from the HBase storage, and then store the results in a Phoenix aggregation table. However, if we want to keep aggregating on the Phoenix table, can we also write an HBase coprocessor that reads the Phoenix PDataType values and aggregates them into other Phoenix tables? If so, are there any stable (or "safe") APIs for PDataTypes?

          A slightly more general question: is SQL the only API for Phoenix, or are there others? I ask because, from a YARN timeline service perspective, Phoenix is a nice tool through which we can easily offer SQL support to our end users, but we may not necessarily want to program against it with SQL all the time.

          Thank you very much for your comments and help from the Phoenix side. Our current Phoenix writer is more of an experimental version, but we really hope to have something for our aggregators and readers in the near future.

          vrushalic Vrushali C added a comment -

          Hi James Taylor

          Thank you for taking the time to look through the write up and for filing PHOENIX-2028.

          In the context of pre-splits, yes, we wanted both writers to write to tables that were pre-split with the same strategy. However, I believe the folks working on the Phoenix writer mentioned that the only way to achieve that in Phoenix was to use the SPLIT ON clause, which required that approach to reimplement the HBase pre-splitting strategy. Perhaps Li Lu can speak to this better.

          I'd encourage you to use the Phoenix serialization format (through PDataType and derived classes) to ensure you can do adhoc querying on the data

          Okay, thanks, I will check that out. We are working on a whole set of enhancements for the base writer as well and I will look at this.

          The most important aspect is how your row key is written and the separators you use if you're storing multiple values in the row key.

          You’ve hit the nail on the head. We do have multiple values with different datatypes in the row key, as well as in column names with and without prefixes, so we have different datatypes and a bunch of separators. Joep Rottinghuis has been addressing these points in YARN-3706, e.g., dealing with storing and parsing byte representations of separators.
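For comparison, a multi-part row key like the one described can also be modeled in Phoenix as a composite primary key, in which case Phoenix chooses the serialization and separators itself (the table and column names below are illustrative only, not the actual YARN-3706 schema):

```sql
-- Hypothetical: let Phoenix serialize a multi-valued row key as a
-- composite primary key instead of hand-rolling byte separators.
CREATE TABLE IF NOT EXISTS flow_run (
    cluster   VARCHAR NOT NULL,
    user_name VARCHAR NOT NULL,
    flow_name VARCHAR NOT NULL,
    run_id    UNSIGNED_LONG NOT NULL
    CONSTRAINT pk PRIMARY KEY (cluster, user_name, flow_name, run_id));
```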

          The timeline service schema has more tables and we are considering storing aggregated values in these Phoenix based tables (current thinking is to have them populated via co-processors watching the basic entity table). Thanks for suggesting defining views on Phoenix tables, I will look up more details on that.

          Thanks once again,
          Vrushali

          jamestaylor James Taylor added a comment -

          Nice writeup, Vrushali C. For your benchmarks, if you're pre-splitting for the HBase direct write path but not for the Phoenix write path, you're not really comparing apples-to-apples. There are a number of ways you can install your KeyPrefixRegionSplitPolicy in Phoenix. The easiest is probably to create the HBase table the same way (through code or using the HBase shell) with the KeyPrefixRegionSplitPolicy specified at create time. Then, in Phoenix you can issue a CREATE TABLE statement against the existing HBase table and it'll just map to it. Then you'll have your split policy for your benchmark in both write paths.

          An alternative to dynamic columns is to define views over your Phoenix table (http://phoenix.apache.org/views.html). In each view, you could specify the set of columns it contains. Then you can use the regular JDBC metadata APIs to get the set of columns that define your view: http://docs.oracle.com/javase/7/docs/api/java/sql/DatabaseMetaData.html#getColumns%28java.lang.String,%20java.lang.String,%20java.lang.String,%20java.lang.String%29

          Another interesting angle with views (not sure if this is relevant for your use case or not), but they're capable of being multi-tenant where the definition of the "tenant" is up to you (maybe it would map to a User?). In this case, each tenant can define their own derived view and add columns specific to their usage. You can even create secondary indexes over a view. This is the way Phoenix surfaces NoSQL in the SQL world. More here: http://phoenix.apache.org/multi-tenancy.html
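A hedged sketch of that multi-tenant pattern (table, view, and column names are invented for illustration; the tenant is selected via the TenantId JDBC connection property):

```sql
-- Hypothetical multi-tenant base table; the leading tenant_id column
-- is supplied implicitly on tenant-specific connections.
CREATE TABLE events (
    tenant_id VARCHAR NOT NULL,
    event_id  VARCHAR NOT NULL,
    payload   VARCHAR
    CONSTRAINT pk PRIMARY KEY (tenant_id, event_id))
    MULTI_TENANT = true;

-- On a connection opened with TenantId set, that tenant can define
-- its own derived view and add columns specific to its usage:
CREATE VIEW my_events AS SELECT * FROM events;
ALTER VIEW my_events ADD IF NOT EXISTS severity VARCHAR;
```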

          There is room for improvement in the Phoenix write path, though. I've filed PHOENIX-2028 and plan to work on that shortly.

          If you do end up going with a direct HBase write path, I'd encourage you to use the Phoenix serialization format (through PDataType and derived classes) to ensure you can do adhoc querying on the data. The most important aspect is how your row key is written and the separators you use if you're storing multiple values in the row key.

          gtCarrera9 Li Lu added a comment -

          Thanks Sangjin Lee, Joep Rottinghuis, and Vrushali C for hosting the benchmark session. This is very helpful!

          sjlee0 Sangjin Lee added a comment -

          Thanks Vrushali C for the summary!

          vrushalic Vrushali C added a comment -

          We decided to evaluate two approaches to the backend storage implementation in terms of performance, scalability, usability, and maintainability: YARN-3134 (Phoenix-based HBase schema) and YARN-3411 (hybrid HBase schema: vanilla HBase tables in the direct write path and Phoenix-based tables for reporting).

          Attaching a write-up that describes how we ended up choosing the approach of writing to vanilla HBase tables (YARN-3411) in the direct write path.

          sjlee0 Sangjin Lee added a comment -

          We should make sure the timeline service v.2 does the right thing in this regard.

          vinodkv Vinod Kumar Vavilapalli added a comment -

          Updating the title as we are tackling all the phases here.

          sjlee0 Sangjin Lee added a comment -

          No worries. I'll wait until YARN-3039 is done. Thanks for letting me know.

          gtCarrera9 Li Lu added a comment -

          can we defer the renaming work until that patch gets in?

          I'm +1 on this suggestion. When we committed YARN-3210, there was some work interference that delayed YARN-3264. This time we probably want to have less interference with the ongoing aggregator-related (to-be-renamed) work.

          djp Junping Du added a comment -

          let me know if you are OK with the name, and I can make a quick refactoring patch.

          I have an outstanding patch in YARN-3039 for review now. Sangjin Lee, can we defer the renaming work until that patch gets in? Thx!

          zjshen Zhijie Shen added a comment -

          let me know if you are OK with the name, and I can make a quick refactoring patch.

          Sounds good to me.

          vinodkv Vinod Kumar Vavilapalli added a comment -

          Overall I'd like to push other efforts like YARN-2141, YARN-1012 to fit into the current architecture being proposed in this JIRA. This is so that we don't duplicate stats collection between efforts.

          Filed YARN-3332.

          sjlee0 Sangjin Lee added a comment -

          I like the name TimelineCollector. Zhijie Shen, Vinod Kumar Vavilapalli, let me know if you are OK with the name, and I can make a quick refactoring patch.

          kasha Karthik Kambatla added a comment -

          +1 to renaming.

          I prefer TimelineCollector and TimelineReceiver, in that order.

          vrushalic Vrushali C added a comment -

          +1 to renaming TimelineAggregator. TimelineReceiver is good. Some other suggestions are TimelineAccumulator or TimelineCollector.

          rkanter Robert Kanter added a comment -

          I agree; we're using "aggregator" for too many things.

          For TimelineAggregator, IIRC, Karthik Kambatla had suggested TimelineCollector at one point, and that sounded good. TimelineReceiver also sounds fine.

          sjlee0 Sangjin Lee added a comment -

          A couple more comments on the plan:

          • I think the metrics API should be part of phase 2, since that is when we will handle aggregation
          • It's a small item, but we should make the per-node aggregator (a standalone daemon) part of phase 2

          Speaking of "aggregator", the word "aggregation/aggregator" is now getting quite overloaded. Originally it meant "rolling up metrics to parent entities". Now it's really used in two quite different contexts. For example, the TimelineAggregator classes have little to do with that original meaning. I'm not quite sure what aggregation means in that context, although, I know, I know, I said +1 to the name TimelineAggregator. Should we clear up this confusion? IMO, we should stick with the original meaning of aggregation when we talk about aggregation. For TimelineAggregator, perhaps we could rename it to TimelineReceiver or another name?

          sjlee0 Sangjin Lee added a comment -

          I suppose the "ApplicationMaster events" refer to the ones that are written by the distributed shell AM. Correct?

          sjlee0 Sangjin Lee added a comment -

          Thanks Vinod Kumar Vavilapalli for putting the plan together! Some quick questions:

          • Phase 1: what are the "ApplicationMaster events"? Is it the lifecycle of the AM container or AMs like MR AMs emitting framework-specific events/metrics?
          • Phase 3: would it be possible to pull in the RM events bit earlier than phase 3? I don't think it would be that difficult to do, and it would serve as a very useful data point. Thoughts?
          • Are we thinking of merging only after all phases are complete, or could there be meaningful intermediate checkpoints we can merge? It would be good if we could achieve the latter.
          sjlee0 Sangjin Lee added a comment -

          I suppose directly reading the data off of the storage (reading the file in case of the filesystem-based one, for example).

          varun_saxena Varun Saxena added a comment -

          What is meant by "manual reader"?

          vinodkv Vinod Kumar Vavilapalli added a comment -

          I put together some notes (attached) on how we can collectively work on this, to help bring some clarity of project execution for everyone involved. I've divided the effort into phases. Feedback welcome. I'll keep this updated as things progress.

          vinodkv Vinod Kumar Vavilapalli added a comment -

          Are there any plans to include intermediate routing/forwarding systems for ATS v2?

          We have a storage/forwarder interface that can definitely be plugged into to do something like this.

          gopalv Gopal V added a comment -

          The original discussion about ATS v1 drew inspiration from existing systems like rsyslog and scribe, simple systems that buffer/route/forward events into a central store.

          Those mechanisms were very useful in duplicating higher priority (and rare) events for immediate alerting/dashboards (errors/sec etc).

          Are there any plans to include intermediate routing/forwarding systems for ATS v2?

          The "tail -f | grep" firehose across a cluster is useful in avoiding scalability issues when looking for rare events in a distributed store.

          Being able to route something like a node blacklisting event from an AppMaster to such a system would prevent the fault checker systems from having to produce irrelevant ATS traffic periodically to scrape through it.

          zjshen Zhijie Shen added a comment -

          I am assuming you are already aware of YARN-2423 and plan to maintain compatibility

          The data models of the current and next-gen timeline service are likely to be different. To be compatible with the old data model, we probably need to change the existing timeline client to convert old entities to new ones.

          We should have such a configuration that disables the timeline service globally.

          I think it's also good to have a per-app flag. If the app is configured not to use the timeline service, we don't need to start the per-app aggregator.

          My point related to events was not about a new interesting feature but to generally understand what use case is meant to be solved by events and how should an application developer use events?

          I thought you meant using a publisher/subscriber architecture, such as Kafka, to consume the incoming event streams. Other than that, IMHO, we still need to support the existing query of getting the stored events of a given set of entities.

          sjlee0 Sangjin Lee added a comment -

          Not sure I understand clearly as to how the relationship is captured. Consider this case: There are 5 hive queries: q1 to q5. There are 3 Tez apps: a1 to a3. Now, q1 and q5 ran on a1, q2 ran on a2 and q3,q4 ran on a3. Given q1, I need to know which app it ran on. Given a1, I need to know which queries ran on it. Could you clarify how this should be represented as flows?

          Based on that description, this would be the parent-child relationship: a1 --> (q1, q5), a2 --> (q2), a3 --> (q3, q4). Given q1, its parent is a1. Given a1, a1's children are q1 and q5. If q1 spawned 3 YARN apps (y1, y2, y3), their parent would be q1. This parent-child relationship would be encoded in the data model.

          The only case where this would break is if the same entity needs more than one parent at the YARN level (flow runs, YARN apps, etc.). Note that we're talking about flow runs, not flows. The same flow may have multiple actual runs. The parent-child relationship is at the flow runs. Let me know if this helps.
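
          The parent-child relationship described above can be sketched as a simple lookup in both directions (this is only an illustration using the identifiers from the example, not the actual data model):

```python
# Sketch of the parent-child relationship from the example above:
# a1..a3 are Tez apps, q1..q5 are Hive queries, y1..y3 are YARN apps
# spawned by q1. Each child entity records exactly one parent.
parent_of = {
    "q1": "a1", "q5": "a1",
    "q2": "a2",
    "q3": "a3", "q4": "a3",
    "y1": "q1", "y2": "q1", "y3": "q1",
}

def children_of(entity):
    # Reverse lookup: all entities whose parent is `entity`.
    return {child for child, parent in parent_of.items() if parent == entity}

assert parent_of["q1"] == "a1"            # given q1, the app it ran on
assert children_of("a1") == {"q1", "q5"}  # given a1, the queries that ran on it
```

          As noted, this breaks down only if the same entity needs more than one parent at the YARN level.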

          Please explain what "globally" means.

          What I'm envisioning is a boolean configuration that can disable the timeline service altogether, not unlike the current switch for the ATS. If the timeline service is disabled via this configuration, no timeline data would be written, no daemon would be started, etc.
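
          As a rough illustration, such a global switch would look like the following yarn-site.xml fragment (the property name shown is the one YARN uses for the existing ATS switch; the v.2 equivalent is assumed here):

```xml
<!-- Illustrative yarn-site.xml fragment: global timeline service switch.
     The property name matches the existing ATS switch; treat its reuse
     for v.2 as an assumption of this sketch. -->
<property>
  <name>yarn.timeline-service.enabled</name>
  <value>false</value>
</property>
```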

          hitesh Hitesh Shah added a comment -

          Also, Sangjin Lee, Zhijie Shen, I am assuming you are already aware of YARN-2423 and plan to maintain compatibility with that implementation if it is introduced in a version earlier than the one in which this next-gen implementation is supported?

          hitesh Hitesh Shah added a comment -

          We should have such a configuration that disables the timeline service globally.

          Please explain what "globally" means.

          Can it be handled as a "flow of flows" as described in the design? For instance, tez application <-- hive queries <-- YARN apps? Or does it not capture the relationship?

          Not sure I understand clearly as to how the relationship is captured. Consider this case: There are 5 hive queries: q1 to q5. There are 3 Tez apps: a1 to a3. Now, q1 and q5 ran on a1, q2 ran on a2 and q3,q4 ran on a3. Given q1, I need to know which app it ran on. Given a1, I need to know which queries ran on it. Could you clarify how this should be represented as flows?

          sjlee0 Sangjin Lee added a comment -

          How is a workflow defined when an entity has 2 parents? Considering the tez-hive example, do you agree that both a Hive Query and a Tez application are workflows and share some entities?

          Can it be handled as a "flow of flows" as described in the design? For instance, tez application <-- hive queries <-- YARN apps? Or does it not capture the relationship?

          sjlee0 Sangjin Lee added a comment -

          Also, what if an application does not want to write data to ATS, or does not care if the data does not reach ATS? Will there now be more flags introduced at application submission to tell the RM that it does or does not need the ATS service, so as to ensure that its app does not fail?

          We should have such a configuration that disables the timeline service globally.

          sjlee0 Sangjin Lee added a comment -

          Does this mean that containers (non-AM) of an application will face a penalty if they need to write data to ATS, as all data will need to be routed to the aggregator on the AM host first?

          One could say that it is a "penalty" in the sense that it incurs an extra hop before the data can be written to the storage. However, if we let the timeline aggregators write directly from the nodes on which the containers run, we would have a situation where all nodes have many connections (as many as the number of apps running on the node) open to the backing storage. For example, for an HBase store with hundreds of region servers and a writing Hadoop cluster with thousands of nodes, you'd easily have a criss-cross connection pattern where each region server takes traffic/connections from every single Hadoop worker node.

          With the current design, each application would retain a (mostly) single stable connection to a single region server (assuming it is designed so that data for an application resides in a single region server), which would lead to far fewer connections overall. Also, if container data is collected at the app level, the timeline aggregator can be a little smarter about this and aggregate/update values appropriately.
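
          A back-of-the-envelope comparison of the two designs (all numbers below are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope connection counts for the two write designs.
# All numbers are illustrative assumptions, not measurements.
nodes = 4000          # worker nodes in the writing cluster
apps_per_node = 20    # concurrently running apps per node
region_servers = 300  # HBase region servers

# Direct per-node writes: in the worst case every node ends up holding
# connections to every region server.
direct_write_connections = nodes * region_servers

# Per-app aggregator with an app's data pinned to one region server:
# each running app keeps (mostly) one stable connection.
per_app_connections = nodes * apps_per_node

assert direct_write_connections == 1_200_000
assert per_app_connections == 80_000
```

          Even with these rough numbers, the per-app aggregator design cuts the connection count by more than an order of magnitude.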

          sjlee0 Sangjin Lee added a comment -

          What use are events? Will there be a "streaming" API available to listen to all events based on some search criteria? If there is a hierarchy of objects, will there be support to listen to or retrieve all events for a given tree by providing a root node?

          That is a good question. In the design doc, I kept the events mainly because it is there in ATS today. I envision that the "events" we will support are mostly state transitions (e.g. application submitted, container finished, etc.). One could argue that it is just an update of an entity state. At minimum we need a property that will hold entity state and an update that will change the state value.
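
          The "events as state transitions" idea can be sketched like this (the class and field names below are hypothetical, not the actual timeline service data model):

```python
import time
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch (not the actual timeline data model): an event is
# just a timestamped update of the entity's state property.
@dataclass
class TimelineEntitySketch:
    entity_id: str
    state: str = "NEW"
    events: List[Tuple[float, str]] = field(default_factory=list)

    def transition(self, new_state):
        # Record the event, then update the current state property.
        self.events.append((time.time(), new_state))
        self.state = new_state

app = TimelineEntitySketch("application_0001")
app.transition("SUBMITTED")
app.transition("RUNNING")
app.transition("FINISHED")
assert app.state == "FINISHED"
assert [s for _, s in app.events] == ["SUBMITTED", "RUNNING", "FINISHED"]
```

          This captures the minimum described above: a property holding the entity state, plus updates that change it, with the event history retained as a side effect.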

          hitesh Hitesh Shah added a comment -

          Also, what if an application does not want to write data to ATS, or does not care if the data does not reach ATS? Will there now be more flags introduced at application submission to tell the RM that it does or does not need the ATS service, so as to ensure that its app does not fail?

          sjlee0 Sangjin Lee added a comment -

          When you say user, what does it really imply? User "a" can submit a hive query. A tez application running as user "hive" can execute the query submitted by user "a" using a's delegation tokens. With proxy users and potential use of delegation tokens, which "user" should be used?

          That's something we haven't fully considered. IMO the user is used for resource attribution (e.g. chargeback) and also for access control. We'll need to sort out this scenario (probably not for the first cut however).

          What are the main differences between meta-data and configuration?

          One could argue they are not different. However, from a user's perspective (especially MR jobs), the configuration has a strong meaning. It might be good to call out configuration separately from other metadata.

          hitesh Hitesh Shah added a comment -

          In the per-app timeline aggregator (a.k.a. ATS writer companion) model, it is a special container. And we need to be able to allocate both the timeline aggregator and the AM or neither. Also, we do want to be able to co-locate the AM and the aggregator on the same node.

          Does this mean that containers (non-AM) of an application will face a penalty if they need to write data to ATS, as all data will need to be routed to the aggregator on the AM host first?

          sjlee0 Sangjin Lee added a comment -

          Rajesh Balamohan:

          In certain cases, it might be required to mine a specific job's data by exporting contents out of ATS. Would there be any support for an export tool to get data out of ATS?

          Other than access to the REST endpoint, one might be able to query the backing storage directly, and we're keeping that in mind. But that would depend on the backing storage's capability. For example, for HBase, we could provide a Phoenix schema on which one can do offline queries pretty efficiently.

          sjlee0 Sangjin Lee added a comment -

          Hitesh Shah, continuing that discussion,

          Vinod Kumar Vavilapalli: Should have probably added more context from the design doc:
          "We assume that the failure semantics of the ATS writer companion is the same as the AM. If the ATS writer companion fails for any reason, we try to bring it back up up to a specified number of times. If the maximum retries are exhausted, we consider it a fatal failure, and fail the application."

          Yes, I definitely could add more color to that point. I'm going to update the design doc as there are a number of clarifications made. Hopefully some time next week.

          In the per-app timeline aggregator (a.k.a. ATS writer companion) model, it is a special container. And we need to be able to allocate both the timeline aggregator and the AM, or neither. Also, we do want to be able to co-locate the AM and the aggregator on the same node. The RM then needs to negotiate that combined capacity atomically. In other words, we don't want a situation where we were able to allocate the aggregator but not the AM, or vice versa. If the AM needs 2 G and the timeline aggregator needs 1 G, then this pair needs to go to a node on which 3 G can be allocated at that time.
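
          The atomic pairing constraint can be sketched as a simple placement check (node names and capacities below are illustrative, not a real scheduler implementation):

```python
# Sketch of the atomic pairing constraint: place the AM and its timeline
# aggregator together, on a node whose free capacity covers the combined
# demand (2 G + 1 G = 3 G in the example above). Node data is illustrative.
AM_MB, AGGREGATOR_MB = 2048, 1024

def pick_node(free_mb_by_node):
    needed = AM_MB + AGGREGATOR_MB  # allocate both or neither
    for node, free_mb in free_mb_by_node.items():
        if free_mb >= needed:
            return node
    return None  # the pair cannot be placed atomically right now

# n1 could fit the AM alone but not the pair; n2 fits both.
assert pick_node({"n1": 2048, "n2": 4096}) == "n2"
assert pick_node({"n1": 2048, "n2": 2560}) is None
```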

          In terms of the failure scenarios, we may need to hash out some more details. Since allocation is considered as a pair, it is also natural to consider their failure semantics in the same manner. But a deeper question is: if the AM came up but the timeline aggregator didn't (for resource reasons or otherwise), do we consider that an acceptable situation? If the timeline aggregator for that app cannot come up, should that be considered fatal? Or, if apps are running but not logging critical lifecycle events, etc. because the timeline aggregator went down, do we consider that situation acceptable? The discussion was that it is probably not acceptable: if it were a common occurrence, it would leave a large hole in the collected timeline data, and the overall value of the timeline data would go down significantly.

          That said, this point is deferred somewhat because initially we're starting out with a per-node aggregator option. The per-node aggregator option somewhat sidesteps (but not completely) this issue.

          hitesh Hitesh Shah added a comment -

          In this use case, who is the user of the Tez application? This may affect the data model and the parent-child relationship (cluster->user->flow->flow run->application).

          When you say user, what does it really imply? User "a" can submit a hive query. A tez application running as user "hive" can execute the query submitted by user "a" using a's delegation tokens. With proxy users and potential use of delegation tokens, which "user" should be used?

          "metadata" aims to store the same thing as "otherInfo", ... "primaryFilters"

          Seems like a good option. What form of search will be supported? In most cases, values will likely not be primitive types but deeply nested structures. Will you support all forms of search on all objects?

          They sound like interesting features, ..

          My point about events was not about a new interesting feature, but about generally understanding what use case events are meant to solve and how an application developer should use them.

          We could probably run an ad-hoc query to get the events of all applications of a workflow.

          How is a workflow defined when an entity has two parents? Considering the Tez-Hive example, do you agree that both a Hive query and a Tez application are workflows and share some entities?

          zjshen Zhijie Shen added a comment -

          A single tez application can run multiple different Hive queries submitted by different users.

          In this use case, who is the user of the Tez application? This may affect the data model and the parent-child relationship (cluster->user->flow->flow run->application).

          Where does the current implementation's "otherInfo" and "primaryFilters" fit in?

          "metadata" aims to store the same thing as "otherInfo", but I didn't want to call it "otherInfo" because it's no longer "the other info" besides "primaryFilters". In the new schema, I'm looking for an option to have the entity indexed without explicitly specifying the "primaryFilters", which caused trouble and bugs when updating entities before.

          What are the main differences between meta-data and configuration?

          They could be combined, as both are key-value pairs, but I distinguish them explicitly for clearer usage. Or is there any special access pattern for configs?

          If there is a hierarchy of objects, will there be support to listen to or retrieve all events for a given tree by providing a root node?

          We could probably run an ad-hoc query to get the events of all applications of a workflow.

          What use are events? Will there be a "streaming" API available to listen to all events based on some search criteria?

          In certain cases, it might be required to mine a specific job's data by exporting contents out of ATS.

          They sound like interesting features, but we may not be able to accommodate them within the Hadoop 2.8 timeline.

          rajesh.balamohan Rajesh Balamohan added a comment -

          In certain cases, it might be required to mine a specific job's data by exporting contents out of ATS. Would there be any support for an export tool to get data out of ATS?

          hitesh Hitesh Shah added a comment -

          Vinod Kumar Vavilapalli Should have probably added more context from the design doc:

          "We assume that the failure semantics of the ATS writer companion is the same as the AM. If the ATS writer companion fails for any reason, we try to bring it back up, up to a specified number of times. If the maximum retries are exhausted, we consider it a fatal failure, and fail the application."

          vinodkv Vinod Kumar Vavilapalli added a comment -

          The AM and the ATS writer are always considered as a pair, both in terms of resource allocation and failure handling.

          Why is this necessary? Why does the ATS layer decide what is fatal or non-fatal for an application?

          This might have meant something different. Colocating the AM and the Timeline aggregator is a physical optimization that also simplifies scheduling a bit. So if the AM fails and runs on a different host, it may make sense to move the aggregator too.

          vinodkv Vinod Kumar Vavilapalli added a comment -

          I might have accidentally assigned it to myself and didn't realize it before; unassigning.

          hitesh Hitesh Shah added a comment -

          More questions:

          What are the main differences between meta-data and configuration?
          What search/sort/aggregate/count functionality is planned to be supported? Based on the design doc, it seems certain functionality is not supported on configuration. Does this mean it is simpler to dump all the data into the metadata to make it searchable?
          Where does the current implementation's "otherInfo" and "primaryFilters" fit in?
          What use are events? Will there be a "streaming" API available to listen to all events based on some search criteria? If there is a hierarchy of objects, will there be support to listen to or retrieve all events for a given tree by providing a root node?
          How does an application define a relationship for its entity to a system entity?

          hitesh Hitesh Shah added a comment -

          Some questions on the design doc:

          The AM and the ATS writer are always considered as a pair, both in terms of resource allocation and failure handling.

          Why is this necessary? Why does the ATS layer decide what is fatal or non-fatal for an application?

          From a Tez perspective, we have a different use case when it comes to relationships with higher-level applications. A single Tez application can run multiple different Hive queries submitted by different users. In today's implementation, the data generated from each different query (within the same Tez YARN application) will have different access/privacy controls. How do you see the flow relationship being handled in this case, as there is an entity (a Tez DAG) that is a child of both a Tez application and a Hive query?

          rkanter Robert Kanter added a comment -

          To mirror my comment from the doc that Sangjin is referring to, I had said:

          It would be useful to be able to aggregate to queues; what would be a good way to fit those into the data model?

          in the "Some issues to address" section.

          As discussed, if we only do child --> parent aggregation ("primary aggregation"), then we can't aggregate to queues because they don't really fit in that path.

          sjlee0 Sangjin Lee added a comment -

          Thanks Zhijie Shen for putting it together! It looks good mostly. Some high level comments...

          (1) Are "relates to" and "is related to" meant to capture the parent-child relationship?

          (2)
          (Flow) run and application definitely have a parent-child relationship.

          Now it's less clear between the flow and the flow run. One scenario that is definitely worth considering is a flow of flows, and that brings some complications to this.

          Suppose you have an oozie flow that starts a pig script which in turn spawns multiple MR jobs. If flow is an entity and parent of the flow run, how to model this situation becomes more challenging. One idea might be

          oozie flow -> oozie flow run -> pig flow -> pig flow run -> MR job

          However, the oozie flow run is not really the parent of the pig flow. Rather, the oozie flow run is the parent of the pig flow run.

          Another idea is not to have the flow as a separate entity but as metadata of the flow run entities. And that's actually what the design doc indicates (see sections 3.1.1. and 3.1.2).

          Now one issue with not having the flow as an entity is that it might complicate the aggregation scenario. More on that later...
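          The "flow as metadata of the flow run" idea above can be sketched roughly as follows. This is an illustrative sketch only: the `TimelineEntitySketch` type and the metadata keys (`flowName`, `parentFlowRun`) are hypothetical stand-ins, not the eventual API.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: instead of making the flow a separate parent entity,
// the flow name rides along as metadata on the flow run entity. In the
// "flow of flows" case, the oozie flow run can then directly parent the
// pig flow run, avoiding the awkward "flow run -> flow" link.
public class FlowModelSketch {
    static final class TimelineEntitySketch {
        final String type;
        final String id;
        final Map<String, String> metadata = new HashMap<>();
        TimelineEntitySketch(String type, String id) {
            this.type = type;
            this.id = id;
        }
    }

    public static void main(String[] args) {
        TimelineEntitySketch pigRun = new TimelineEntitySketch("FLOW_RUN", "pig-run-42");
        pigRun.metadata.put("flowName", "pig-flow");        // flow is metadata, not a parent entity
        pigRun.metadata.put("parentFlowRun", "oozie-run-7"); // run-to-run parent link
        System.out.println(pigRun.metadata.get("flowName")); // prints pig-flow
    }
}
```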

          (3) Could we stick with the same terminology as in the design doc? Those are "flow" and "flow run". Thoughts? Better suggestions?

          (4)
          The part about the metrics would need to be further expanded with the metrics API JIRA, but I definitely see at least two types of metrics: one that requires a time series and another that doesn't. The former may be something like CPU, and the latter would be something like HDFS bytes written for example.

          For the latter type, the only value that matters for a given metric is the latest value. And depending on which type, the way to implement the storage could be hugely different.

          I think we need to come up with a well-defined set of metric types that cover the most useful cases. Initially we said we were going to look at the existing Hadoop metrics types, but we might need to come up with our own here.
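          The two metric shapes discussed in point (4) could be sketched like this. The names (`Type`, `Metric`, `record`) are hypothetical, not the eventual metrics API; the point is only that a time-series metric keeps many timestamped values while a single-value metric keeps only the latest.

```java
import java.util.TreeMap;

// Illustrative sketch of the two metric types: TIME_SERIES (e.g. CPU)
// retains the full history, while SINGLE_VALUE (e.g. HDFS bytes written)
// only retains the most recent value, allowing much simpler storage.
public class MetricSketch {
    enum Type { TIME_SERIES, SINGLE_VALUE }

    static final class Metric {
        final Type type;
        final TreeMap<Long, Long> values = new TreeMap<>(); // timestamp -> value
        Metric(Type type) { this.type = type; }

        void record(long timestamp, long value) {
            if (type == Type.SINGLE_VALUE) {
                values.clear(); // only the latest value matters
            }
            values.put(timestamp, value);
        }
    }

    public static void main(String[] args) {
        Metric bytesWritten = new Metric(Type.SINGLE_VALUE);
        bytesWritten.record(1000L, 512L);
        bytesWritten.record(2000L, 2048L);
        System.out.println(bytesWritten.values.size()); // prints 1
    }
}
```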

          (5)
          The parent-child relationship (and therefore the necessity of making things entities) is tightly related with aggregation (rolling up the values from children to parent). The idea was that for parent-child entities aggregation would be done generically as part of creating/updating those entities (what we called "primary aggregation" in some discussion).

          If cluster or user is not an entity, then there is no parent-child relationship, and aggregation from flows to user or cluster would have to be done explicitly outside the context of the parent-child relationship.

          Of course that is doable; we could just do it as specific aggregation. Maybe that's what we need to do (and the queue-level aggregation which Robert mentioned could be treated in the same manner).

          Either way, I think we should mention how the run/flow/user/cluster/queue aggregation would be done.
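          The "primary aggregation" idea from point (5) — rolling values from children up to their parent as part of the write path — could be sketched as below. All names here are hypothetical stand-ins, not the eventual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of primary aggregation: whenever a child entity's
// metric is written, its delta is folded into the parent's running total,
// so the child -> parent rollup happens generically at write time.
public class PrimaryAggregationSketch {
    static final Map<String, Long> parentTotals = new HashMap<>();

    // On each child metric update, accumulate into the parent's total.
    static void writeChildMetric(String parentId, long delta) {
        parentTotals.merge(parentId, delta, Long::sum);
    }

    public static void main(String[] args) {
        // Two applications roll up into the same flow run.
        writeChildMetric("flow-run-1", 100L);
        writeChildMetric("flow-run-1", 250L);
        System.out.println(parentTotals.get("flow-run-1")); // prints 350
    }
}
```

          Aggregation to queues, users, or clusters that sit outside the parent-child path would then need an explicit, separately triggered rollup rather than this write-time one.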

          zjshen Zhijie Shen added a comment -

          Based on the design and the previous timeline data model, I drafted and attached a new data model proposal. Please take a look. It doesn't focus on elaborating each individual field, but on showing the major concepts.

          sjlee0 Sangjin Lee added a comment -

          Just a reminder that we have an IRC channel for quick discussions on this effort at ##hadoop-ats on irc.freenode.net. We also have regular Google hangout status calls. Email me if you'd like to participate in the status calls.

          zjshen Zhijie Shen added a comment -

          As far as I can tell, all server code has the naming org.apache.hadoop.yarn.server.[feature_name].

          Oh, my bad. You're right.

          Wouldn't it be good to stick with that then? How about the following then?

          +1 LGTM

          it may become a source of confusion as the new name sounds very similar.

          Agreed; like "mapred" and "mapreduce". Let's make sure the documentation tells users which package to use for which version of the timeline server.

          sjlee0 Sangjin Lee added a comment -

          As far as I can tell, all server code has the naming org.apache.hadoop.yarn.server.[feature_name]. Wouldn't it be good to stick with that then? How about the following then?

          common: org.apache.hadoop.yarn.timelineservice.*
          client: org.apache.hadoop.yarn.client.api.timelineservice.*
          server: org.apache.hadoop.yarn.server.timelineservice.*
          

          One thing is the current ATS code in yarn-api is in org.apache.hadoop.yarn.client.api.records.timeline so it may become a source of confusion as the new name sounds very similar. We'll just need to find a way to differentiate them (via plenty of documentation or renaming things).

          zjshen Zhijie Shen added a comment -

          I suppose the reason the client-side API resides in yarn-api and yarn-common rather than yarn-client is to accommodate RM's use of ATS?

          Right, this is because we want to prevent a cyclic dependency (RM -> ATS -> server-tests -> RM). Another issue is that TimelineDelegationToken#renewer inside the common module uses the timeline client too. YARN-2506 is investigating a solution to correct the packaging.

          but we need to make a decision on where we will put the client and common pieces.

          IMHO, common code goes into hadoop-yarn-common (or hadoop-yarn-api if it's API related). If we can prevent the cyclic dependency, the client code is best placed in hadoop-yarn-client.

          My suggestion would be to use

          The package naming looks good. However, there are already some conventions. For example, all client libs are under org.apache.hadoop.yarn.client.api; it may be better to keep to that. As for the common code, the style in hadoop-yarn-common is org.apache.hadoop.yarn.[feature name]. Finally, the server code doesn't have "server" in the package name, so it may be organized like org.apache.hadoop.yarn.timelineservice.aggregator.*

          sjlee0 Sangjin Lee added a comment -

          One observation on the code organization. The existing ATS code is actually spread out in several places:

          • entities, etc. API: org.apache.hadoop.yarn.api.records.timeline.* at hadoop-yarn-api
          • TimelineClient A