Hadoop YARN > YARN-543 [Umbrella] NodeManager localization related issues > YARN-574

PrivateLocalizer does not support parallel resource download via ContainerLocalizer

    Details

    • Type: Sub-task
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.6.0, 2.8.0, 2.7.1
    • Fix Version/s: None
    • Component/s: None
    • Labels: None
    • Target Version/s:
    • Release Note:
      YARN-574. Allow parallel download of resources in PrivateLocalizer. Contributed by Zheng Shao.

      Description

      At present, private resources are downloaded in parallel only when multiple containers request the same resource; otherwise downloads are serial. The protocol between PrivateLocalizer and ContainerLocalizer supports multiple downloads, but this capability is unused: only one resource is sent for download at a time.

      I think we can increase/ensure parallelism (even for a single container requesting resources) for private/application resources by allowing multiple downloads per ContainerLocalizer.

      Total parallelism before
      = number of threads allotted for PublicLocalizer [public resources] + number of containers [private and application resources]

      Total parallelism after
      = number of threads allotted for PublicLocalizer [public resources] + number of containers * max downloads per container [private and application resources]
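      A tiny worked example of the arithmetic above, with made-up numbers (4 public-localizer threads, 3 containers, 4 downloads per localizer — none of these values come from the issue):

```java
// Hypothetical illustration of the parallelism math in the description.
// All input numbers are examples, not defaults from any YARN release.
public class ParallelismMath {
    static int totalBefore(int publicThreads, int containers) {
        // before: one private/application download at a time per container localizer
        return publicThreads + containers;
    }

    static int totalAfter(int publicThreads, int containers, int maxPerContainer) {
        // after: each container localizer may run several downloads in parallel
        return publicThreads + containers * maxPerContainer;
    }

    public static void main(String[] args) {
        System.out.println(totalBefore(4, 3));   // 7
        System.out.println(totalAfter(4, 3, 4)); // 16
    }
}
```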

      1. YARN-574.2.patch
        6 kB
        Zheng Shao
      2. YARN-574.1.patch
        6 kB
        Zheng Shao
      3. YARN-574.05.patch
        23 kB
        Ajith S
      4. YARN-574.04.patch
        21 kB
        Ajith S
      5. YARN-574.03.patch
        14 kB
        Ajith S

        Activity

        Subru Krishnan added a comment -

        Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert if required.

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        -1 patch 3m 48s YARN-574 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.



        Subsystem Report/Notes
        JIRA Issue YARN-574
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12849264/YARN-574.05.patch
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/17713/console
        Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Arun Suresh added a comment -

        Is this still on target for 2.9.0? If not, can we push this out to the next major release?
        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        -1 patch 0m 7s YARN-574 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.



        Subsystem Report/Notes
        JIRA Issue YARN-574
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12849264/YARN-574.05.patch
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/15712/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Sangjin Lee added a comment -

        Thanks for your contribution Ajith S.

        Regarding the use of AtomicInteger, why not use a Semaphore for this? The semantics we're demanding are really those of a semaphore. We could also eliminate it altogether with Jason's suggestion, but if we're going to keep it, a real counting semaphore is clearer.

        Could you please update the patch for Jason's feedback and mine here too? Thanks!
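        Sangjin's counting-semaphore suggestion can be sketched roughly as follows; the class and method names here are illustrative, not from any patch attached to this issue:

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch: permits represent free download slots. A localizer
// would tryReserveSlot() before asking the NM for more work and
// releaseSlot() when a download finishes. The slot count is an example.
public class DownloadSlots {
    private final Semaphore slots;

    DownloadSlots(int maxParallelDownloads) {
        this.slots = new Semaphore(maxParallelDownloads);
    }

    boolean tryReserveSlot() { return slots.tryAcquire(); }  // non-blocking
    void releaseSlot()       { slots.release(); }
    int freeSlots()          { return slots.availablePermits(); }

    public static void main(String[] args) {
        DownloadSlots slots = new DownloadSlots(2);
        System.out.println(slots.tryReserveSlot()); // true
        System.out.println(slots.tryReserveSlot()); // true
        System.out.println(slots.tryReserveSlot()); // false: both slots taken
        slots.releaseSlot();
        System.out.println(slots.freeSlots());      // 1
    }
}
```

        Unlike an AtomicInteger compare loop, the semaphore's acquire/release pair makes the slot-accounting intent explicit.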
        Jason Lowe added a comment -

        Thanks for updating the patch!

        I don't think this while loop is desired:

                while (currentActiveDownloads.get() >= downloadThreadCount) {
                  pauseHeartbeat(cs);
                }

        Doing so will prevent the localizer from heartbeating to the NM at all for the duration of the active localizations. That means it ends up doing unnecessary work if the container is killed during localization (i.e., it doesn't know it would receive a DIE request). It would also be problematic if we ever implemented proper liveness detection for localizers (i.e., they need to continue heartbeating to show they're alive while still localizing).

        If we want to prevent the localizer from receiving more work when it's full, then we should augment the localizer protocol to indicate that in the status, e.g. a boolean indicating that it is 'full' of active localizations, or maybe a count indicating how many localizations the localizer is ready to accept at the moment. A count has the advantage that the NM can internally loop during the localizer status processing and respond with all of the localizations in one response, rather than making the localizer send N heartbeats to get N active downloads going. That removes the whole messy sometimes-we-heartbeat-fast-sometimes-slow scheme and the excess RPC processing needed to get a lot of downloads going.

        Speaking of counting downloads, we can eliminate the Atomic stuff and the need to wrap the download call by simply counting the incomplete Futures in the createStatus method. It already walks all of the pending downloads for every heartbeat, so it would be trivial for it to update a member variable with the count of unfinished download Futures (i.e., active downloads). That would be a simpler approach, but the existing counting scheme should work as well. I'll leave it up to you.

        If the download wrapping stays, it should not use a lambda expression if this is going into branch-2, since branch-2 does not require JDK8. Either that or we need a separate patch for branch-2, and I'd rather keep them closer in sync to make future cherry-picks easier to do.

        Nit: Javadoc that just enumerates the arguments plus empty return tags provides no value. Please remove them or add appropriate documentation to make them worthwhile.

        The unit test failure is related; please investigate.
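        Jason's alternative of counting unfinished Futures while walking the pending downloads (as createStatus already does) could look roughly like this; countActive and pendingResources are illustrative names, not from the actual patch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: derive the active-download count from the Futures
// themselves instead of maintaining a separate atomic counter.
public class FutureCounting {
    static int countActive(List<Future<?>> pendingResources) {
        int active = 0;
        for (Future<?> f : pendingResources) {
            if (!f.isDone()) {
                active++; // still queued or actively downloading
            }
        }
        return active;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<?>> futures = new ArrayList<>();
        CountDownLatch block = new CountDownLatch(1);
        // Stand-in for a download that is still in flight.
        futures.add(pool.submit(() -> { block.await(); return null; }));
        System.out.println(countActive(futures)); // 1 while the task blocks
        block.countDown();
        futures.get(0).get();                     // wait for completion
        System.out.println(countActive(futures)); // 0 once it completes
        pool.shutdown();
    }
}
```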
        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 20s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        0 mvndep 0m 51s Maven dependency ordering for branch
        +1 mvninstall 18m 36s trunk passed
        +1 compile 9m 7s trunk passed
        +1 checkstyle 0m 52s trunk passed
        +1 mvnsite 1m 46s trunk passed
        +1 mvneclipse 0m 59s trunk passed
        +1 findbugs 3m 19s trunk passed
        +1 javadoc 1m 27s trunk passed
        0 mvndep 0m 11s Maven dependency ordering for patch
        +1 mvninstall 1m 22s the patch passed
        +1 compile 6m 34s the patch passed
        +1 javac 6m 34s the patch passed
        -0 checkstyle 0m 58s hadoop-yarn-project/hadoop-yarn: The patch generated 7 new + 393 unchanged - 3 fixed = 400 total (was 396)
        +1 mvnsite 2m 3s the patch passed
        +1 mvneclipse 0m 56s the patch passed
        -1 whitespace 0m 0s The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
        +1 xml 0m 2s The patch has no ill-formed XML file.
        +1 findbugs 4m 17s the patch passed
        +1 javadoc 1m 22s the patch passed
        +1 unit 0m 33s hadoop-yarn-api in the patch passed.
        +1 unit 2m 55s hadoop-yarn-common in the patch passed.
        -1 unit 14m 18s hadoop-yarn-server-nodemanager in the patch failed.
        +1 asflicense 0m 33s The patch does not generate ASF License warnings.
        82m 18s



        Reason Tests
        Failed junit tests hadoop.yarn.server.nodemanager.containermanager.localizer.TestContainerLocalizer



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-574
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12849264/YARN-574.05.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
        uname Linux e78dd57b4d4d 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 5a56520
        Default Java 1.8.0_121
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14751/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        whitespace https://builds.apache.org/job/PreCommit-YARN-Build/14751/artifact/patchprocess/whitespace-eol.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14751/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14751/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14751/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Ajith S added a comment - edited

        Jason Lowe, thanks for the clarification. Attaching a patch with the suggested approach of controlling multiple heartbeats using an atomic counter. Please review.
        Jason Lowe added a comment -

        No, that's still racy. There's a window where the worker thread has dequeued the task (i.e., the queue size is now zero) but has not yet set the flag to indicate it is active. So we can still end up doing quick heartbeats and obtaining more work than we're prepared to handle in parallel.
        Ajith S added a comment -

        Thanks for the input Jason Lowe; I agree with the race condition you mention.
        To simplify the approach, can we instead track the size of the LinkedBlockingQueue passed to the executor and avoid doing the quick heartbeats in case the queue size is greater than zero?
        Naganarasimha G R added a comment -

        Good catch! Thanks for the comment Jason Lowe.
        Jason Lowe added a comment -

        Thanks for picking this up Ajith S. I took a quick look at the patch. It looks OK at a high level, but there is a race condition in how we're dealing with the thread pool. The code assumes that work submitted to the queue will be picked up instantly by an idle thread in the thread pool. If it's not picked up fast enough, we can end up doing one or more super-quick heartbeats and accidentally queue up more work for the thread pool than we have active threads. That could actually make localization slower when there are multiple containers for the same job on the same node, since another container localizer with idle threads cannot work on a resource already handed to this localizer.

        IMHO we can trivially track the outstanding count ourselves. We simply need to increment an AtomicInteger when we submit the work to the executor, then wrap FSDownload in another Callable that decrements the AtomicInteger when FSDownload returns/throws. Then we can track how many resources are either pending or actively being downloaded without getting bitten by race conditions in the executor implementation. Alternatively, the createStatus method already walks the Future objects returned from the executor, and we could calculate how many resources are in progress (i.e., either pending or actively being downloaded) there. Once there are as many in-progress resources as the configured parallelism, we should avoid making quick heartbeats.
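        The wrap-and-count idea above can be sketched as follows, using a generic Callable as a stand-in for the real FSDownload (class and method names are illustrative, not from the patch):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: increment on submit, decrement in a finally block
// so the counter drops whether the wrapped download returns or throws.
public class CountedSubmit {
    private final AtomicInteger inProgress = new AtomicInteger();
    private final ExecutorService exec;

    CountedSubmit(ExecutorService exec) {
        this.exec = exec;
    }

    <T> Future<T> submit(Callable<T> download) {
        inProgress.incrementAndGet();          // pending or downloading
        return exec.submit(() -> {
            try {
                return download.call();        // stand-in for FSDownload.call()
            } finally {
                inProgress.decrementAndGet();  // runs on return or throw
            }
        });
    }

    int inProgress() {
        return inProgress.get();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        CountedSubmit cs = new CountedSubmit(pool);
        Future<Integer> f = cs.submit(() -> 42);
        System.out.println(f.get());        // 42
        System.out.println(cs.inProgress()); // 0 after completion
        pool.shutdown();
    }
}
```

        The heartbeat loop would then skip the quick heartbeat whenever inProgress() reaches the configured parallelism.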
        Naganarasimha G R added a comment -

        Thanks Ajith S for the patch; yes, keeping the thread size at "1" preserves the current behavior.
        Overall the approach looks fine, but I would like Jason Lowe to take another look at the patch as well.
        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 21s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        0 mvndep 0m 10s Maven dependency ordering for branch
        +1 mvninstall 15m 29s trunk passed
        +1 compile 6m 1s trunk passed
        +1 checkstyle 0m 55s trunk passed
        +1 mvnsite 1m 57s trunk passed
        +1 mvneclipse 1m 2s trunk passed
        +1 findbugs 3m 48s trunk passed
        +1 javadoc 1m 37s trunk passed
        0 mvndep 0m 12s Maven dependency ordering for patch
        +1 mvninstall 1m 33s the patch passed
        +1 compile 5m 32s the patch passed
        +1 javac 5m 32s the patch passed
        -0 checkstyle 0m 59s hadoop-yarn-project/hadoop-yarn: The patch generated 7 new + 393 unchanged - 3 fixed = 400 total (was 396)
        +1 mvnsite 1m 57s the patch passed
        +1 mvneclipse 1m 2s the patch passed
        -1 whitespace 0m 0s The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
        +1 xml 0m 2s The patch has no ill-formed XML file.
        +1 findbugs 4m 31s the patch passed
        +1 javadoc 1m 36s the patch passed
        +1 unit 0m 40s hadoop-yarn-api in the patch passed.
        +1 unit 2m 55s hadoop-yarn-common in the patch passed.
        +1 unit 14m 1s hadoop-yarn-server-nodemanager in the patch passed.
        +1 asflicense 0m 33s The patch does not generate ASF License warnings.
        75m 29s



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-574
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12846599/YARN-574.04.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
        uname Linux cc84afd1a80b 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / c18590f
        Default Java 1.8.0_111
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14623/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        whitespace https://builds.apache.org/job/PreCommit-YARN-Build/14623/artifact/patchprocess/whitespace-eol.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14623/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14623/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Ajith S added a comment -

        Thanks Naganarasimha Garla and Varun Saxena for the review comments.
        I have updated the patch as per the review comments.
        Naganarasimha G R added a comment -

        I would also like to get feedback from Jason Lowe, Vinod Kumar Vavilapalli, Devaraj K, and others in the watcher list.
        Naganarasimha G R added a comment -

        Thanks Ajith S for the patch.
        As discussed offline, I think the existing approach of sending only one resource in the HB response is better, since the NM can then retain some control (if required in future) over whether to hand out multiple resources to be localized, one at a time, or even none based on the NM's load. Beyond that, I would add the following points:

        1. I agree with Varun Saxena's point that a fixed 4 threads (core and max pool size) is not ideal. But IMHO I would keep at least 2 as the default max pool size, as users might otherwise not be able to make use of the benefit.
        2. Instead of using Executors.newFixedThreadPool(nThreads, tf), we can use a ThreadPoolExecutor directly so that we can compare ThreadPoolExecutor.getActiveCount() with the max pool size to determine whether to request a heartbeat immediately or to wait for the normal poll interval.
        3. If the NM sends LIVE and no ResourceLocalizationSpecs are shared, there is no need to check the current load on the executor; we can wait for the defined poll period and then do the HB.
        4. TestContainerLocalizer.java, lines 301-306: please add a proper message that can be shown on failure.
        5. The TestYarnConfigurationFields.testCompareConfigurationClassAgainstXml failure is related to the patch.
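        Point 2 above could be sketched as follows; the gate method name is illustrative and the pool sizes are example values (note Jason's later comments point out why queue-based checks like this can be racy):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: build a ThreadPoolExecutor directly (rather than via
// Executors.newFixedThreadPool) so getActiveCount() is available to decide
// whether the localizer should ask for more work right away.
public class HeartbeatGate {
    static boolean shouldHeartbeatImmediately(ThreadPoolExecutor pool) {
        // Quick heartbeat only while some download threads are idle.
        return pool.getActiveCount() < pool.getMaximumPoolSize();
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 2, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        System.out.println(shouldHeartbeatImmediately(pool)); // true: pool is idle
        pool.shutdown();
    }
}
```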
        Hide
        varun_saxena Varun Saxena added a comment -

        Ajith S, thanks for the patch.
        Parallel downloads should speed up the container localization phase.
        Coming to the patch, should the configuration default be 4? Or should we keep it at 1, i.e. the current behavior, and let anyone who wants to use it tune it based on the size and visibility of the resources to be downloaded?
        Localizers are not treated as containers, which means the resources they use are not accounted for, so with a default value of 4 they could all together end up eating up quite a bit of resources on the node.
        Thoughts?

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 17s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        0 mvndep 0m 9s Maven dependency ordering for branch
        +1 mvninstall 7m 10s trunk passed
        +1 compile 6m 48s trunk passed
        +1 checkstyle 0m 54s trunk passed
        +1 mvnsite 1m 16s trunk passed
        +1 mvneclipse 0m 48s trunk passed
        +1 findbugs 2m 8s trunk passed
        +1 javadoc 0m 55s trunk passed
        0 mvndep 0m 10s Maven dependency ordering for patch
        +1 mvninstall 0m 46s the patch passed
        +1 compile 5m 17s the patch passed
        +1 javac 5m 17s the patch passed
        -0 checkstyle 0m 54s hadoop-yarn-project/hadoop-yarn: The patch generated 10 new + 397 unchanged - 1 fixed = 407 total (was 398)
        +1 mvnsite 1m 14s the patch passed
        +1 mvneclipse 0m 43s the patch passed
        -1 whitespace 0m 0s The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
        +1 findbugs 2m 19s the patch passed
        +1 javadoc 0m 54s the patch passed
        -1 unit 0m 35s hadoop-yarn-api in the patch failed.
        -1 unit 15m 57s hadoop-yarn-server-nodemanager in the patch failed.
        +1 asflicense 0m 38s The patch does not generate ASF License warnings.
        58m 22s



        Reason Tests
        Failed junit tests hadoop.yarn.conf.TestYarnConfigurationFields
          hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:e809691
        JIRA Issue YARN-574
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12837742/YARN-574.03.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux f6471cf3e508 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / b970446
        Default Java 1.8.0_101
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/13807/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        whitespace https://builds.apache.org/job/PreCommit-YARN-Build/13807/artifact/patchprocess/whitespace-eol.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/13807/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/13807/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/13807/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/13807/console
        Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        ajithshetty Ajith S added a comment -

        I have rebased and added a testcase for the patch. Please review.

        ajithshetty Ajith S added a comment -

        I will take it over; if you are working on it, please reassign.

        jlowe Jason Lowe added a comment -

        Cancelling the patch as it no longer applies.

        ajithshetty Ajith S added a comment -

        I have a requirement for this. Omkar Vinit Joshi, can I work on this?

        Naganarasimha Naganarasimha G R added a comment -

        Omkar Vinit Joshi, it seems the patch is not getting applied to trunk. Are you planning to work on it?

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        -1 patch 0m 5s YARN-574 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.



        Subsystem Report/Notes
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12738110/YARN-574.2.patch
        JIRA Issue YARN-574
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/12804/console
        Powered by Apache Yetus 0.3.0 http://yetus.apache.org

        This message was automatically generated.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 0s Docker mode activated.
        -1 patch 0m 6s YARN-574 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.



        Subsystem Report/Notes
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12738110/YARN-574.2.patch
        JIRA Issue YARN-574
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/12803/console
        Powered by Apache Yetus 0.3.0 http://yetus.apache.org

        This message was automatically generated.

        Naganarasimha Naganarasimha G R added a comment -

        Omkar Vinit Joshi, the overall patch seems fine except for the logic in ContainerLocalizer.localizeFiles: we could optimize it to receive all the resources to be localized in one shot (we already receive them as a List<ResourceLocalizationSpec>, so we just need to confirm the same in ResourceLocalizationService) and avoid polling/heartbeating the server frequently based only on the number of threads in the ContainerLocalizer.
        Retriggering the build to see whether the old patch works!
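The batching idea above can be sketched as follows: take the whole list of specs from a single heartbeat response and submit every download to the pool before polling again. This is an illustrative stand-in only; ResourceSpec, localizeAll, and download below are simplified substitutes for the actual YARN classes and are not from the patch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BatchLocalizeSketch {
  // Simplified stand-in for YARN's ResourceLocalizationSpec.
  static class ResourceSpec {
    final String uri;
    ResourceSpec(String uri) { this.uri = uri; }
  }

  // Submit every spec from one heartbeat response to the download pool,
  // instead of heartbeating once per resource.
  static List<Future<String>> localizeAll(List<ResourceSpec> specs,
                                          ExecutorService pool) {
    List<Future<String>> pending = new ArrayList<>();
    for (ResourceSpec spec : specs) {
      pending.add(pool.submit(() -> download(spec)));
    }
    return pending;
  }

  // Placeholder download; the real localizer copies the file to local disk.
  static String download(ResourceSpec spec) {
    return "localized:" + spec.uri;
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    List<ResourceSpec> specs = new ArrayList<>();
    specs.add(new ResourceSpec("hdfs:///jars/app.jar"));
    specs.add(new ResourceSpec("hdfs:///conf/site.xml"));
    for (Future<String> f : localizeAll(specs, pool)) {
      System.out.println(f.get());
    }
    pool.shutdown();
  }
}
```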

        hadoopqa Hadoop QA added a comment -



        -1 overall



        Vote Subsystem Runtime Comment
        -1 pre-patch 16m 21s Findbugs (version ) appears to be broken on trunk.
        +1 @author 0m 0s The patch does not contain any @author tags.
        -1 tests included 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
        +1 javac 7m 36s There were no new javac warning messages.
        +1 javadoc 9m 38s There were no new javadoc warning messages.
        +1 release audit 0m 22s The applied patch does not increase the total number of release audit warnings.
        -1 checkstyle 1m 11s The applied patch generated 1 new checkstyle issues (total was 213, now 213).
        +1 whitespace 0m 0s The patch has no lines that end in whitespace.
        +1 install 1m 33s mvn install still works.
        +1 eclipse:eclipse 0m 33s The patch built with eclipse:eclipse.
        +1 findbugs 2m 46s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
        +1 yarn tests 0m 22s Tests passed in hadoop-yarn-api.
        +1 yarn tests 6m 3s Tests passed in hadoop-yarn-server-nodemanager.
            46m 38s  



        Subsystem Report/Notes
        Patch URL http://issues.apache.org/jira/secure/attachment/12738110/YARN-574.2.patch
        Optional Tests javadoc javac unit findbugs checkstyle
        git revision trunk / 71de367
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/8206/artifact/patchprocess/diffcheckstylehadoop-yarn-api.txt
        hadoop-yarn-api test log https://builds.apache.org/job/PreCommit-YARN-Build/8206/artifact/patchprocess/testrun_hadoop-yarn-api.txt
        hadoop-yarn-server-nodemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/8206/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/8206/testReport/
        Java 1.7.0_55
        uname Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/8206/console

        This message was automatically generated.

        zshao Zheng Shao added a comment -

        Fixed syntax error.

        hadoopqa Hadoop QA added a comment -



        -1 overall



        Vote Subsystem Runtime Comment
        0 pre-patch 17m 17s Pre-patch trunk compilation is healthy.
        +1 @author 0m 0s The patch does not contain any @author tags.
        -1 tests included 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
        +1 javac 7m 37s There were no new javac warning messages.
        +1 javadoc 9m 37s There were no new javadoc warning messages.
        +1 release audit 0m 22s The applied patch does not increase the total number of release audit warnings.
        -1 checkstyle 1m 18s The applied patch generated 2 new checkstyle issues (total was 213, now 214).
        +1 whitespace 0m 0s The patch has no lines that end in whitespace.
        +1 install 1m 36s mvn install still works.
        +1 eclipse:eclipse 0m 33s The patch built with eclipse:eclipse.
        +1 findbugs 2m 44s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
        +1 yarn tests 0m 26s Tests passed in hadoop-yarn-api.
        +1 yarn tests 6m 4s Tests passed in hadoop-yarn-server-nodemanager.
            47m 49s  



        Subsystem Report/Notes
        Patch URL http://issues.apache.org/jira/secure/attachment/12737994/YARN-574.1.patch
        Optional Tests javadoc javac unit findbugs checkstyle
        git revision trunk / 7588585
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/8199/artifact/patchprocess/diffcheckstylehadoop-yarn-api.txt
        hadoop-yarn-api test log https://builds.apache.org/job/PreCommit-YARN-Build/8199/artifact/patchprocess/testrun_hadoop-yarn-api.txt
        hadoop-yarn-server-nodemanager test log https://builds.apache.org/job/PreCommit-YARN-Build/8199/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/8199/testReport/
        Java 1.7.0_55
        uname Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/8199/console

        This message was automatically generated.


          People

          • Assignee:
            ajithshetty Ajith S
            Reporter:
            ojoshi Omkar Vinit Joshi
          • Votes:
            1 Vote for this issue
            Watchers:
            20 Start watching this issue

            Dates

            • Created:
              Updated:

              Development