
YARN-5864: YARN Capacity Scheduler - Queue Priorities

    Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.9.0, 3.0.0-alpha4
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      Currently, the Capacity Scheduler, at every parent-queue level, uses the relative used-capacities of the child queues to decide which queue gets the next available resource first.

      For example,

      • Q1 & Q2 are child queues under queueA
      • Q1 has 20% of configured capacity, 5% of used-capacity and
      • Q2 has 80% of configured capacity, 8% of used-capacity.

      In this situation, the relative used-capacities are calculated as follows:

      • Relative used-capacity of Q1 is 5/20 = 0.25
      • Relative used-capacity of Q2 is 8/80 = 0.10

      In the above example, per today's Capacity Scheduler algorithm, Q2 is selected by the scheduler first to receive the next available resource, as the small sketch below illustrates.
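      A minimal sketch of the ordering described above, using illustrative stand-in types (QueueSnapshot is not an actual CapacityScheduler class):

```java
// Illustrative stand-in for a child queue's capacity numbers; not an
// actual CapacityScheduler class.
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

class QueueSnapshot {
    final String name;
    final float configuredCapacity; // fraction of the parent, e.g. 0.20f
    final float usedCapacity;       // fraction of the parent, e.g. 0.05f

    QueueSnapshot(String name, float configuredCapacity, float usedCapacity) {
        this.name = name;
        this.configuredCapacity = configuredCapacity;
        this.usedCapacity = usedCapacity;
    }

    // Relative used-capacity = used / configured.
    float relativeUsedCapacity() {
        return usedCapacity / configuredCapacity;
    }

    public static void main(String[] args) {
        List<QueueSnapshot> children = Arrays.asList(
            new QueueSnapshot("Q1", 0.20f, 0.05f),  // 5/20 = 0.25
            new QueueSnapshot("Q2", 0.80f, 0.08f)); // 8/80 = 0.10
        // Least relative used-capacity goes first: Q2 is offered the resource.
        children.sort(Comparator.comparingDouble(QueueSnapshot::relativeUsedCapacity));
        System.out.println(children.get(0).name); // prints Q2
    }
}
```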

      Simply ordering queues according to relative used-capacities sometimes causes problems, because scarce resources can end up assigned to less-important apps first.

      1. Latency sensitivity: This can be a problem for latency-sensitive applications, where waiting until the ‘other’ queue gets full is not going to cut it. The delay in scheduling directly shows up in the response times of these applications.
      2. Resource fragmentation for large-container apps: Today’s algorithm also causes issues for applications that need very large containers. It is possible that existing queues are all within their resource guarantees, but their current allocation distribution on each node is such that an application which needs a large container simply cannot fit on those nodes (see the numeric sketch after this list).
      3. Services: The above problem (2) gets worse with long-running applications. With short-running apps, previous containers eventually finish and make enough space for the apps with large containers. But with long-running services in the cluster, the large-container application may never get resources on any node, even though its demands are not yet met.
      4. Long-running services are sometimes pickier w.r.t. placement than normal batch apps. For example, a long-running service in a separate queue (say queue=service) may, during peak hours, want to launch instances on 50% of the cluster nodes, with one large container per node, say 200G of memory each.
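      To make problem (2) concrete, here is a hypothetical numeric illustration (all numbers invented for this sketch): the cluster has plenty of aggregate headroom, yet no single node can host one large container.

```java
// Hypothetical numbers only: 10 nodes with 60 GB free each leave 600 GB of
// aggregate headroom, yet a single 200 GB container fits on no node.
public class FragmentationSketch {
    public static void main(String[] args) {
        int nodes = 10;
        int freePerNodeGB = 60;     // free memory on every individual node
        int largeContainerGB = 200; // what the service asks for per container

        int clusterFreeGB = nodes * freePerNodeGB;                  // 600 GB total
        boolean fitsOnSomeNode = freePerNodeGB >= largeContainerGB; // false

        System.out.printf("cluster free = %d GB, fits on some node = %b%n",
            clusterFreeGB, fitsOnSomeNode);
    }
}
```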
      Attachments

      1. YARN-5864.001.patch
        125 kB
        Wangda Tan
      2. YARN-5864.002.patch
        156 kB
        Wangda Tan
      3. YARN-5864.003.patch
        165 kB
        Wangda Tan
      4. YARN-5864.004.patch
        175 kB
        Wangda Tan
      5. YARN-5864.005.patch
        176 kB
        Wangda Tan
      6. YARN-5864.006.patch
        176 kB
        Wangda Tan
      7. YARN-5864.007.patch
        176 kB
        Wangda Tan
      8. YARN-5864.branch-2.007_2.patch
        176 kB
        Wangda Tan
      9. YARN-5864.branch-2.007.patch
        176 kB
        Wangda Tan
      10. YARN-5864.branch-2.008.patch
        175 kB
        Wangda Tan
      11. YARN-5864.poc-0.patch
        19 kB
        Wangda Tan
      12. YARN-5864-preemption-performance-report.pdf
        200 kB
        Wangda Tan
      13. YARN-5864-usage-doc.html
        16 kB
        Wangda Tan
      14. YARN-CapacityScheduler-Queue-Priorities-design-v1.pdf
        178 kB
        Wangda Tan

        Activity

        Wangda Tan added a comment -

        The problem in the description is hard, because it's hard to clearly explain why a queue would be preempted even while it is within its limit.

        So I'm proposing to solve one use case only: in some of our customers' configurations, we have separate queues for long running services, for example an LLAP queue for LLAP services. LLAP services scale up and down depending on the workload, and they ask for containers with lots of resources to make sure the hosts running LLAP daemons are not used by other applications.

        And we want to allocate containers for such long running services (LRS) sooner when they need to scale up.

        There's one quick approach in my mind to handle the use case above (a rough sketch follows the list):

        • Add a new preemption selector (which ensures this feature can be disabled by configuration).
        • Add a white-list of queues for the new selector: only queues on the white-list can preempt from other queues.
        • When a reserved container from a white-listed queue has been pending beyond a configured timeout, we look at the node holding the reservation and select containers from non-whitelisted queues to preempt.
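        A rough sketch of the selector logic just described, using hypothetical stand-in types; none of the names below come from the actual patch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical stand-in types; none of these mirror the actual patch.
class ContainerInfo { String queue; }
class NodeInfo { List<ContainerInfo> runningContainers; }
class ReservationInfo { String queue; long createdTimeMs; NodeInfo node; }

class WhitelistPreemptionSelector {
    private final Set<String> whitelistedQueues; // only these may trigger preemption
    private final long reservationTimeoutMs;

    WhitelistPreemptionSelector(Set<String> whitelistedQueues, long reservationTimeoutMs) {
        this.whitelistedQueues = whitelistedQueues;
        this.reservationTimeoutMs = reservationTimeoutMs;
    }

    /** Containers to preempt on the node holding a timed-out reservation. */
    List<ContainerInfo> selectCandidates(ReservationInfo reservation, long now) {
        List<ContainerInfo> candidates = new ArrayList<>();
        // 1. Only a white-listed queue may preempt from others.
        if (!whitelistedQueues.contains(reservation.queue)) {
            return candidates;
        }
        // 2. The reservation must have been pending beyond the configured timeout.
        if (now - reservation.createdTimeMs < reservationTimeoutMs) {
            return candidates;
        }
        // 3. Only containers of non-whitelisted queues on that node are candidates.
        for (ContainerInfo c : reservation.node.runningContainers) {
            if (!whitelistedQueues.contains(c.queue)) {
                candidates.add(c);
            }
        }
        return candidates;
    }
}
```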

        Thoughts and suggestions? Carlo Curino, Eric Payne, Sunil G.

        Attached patch for review as well.

        Carlo Curino added a comment -

        Tan, Wangda I understand the need for this feature, but my general concern is that the collection of features in CS has very poorly defined interactions, and worse, they violate each other's invariants left, right and center. For example, non-preemptable queues, when in use, break the fair over-capacity sharing semantics. Similarly, locality and node labels have heavy and not fully clear redundancies, and user-limits / app priorities / request priorities / container types / etc. further complicate this space. The mental model associated with the system is growing disproportionately for both users and operators, and this is a bad sign.

        The new feature you propose seems to push us further down this slippery slope, where the semantics of what a tenant gets for his/her money are very unclear. Until now, the one invariant we had not yet violated was: if I paid for capacity C, and I am within capacity C, my containers will not be disturbed (regardless of other tenants' desires). Now a queue may be preempted even within its capacity, to accommodate some other queue's large containers.

        This opens up many abuses; one that comes to mind:

        1. I request a large container on node N1,
        2. preemption kicks out some other tenant,
        3. I get the container on N1,
        4. I reduce the size of the container on N1 to a normal-sized container,
        5. (I repeat till I grab all the nodes I want).
          Through this little trick a nasty user can simply bully his way onto the nodes he/she wants, regardless of the container size he really needs and his/her capacity standing w.r.t. other tenants. I am sure that if we squint hard enough there is a combination of configurations that can prevent this, but the general concern remains.

        Bottom line: I don't want to stand in the way of progress and important features, but I don't see this ending well.

        I see two paths forward:

        1. a deep refactoring to make the code manageable, plus an analysis that produces crisp semantics for each of the N! combinations of our features; this should ideally lead to cutting all the "nice on the box" features that are rarely/never used, or have undefined semantics.
        2. Keep CS for legacy, and create a new <constraint language + solver>-based scheduler for which we can prove clear semantics, and which allows users/operators to have a simple mental model of what the system is supposed to deliver.

        (2) would be my favorite option, if I had a choice.

        Tan, Wangda added a comment -

        Thanks Carlo Curino for sharing these insightful suggestions.

        The problem you mention is totally real: we put lots of effort into adding features for various resource constraints (such as limits, node partitions, priorities, etc.) but paid less attention to making the semantics simple and consistent.

        I also agree that we need to spend some time thinking about what semantics the YARN scheduler should have. For example, the minimum guarantee of CS is that a queue should get at least its configured capacity, but a picky app can make an under-utilized queue wait forever for resources. And, as you mentioned above, a non-preemptable queue can invalidate configured capacity as well.

        However, I would argue that the scheduler cannot run perfectly without violating any of the constraints. Scheduling is not just a set of formulas we define and hand to a solver to optimize; it involves lots of human emotion and preference. For example, a user may not understand, or be glad to accept, why a picky request cannot be allocated even though the queue/cluster has available capacity. And it may not be acceptable in a production cluster that a long running service for realtime queries cannot be launched because we don't want to kill some less-important batch jobs. My point is: if we have these rules defined in the docs, and the user can tell what happened from the UI/logs, we can add them.

        To improve this, I think your suggestion (1) will be more helpful and achievable in the short term. We can definitely remove some parameters; for example, the existing user-limit definition is not good enough, and user-limit-factor can permanently prevent a queue from fully utilizing its capacity. And we can better define these semantics in the docs and UI.

        (2) looks beautiful, but it may not solve the root problem directly: the first priority is to make our users happy to accept the behavior, not to solve it beautifully in mathematics. For example, for the problem I put in the description of this JIRA, I don't think (2) can get the allocation without harming other applications. And from an implementation perspective, I'm not sure how a solver-based solution can handle both fast allocation (we want to allocate within milliseconds for interactive queries) and good placement (such as gang scheduling with other constraints like anti-affinity). It seems to me that with option (2) we would sacrifice low latency to get better placement quality.

        This opens up many abuses, one that comes to mind ...

        Actually, this feature will only be used in a pretty controlled environment: important long running services run in a separate queue, and the admin/user agrees that it can preempt other batch jobs to get new containers. ACLs will be set to keep normal users from running inside these queues; all apps running in the queue should be trusted apps such as YARN native services (Slider), Spark, etc. And we can also make sure these apps try their best to respect other apps.
        Please advise if you think we can improve the semantics of this feature.

        Thanks,

        Carlo Curino added a comment -

        Tan, Wangda I think we are on the same page on the problem side, and I agree that the scheduling invariants (which were once hard constraints) will eventually look more like soft constraints, which we aim to meet/maximize but are OK to compromise on in some cases.

        Understanding how to trade one for the other, or how to make decisions that maximize the number/amount of met constraints, is the hard problem. To this purpose I would argue that (2) is structurally better positioned to capture all the tradeoffs in a compact and easy-to-understand way than any combination of heuristics. That said, how to design (2) in a scalable/fast way is an open problem (an interesting direction recently appeared in OSDI 2016, http://www.firmament.io/; while it is not enough, it has some good ideas we could leverage). So I am proposing it more as a north star than as a short-term proposal for how to tackle this JIRA (or the scheduler issues in general). On the other hand, (1) is an ongoing activity we can start right away, and we should do it regardless of whether we eventually manage to do something like (2) or not.

        Regarding abuses/scope of the feature: I am certain that the initial scenarios you are designing for have all the right properties to be safe/reasonable/trusted, but once the feature is out there, people will start using it in the most baroque ways, and some of the issues I alluded to might come up. Having very crisply defined semantics, configuration-validation mechanics (that prevent the worst configuration mistakes), and very tight unit tests are probably our best line of defense.

        Wangda Tan added a comment -

        Thanks Carlo Curino for sharing the Firmament paper. I just read it; it provides a lot of insightful ideas. I believe it can work pretty well for a cluster with a homogeneous workload, but it may not solve the mixed-workload issues, as the paper itself states:

        Firmament shows that a single scheduler can attain scalability, but its MCMF optimization does not trivially admit multiple independent schedulers.

        So in my mind, for YARN we need a Borg-like architecture, so that different kinds of workloads can be scheduled using different pluggable scheduling policies and scorers. Firmament could be one of these scheduling policies.

        I agree with your comment that we should give this feature better semantics. I will think it through again and keep you posted.

        Wangda Tan added a comment -

        Discussed offline with Vinod Kumar Vavilapalli.

        We can give this feature better semantics by adding a queue-priority property. (Credit to Vinod Kumar Vavilapalli for the idea.)

        The existing scheduler sorts queues based on (used-capacity / configured-capacity). But in some cases we have apps/services that need to get resources first. For example, we allocate 85% to a production queue and 15% to a test queue. When the production queue is underutilized, we want the scheduler to give resources to the production queue first, regardless of the test queue's utilization.

        A rough plan: we assign a priority to each queue under the same parent. Each time, the scheduler picks the underutilized queue with the highest priority; if there is no underutilized queue, the scheduler picks the queue with the lowest utilization.
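        A minimal sketch of that ordering rule as a comparator, with hypothetical field names (the policy that eventually shipped, PriorityUtilizationQueueOrderingPolicy, differs in detail):

```java
import java.util.Comparator;

// Illustrative sketch of the ordering rule above; names are hypothetical.
class PriorityUtilizationOrder {
    static class QueueInfo {
        final String name;
        final int priority;       // higher value = higher priority
        final float usedCapacity; // used / configured; < 1.0 means under-utilized

        QueueInfo(String name, int priority, float usedCapacity) {
            this.name = name;
            this.priority = priority;
            this.usedCapacity = usedCapacity;
        }
    }

    static final Comparator<QueueInfo> ORDER = (a, b) -> {
        boolean aUnder = a.usedCapacity < 1.0f;
        boolean bUnder = b.usedCapacity < 1.0f;
        if (aUnder != bUnder) {
            return aUnder ? -1 : 1; // an under-utilized queue always goes first
        }
        if (aUnder) {
            // Both under-utilized: the highest-priority queue is scheduled first.
            return Integer.compare(b.priority, a.priority);
        }
        // No under-utilized queue in this pair: lowest utilization first.
        return Float.compare(a.usedCapacity, b.usedCapacity);
    };
}
```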

        And when we do preemption, if a queue with higher priority has some special resource requests, such as very large memory, hard locality, placement constraints, etc., the scheduler will do relatively conservative preemption from queues with lower priority, regardless of their utilization.

        That is just a rough idea; Carlo Curino, please let us know your comments. I can formalize the design once we generally agree on the approach.
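        For illustration, wiring up the 85/15 example above might look like the following. The property names follow the pattern later documented for this feature, but treat them as assumptions and verify them against your release:

```java
import org.apache.hadoop.conf.Configuration;

// Hedged sketch of configuring the 85/15 example; the property names follow
// the pattern later documented for this feature but should be verified
// against your release before use.
public class QueuePriorityConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // The parent queue opts into priority-aware ordering.
        conf.set("yarn.scheduler.capacity.root.ordering-policy",
            "priority-utilization");
        // Capacities: 85% production, 15% test.
        conf.set("yarn.scheduler.capacity.root.production.capacity", "85");
        conf.set("yarn.scheduler.capacity.root.test.capacity", "15");
        // Higher number = higher priority: production is served first while it
        // is under-utilized, regardless of test's utilization.
        conf.set("yarn.scheduler.capacity.root.production.priority", "2");
        conf.set("yarn.scheduler.capacity.root.test.priority", "1");
    }
}
```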

        Carlo Curino added a comment -

        Tan, Wangda I like the direction of specifying more clearly what happens. I think working on a design doc that spells this out would be very valuable; I am happy to review and brainstorm with you if you think that is useful. (But FYI: I am on parental leave, and traveling abroad till mid-Jan.)

        In writing the document, I think you should in particular address the semantics from all points of view; e.g., which guarantees do I get as a user of any of the queues (not just the one we are preempting in favor of)? It is clear that if I am running over-capacity I can be preempted, but what happens if I am (safely?) within my capacity? (This is related to the "abuses" I was describing before, e.g., one in which I ask for massive containers on the nodes I want, and then resize them down after you have killed anyone in my way.)

        Looking further ahead: ideally, the document you are starting in order to capture the semantics of this feature can be expanded to slowly cover all "tunables" of the scheduler, and to explore the many complex interactions among features and the semantics we can derive from them (I bet we could get rid of some redundancies). This could become part of the YARN documentation. Even nicer would be to codify this with SLS-driven tests (so that no future feature can mess up the semantics you are capturing without us noticing).

        Wangda Tan added a comment -

        Thanks Carlo Curino for the quick response!

        All great points. I will cover them in the doc, and will cover at least the "tunables" related to this feature.

        Wangda Tan added a comment -

        The originally proposed solution for fragmented clusters doesn't have clear semantics and conflicts with some existing features/assumptions.

        So I worked with Vinod Kumar Vavilapalli to propose a new solution: add queue priorities, so that both allocation and preemption can benefit. We believe this has better semantics as well.

        Updated the title/description and uploaded the v1 design doc.

        Please feel free to let us know your comments. Thanks to Carlo Curino for the feedback.

        Wangda Tan added a comment -

        + Jason Lowe, Eric Payne, Sunil G.

        Since this is related to preemption, could you also take a look and share your thoughts?

        Thanks

        Naganarasimha G R added a comment -

        Thanks Tan, Wangda, this seems to be a useful proposal to identify the critical queue at each level of the hierarchy. But I was wondering: instead of ordering queues by a fixed, priority-based policy, could we introduce a queue ordering policy interface, with the priority-based ordering as one of its implementations, so that if required in the future we have the flexibility to add other implementations (the way the Fair Scheduler supports them)?

        Wangda Tan added a comment -

        Naganarasimha G R,

        What I plan to do is add an interface for the queue ordering policy, but I don't plan to load implementations dynamically from configs. The reason is, once we change the ordering of queues, we need to update the ordering of preemption as well (as I described in the JIRA). This cannot be done automatically for now, so to make sure we have compatible preemption/allocation logic, I prefer to have only a few supported policies.
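        For illustration, such an interface might take the following hypothetical shape (the actual interface in the patch may differ):

```java
import java.util.Iterator;
import java.util.List;

// Hypothetical shape only; the interface in the actual patch may differ.
interface QueueOrderingPolicy<Q> {
    /** Supply the child queues this policy orders. */
    void setQueues(List<Q> queues);

    /** Child queues, in the order the scheduler should offer resources. */
    Iterator<Q> getAssignmentIterator();
}
```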

        Wangda Tan added a comment -

        Carlo Curino, Jason Lowe, Eric Payne, Sunil G:

        I plan to start working on the POC patch next week; all feedback is welcome!

        Thanks!

        Eric Payne added a comment -

        Thanks Wangda Tan for the design doc. It makes sense, and I just have one comment:

        • Phase II: If the first phase doesn't yield enough resources, we proceed to the second phase, where we look at under-utilized queues too.
          • Sort queues as in the first phase, and continue reclamation from the under-utilized queues.

        My understanding is that containers on under-utilized queues won't be preempted unless a higher-priority queue is asking. Can you please clarify that in this section?

        Wangda Tan added a comment -

        Thanks Eric Payne for reviewing the design doc.

        My understanding is that containers on under-utilized queues won't be preempted unless a higher-priority queue is asking.

        That is true, but it is not all that phases I/II mean.

        Here's an example of Phase I/II:

        Queues A/B/C/D/E have priorities A > B = C > D > E.

        Assume A is under-utilized and has a pending ask, B/C are over-utilized, and D/E are under-utilized without pending asks.

        To satisfy the request of A (a code sketch follows the list):

        • In phase I, we first try to preempt from B/C since they're over-utilized (even though they have higher priority compared to D/E); if the resources reclaimable from B/C are not enough, or the locality doesn't match ...
        • In phase II, we continue to preempt resources from D/E.
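        A compact sketch of that two-phase walk, with hypothetical types (the real preemption policy is considerably more involved):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the two-phase reclamation described above.
class TwoPhasePreemption {
    static class Queue {
        final String name;
        final int priority;
        final boolean overUtilized;

        Queue(String name, int priority, boolean overUtilized) {
            this.name = name;
            this.priority = priority;
            this.overUtilized = overUtilized;
        }
    }

    /** Queues to reclaim from, in order, on behalf of a higher-priority asker. */
    static List<Queue> reclamationOrder(List<Queue> others, Queue asker) {
        List<Queue> order = new ArrayList<>();
        // Phase I: over-utilized queues first (B/C in the example), even if
        // their priority is higher than the remaining queues'.
        for (Queue q : others) {
            if (q.overUtilized) {
                order.add(q);
            }
        }
        // Phase II: only if phase I cannot satisfy the ask, fall back to
        // under-utilized queues with lower priority than the asker (D/E).
        for (Queue q : others) {
            if (!q.overUtilized && q.priority < asker.priority) {
                order.add(q);
            }
        }
        return order;
    }
}
```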

        Hope this answers your question.

        Wangda Tan added a comment -

        Attached the 001 patch for review; I will add more comprehensive test cases in subsequent patches. I think all functionality described in the design doc is implemented.

        Please feel free to let me know your thoughts!

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 13s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 16 new or modified test files.
        0 mvndep 0m 12s Maven dependency ordering for branch
        +1 mvninstall 12m 28s trunk passed
        +1 compile 4m 55s trunk passed
        +1 checkstyle 1m 3s trunk passed
        +1 mvnsite 3m 26s trunk passed
        +1 mvneclipse 0m 52s trunk passed
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        +1 findbugs 1m 13s trunk passed
        +1 javadoc 2m 6s trunk passed
        0 mvndep 0m 9s Maven dependency ordering for patch
        +1 mvninstall 3m 2s the patch passed
        +1 compile 4m 46s the patch passed
        +1 javac 4m 46s the patch passed
        -0 checkstyle 1m 8s hadoop-yarn-project/hadoop-yarn: The patch generated 147 new + 1573 unchanged - 21 fixed = 1720 total (was 1594)
        +1 mvnsite 3m 29s the patch passed
        +1 mvneclipse 0m 44s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 2s The patch has no ill-formed XML file.
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        -1 findbugs 1m 15s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
        -1 javadoc 1m 23s hadoop-yarn in the patch failed.
        -1 javadoc 0m 22s hadoop-yarn-server-resourcemanager in the patch failed.
        -1 unit 21m 53s hadoop-yarn in the patch failed.
        -1 unit 39m 38s hadoop-yarn-server-resourcemanager in the patch failed.
        -1 asflicense 0m 30s The patch generated 2 ASF License warnings.
        112m 6s



        Reason Tests
        FindBugs module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
          Inconsistent synchronization of org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.policy.PriorityUtilizationQueueOrderingPolicy.queues; locked 50% of time. Unsynchronized access at PriorityUtilizationQueueOrderingPolicy.java:[line 162]
        Failed junit tests hadoop.yarn.server.resourcemanager.TestRMRestart
          hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities
          hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
          hadoop.yarn.server.timeline.webapp.TestTimelineWebServices
          hadoop.yarn.server.resourcemanager.TestRMRestart
          hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities
          hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5864
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12845936/YARN-5864.001.patch
        Optional Tests asflicense findbugs xml compile javac javadoc mvninstall mvnsite unit checkstyle
        uname Linux e1a3748a1856 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 4a659ff
        Default Java 1.8.0_111
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14582/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        findbugs https://builds.apache.org/job/PreCommit-YARN-Build/14582/artifact/patchprocess/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/14582/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/14582/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14582/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14582/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14582/testReport/
        asflicense https://builds.apache.org/job/PreCommit-YARN-Build/14582/artifact/patchprocess/patch-asflicense-problems.txt
        modules C: hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14582/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Wangda Tan added a comment -

        Attached ver.2 patch, which handles the unit test failures and the javadoc/findbugs warnings.

        The biggest change is added logic to move reservations around (for example, when it is not possible to preempt containers to allocate a reserved container).

        See TestCapacitySchedulerSurgicalPreemption#testPriorityPreemptionRequiresMoveReservation as an example.

        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 14s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 16 new or modified test files.
        0 mvndep 0m 45s Maven dependency ordering for branch
        +1 mvninstall 13m 47s trunk passed
        +1 compile 5m 27s trunk passed
        +1 checkstyle 1m 5s trunk passed
        +1 mvnsite 3m 36s trunk passed
        +1 mvneclipse 0m 52s trunk passed
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        +1 findbugs 1m 5s trunk passed
        +1 javadoc 2m 0s trunk passed
        0 mvndep 0m 10s Maven dependency ordering for patch
        +1 mvninstall 3m 23s the patch passed
        +1 compile 5m 26s the patch passed
        +1 javac 5m 26s the patch passed
        -0 checkstyle 1m 15s hadoop-yarn-project/hadoop-yarn: The patch generated 156 new + 1648 unchanged - 20 fixed = 1804 total (was 1668)
        +1 mvnsite 4m 4s the patch passed
        +1 mvneclipse 1m 4s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 1s The patch has no ill-formed XML file.
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        +1 findbugs 1m 32s the patch passed
        -1 javadoc 1m 51s hadoop-yarn-project_hadoop-yarn generated 3 new + 6465 unchanged - 0 fixed = 6468 total (was 6465)
        -1 javadoc 0m 28s hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 3 new + 913 unchanged - 0 fixed = 916 total (was 913)
        -1 unit 24m 22s hadoop-yarn in the patch failed.
        -1 unit 43m 31s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 asflicense 0m 36s The patch does not generate ASF License warnings.
        124m 7s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokens
          hadoop.yarn.server.timeline.webapp.TestTimelineWebServices
          hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokens



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5864
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12846127/YARN-5864.002.patch
        Optional Tests asflicense findbugs xml compile javac javadoc mvninstall mvnsite unit checkstyle
        uname Linux 81fc4b68633b 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 71a4acf
        Default Java 1.8.0_111
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14596/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/14596/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/14596/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14596/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14596/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14596/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14596/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        Wangda Tan added a comment -

        Updated ver.3 patch; changes:

        • Only preempt for unsatisfied queues.
        • Updated the javadocs of CapacitySchedulerConfiguration and optimized the options.
        Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 19s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 16 new or modified test files.
        0 mvndep 0m 18s Maven dependency ordering for branch
        +1 mvninstall 14m 11s trunk passed
        +1 compile 5m 38s trunk passed
        +1 checkstyle 1m 8s trunk passed
        +1 mvnsite 4m 9s trunk passed
        +1 mvneclipse 0m 58s trunk passed
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        +1 findbugs 1m 17s trunk passed
        +1 javadoc 2m 24s trunk passed
        0 mvndep 0m 12s Maven dependency ordering for patch
        +1 mvninstall 3m 44s the patch passed
        +1 compile 5m 56s the patch passed
        +1 javac 5m 56s the patch passed
        -0 checkstyle 1m 17s hadoop-yarn-project/hadoop-yarn: The patch generated 157 new + 1651 unchanged - 20 fixed = 1808 total (was 1671)
        +1 mvnsite 4m 22s the patch passed
        +1 mvneclipse 1m 1s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 2s The patch has no ill-formed XML file.
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        +1 findbugs 1m 32s the patch passed
        -1 javadoc 1m 35s hadoop-yarn in the patch failed.
        -1 javadoc 0m 28s hadoop-yarn-server-resourcemanager in the patch failed.
        -1 unit 23m 36s hadoop-yarn in the patch failed.
        -1 unit 43m 41s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 asflicense 0m 31s The patch does not generate ASF License warnings.
        125m 46s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService
          hadoop.yarn.server.resourcemanager.TestRMRestart
          hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSurgicalPreemption
          hadoop.yarn.server.timeline.webapp.TestTimelineWebServices
          hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService
          hadoop.yarn.server.resourcemanager.TestRMRestart
          hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSurgicalPreemption



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5864
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12846720/YARN-5864.003.patch
        Optional Tests asflicense findbugs xml compile javac javadoc mvninstall mvnsite unit checkstyle
        uname Linux ab49673f8bee 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / e692316
        Default Java 1.8.0_111
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14633/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/14633/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/14633/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14633/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14633/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14633/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14633/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        sunilg Sunil G added a comment -

Thanks Wangda Tan for the detailed proposal and the patch.

I think this will really help to cut many of the corner cases that are present in the scheduler today. The overall approach looks fine.

A few doubts on the document as well as the code:

        PriorityUtilizationQueueOrderingPolicy
        1.

        service queue has 66.7% configured resource (200G), each container needs 90G memory; Batch queue has 33.3% configured resource (100G), each container needs 20G memory.

One doubt here. If the service queue has used+reserved more than 66.7%, I think we will not be considering the higher-priority queue here, right?

2. For the normal utilization policy we also use PriorityUtilizationQueueOrderingPolicy, with respectPriority=false. Maybe we can pick a better name, since we handle both priority and utilization ordering in the same policy impl. Or we could pull out an AbstractUtilizationQueueOrderingPolicy that supports normal resource utilization, with an extended priority policy doing the priority handling.

3. Does PriorityUtilizationQueueOrderingPolicy#getAssignmentIterator need a read lock for the queues?

        QueuePriorityContainerCandidateSelector
4. Could we use Guava libs in Hadoop (ref: HashBasedTable)?
5. intializePriorityDigraph: since queue priority is set only at initialize or reinitialize time, I think we are recalculating and creating the PriorityDigraph every time. It is not specifically a preemption entity; it is a scheduler entity as well. Could we create and cache it in CS so that such recomputation can be avoided?
6. intializePriorityDigraph: preemptionContext.getLeafQueueNames() returns queue names in random order. For better performance, I think we need these names in BFS order, starting from one side and moving to the other. Will that help?
7. selectCandidates: the exit condition could be checked at the beginning, for the cases where queue priorities are not configured or the digraph has no queues with reserved containers.
        8.

        Collections.sort(reservedContainers, CONTAINER_CREATION_TIME_COMPARATOR);

Why are we sorting by container creation time? Don't we first need the reserved container from the highest-priority queue?
        9. In selectCandidates

    if (currentTime - reservedContainer.getCreationTime() < minTimeout) {
      break;
    }

I think we need to continue here instead, right? (See the small sketch at the end of this comment.)

10. In selectCandidates, all TempQueuePerPartition instances are still taken from the context. I think the IntraQueue preemption selector makes some changes to TempQueue; I will confirm soon. If so, we might need another look there.

11. In selectCandidates, while looping over newlySelectedToBePreemptContainers, it is possible that a container is already present in selectedCandidates. Currently we still deduct from totalPreemptedResourceAllowed in such cases as well, which does not look correct.

12. tryToMakeBetterReservationPlacement looks like a very big loop over allSchedulerNodes, which does not seem optimal.

I will give one more pass once some of these are clarified.
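
For reference, a minimal standalone sketch of the point in 9) (hypothetical names, not the patch's actual code), showing why break vs. continue depends on how reservedContainers is sorted:

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class ReservedContainerScan {
      static class Reserved {
        final long creationTime;
        Reserved(long t) { creationTime = t; }
      }

      public static void main(String[] args) {
        List<Reserved> reserved = new ArrayList<>();
        reserved.add(new Reserved(9_000L));
        reserved.add(new Reserved(1_000L));
        reserved.add(new Reserved(5_000L));
        // Same idea as CONTAINER_CREATION_TIME_COMPARATOR: oldest first.
        reserved.sort(Comparator.comparingLong((Reserved r) -> r.creationTime));

        long currentTime = 10_000L;
        long minTimeout = 3_000L;
        for (Reserved r : reserved) {
          if (currentTime - r.creationTime < minTimeout) {
            // Because the list is oldest-first, every later container is
            // younger still, so break is safe here; under any other ordering
            // this would need to be continue, or old-enough containers
            // further down the list would be skipped.
            break;
          }
          System.out.println("old enough to consider: " + r.creationTime);
        }
      }
    }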

        leftnoteasy Wangda Tan added a comment -

Sunil G, thanks for reviewing. For your comments:

For 1), yes, an underutilized queue always goes before overutilized queues.
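
A minimal ordering sketch, assuming a hypothetical QueueInfo holder (this is not the actual PriorityUtilizationQueueOrderingPolicy, just an illustration of the rule above):

    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.List;

    public class QueueOrderingSketch {
      static class QueueInfo {
        final String name;
        final float relativeUsedCapacity; // used / configured, e.g. 0.25f
        final int priority;               // larger value = more important
        QueueInfo(String name, float used, int priority) {
          this.name = name;
          this.relativeUsedCapacity = used;
          this.priority = priority;
        }
      }

      // Underutilized (< 1.0) before overutilized, then higher priority,
      // then lower relative used-capacity.
      static final Comparator<QueueInfo> ORDER =
          Comparator.comparing((QueueInfo q) -> q.relativeUsedCapacity >= 1.0f)
              .thenComparing(q -> -q.priority)
              .thenComparing(q -> q.relativeUsedCapacity);

      public static void main(String[] args) {
        List<QueueInfo> queues = Arrays.asList(
            new QueueInfo("batch", 1.2f, 1),
            new QueueInfo("service", 0.9f, 10),
            new QueueInfo("adhoc", 0.3f, 1));
        queues.sort(ORDER);
        // Prints: service, adhoc, batch
        queues.forEach(q -> System.out.println(q.name));
      }
    }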

For 2), I have thought about this. I intentionally made it two policies because:

• Configurations will be grouped together, for example preemption-related configuration.
• Priority can be interpreted in different ways; for example, priority could be used as "weights" in a different policy implementation.
• It avoids packing too many enable/disable switches into a single option.
• The internal implementation is decoupled from how admins use the feature.

For 3), added a comment to make sure ParentQueue uses the read lock correctly. (It is fine now.)

For 4), it should be fine; Guava is already part of the Maven dependencies.

For 5), as noted in the comment, I agree that we can optimize this. The time complexity of this algorithm is O(N^2 * max_queue_depth), where N is the number of leaf queues. Since we have a limited number of leaf queues and max_queue_depth is a small constant (for example, 100 leaf queues at depth 4 is on the order of 40,000 cheap operations), we're fine for now.

For 6), similar to the above, we're fine for now; 5) and 6) can be done separately.

For 7), updated.
For 8), updated, and added a new test.
For 9), updated according to the changes for 8).
For 10), I think we should make sure queue properties like used/pending/reserved will not be updated, while ideal-assigned/preemptable could be changed by different selectors. Please comment if you find any changes from IntraQueueSelector.
For 11), updated.
For 12), I considered this, but I cannot think of a relatively easy way to improve it. The time complexity will be O(#containers * #reserved-nodes), and since we have a "touchedNode" set to avoid double-checking nodes, it should not be a big problem even on a large cluster; a sketch follows below. I will do some SLS performance tests to make sure it works well.
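
A rough sketch of the touchedNode idea (hypothetical names), showing why the expensive fit check runs at most once per node:

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class PlacementScanSketch {
      // Hypothetical stand-in for tryToMakeBetterReservationPlacement's scan.
      static void scan(List<String> reservedNodes, List<String> allNodes) {
        Set<String> touchedNodes = new HashSet<>();
        for (String reservedNode : reservedNodes) {
          for (String candidate : allNodes) {
            // add() returns false when the node was already examined for an
            // earlier reservation, so the expensive fit check below runs at
            // most once per node even though the loop revisits them.
            if (!touchedNodes.add(candidate)) {
              continue;
            }
            // ... check whether reservedNode's container fits on candidate ...
          }
        }
      }
    }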

Attached ver.4 patch. This patch is on top of YARN-6081; I will update the patch-available state once YARN-6081 gets committed.

        leftnoteasy Wangda Tan added a comment -

Uploaded ver.5 patch, which includes code to print performance information.

        leftnoteasy Wangda Tan added a comment -

Uploaded ver.6 patch; moving reserved containers is now a configurable option. A configuration sketch follows below.
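
For context, a configuration sketch: the per-queue priority property follows this feature's yarn.scheduler.capacity.<queue-path>.priority pattern, while the preemption toggle name below is only an illustrative placeholder (the actual option name added in ver.6 may differ; see the attached usage doc):

    <!-- capacity-scheduler.xml -->
    <property>
      <name>yarn.scheduler.capacity.root.service.priority</name>
      <value>10</value>
    </property>
    <!-- Placeholder name for illustration only: -->
    <property>
      <name>yarn.scheduler.capacity.ordering-policy.priority-utilization.underutilized-preemption.move-reserved-container</name>
      <value>true</value>
    </property>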

        leftnoteasy Wangda Tan added a comment -

Attached the performance report and usage doc for review.

Sunil G / Eric Payne, please take a look.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 14s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 16 new or modified test files.
        0 mvndep 0m 43s Maven dependency ordering for branch
        +1 mvninstall 12m 41s trunk passed
        +1 compile 5m 6s trunk passed
        +1 checkstyle 1m 3s trunk passed
        +1 mvnsite 3m 11s trunk passed
        +1 mvneclipse 0m 48s trunk passed
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        +1 findbugs 1m 2s trunk passed
        +1 javadoc 1m 47s trunk passed
        0 mvndep 0m 9s Maven dependency ordering for patch
        +1 mvninstall 2m 50s the patch passed
        +1 compile 4m 33s the patch passed
        +1 javac 4m 33s the patch passed
        -0 checkstyle 1m 3s hadoop-yarn-project/hadoop-yarn: The patch generated 164 new + 1661 unchanged - 20 fixed = 1825 total (was 1681)
        +1 mvnsite 3m 7s the patch passed
        +1 mvneclipse 0m 45s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 2s The patch has no ill-formed XML file.
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        +1 findbugs 1m 9s the patch passed
        -1 javadoc 1m 9s hadoop-yarn in the patch failed.
        -1 javadoc 0m 20s hadoop-yarn-server-resourcemanager in the patch failed.
        -1 unit 20m 59s hadoop-yarn in the patch failed.
        -1 unit 39m 13s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 asflicense 0m 29s The patch does not generate ASF License warnings.
        109m 35s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.TestRMRestart
          hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSurgicalPreemption
          hadoop.yarn.server.timeline.webapp.TestTimelineWebServices
          hadoop.yarn.server.resourcemanager.TestRMRestart
          hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSurgicalPreemption



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5864
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12847145/YARN-5864.006.patch
        Optional Tests asflicense findbugs xml compile javac javadoc mvninstall mvnsite unit checkstyle
        uname Linux a09cf64eede4 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / d3170f9
        Default Java 1.8.0_111
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14649/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/14649/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn.txt
        javadoc https://builds.apache.org/job/PreCommit-YARN-Build/14649/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14649/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14649/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14649/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14649/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        leftnoteasy Wangda Tan added a comment -

        Attached patch to fix unit test failure and javadocs. (Ver.7)

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 20s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 16 new or modified test files.
        0 mvndep 0m 23s Maven dependency ordering for branch
        +1 mvninstall 15m 18s trunk passed
        +1 compile 6m 55s trunk passed
        +1 checkstyle 1m 1s trunk passed
        +1 mvnsite 3m 16s trunk passed
        +1 mvneclipse 0m 48s trunk passed
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        +1 findbugs 1m 1s trunk passed
        +1 javadoc 1m 47s trunk passed
        0 mvndep 0m 10s Maven dependency ordering for patch
        +1 mvninstall 2m 53s the patch passed
        +1 compile 4m 49s the patch passed
        +1 javac 4m 49s the patch passed
        -0 checkstyle 1m 6s hadoop-yarn-project/hadoop-yarn: The patch generated 165 new + 1661 unchanged - 20 fixed = 1826 total (was 1681)
        +1 mvnsite 3m 19s the patch passed
        +1 mvneclipse 0m 44s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 1s The patch has no ill-formed XML file.
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        +1 findbugs 1m 32s the patch passed
        +1 javadoc 2m 19s the patch passed
        -1 unit 22m 16s hadoop-yarn in the patch failed.
        -1 unit 41m 45s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 asflicense 0m 39s The patch does not generate ASF License warnings.
        119m 42s



        Reason Tests
        Failed junit tests hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing
          hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
          hadoop.yarn.server.timeline.webapp.TestTimelineWebServices
          hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing
          hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:a9ad5d6
        JIRA Issue YARN-5864
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12847436/YARN-5864.007.patch
        Optional Tests asflicense findbugs xml compile javac javadoc mvninstall mvnsite unit checkstyle
        uname Linux 28164c9ed6d7 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / d3170f9
        Default Java 1.8.0_111
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14656/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14656/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14656/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14656/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14656/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        leftnoteasy Wangda Tan added a comment -

The JUnit failures are not related to this patch.

        leftnoteasy Wangda Tan added a comment -

        Attached patch for branch-2 as well.

        leftnoteasy Wangda Tan added a comment -

The previous patch for branch-2 had some issues; rebased and attached again (branch-2.007).

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 14s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 16 new or modified test files.
        0 mvndep 0m 10s Maven dependency ordering for branch
        +1 mvninstall 6m 26s branch-2 passed
        +1 compile 1m 54s branch-2 passed with JDK v1.8.0_111
        +1 compile 2m 12s branch-2 passed with JDK v1.7.0_121
        +1 checkstyle 0m 58s branch-2 passed
        +1 mvnsite 3m 19s branch-2 passed
        +1 mvneclipse 0m 40s branch-2 passed
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        +1 findbugs 1m 11s branch-2 passed
        +1 javadoc 1m 38s branch-2 passed with JDK v1.8.0_111
        +1 javadoc 2m 0s branch-2 passed with JDK v1.7.0_121
        0 mvndep 0m 10s Maven dependency ordering for patch
        -1 mvninstall 1m 40s hadoop-yarn in the patch failed.
        -1 mvninstall 0m 31s hadoop-yarn-server-resourcemanager in the patch failed.
        -1 compile 1m 22s hadoop-yarn in the patch failed with JDK v1.8.0_111.
        -1 javac 1m 22s hadoop-yarn in the patch failed with JDK v1.8.0_111.
        -1 compile 1m 34s hadoop-yarn in the patch failed with JDK v1.7.0_121.
        -1 javac 1m 34s hadoop-yarn in the patch failed with JDK v1.7.0_121.
        -0 checkstyle 0m 59s hadoop-yarn-project/hadoop-yarn: The patch generated 166 new + 1662 unchanged - 20 fixed = 1828 total (was 1682)
        -1 mvnsite 1m 45s hadoop-yarn in the patch failed.
        -1 mvnsite 0m 31s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 mvneclipse 0m 38s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 0s The patch has no ill-formed XML file.
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        -1 findbugs 0m 21s hadoop-yarn-server-resourcemanager in the patch failed.
        +1 javadoc 1m 36s the patch passed with JDK v1.8.0_111
        +1 javadoc 1m 58s the patch passed with JDK v1.7.0_121
        -1 unit 20m 17s hadoop-yarn in the patch failed with JDK v1.7.0_121.
        -1 unit 0m 31s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_121.
        +1 asflicense 0m 21s The patch does not generate ASF License warnings.
        82m 0s



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:b59b8b7
        JIRA Issue YARN-5864
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12847666/YARN-5864.branch-2.007.patch
        Optional Tests asflicense findbugs xml compile javac javadoc mvninstall mvnsite unit checkstyle
        uname Linux 391ff306706e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2 / 00dec84
        Default Java 1.7.0_121
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_111 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_121
        findbugs v3.0.0
        mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/14663/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn.txt
        mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/14663/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        compile https://builds.apache.org/job/PreCommit-YARN-Build/14663/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn-jdk1.8.0_111.txt
        javac https://builds.apache.org/job/PreCommit-YARN-Build/14663/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn-jdk1.8.0_111.txt
        compile https://builds.apache.org/job/PreCommit-YARN-Build/14663/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn-jdk1.7.0_121.txt
        javac https://builds.apache.org/job/PreCommit-YARN-Build/14663/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn-jdk1.7.0_121.txt
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14663/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/14663/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn.txt
        mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/14663/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        findbugs https://builds.apache.org/job/PreCommit-YARN-Build/14663/artifact/patchprocess/patch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14663/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn-jdk1.7.0_121.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14663/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_121.txt
        JDK v1.7.0_121 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14663/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14663/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 16s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 16 new or modified test files.
        0 mvndep 0m 42s Maven dependency ordering for branch
        +1 mvninstall 6m 56s branch-2 passed
        +1 compile 2m 22s branch-2 passed with JDK v1.8.0_111
        +1 compile 2m 13s branch-2 passed with JDK v1.7.0_121
        +1 checkstyle 0m 59s branch-2 passed
        +1 mvnsite 3m 35s branch-2 passed
        +1 mvneclipse 0m 45s branch-2 passed
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        +1 findbugs 1m 17s branch-2 passed
        +1 javadoc 1m 51s branch-2 passed with JDK v1.8.0_111
        +1 javadoc 2m 10s branch-2 passed with JDK v1.7.0_121
        0 mvndep 0m 11s Maven dependency ordering for patch
        +1 mvninstall 3m 4s the patch passed
        +1 compile 1m 49s the patch passed with JDK v1.8.0_111
        -1 javac 1m 49s hadoop-yarn-project_hadoop-yarn-jdk1.8.0_111 with JDK v1.8.0_111 generated 1 new + 58 unchanged - 1 fixed = 59 total (was 59)
        +1 compile 2m 11s the patch passed with JDK v1.7.0_121
        +1 javac 2m 11s the patch passed
        -0 checkstyle 0m 59s hadoop-yarn-project/hadoop-yarn: The patch generated 166 new + 1662 unchanged - 20 fixed = 1828 total (was 1682)
        +1 mvnsite 3m 17s the patch passed
        +1 mvneclipse 0m 38s the patch passed
        +1 whitespace 0m 0s The patch has no whitespace issues.
        +1 xml 0m 1s The patch has no ill-formed XML file.
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        +1 findbugs 1m 24s the patch passed
        +1 javadoc 1m 40s the patch passed with JDK v1.8.0_111
        +1 javadoc 1m 58s the patch passed with JDK v1.7.0_121
        -1 unit 65m 17s hadoop-yarn in the patch failed with JDK v1.7.0_121.
        -1 unit 40m 40s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_121.
        +1 asflicense 0m 22s The patch does not generate ASF License warnings.
        255m 19s



        Reason Tests
        JDK v1.8.0_111 Failed junit tests hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
          hadoop.yarn.server.TestContainerManagerSecurity
        JDK v1.7.0_121 Failed junit tests hadoop.yarn.server.resourcemanager.TestRMRestart
          hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
          hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
          hadoop.yarn.server.TestContainerManagerSecurity
          hadoop.yarn.server.resourcemanager.TestRMRestart
          hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:b59b8b7
        JIRA Issue YARN-5864
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12847712/YARN-5864.branch-2.007_2.patch
        Optional Tests asflicense findbugs xml compile javac javadoc mvninstall mvnsite unit checkstyle
        uname Linux dc77cbfef0e6 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2 / 861e275
        Default Java 1.7.0_121
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_111 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_121
        findbugs v3.0.0
        javac https://builds.apache.org/job/PreCommit-YARN-Build/14666/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn-jdk1.8.0_111.txt
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14666/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14666/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn-jdk1.7.0_121.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14666/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_121.txt
        JDK v1.7.0_121 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14666/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14666/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        sunilg Sunil G added a comment -

The latest patch and the approach look fine to me.

        leftnoteasy Wangda Tan added a comment -

Thanks for the review, Sunil G.

I plan to commit the patch by the end of this week; please feel free to share any comments/concerns.

        leftnoteasy Wangda Tan added a comment -

Committed to trunk. Thanks for the reviews from Sunil G / Eric Payne, and the initial suggestions from Carlo Curino.

Attached patch for branch-2 (ver.8); fixed javac warnings.

        hudson Hudson added a comment -

        SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11161 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11161/)
        YARN-5864. Capacity Scheduler - Queue Priorities. (wangda) (wangda: rev ce832059db077fa95922198b066a737ed4f609fe)

        • (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/policy/TestPriorityUtilizationQueueOrderingPolicy.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerSurgicalPreemption.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyMockFramework.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicyMockFramework.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
        • (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/policy/QueueOrderingPolicy.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
        • (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestPreemptionForQueueWithPriorities.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimitsByPartition.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
        • (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/policy/PriorityUtilizationQueueOrderingPolicy.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerQueueManager.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/policy/TestFairOrderingPolicy.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyForNodePartitions.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/AbstractPreemptableResourceCalculator.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerContext.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
        • (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/PartitionedQueueComparator.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempQueuePerPartition.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
        • (edit) hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java
        • (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/QueuePriorityContainerCandidateSelector.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerPreemptionTestBase.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/PreemptionCandidatesSelector.java
        • (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempSchedulerNode.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java
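
        For readers who want to try the feature, the sketch below shows how the new knobs from this commit might be set. It is a minimal example, assuming the property names described in the attached usage doc: yarn.scheduler.capacity.&lt;queue-path&gt;.ordering-policy (set to priority-utilization on a parent queue) and yarn.scheduler.capacity.&lt;queue-path&gt;.priority on its children. The queue names service and batch are illustrative, not part of the patch.

        import org.apache.hadoop.conf.Configuration;

        public class QueuePriorityConfigExample {
          public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Order root's children by priority first, falling back to
            // relative used-capacity (the pre-patch default behavior).
            conf.set("yarn.scheduler.capacity.root.ordering-policy",
                "priority-utilization");
            // Higher integer = higher priority; the default is 0, so the
            // illustrative 'service' queue is offered resources before 'batch'.
            conf.setInt("yarn.scheduler.capacity.root.service.priority", 10);
            conf.setInt("yarn.scheduler.capacity.root.batch.priority", 0);
          }
        }

        In a real deployment these values would live in capacity-scheduler.xml rather than be set programmatically; the keys are the same either way.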
        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 13m 59s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 16 new or modified test files.
        0 mvndep 0m 48s Maven dependency ordering for branch
        +1 mvninstall 6m 41s branch-2 passed
        +1 compile 1m 57s branch-2 passed with JDK v1.8.0_121
        +1 compile 2m 15s branch-2 passed with JDK v1.7.0_121
        +1 checkstyle 1m 0s branch-2 passed
        +1 mvnsite 3m 22s branch-2 passed
        +1 mvneclipse 0m 41s branch-2 passed
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        +1 findbugs 1m 10s branch-2 passed
        +1 javadoc 1m 41s branch-2 passed with JDK v1.8.0_121
        +1 javadoc 2m 4s branch-2 passed with JDK v1.7.0_121
        0 mvndep 0m 10s Maven dependency ordering for patch
        +1 mvninstall 2m 46s the patch passed
        +1 compile 1m 57s the patch passed with JDK v1.8.0_121
        +1 javac 1m 57s the patch passed
        +1 compile 2m 16s the patch passed with JDK v1.7.0_121
        +1 javac 2m 16s the patch passed
        -0 checkstyle 1m 1s hadoop-yarn-project/hadoop-yarn: The patch generated 165 new + 1662 unchanged - 20 fixed = 1827 total (was 1682)
        +1 mvnsite 3m 22s the patch passed
        +1 mvneclipse 0m 39s the patch passed
        +1 whitespace 0m 1s The patch has no whitespace issues.
        +1 xml 0m 0s The patch has no ill-formed XML file.
        0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn
        +1 findbugs 1m 22s the patch passed
        +1 javadoc 1m 40s the patch passed with JDK v1.8.0_121
        +1 javadoc 2m 2s the patch passed with JDK v1.7.0_121
        -1 unit 60m 40s hadoop-yarn in the patch failed with JDK v1.7.0_121.
        -1 unit 41m 0s hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_121.
        +1 asflicense 0m 21s The patch does not generate ASF License warnings.
        266m 29s



        Reason Tests
        JDK v1.8.0_121 Failed junit tests hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
          hadoop.yarn.server.TestContainerManagerSecurity
        JDK v1.7.0_121 Failed junit tests hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
          hadoop.yarn.server.TestContainerManagerSecurity
          hadoop.yarn.server.resourcemanager.TestRMRestart
          hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler
          hadoop.yarn.server.resourcemanager.TestRMRestart
          hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:b59b8b7
        JIRA Issue YARN-5864
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12848978/YARN-5864.branch-2.008.patch
        Optional Tests asflicense findbugs xml compile javac javadoc mvninstall mvnsite unit checkstyle
        uname Linux 807e8511e02a 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision branch-2 / 94b326f
        Default Java 1.7.0_121
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_121 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_121
        findbugs v3.0.0
        checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/14735/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14735/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn-jdk1.7.0_121.txt
        unit https://builds.apache.org/job/PreCommit-YARN-Build/14735/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_121.txt
        JDK v1.7.0_121 Test Results https://builds.apache.org/job/PreCommit-YARN-Build/14735/testReport/
        modules C: hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn
        Console output https://builds.apache.org/job/PreCommit-YARN-Build/14735/console
        Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        leftnoteasy Wangda Tan added a comment -

        Committed to branch-2 as well; the test failures are not related. Resolving this ticket.

        hudson Hudson added a comment -

        SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11184 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11184/)
        YARN-6123. YARN-5864 Add a test to make sure queues of orderingPolicy (sunilg: rev 165f07f51a03137d2e73e39ed1cb48385d963f39)

        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/policy/PriorityUtilizationQueueOrderingPolicy.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerSurgicalPreemption.java
        • (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java
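
        For reference, the ordering this follow-up test exercises reduces to a comparator over a parent's child queues. The following is a simplified, self-contained sketch of the idea, not the actual PriorityUtilizationQueueOrderingPolicy code: the grouping of under- vs. over-utilized queues and the tie-breaking order are assumptions based on the design doc.

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        public class PriorityUtilizationOrderSketch {
          // Simplified stand-in for a Capacity Scheduler queue snapshot.
          static class QueueInfo {
            final String name;
            final int priority;       // higher value = more important
            final float relativeUsed; // used-capacity / configured-capacity

            QueueInfo(String name, int priority, float relativeUsed) {
              this.name = name;
              this.priority = priority;
              this.relativeUsed = relativeUsed;
            }
          }

          // Assumed ordering: queues still under their guarantee come first;
          // within a group, higher priority wins, and lower relative
          // used-capacity (the pre-patch criterion) breaks ties.
          static final Comparator<QueueInfo> ORDER =
              Comparator.<QueueInfo>comparingInt(q -> q.relativeUsed < 1.0f ? 0 : 1)
                  .thenComparing(
                      Comparator.comparingInt((QueueInfo q) -> q.priority).reversed())
                  .thenComparingDouble(q -> q.relativeUsed);

          public static void main(String[] args) {
            List<QueueInfo> queues = new ArrayList<>();
            queues.add(new QueueInfo("batch", 0, 0.10f));    // lower utilization
            queues.add(new QueueInfo("service", 10, 0.25f)); // higher priority
            queues.sort(ORDER);
            // Prints 'service' before 'batch': priority now outranks the
            // relative used-capacity comparison from the issue description.
            queues.forEach(q -> System.out.println(q.name));
          }
        }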

          People

          • Assignee: leftnoteasy Wangda Tan
          • Reporter: leftnoteasy Wangda Tan
          • Votes: 0
          • Watchers: 19