  Hadoop YARN / YARN-1051

YARN Admission Control/Planner: enhancing the resource allocation model with time.

    Details

    • Hadoop Flags:
      Incompatible change, Reviewed

      Description

      In this umbrella JIRA we propose to extend the YARN RM to handle time explicitly, allowing users to "reserve" capacity over time. This is an important step towards SLAs, long-running services, and workflows, and it helps with gang scheduling.

      Attachments

      1. curino_MSR-TR-2013-108.pdf
        2.86 MB
        Carlo Curino
      2. socc14-paper15.pdf
        1.32 MB
        Carlo Curino
      3. techreport.pdf
        2.02 MB
        Subru Krishnan
      4. YARN-1051.1.patch
        574 kB
        Subru Krishnan
      5. YARN-1051.patch
        571 kB
        Subru Krishnan
      6. YARN-1051-design.pdf
        600 kB
        Subru Krishnan

          Activity

          curino Carlo Curino added a comment -

          This umbrella JIRA proposes an extension of the YARN RM to allow for richer admission-control semantics (besides the existing ACL checks).
          This allows jobs/users to negotiate with the RM at admission-control time for a time-bounded, guaranteed allocation of cluster resources (e.g., "I need 100 containers for 2 hours at any time before 3pm today"). Such requests can be per-job or per-user (maybe we can call this a "session").
          It provides the RM with an understanding of future resource demand, and exposes jobs' time and resource constraints, hence enabling the RM to look ahead and plan resource allocation over time (e.g., a job submitted now, but with lots of time before its deadline, might be run after a job showing up later but in a rush to complete).
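
          To make the negotiation concrete, here is a minimal sketch of the "100 containers for 2 hours before 3pm" request, expressed with the reservation records that eventually landed via YARN-1708. Class and method names are from org.apache.hadoop.yarn.api.records; the container size, queue of stages, and deadline arithmetic are illustrative assumptions, not part of this proposal.

          // Hedged sketch: "100 containers for 2 hours, any time before 3pm today".
          import java.util.Collections;
          import org.apache.hadoop.yarn.api.records.ReservationDefinition;
          import org.apache.hadoop.yarn.api.records.ReservationRequest;
          import org.apache.hadoop.yarn.api.records.ReservationRequestInterpreter;
          import org.apache.hadoop.yarn.api.records.ReservationRequests;
          import org.apache.hadoop.yarn.api.records.Resource;

          public class ReservationSketch {
            public static ReservationDefinition buildRequest(long nowMs, long threePmMs) {
              long twoHoursMs = 2L * 60 * 60 * 1000;
              // 100 concurrent containers of <2 GB, 1 vcore>, held for 2 hours.
              ReservationRequest stage = ReservationRequest.newInstance(
                  Resource.newInstance(2048, 1), 100, 100, twoHoursMs);
              // R_ALL: every stage of the request must be satisfied.
              ReservationRequests asks = ReservationRequests.newInstance(
                  Collections.singletonList(stage), ReservationRequestInterpreter.R_ALL);
              // The planner may place the 2-hour allocation anywhere in [now, 3pm].
              return ReservationDefinition.newInstance(nowMs, threePmMs, asks, "demo-session");
            }
          }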

          This is an important step towards SLAs on the resources received by a job/user over time, which seems useful for long-running services, workflows, and can help ameliorate some of the gang-scheduling concerns (admission control will guarantee the resources to be available, hence hoarding is not likely to produce deadlocks).

          This will require:

          • additive modifications to the job-submission API (to capture a job's resource demands)
          • an internal API between the admission control / planner (working on the planning aspects) and the scheduler (enforcing the plan, handling containers, etc.)
          • changes to the underlying scheduler (we started with the CapacityScheduler) to support queue addition/removal/resizing and cross-queue job migration, though this should ideally be pushed to the YarnScheduler API and be cross-scheduler (from various conversations, this seems to be needed/useful independently).
          • changes to the RM tracking data structures to maintain metering of how many resources have been allocated to a job so far (this also enables billing and accounting on the RM side, and other history-aware planning and scheduling).
          • implementation of a (simple at first) admission control mechanism that verifies whether a job with a certain Contract can be admitted, and performs basic planning (knapsack-like to start; can be extended to sophisticated economic models).

          We will track this in Sub-JIRAs.

          acmurthy Arun C Murthy added a comment -

          +1, looks like a great addition to YARN.

          Looking forward to working with you, Chris Douglas, et al., to get this in. Thanks!

          curino Carlo Curino added a comment -

          More work in this space made us reconsider the changes to the submission protocol. We are opting for a new API to submit reservation requests (think of it as requesting a time-bounded private queue); see YARN-1708. This allows users to submit multiple jobs to a single reservation (important for pipelines).
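
          A minimal sketch of this decoupling, assuming the 2.6-era client calls that eventually shipped (YarnClient#submitReservation and ApplicationSubmissionContext#setReservationID); the queue name "root.dedicated" and the pipeline of jobs are hypothetical placeholders.

          import java.util.List;
          import org.apache.hadoop.yarn.api.protocolrecords.ReservationSubmissionRequest;
          import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
          import org.apache.hadoop.yarn.api.records.ReservationDefinition;
          import org.apache.hadoop.yarn.api.records.ReservationId;
          import org.apache.hadoop.yarn.client.api.YarnClient;

          public class PipelineSubmissionSketch {
            // Submit one reservation, then point every job of the pipeline at it.
            static void submitPipeline(YarnClient client, ReservationDefinition def,
                List<ApplicationSubmissionContext> pipelineJobs) throws Exception {
              ReservationSubmissionRequest req =
                  ReservationSubmissionRequest.newInstance(def, "root.dedicated");
              ReservationId reservationId =
                  client.submitReservation(req).getReservationId();
              for (ApplicationSubmissionContext job : pipelineJobs) {
                job.setReservationID(reservationId); // all jobs share the reservation
                client.submitApplication(job);
              }
            }
          }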

          curino Carlo Curino added a comment -

          Microsoft technical report MSR-TR-2013-108, discussing the early prototype this JIRA is based on.

          curino Carlo Curino added a comment -

          I'm attaching a technical report that presents some compelling experimental evidence to support this feature. The report provides a summary of
          our earlier implementation and is our general blueprint for a solution; the actual code is being completely rewritten to clean it up and make it
          easier to review/evolve. With respect to what's in the report, we are adding policies to enforce user quotas (YARN-1711), and we are leveraging
          more of the ResourceRequest and ResourceCalculator expressivity.

          subru Subru Krishnan added a comment -

          Attaching the approach doc that describes the overall intent for interested readers. The doc also lists the breakdown into incremental sub-tasks. Any suggestions/thoughts are welcome; we will incorporate feedback as it comes in.

          kkambatl Karthik Kambatla (Inactive) added a comment -

          Thanks Carlo and Subru for sharing the TR and design-doc.

          Can you please verify my understanding is right (haven't read the TR yet, just the design doc): The Admission Control box (Planning agent and Inventory) takes care of figuring out if a reservation is feasible and managing the corresponding inventory. The Plan Follower would take care of actually submitting the applications to the RM (scheduler) and the scheduler-queue configurations come show time; the updated scheduler-queue configurations would ensure these apps actually get the resources they need.

          curino Carlo Curino added a comment -

          Correct... You got the flow right.

          A couple more notes for clarity:

          For flexibility, we are decoupling the creation of a reservation from what applications are run in it. I could imagine having something like Oozie
          submit a reservation request (expressing the skyline of resources it will need for a pipeline of jobs), get back a session id (practically a queue name),
          and then submit the various jobs to it. Dynamically, the Plan Follower will ensure the queue exists, and has whatever capacity the admission control
          dedicated to it at every moment in time.
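
          A minimal sketch of the Plan Follower step implied here; only the Plan Follower/Plan concepts come from the design doc, and every interface and method name below is an illustrative assumption, not the branch's actual API.

          // Hedged pseudocode of one Plan Follower synchronization step.
          interface Plan {
            java.util.Map<String, Double> capacitiesAt(long timeMs); // session -> cluster fraction
          }
          interface SchedulerAdmin {
            boolean queueExists(String name);
            void addQueue(String name);
            void setCapacity(String name, double fraction);
          }

          class PlanFollowerSketch {
            void synchronizePlan(Plan plan, SchedulerAdmin sched, long nowMs) {
              for (java.util.Map.Entry<String, Double> e : plan.capacitiesAt(nowMs).entrySet()) {
                if (!sched.queueExists(e.getKey())) {
                  sched.addQueue(e.getKey());              // session queue appears on demand
                }
                sched.setCapacity(e.getKey(), e.getValue()); // queue capacity tracks the plan
              }
            }
          }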

          We plan to handle sudden collapses in cluster capacity (rack gone bad) by reconsidering the plan as a whole (for now a simple greedy replanner, deciding
          which sessions to kill/reposition). This is to be able to express reservations in absolute terms (100 containers),
          instead of relative terms (10% of capacity)... this is particularly important for gang jobs like Giraph that cannot deal well with partial allocations.
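
          A toy version of such a greedy replanner, only as a sketch of the idea: the committed class is SimpleCapacityReplanner, but the Session type and the "value" ordering below are illustrative assumptions.

          import java.util.Comparator;
          import java.util.List;

          class ReplannerSketch {
            static class Session {
              String id;
              int reservedContainers; // absolute reservation, e.g. 100 containers
              double value;           // e.g. acceptance order or priority
            }

            // When total capacity drops below what the plan promised, drop the
            // lowest-value sessions until the surviving reservations fit.
            static void replan(List<Session> sessions, int availableContainers) {
              sessions.sort(Comparator.comparingDouble(s -> s.value));
              int reserved = sessions.stream().mapToInt(s -> s.reservedContainers).sum();
              java.util.Iterator<Session> it = sessions.iterator();
              while (reserved > availableContainers && it.hasNext()) {
                Session victim = it.next();
                reserved -= victim.reservedContainers;
                it.remove(); // in YARN this would kill/reposition the session
              }
            }
          }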

          subru Subru Krishnan added a comment -

          Attaching an updated tech report that articulates more clearly what we intend to achieve, includes results from our proof-of-concept, and aligns with the design doc on how we propose to implement this in YARN.

          acmurthy Arun C Murthy added a comment -

          Thanks Subru Krishnan, I'll take a look at the update.

          One thing I've mentioned to Carlo Curino offline is that I think we are better off relying on enhancing/reducing priorities for applications to effect reservations, rather than relying on adding/removing queues.

          Priorities within the same queue are an often-requested feature anyway - that way we can solve multiple goals (operational feature/reservations) with the same underlying mechanism, i.e., priorities.

          acmurthy Arun C Murthy added a comment - edited

          More color on why I prefer priorities for reservations rather than adding/removing queues...

          In the vast majority of deployments, queues are an organizational/economic concept (e.g., per-department queues are very common), and queues (hierarchy, names, etc.) are quite stable and well recognized, to the point of being part of the institutional memory.

          If we rely on adding/removing queues to provide reservations, I'm concerned it will cause some confusion among both admins and users. For example, a user/admin trying to debug an application will be quite challenged to figure out the demand/supply of resources when they have to go back in time to reconstruct a programmatically generated queue hierarchy, particularly after it's long gone.

          Priorities, OTOH, are quite a familiar concept to admins (think unix 'nice'); and, more importantly, a natural fit to the problem at hand, i.e., temporally increasing/decreasing the priority of the application based on its reservation at a point in time.

          Furthermore, as I said previously, priorities are an often requested feature - especially by admins.

          curino Carlo Curino added a comment -

          Arun, I think the current design point is closer to what you describe than it seems (we changed a fair bit from the early conversation we had).

          We created two new types of queues: InventoryQueue and SessionQueue, which respectively inherit from ParentQueue and LeafQueue...
          The distribution of resources among SessionQueues performed by the InventoryQueue (inherited from ParentQueue, but without the requirement
          that the capacities of children sum up to 100%) corresponds very much to the priority mechanism you refer to (a SessionQueue with high nominal
          capacity and low utilization is favored, etc.). So in principle, we could change InventoryQueue to track apps directly with a priority list.

          On the other hand, since we envision each SessionQueue potentially being used to submit multiple jobs (a change of perspective from the early
          design; e.g., Hive or Pig multi-job queries, or pipelines), inheriting from LeafQueue and imposing the classic FIFO internal behavior + delay
          scheduling + other limits allows us to:
          1) make the notion of a "Session" a rather consistent extension of a "Queue": a session is a queue with some time-evolving properties (e.g., capacity);
          2) reuse lots of tracking structures and well-tested code.

          Supporting this notion of session by assigning individual priorities to jobs that share a session, and having multiple sessions per inventory seems harder to get right and maintain.
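
          As a rough illustration of that relationship, here is a standalone toy model, not the actual branch code (in what was eventually committed, these roles are played by PlanQueue and ReservationQueue):

          // Toy model: an inventory hands out time-varying capacity to sessions,
          // and, unlike a classic ParentQueue, its children's capacities need not
          // sum to 100%.
          import java.util.HashMap;
          import java.util.Map;

          class SessionQueue {
            double capacity; // fraction of cluster; re-set over time by the plan
          }

          class InventoryQueue {
            final Map<String, SessionQueue> sessions = new HashMap<>();

            // Resize (or create) a session; a session with high nominal capacity
            // and low utilization is favored when idle resources are redistributed.
            void resize(String session, double newCapacity) {
              sessions.computeIfAbsent(session, s -> new SessionQueue()).capacity =
                  newCapacity;
            }
          }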

          We would also be happy to meet and talk this through, and then report in JIRA the result of our conversation.

          curino Carlo Curino added a comment -

          We have posted the first patch with the APIs on YARN-1708... comments welcome.

          subru Subru Krishnan added a comment -

          We have posted patches for YARN-1709 and YARN-2080, looking for feedback.

          curino Carlo Curino added a comment -

          We created a branch named "YARN-1051" where we are going to develop/commit this feature. Once it all looks good we will merge back to trunk.

          subru Subru Krishnan added a comment -

          I am attaching a merge patch with trunk for easy reference. This patch was created after rebasing branch YARN-1051 onto trunk. I ran test-patch against trunk with the attached patch on my box and got a +1.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12671311/YARN-1051.patch
          against trunk revision 9f9a222.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 20 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 8 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.mapreduce.lib.input.TestMRCJCFileInputFormat
          org.apache.hadoop.mapred.TestJavaSerialization
          org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/5133//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-YARN-Build/5133//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5133//console

          This message is automatically generated.

          subru Subru Krishnan added a comment -

          Attaching a patch with the fixes from YARN-2611.

          • MAPREDUCE-6094 is already tracking the fix for the TestMRCJCFileInputFormat.testAddInputPath() test case failure
          • MAPREDUCE-6048 has been opened for the intermittent failure of TestJavaSerialization
          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12671361/YARN-1051.1.patch
          against trunk revision f435724.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 21 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.mapreduce.lib.input.TestMRCJCFileInputFormat

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-YARN-Build/5139//testReport/
          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5139//console

          This message is automatically generated.

          curino Carlo Curino added a comment -

          Pre-camera-ready version of the SoCC paper.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12671498/socc14-paper15.pdf
          against trunk revision 55302cc.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5149//console

          This message is automatically generated.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #6189 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6189/)
          YARN-2611. Fixing jenkins findbugs warning and TestRMWebServicesCapacitySched for branch YARN-1051. Contributed by Subru Krishnan and Carlo Curino. (cdouglas: rev a2986234be4e02f9ccb589f9ff5f7ffb28bc6400)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/SimpleCapacityReplanner.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/CapacitySchedulerPlanFollower.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationInterval.java
          • YARN-1051-CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/InMemoryPlan.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/CapacityOverTimePolicy.java
            YARN-1051. Add a system for creating reservations of cluster capacity. (cdouglas: rev c8212bacb1b2a7e6ee83cc56f72297465ce99390)
          • YARN-1051-CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/PlanQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPBImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
          chris.douglas Chris Douglas added a comment -

          Merged to trunk, per vote thread: http://s.apache.org/Oe5

          Thanks Carlo and Subru!

          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk #700 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/700/)
          YARN-2611. Fixing jenkins findbugs warning and TestRMWebServicesCapacitySched for branch YARN-1051. Contributed by Subru Krishnan and Carlo Curino. (cdouglas: rev a2986234be4e02f9ccb589f9ff5f7ffb28bc6400)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationInterval.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
          • YARN-1051-CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/CapacitySchedulerPlanFollower.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/SimpleCapacityReplanner.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/InMemoryPlan.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/CapacityOverTimePolicy.java
            YARN-1051. Add a system for creating reservations of cluster capacity. (cdouglas: rev c8212bacb1b2a7e6ee83cc56f72297465ce99390)
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
          • YARN-1051-CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPBImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/PlanQueue.java
          curino Carlo Curino added a comment -

          Thanks Chris for committing, and most importantly for the continuous assistance throughout the design, implementation, and polishing of this feature.
          Thanks to all the reviewers of individual subtasks, and the many folks in the community that gave us feedback.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Hdfs-trunk #1891 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1891/)
          YARN-2611. Fixing jenkins findbugs warning and TestRMWebServicesCapacitySched for branch YARN-1051. Contributed by Subru Krishnan and Carlo Curino. (cdouglas: rev a2986234be4e02f9ccb589f9ff5f7ffb28bc6400)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/InMemoryPlan.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationInterval.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/CapacityOverTimePolicy.java
          • YARN-1051-CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/SimpleCapacityReplanner.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/CapacitySchedulerPlanFollower.java
            YARN-1051. Add a system for creating reservations of cluster capacity. (cdouglas: rev c8212bacb1b2a7e6ee83cc56f72297465ce99390)
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/PlanQueue.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPBImpl.java
          • YARN-1051-CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #1916 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1916/)
          YARN-2611. Fixing jenkins findbugs warning and TestRMWebServicesCapacitySched for branch YARN-1051. Contributed by Subru Krishnan and Carlo Curino. (cdouglas: rev a2986234be4e02f9ccb589f9ff5f7ffb28bc6400)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/SimpleCapacityReplanner.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/CapacitySchedulerPlanFollower.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/InMemoryPlan.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/CapacityOverTimePolicy.java
          • YARN-1051-CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationInterval.java
            YARN-1051. Add a system for creating reservations of cluster capacity. (cdouglas: rev c8212bacb1b2a7e6ee83cc56f72297465ce99390)
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/PlanQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
          • YARN-1051-CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPBImpl.java
          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #6197 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6197/)
          Move YARN-1051 to 2.6 (cdouglas: rev 8380ca37237a21638e1bcad0dd0e4c7e9f1a1786)

          • hadoop-yarn-project/CHANGES.txt
          subru Subru Krishnan added a comment -

          Thanks Chris Douglas for shepherding us all the way through. Thanks to all others (you know who you are) who took the time to review and whose insightful feedback helped us get this into a much better shape.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #704 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/704/)
          Move YARN-1051 to 2.6 (cdouglas: rev 8380ca37237a21638e1bcad0dd0e4c7e9f1a1786)

          • hadoop-yarn-project/CHANGES.txt
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Hdfs-trunk #1894 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1894/)
          Move YARN-1051 to 2.6 (cdouglas: rev 8380ca37237a21638e1bcad0dd0e4c7e9f1a1786)

          • hadoop-yarn-project/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #1919 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1919/)
          Move YARN-1051 to 2.6 (cdouglas: rev 8380ca37237a21638e1bcad0dd0e4c7e9f1a1786)

          • hadoop-yarn-project/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #8220 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8220/)
          YARN-3973. Recent changes to application priority management break reservation system from YARN-1051 (Carlo Curino via wangda) (wangda: rev a3bd7b4a59b3664273dc424f240356838213d4e7)

          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #267 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/267/)
          YARN-3973. Recent changes to application priority management break reservation system from YARN-1051 (Carlo Curino via wangda) (wangda: rev a3bd7b4a59b3664273dc424f240356838213d4e7)

          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #997 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/997/)
          YARN-3973. Recent changes to application priority management break reservation system from YARN-1051 (Carlo Curino via wangda) (wangda: rev a3bd7b4a59b3664273dc424f240356838213d4e7)

          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2194 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2194/)
          YARN-3973. Recent changes to application priority management break reservation system from YARN-1051 (Carlo Curino via wangda) (wangda: rev a3bd7b4a59b3664273dc424f240356838213d4e7)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
          • hadoop-yarn-project/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #256 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/256/)
          YARN-3973. Recent changes to application priority management break reservation system from YARN-1051 (Carlo Curino via wangda) (wangda: rev a3bd7b4a59b3664273dc424f240356838213d4e7)

          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #264 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/264/)
          YARN-3973. Recent changes to application priority management break reservation system from YARN-1051 (Carlo Curino via wangda) (wangda: rev a3bd7b4a59b3664273dc424f240356838213d4e7)

          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2213 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2213/)
          YARN-3973. Recent changes to application priority management break reservation system from YARN-1051 (Carlo Curino via wangda) (wangda: rev a3bd7b4a59b3664273dc424f240356838213d4e7)

          • hadoop-yarn-project/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
          lars_francke Lars Francke added a comment -

          Is there any documentation on this besides the design doc and the patch itself?

          I still have trouble fully understanding how this is implemented/used.

          curino Carlo Curino added a comment -

          Lars Francke, you can refer to the general tech-report for more of the top-level ideas/design. As for how-to-use documentation, you are right that it is long overdue. A couple of follow-up umbrella JIRAs, YARN-2573 (HA for the reservation system) and YARN-2572 (various improvements/extensions and REST API work), can give you some more context on what is brewing. In particular, as part of the umbrella JIRA YARN-2572, I have just opened YARN-4468, which is intended to provide general documentation of the reservation system and its (recently added) REST API. We will try to get to it soon.

          curino Carlo Curino added a comment -

          YARN-2609 also provides an example of how to invoke this from the Java API.

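          For readers who want a quick feel for that Java API before the full documentation lands, here is a minimal sketch of submitting a reservation (a gang of 10 containers for 1 hour, anytime in the next 6 hours). Names and signatures follow the 2.6-era records API; the queue name "dedicated" and all sizes are made-up examples, and YARN-2609 remains the authoritative reference:

          {code:java}
import java.util.Collections;

import org.apache.hadoop.yarn.api.protocolrecords.ReservationSubmissionRequest;
import org.apache.hadoop.yarn.api.protocolrecords.ReservationSubmissionResponse;
import org.apache.hadoop.yarn.api.records.ReservationDefinition;
import org.apache.hadoop.yarn.api.records.ReservationId;
import org.apache.hadoop.yarn.api.records.ReservationRequest;
import org.apache.hadoop.yarn.api.records.ReservationRequestInterpreter;
import org.apache.hadoop.yarn.api.records.ReservationRequests;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ReservationExample {
  public static void main(String[] args) throws Exception {
    YarnClient client = YarnClient.createYarnClient();
    client.init(new YarnConfiguration());
    client.start();

    // "I need 10 containers of <1 GB, 1 vcore> for 1 hour, anytime in the next 6 hours."
    long now = System.currentTimeMillis();
    ReservationRequest gang = ReservationRequest.newInstance(
        Resource.newInstance(1024, 1), // capability of each container
        10,                            // total number of containers
        10,                            // minimum concurrency: a gang of 10
        60 * 60 * 1000L);              // duration each container is needed (ms)

    ReservationRequests requests = ReservationRequests.newInstance(
        Collections.singletonList(gang), ReservationRequestInterpreter.R_ALL);
    ReservationDefinition definition = ReservationDefinition.newInstance(
        now, now + 6 * 60 * 60 * 1000L, requests, "my-reservation");

    // "dedicated" is a hypothetical reservable queue backing the Plan.
    ReservationSubmissionRequest submission =
        ReservationSubmissionRequest.newInstance(definition, "dedicated");
    ReservationSubmissionResponse response = client.submitReservation(submission);
    ReservationId reservationId = response.getReservationId();
    System.out.println("Accepted reservation: " + reservationId);

    client.stop();
  }
}
          {code}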
          lars_francke Lars Francke added a comment -

          Thanks Carlo Curino for the pointers! Looking forward to the full documentation and will check out that example now.

          grey Lei Guo added a comment -

          Carlo Curino, for enterprise customers the scheduling policies can be complicated. How does this planner satisfy complex scheduling policies other than FIFO? As Arun C Murthy asked earlier, priority-based scheduling is one basic case of a scheduling policy; what is the best practice for this?

          curino Carlo Curino added a comment -

          Lei Guo, I suggest you read the attached tech report for full context, but let me try to summarize the ideas here.

          General Idea
          The reservation system receives reservation requests from users over a period of time. Note that each reservation can request resources well ahead of time (e.g., I need 10 containers for 1 hour tomorrow, sometime between 3pm and 6pm). The planner will try to "fit" all these reservations into the plan agenda, while respecting the user constraints (e.g., amount of resources and start_time/deadline) and the physical constraints of the plan (which is a "queue", and thus has access to a portion of the cluster capacity). The APIs exposed to users allow them to express their flexibility (e.g., for a map-only job I can state that I can run with up to 10 parallel containers, but also 1 container at a time); this allows the plan to fit more jobs by "deforming" them (see the sketch below). A side effect of this is that we can provide support for gang semantics (e.g., I need 10 concurrent containers for 1 hour).

          The key intuition is that each job might temporarily use a large amount of resources, but we control very explicitly when it should yield resources back to other jobs. This explicit time-multiplexing gives very strong guarantees to each job (i.e., if the reservation was accepted, you will get your resources), yet allows us to densely pack the cluster agenda (and thus get high utilization / high ROI). Moreover, best-effort jobs can be run in separate queues with the standard set of scheduling invariants provided by the FairScheduler/CapacityScheduler.
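          To make the "deforming" concrete, here is a hedged sketch (same 2.6-era API caveats as the submission example above) expressing two alternative shapes for the same work. Under R_ANY the planner may pick whichever shape fits the agenda; R_ALL with concurrency equal to the number of containers is how gang semantics are expressed:

          {code:java}
import java.util.Arrays;

import org.apache.hadoop.yarn.api.records.ReservationRequest;
import org.apache.hadoop.yarn.api.records.ReservationRequestInterpreter;
import org.apache.hadoop.yarn.api.records.ReservationRequests;
import org.apache.hadoop.yarn.api.records.Resource;

public class FlexibleShapes {
  // Sketch: the same 10 container-hours of map-only work, expressed two ways.
  static ReservationRequests flexibleShapes() {
    ReservationRequest wide = ReservationRequest.newInstance(
        Resource.newInstance(1024, 1), 10, 10, 60 * 60 * 1000L); // 10 in parallel
    ReservationRequest narrow = ReservationRequest.newInstance(
        Resource.newInstance(1024, 1), 10, 1, 60 * 60 * 1000L);  // 1 at a time
    // R_ANY: satisfying either alternative satisfies the reservation,
    // which is what lets the planner "deform" the job to pack the agenda.
    return ReservationRequests.newInstance(
        Arrays.asList(wide, narrow), ReservationRequestInterpreter.R_ANY);
  }
}
          {code}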

          SharingPolicy
          Another interesting area in which enterprise settings can extend/innovate is the choice of "SharingPolicy". The SharingPolicy is the way we determine (besides physical resource availability) how many resources a tenant/reservation can ask for in the Plan. This applies both per-reservation and across reservations from a user (or group). So far we have contributed a couple of simple policies that enforce instantaneous and over-time limits (e.g., each user can grab up to 30% of the plan instantaneously, but no more than an average of 5% over a 24h period). Internally at MS, we are developing other policies that are specific to business rules we care to enforce in our clusters. By design, creating a new SharingPolicy that matches your business settings is fairly easy (narrow API and easy configuration mechanics; see the sketch below). Since the Plan stores past (up to a window of time), present, and future reservations, policies can be very sophisticated and explicit. Also, given the run-length-encoded representation of the allocations, the algorithms can be quite efficient.

          ReservationAgent
          The reservation agents are the core of the placement logic. We developed a few that optimize for different objectives (e.g., minimizing the cost of an allocation by smoothing out the plan, or placing it as late/early as possible within the window of feasibility). Again, this is an area of possible enhancement, where business logic can kick in and choose to prioritize certain types of allocations; the extension point is sketched below.
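          The agent extension point is similarly small; roughly (again paraphrased and indicative, not the verbatim interface):

          {code:java}
// Indicative sketch of the ReservationAgent extension point: implementations
// decide where in the Plan a reservation's allocation is placed, moved, or removed.
public interface ReservationAgent {

  boolean createReservation(ReservationId reservationId, String user,
      Plan plan, ReservationDefinition contract) throws PlanningException;

  boolean updateReservation(ReservationId reservationId, String user,
      Plan plan, ReservationDefinition contract) throws PlanningException;

  boolean deleteReservation(ReservationId reservationId, String user,
      Plan plan) throws PlanningException;
}
          {code}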

          Enforcement mechanics
          Finally, in order to "enforce" these planned decisions, we use dynamically created and resized queues (each reservation can contain one or more jobs, so the queue mechanism is useful to reuse). Note that Arun C Murthy's comment was fairly technical, and related to this last point: he was proposing to leverage application priorities instead of queues as the enforcement mechanism. Both are feasible, and have some pros and cons. Overall, using queues allowed us to reuse more of the existing mechanisms (e.g., the preemption policy, and all of the advancements people are contributing there); a sketch of how a job attaches to its reservation queue follows below.

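          To close the loop on the mechanics: once a reservation is accepted, a job "rides" it by tagging its submission with the ReservationId, and the RM routes it into the dynamically managed queue that enforces the plan. A minimal sketch, assuming a reservable plan queue named "dedicated" (the queue name is hypothetical; a queue is marked reservable via the CapacityScheduler property yarn.scheduler.capacity.root.dedicated.reservable in capacity-scheduler.xml):

          {code:java}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ReservationId;
import org.apache.hadoop.yarn.client.api.YarnClient;

public class RunInReservation {
  // Sketch: link an application to an accepted reservation so the RM places it
  // in the reservation's dynamically created queue, which enforces the plan.
  static void submitIntoReservation(YarnClient client,
      ApplicationSubmissionContext appContext, ReservationId reservationId)
      throws Exception {
    appContext.setReservationID(reservationId); // attach to the reservation
    appContext.setQueue("dedicated");           // hypothetical plan queue
    client.submitApplication(appContext);
  }
}
          {code}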

            People

            • Assignee: curino Carlo Curino
            • Reporter: curino Carlo Curino
            • Votes: 0
            • Watchers: 69
