Hadoop YARN / YARN-193

Scheduler.normalizeRequest does not account for allocation requests that exceed maximumAllocation limits

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.2-alpha, 3.0.0
    • Fix Version/s: 2.1.0-beta
    • Component/s: resourcemanager
    • Labels: None

      Attachments

    1. YARN-193.9.patch
      59 kB
      Zhijie Shen
    2. YARN-193.8.patch
      60 kB
      Zhijie Shen
    3. YARN-193.7.patch
      80 kB
      Zhijie Shen
    4. YARN-193.6.patch
      75 kB
      Zhijie Shen
    5. YARN-193.5.patch
      47 kB
      Hitesh Shah
    6. YARN-193.4.patch
      34 kB
      Hitesh Shah
    7. YARN-193.14.patch
      50 kB
      Zhijie Shen
    8. YARN-193.13.patch
      49 kB
      Zhijie Shen
    9. YARN-193.12.patch
      49 kB
      Zhijie Shen
    10. YARN-193.11.patch
      56 kB
      Zhijie Shen
    11. YARN-193.10.patch
      56 kB
      Zhijie Shen
    12. MR-3796.wip.patch
      14 kB
      Hitesh Shah
    13. MR-3796.3.patch
      54 kB
      Hitesh Shah
    14. MR-3796.2.patch
      32 kB
      Hitesh Shah
    15. MR-3796.1.patch
      27 kB
      Hitesh Shah


        Activity

        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk #1392 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1392/)
        YARN-193. Scheduler.normalizeRequest does not account for allocation requests that exceed maximumAllocation limits (Zhijie Shen via bikas) (Revision 1465067)

        Result = SUCCESS
        bikas : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1465067
        Files :

        • /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/DefaultResourceCalculator.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/DominantResourceCalculator.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/ResourceCalculator.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/Resources.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/InvalidResourceRequestException.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMService.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestFifoScheduler.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceManager.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulerUtils.java
        • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk #1365 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1365/)
        YARN-193. Scheduler.normalizeRequest does not account for allocation requests that exceed maximumAllocation limits (Zhijie Shen via bikas) (Revision 1465067)

        Result = FAILURE
        bikas : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1465067
        Files : same list as in the Hadoop-Mapreduce-trunk #1392 notification above.
        Hudson added a comment -

        Integrated in Hadoop-Yarn-trunk #176 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/176/)
        YARN-193. Scheduler.normalizeRequest does not account for allocation requests that exceed maximumAllocation limits (Zhijie Shen via bikas) (Revision 1465067)

        Result = SUCCESS
        bikas : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1465067
        Files : same list as in the Hadoop-Mapreduce-trunk #1392 notification above.
        Hudson added a comment -

        Integrated in Hadoop-trunk-Commit #3570 (See https://builds.apache.org/job/Hadoop-trunk-Commit/3570/)
        YARN-193. Scheduler.normalizeRequest does not account for allocation requests that exceed maximumAllocation limits (Zhijie Shen via bikas) (Revision 1465067)

        Result = SUCCESS
        bikas : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1465067
        Files : same list as in the Hadoop-Mapreduce-trunk #1392 notification above.
        Bikas Saha added a comment -

        +1. Thanks Zhijie. Committed to trunk and branch-2.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12576857/YARN-193.14.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 5 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        -1 eclipse:eclipse. The patch failed to build with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-YARN-Build/668//testReport/
        Console output: https://builds.apache.org/job/PreCommit-YARN-Build/668//console

        This message is automatically generated.

        Zhijie Shen added a comment -

        Fixed the buggy test TestResourceManager#testResourceManagerInitConfigValidation

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12576820/YARN-193.13.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 5 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-YARN-Build/662//testReport/
        Console output: https://builds.apache.org/job/PreCommit-YARN-Build/662//console

        This message is automatically generated.

        Zhijie Shen added a comment -

        Fix the bug where the same config was set twice, and change the default max vcores to 4.

        Bikas Saha added a comment -

        These values need to be on the conservative side so that they work on most installations. Given that 24-32GB of memory is becoming the baseline nowadays, an 8GB default for max is OK IMO. Given that 16 cores are becoming the baseline, 4 cores sounds like a good default for max IMO. This is per container, and it's not easy to write code that actually maxes out 8 cores.

        Zhijie Shen added a comment -

        Default value of max-vcores of 32 might be too high.

        Why was 32 originally used?

        In http://hortonworks.com/blog/apache-hadoop-yarn-background-and-an-overview/, it is said:

        2012 – 16+ cores, 48-96GB of RAM, 12x2TB or 12x3TB of disk.

        How about choosing 16?

        Why is conf being set 2 times for each value? Same for vcores.

        I'll fix the bug.

        Bikas Saha added a comment -

        Default value of max-vcores of 32 might be too high.

        Why is conf being set 2 times for each value? Same for vcores.

        +    conf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB, 2048);
        +    conf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB, 1024);
        +    try {
        +      resourceManager.init(conf);
        +      fail("Exception is expected because the min memory allocation is" +
        +          " larger than the max memory allocation.");
        +    } catch (YarnException e) {
        +      // Exception is expected.
        +    }
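
        Presumably the second setInt was meant to target the maximum-allocation key, so that the test actually exercises min > max. A sketch of the likely intent (an assumption, not the committed code):

        conf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB, 2048);
        // Assumed fix: the second call sets the maximum, not the minimum again.
        conf.setInt(YarnConfiguration.RM_SCHEDULER_MAXIMUM_ALLOCATION_MB, 1024);
        try {
          resourceManager.init(conf);
          fail("Exception is expected because the min memory allocation is"
              + " larger than the max memory allocation.");
        } catch (YarnException e) {
          // Expected: the configured min (2048) exceeds the configured max (1024).
        }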
        
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12576680/YARN-193.12.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 5 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        -1 eclipse:eclipse. The patch failed to build with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-YARN-Build/653//testReport/
        Console output: https://builds.apache.org/job/PreCommit-YARN-Build/653//console

        This message is automatically generated.

        Zhijie Shen added a comment -

        1. Remove the DISABLE_RESOURCELIMIT_CHECK feature, and its related test cases.

        2. Rewrite the log messages, and output them through LOG.warn.

        3. Add javadocs for InvalidResourceRequestException.

        4. Check whether thrown exception is InvalidResourceRequestException in TestClientRMService.

        5. Add the test case of ask > max in TestSchedulerUtils.

        6. Fix other minor issues noted by Bikas and Hitesh (e.g., typo, unnecessary import).

        7. Rebase with YARN-382.

        Bikas Saha added a comment -

        Also, why are there so many normalize functions and why are we creating a new Resource object every time we normalize? We should fix this in a different jira though.

        Bikas Saha added a comment -

        Can we check that we are getting the expected exception and not some other one?

        +    try {
        +      rmService.submitApplication(submitRequest);
        +      Assert.fail("Application submission should fail because");
        +    } catch (YarnRemoteException e) {
        +      // Exception is expected
        +    }
        +  }
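
        For example, the catch block could verify the failure is the validation error (a sketch only; the exact message text is an assumption, not the committed wording):

        try {
          rmService.submitApplication(submitRequest);
          Assert.fail("Application submission should fail with an invalid resource request");
        } catch (YarnRemoteException e) {
          // Make sure this is the expected validation failure, not an unrelated error.
          Assert.assertTrue(e.getMessage().contains("Invalid resource request"));
        }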
        

        Setting the same config twice? In the second set, why not use a -ve value instead of the DISABLE value? It's not clear whether we want to disable the check or set a -ve value. Same for the others.

        +    conf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB, 0);
        +    conf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB,
        +        ResourceCalculator.DISABLE_RESOURCELIMIT_CHECK);
        +    try {
        +      resourceManager.init(conf);
        +      fail("Exception is expected because the min memory allocation is" +
        +          " non-positive.");
        +    } catch (YarnException e) {
        +      // Exception is expected.
        

        Let's also add a test for the case when memory is more than max. Normalize should always reduce that to max. Same for DRF.

        +    // max is not a multiple of min
        +    maxResource = Resources.createResource(maxMemory - 10, 0);
        +    ask.setCapability(Resources.createResource(maxMemory - 100));
        +    // multiple of minMemory > maxMemory, then reduce to maxMemory
        +    SchedulerUtils.normalizeRequest(ask, resourceCalculator, null,
        +        minResource, maxResource);
        +    assertEquals(maxResource.getMemory(), ask.getCapability().getMemory());
           }
        

        Rename testAppSubmitError() to show that it's testing an invalid resource request?

        TestAMRMClient. Why is this change needed?

        +    amResource.setMemory(
        +        YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_MB);
        +    amContainer.setResource(amResource);
        

        Don't we need to throw?

        +      } catch (InvalidResourceRequestException e) {
        +        LOG.info("Resource request was not able to be alloacated for" +
        +            " application attempt " + appAttemptId + " because it" +
        +            " failed to pass the validation. " + e.getMessage());
        +        RPCUtil.getRemoteException(e);
        +      }
        

        typo

        +    // validate scheduler vcors allocation setting
        

        This will need to be rebased after YARN-382, which I am going to commit shortly.

        I am fine with requiring that a max allocation limit be set. We should also make sure that the max allocation from config can be matched by at least one machine in the cluster. That should be a different jira.

        IMO, normalization should be called only inside the scheduler. It is an artifact of the scheduler logic. Nothing in the RM requires resources to be normalized to a multiple of min. Only the scheduler needs it to make its life easier, and it could choose not to do so.
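
        For reference, the rounding rule under discussion amounts to the following for the memory dimension (a minimal sketch; the real logic lives in the ResourceCalculator implementations):

        // Round the ask up to the nearest multiple of the minimum,
        // then cap the result at the maximum.
        int normalizeMemory(int ask, int min, int max) {
          int rounded = ((ask + min - 1) / min) * min;
          return Math.min(rounded, max);
        }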

        Zhijie Shen added a comment -

        I am not sure if we should allow disabling of the max memory and max vcores setting. Was it supported earlier or does the patch introduce this support?

        Yes, the patch introduces the support. It was already there in your previous patch; I inherited it and added some description in yarn-default.xml. I'm fine either way on whether the function needs to be supported. One risk I can imagine, if it is supported, is that AM memory can exceed "yarn.nodemanager.resource.memory-mb" when DISABLE_RESOURCELIMIT_CHECK is set. Then, the problem described in YARN-389 will occur.

        Question - should normalization of resource requests be done inside the scheduler or in the ApplicationMasterService itself which handles the allocate call?

        I think it is better to do normalization outside allocate, because allocate is not only called from ApplicationMasterService, and normalization does not need to happen every time allocate is called. For example, RMAppAttemptImpl#ScheduleTransition#transition doesn't require normalization because the resource has already been validated at submission. For another example, RMAppAttemptImpl#AMContainerAllocatedTransition#transition supplies an empty ask.

        Unrelated to this patch but when throwing/logging errors related to configs, we should always point to the configuration property to let the user know which property needs to be changed. Please file a separate jira for the above.

        I'll do that, and will fix the log messages where exceptions are thrown in this patch.

        For InvalidResourceRequestException, missing javadocs for class description.

        I'll add the description.

        If maxMemory or maxVcores is set to -1, what will happen when normalize() is called?

        The normalized value has no upper bound.
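
        To make that concrete, here is a minimal, purely illustrative sketch of the behavior under discussion — not the patch code; the helper name and the -1 sentinel are assumptions based on this thread:

            /** Illustrative sketch only; assumes DISABLE_RESOURCELIMIT_CHECK == -1. */
            public class NormalizeSketch {
              static final int DISABLE_RESOURCELIMIT_CHECK = -1;

              /** Round a memory request up to a multiple of min; cap at max unless the check is disabled. */
              static int normalizeMemory(int requestedMB, int minMB, int maxMB) {
                int rounded = ((requestedMB + minMB - 1) / minMB) * minMB; // next multiple of min
                if (maxMB == DISABLE_RESOURCELIMIT_CHECK) {
                  return rounded; // no upper bound when the max check is disabled
                }
                return Math.min(rounded, maxMB);
              }

              public static void main(String[] args) {
                // With the max check disabled, 2300 MB rounds up to 3072 MB and is never capped.
                System.out.println(normalizeMemory(2300, 1024, DISABLE_RESOURCELIMIT_CHECK)); // 3072
              }
            }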

        Hitesh Shah added a comment -
        +    and will get capped to this value. When it is set to -1, checking against the
        +    maximum allocation should be disable.</description>
        

        I am not sure if we should allow disabling of the max memory and max vcores setting. Was it supported earlier or does the patch introduce this support?

        Spelling mistake: alloacated

        +        LOG.info("Resource request was not able to be alloacated for" +
        +            " application attempt " + appAttemptId + " because it" +
        +            " failed to pass the validation. " + e.getMessage());
        

        The above could be made simpler and more concise. For example: "LOG.warn("Invalid resource ask by application " + appAttemptId, e);". Also, please use LOG.level(message, throwable) when trying to log an exception.

        +        RPCUtil.getRemoteException(e);
        

        Above is missing a throw.

        Likewise, in handling of submitApplication, please change log level to warn and also use the correct log function instead of using e.getMessage().

             if (globalMaxAppAttempts <= 0) {
               throw new YarnException(
                   "The global max attempts should be a positive integer.");
             }
        

        Unrelated to this patch but when throwing/logging errors related to configs, we should always point to the configuration property to let the user know which property needs to be changed. Please file a separate jira for the above. With respect to this, it may be useful to point to the property when throwing exceptions for invalid min/max memory/vcores.

        Unnecessary import in RMAppAttemptImpl:

         +import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils;
        

        For InvalidResourceRequestException, missing javadocs for class description.

        Question - should normalization of resource requests be done inside the scheduler or in the ApplicationMasterService itself which handles the allocate call?

        If maxMemory or maxVcores is set to -1, what will happen when normalize() is called? I think there are missing tests related to use of DISABLE_RESOURCELIMIT_CHECK in both validate and normalize functions that should have caught this error. In any case, the main question is whether DISABLE_RESOURCELIMIT_CHECK should actually be allowed.
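
        For illustration, the kind of missing test being described might look like the sketch below (JUnit 4; normalizeMemory is the hypothetical helper sketched earlier in this thread, not the real SchedulerUtils API):

            import static org.junit.Assert.assertEquals;

            import org.junit.Test;

            /** Illustrative only: exercises the hypothetical normalizeMemory sketch with the max check disabled. */
            public class TestDisableLimitCheckSketch {
              private static final int DISABLED = -1;

              @Test
              public void normalizeIsUnboundedWhenMaxCheckIsDisabled() {
                // A request far above any configured max must still be rounded up to a
                // multiple of min, and must not be capped when the check is disabled.
                assertEquals(102400, NormalizeSketch.normalizeMemory(102399, 1024, DISABLED));
              }
            }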

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12576476/YARN-193.11.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 6 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        -1 eclipse:eclipse. The patch failed to build with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-YARN-Build/643//testReport/
        Console output: https://builds.apache.org/job/PreCommit-YARN-Build/643//console

        This message is automatically generated.

        Zhijie Shen added a comment -

        Clean up one unnecessary import as well.

        Zhijie Shen added a comment -

        I've made the following changes in the newest patch:

        1. Undid the InvalidResourceRequestException handling in RMAppManager, and the corresponding test modifications in TestRMAppManager.

        2. Moved the InvalidResourceRequestException handling to ClientRMService, and added a test to TestClientRMService for the case where the requested resource is invalid.

        3. Fixed the log messages.

        4. Fixed the description in yarn-default.xml.

        5. Did the sanity check for vcores as well in ResourceManager#validateConfigs (a rough sketch of such a check follows this list), and added the related test cases in TestResourceManager.

        6. Added tests to TestSchedulerUtils for the case where the resource upper-bound check is disabled.
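
        For illustration only, a sketch of the kind of startup sanity check item 5 describes — the method name and messages are hypothetical, not the actual ResourceManager#validateConfigs code, and the property names in the message are approximate:

            /** Hypothetical sketch of min/max allocation sanity checks at RM startup. */
            public class ValidateConfigsSketch {
              static void validateAllocationConfigs(int minMB, int maxMB,
                  int minVcores, int maxVcores) {
                if (minMB <= 0 || minVcores <= 0) {
                  // Point the user at the offending property, as suggested elsewhere in this thread.
                  throw new IllegalArgumentException("Minimum allocation must be positive"
                      + " (yarn.scheduler.minimum-allocation-mb / -vcores)");
                }
                // A max of -1 disables the upper-bound check; otherwise max must be >= min.
                if (maxMB != -1 && maxMB < minMB) {
                  throw new IllegalArgumentException("Maximum memory allocation must be >= minimum, or -1");
                }
                if (maxVcores != -1 && maxVcores < minVcores) {
                  throw new IllegalArgumentException("Maximum vcore allocation must be >= minimum, or -1");
                }
              }
            }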

        Zhijie Shen added a comment -

        @Bikas,

        Sorry, I didn't complete the last comment but clicked the add button by mistake.

        On the same note, why can this validation not be done in ClientRMService just like its been done in ApplicationMasterService?

        This is because the validation is done in RMAppManager#submitApplication, which may throw an InvalidResourceRequestException; the exception is then wrapped in a YarnException and finally surfaces in ClientRMService#submitApplication.
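
        As a purely hypothetical sketch of that propagation path (all types below are stand-ins defined in the sketch itself, not the real YARN classes):

            /** Stand-in types to illustrate the wrap-and-rethrow path described above. */
            public class WrapAndRethrowSketch {
              static class InvalidResourceRequestExc extends Exception {
                InvalidResourceRequestExc(String msg) { super(msg); }
              }

              static class YarnExc extends Exception {
                YarnExc(String msg, Throwable cause) { super(msg, cause); }
              }

              /** Validation fails for non-positive or over-the-max requests (-1 disables the cap). */
              static void validate(int requestedMB, int maxMB) throws InvalidResourceRequestExc {
                if (requestedMB <= 0 || (maxMB != -1 && requestedMB > maxMB)) {
                  throw new InvalidResourceRequestExc("invalid request: " + requestedMB + " MB");
                }
              }

              /** RMAppManager#submitApplication analogue: the validation error is wrapped for the caller. */
              static void submitApplication(int amMemoryMB, int maxMB) throws YarnExc {
                try {
                  validate(amMemoryMB, maxMB);
                } catch (InvalidResourceRequestExc e) {
                  throw new YarnExc("App submission failed resource validation", e);
                }
              }
            }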

        Zhijie Shen added a comment -

        @Bikas

        This and others like it are back-incompatible but might be ok since we are still in alpha

        It's confusing that the name and the default value use 'core' and 'vcore' respectively. Therefore, as we're still in the alpha stage, IMHO we'd better make them consistent.

        It should be disabled. Same for other places.

        Will be fixed.

        This and other places, a LOG in the catch would be good.

        Log will be added.

        Incorrect log message.

        Log message will be fixed.

        Also, in this method, why are we throwing an exception in the inner block and catching it in the outer block. Why is the inner try catch needed (instead of catching the exception in the outer catch)?

        This is because InvalidResourceRequestException extends YarnException, while the outer block only catches the IOException, which YarnRemoteException extends.

        On the same note, why can this validation not be done in ClientRMService just like its been done in ApplicationMasterService?

        This is because the validation is done in submitApplication

        Where are we testing that normalize is being set to the next higher multiple of min but not more than the max (for DRF case)? OR that checking against max is disabled by setting MAX allowed to -1. I am sorry if I have missed it.

        There's one test in TestSchedulerUtils#testNormalizeRequest for the former case; I'll add tests for the latter case and for ResourceManager#validateConfigs as well.

        Bikas Saha added a comment -

        Also, do we really need to create a new Resource object every time we call normalize? This should be a different jira though.

        Bikas Saha added a comment -

        This and others like it are back-incompatible but might be ok since we are still in alpha

        -  public static final int DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_CORES = 32;
        +  public static final int DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES = 32;
        

        It should be disabled. Same for other places.

        +    maximum allocation is disable.</description>
        

        In this and other places, a LOG in the catch would be good.
        Also, I am not warming up to the idea of having to put a try/catch around every validate.

        +      // sanity check
        +      try {
        +        SchedulerUtils.validateResourceRequests(ask,
        +            rScheduler.getMaximumResourceCapability());
        +      } catch (InvalidResourceRequestException e) {
        +        RPCUtil.getRemoteException(e);
        +      }
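
        A corrected version of that block — reusing the identifiers from the quoted diff, so it is a fragment rather than standalone code — would actually throw the exception it builds:

            // Sketch of the fix: the remote exception must be thrown, not just constructed.
            try {
              SchedulerUtils.validateResourceRequests(ask,
                  rScheduler.getMaximumResourceCapability());
            } catch (InvalidResourceRequestException e) {
              throw RPCUtil.getRemoteException(e);
            }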
        

        Incorrect log message.

        +        try {
        +          SchedulerUtils.validateResourceRequest(amReq,
        +              scheduler.getMaximumResourceCapability());
        +        } catch (InvalidResourceRequestException e) {
        +          LOG.info("RM App submission failed in normalize AM Resource Request "
        +              + "for application with id " + applicationId + " : "
        +              + e.getMessage());
        

        Also, in this method, why are we throwing an exception in the inner block and catching it in the outer block? Why is the inner try/catch needed (instead of catching the exception in the outer catch)?
        On the same note, why can this validation not be done in ClientRMService, just like it's been done in ApplicationMasterService? That maintains symmetry and is easier to understand/correlate. It will also work when RMAppManager.handle() is not called synchronously from ClientRMService.

        Where are we testing that normalize sets the value to the next higher multiple of min but not more than the max (for the DRF case)? Or that checking against max is disabled by setting the max allowed to -1? I am sorry if I have missed it.

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12575991/YARN-193.9.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 6 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-YARN-Build/625//testReport/
        Console output: https://builds.apache.org/job/PreCommit-YARN-Build/625//console

        This message is automatically generated.

        Zhijie Shen added a comment -

        Merged against the latest trunk, and replaced the newly introduced "*" with ResourceRequest.ANY, as YARN-450 has been committed.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12575844/YARN-193.8.patch
        against trunk revision .

        -1 patch. The patch command could not apply the patch.

        Console output: https://builds.apache.org/job/PreCommit-YARN-Build/621//console

        This message is automatically generated.

        Zhijie Shen added a comment -

        Had offline discussion with Bikas and Hitesh. We agreed to simplify the solution, and isolate it from the fix of YARN-382.

        Bikas Saha added a comment -

        I am not sure the normalization errors should reach all the way to RMAppAttemptImpl and cause failures. The AM container request should be validated and normalized in ApplicationMasterService.submitApplication() as the first thing, even before sending it to RMAppManager. Task container requests should be validated in ApplicationMasterService.allocate() as the first thing, before calling scheduler.allocate(). This is like a sanity check. It also ensures that we are not calling into the scheduler and changing its internal state (e.g. it could return a completed container or a newly allocated container, which would be lost if we throw an exception).
        RMAppAttemptImpl could assert that the allocated container has the same size as the requested container.

        Normalization should simply cap the resource to the max allowed. Normalize can be called from anywhere, so it's not necessary to always validate before normalizing. In fact, we could choose to normalize requests > max down to max instead of throwing an exception.

        Validate should not throw an exception, IMO. It's like a helper function that tells whether the value is valid or not. Different users can choose to do different things based on the result of validate().
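
        A rough sketch of that shape — validate as a plain predicate and a normalize that silently caps — with hypothetical names, not what was ultimately committed:

            /** Hypothetical sketch: validate as a predicate plus a capping normalize. */
            public class CapInsteadOfThrowSketch {
              /** True iff the request is positive and within max; a max of -1 disables the upper check. */
              static boolean isValid(int requestedMB, int maxMB) {
                return requestedMB > 0 && (maxMB == -1 || requestedMB <= maxMB);
              }

              /** Round up to a multiple of min, then cap at max rather than throwing. */
              static int normalize(int requestedMB, int minMB, int maxMB) {
                int rounded = ((requestedMB + minMB - 1) / minMB) * minMB;
                return (maxMB == -1) ? rounded : Math.min(rounded, maxMB);
              }
            }

        Each caller would then decide how to react when isValid returns false, instead of catching an exception.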

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12575594/YARN-193.7.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 9 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-YARN-Build/609//testReport/
        Console output: https://builds.apache.org/job/PreCommit-YARN-Build/609//console

        This message is automatically generated.

        Zhijie Shen added a comment -

        Cleaned up the warnings in TestRMAppAttemptTransitions, and fixed the broken test cases there and in TestClientRMService.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12575546/YARN-193.6.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 8 new or modified test files.

        -1 javac. The applied patch generated 1365 javac compiler warnings (more than the trunk's current 1361 warnings).

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

        org.apache.hadoop.yarn.server.resourcemanager.TestClientRMService
        org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.TestRMAppAttemptTransitions

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-YARN-Build/603//testReport/
        Javac warnings: https://builds.apache.org/job/PreCommit-YARN-Build/603//artifact/trunk/patchprocess/diffJavacWarnings.txt
        Console output: https://builds.apache.org/job/PreCommit-YARN-Build/603//console

        This message is automatically generated.

        Zhijie Shen added a comment -

        Based on @Hitesh's previous patch, I've made the following changes in the newest one:

        1. Modified the boundary case for judging a valid resource value ("< 0" => "<= 0").

        2. maxMem no longer needs to be a multiple of minMem.

        3. To fix YARN-382, in RMAppManager, the AM CLC still needs to be updated after request normalization is executed, so that the AM CLC knows the updated resource, which will be equal to the resource of the allocated container. To ensure the equivalence, an assert is added in RMAppAttemptImpl$AMContainerAllocatedTransition (a rough sketch of the check follows this list). The changes in YARN-370 are also reverted.

        Therefore, if this jira is fixed, YARN-382 can be fixed as well.

        4. Created InvalidResourceException, which extends IOException, and used it when the requested resource is invalid in terms of its values. The related functions are modified to either throw or catch the exception. In particular, in the transitions of RMAppAttemptImpl, the attempt transitions to the FAILED state when the exception is caught.

        When YARN-142 gets fixed, the customized exception will need to be updated.

        5. Reorganized the code.

        6. Added more test cases.
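
        For item 3, the equality check might look roughly like the fragment below (illustrative only; the real transition compares the Resource objects on the allocated Container, not bare integers):

            // Illustrative sketch of item 3: assert that the allocated AM container
            // matches the normalized request.
            static void assertAllocationMatchesRequest(int requestedMB, int allocatedMB) {
              assert requestedMB == allocatedMB
                  : "allocated AM container (" + allocatedMB + " MB) != normalized request ("
                      + requestedMB + " MB)";
            }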

        Comments, please. Thanks!

        Hitesh Shah added a comment -

        @Zhijie, distributedshell is an example application, and therefore it shows how to write a "good" application that checks what the limits are and changes its requests accordingly.

        IMO, for applications that do not respect the limits, instead of reducing their stated requirements to the max value, we should throw an error, as we are not sure whether the app really needs that high an amount of resources and whether it will actually work if we reduce that amount to the max value.

        Does that make sense?

        Zhijie Shen added a comment -

        Hi Hitesh Shah, I have one comment on the patch.

        I've found that in the patch, normalization will throw an exception if the requested resource is larger than the configured max value. IMHO, it's better to normalize the requested resource to a multiple of the min value that is also no larger than the max value. For example, take min = 1024, max = 2560, resource = 2300. If only the min value is considered, the resource will be normalized to 3072. Then, if the max value is considered, the resource should be reduced to the largest multiple of the min value that is no larger than the max value, i.e., 2048.
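
        That arithmetic, as a runnable sketch (the helper is hypothetical, written only to reproduce the numbers above):

            /** Worked example from the comment above: min = 1024, max = 2560, request = 2300. */
            public class LargestMultipleSketch {
              /** Round up to a multiple of min; if that exceeds max, fall back to the
               *  largest multiple of min that does not exceed max. */
              static int normalizeCapped(int requested, int min, int max) {
                int roundedUp = ((requested + min - 1) / min) * min;        // 2300 -> 3072
                return (roundedUp <= max) ? roundedUp : (max / min) * min;  // 3072 > 2560 -> 2048
              }

              public static void main(String[] args) {
                System.out.println(normalizeCapped(2300, 1024, 2560)); // prints 2048
              }
            }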

        Otherwise, if the exception is to be thrown during normalization anyway, the Client and ApplicationMaster of distributed shell should be modified accordingly, because the requested resource is currently reset to the max value there whenever it is larger than the max value. I feel it's better to have consistent behavior.

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12564526/YARN-193.5.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 7 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-YARN-Build/340//testReport/
        Console output: https://builds.apache.org/job/PreCommit-YARN-Build/340//console

        This message is automatically generated.

        Hitesh Shah added a comment -

        Re-based against changes for cpu-based scheduling.

        Hitesh Shah added a comment -

        @Vinod, the .4 patch was against trunk as of 8th Jan. Will rebase and upload again.

        Vinod Kumar Vavilapalli added a comment -

        Hitesh, it doesn't apply for me anymore. Is it against latest trunk?

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12563808/YARN-193.4.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 7 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-YARN-Build/326//testReport/
        Console output: https://builds.apache.org/job/PreCommit-YARN-Build/326//console

        This message is automatically generated.

        Hitesh Shah added a comment -

        Re-based patch against latest code.

        Hitesh Shah added a comment -

        Made min/max settings global at the RM level instead of the per-scheduler settings.

        Vinod Kumar Vavilapalli added a comment -

        I think we need to move the min/max allocation configuration to be a global RM-level setting

        Sure, +1. I had done that as part of MAPREDUCE-3812, but that ticket needs more effort. Would favor getting that part in here.

        Hitesh Shah added a comment -

        @Vinod, true - had thought of that but for that to be done, I think we need to move the min/max allocation configuration to be a global RM-level setting and not a scheduler-specific configuration option.

        Vinod Kumar Vavilapalli added a comment -

        Looked at the patch briefly. I think moving the normalization of the requests out of the scheduler into the 'leaf services', like ClientRMService and ApplicationMasterService, simplifies things a lot. It seems more natural to me too, as we can recover from or report errors sooner rather than later. What do you say?

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12514773/MR-3796.2.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 9 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in .

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1879//testReport/
        Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1879//console

        This message is automatically generated.

        Hitesh Shah added a comment -

        The max/min allocation field variables are added to findbugs-exclude, as they are initialized only once and are read-only from then on.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12513209/MR-3796.1.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 6 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        -1 findbugs. The patch appears to introduce 4 new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in .

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1772//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1772//artifact/trunk/hadoop-mapreduce-project/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
        Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1772//console

        This message is automatically generated.

        Hitesh Shah added a comment -

        Ignore the patch. Still need to work some stuff out with respect to the state machine changes.

        Hitesh Shah added a comment -

        Still a work in progress. Tests pending. Changing the state machine slightly in case anyone wants to take a quick look.


          People

          • Assignee: Zhijie Shen
          • Reporter: Hitesh Shah
          • Votes: 1
          • Watchers: 13
