Hadoop Map/Reduce / MAPREDUCE-4311

Capacity scheduler.xml does not accept decimal values for capacity and maximum-capacity settings


Details

    • Incompatible change, Reviewed

    Description

      If the capacity scheduler's capacity or maximum-capacity setting is given a
      decimal value, the ResourceManager fails to start:

      Error starting ResourceManager

      java.lang.NumberFormatException: For input string: "10.5"
              at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
              at java.lang.Integer.parseInt(Integer.java:458)
              at java.lang.Integer.parseInt(Integer.java:499)
              at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:713)
              at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getCapacity(CapacitySchedulerConfiguration.java:147)
              at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.<init>(LeafQueue.java:147)
              at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.parseQueue(CapacityScheduler.java:297)
              at ...

      The 0.20 capacity scheduler used to accept decimal values, and this could be an
      issue on large clusters that need queues with small allocations. The sketch below
      illustrates the failing parse.
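
      A minimal sketch of the failure (an illustrative stand-alone class, not part of the
      attached patch; it assumes only org.apache.hadoop.conf.Configuration on the
      classpath and writes the queue property key out by hand instead of going through
      CapacitySchedulerConfiguration):

          import org.apache.hadoop.conf.Configuration;

          public class DecimalCapacityRepro {
            public static void main(String[] args) {
              Configuration conf = new Configuration();

              // A decimal capacity, as a user would set it in capacity-scheduler.xml.
              conf.set("yarn.scheduler.capacity.root.default.capacity", "10.5");

              try {
                // Configuration.getInt() hands the raw string to Integer.parseInt(),
                // which rejects "10.5", the same call chain as in the stack trace above.
                conf.getInt("yarn.scheduler.capacity.root.default.capacity", 100);
              } catch (NumberFormatException e) {
                System.out.println("getInt: " + e);        // For input string: "10.5"
              }

              // Configuration.getFloat() parses the same value cleanly, so reading the
              // capacity as a float would accept decimal settings again.
              float capacity =
                  conf.getFloat("yarn.scheduler.capacity.root.default.capacity", 100f);
              System.out.println("getFloat: " + capacity); // 10.5
            }
          }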

      Attachments

        1. MR-4311.patch (15 kB) - Karthik Kambatla


            People

              Assignee: Karthik Kambatla (kasha)
              Reporter: Thomas Graves (tgraves)
