Index: hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.0/RELEASENOTES.0.23.0.md IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.0/RELEASENOTES.0.23.0.md (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-common-project/hadoop-common/src/site/markdown/release/0.23.0/RELEASENOTES.0.23.0.md (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -702,7 +702,7 @@ * [MAPREDUCE-3186](https://issues.apache.org/jira/browse/MAPREDUCE-3186) | *Blocker* | **User jobs are getting hanged if the Resource manager process goes down and comes up while job is getting executed.** -New Yarn configuration property: +New YARN configuration property: Name: yarn.app.mapreduce.am.scheduler.connection.retries Description: Number of times AM should retry to contact RM if connection is lost. Index: hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.0-beta/RELEASENOTES.2.1.0-beta.md IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.0-beta/RELEASENOTES.2.1.0-beta.md (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-common-project/hadoop-common/src/site/markdown/release/2.1.0-beta/RELEASENOTES.2.1.0-beta.md (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -287,7 +287,7 @@ --- -* [MAPREDUCE-5156](https://issues.apache.org/jira/browse/MAPREDUCE-5156) | *Blocker* | **Hadoop-examples-1.x.x.jar cannot run on Yarn** +* [MAPREDUCE-5156](https://issues.apache.org/jira/browse/MAPREDUCE-5156) | *Blocker* | **Hadoop-examples-1.x.x.jar cannot run on YARN** **WARNING: No release note provided for this incompatible change.** @@ -518,7 +518,7 @@ --- -* [YARN-787](https://issues.apache.org/jira/browse/YARN-787) | *Blocker* | **Remove resource min from Yarn client API** +* [YARN-787](https://issues.apache.org/jira/browse/YARN-787) | *Blocker* | **Remove resource min from YARN client API** **WARNING: No release note provided for this incompatible change.** Index: hadoop-common-project/hadoop-common/src/site/markdown/release/2.5.0/RELEASENOTES.2.5.0.md IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-common-project/hadoop-common/src/site/markdown/release/2.5.0/RELEASENOTES.2.5.0.md (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-common-project/hadoop-common/src/site/markdown/release/2.5.0/RELEASENOTES.2.5.0.md (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -52,7 +52,7 @@ --- -* [HADOOP-9919](https://issues.apache.org/jira/browse/HADOOP-9919) | *Major* | **Update hadoop-metrics2.properties examples to Yarn** +* [HADOOP-9919](https://issues.apache.org/jira/browse/HADOOP-9919) | *Major* | **Update hadoop-metrics2.properties examples to YARN** Remove MRv1 settings from hadoop-metrics2.properties, add YARN settings instead. 
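As a concrete illustration of the HADOOP-9919 note above, the YARN-side entries in `hadoop-metrics2.properties` might look like the following sketch. The `file` sink name and the output file names are illustrative assumptions; only the `resourcemanager`/`nodemanager` daemon prefixes and the `FileSink` class follow the usual metrics2 conventions.

```properties
# Illustrative sketch only: YARN daemon sinks that replace the old
# jobtracker/tasktracker (MRv1) entries in hadoop-metrics2.properties.
*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
resourcemanager.sink.file.filename=resourcemanager-metrics.out
nodemanager.sink.file.filename=nodemanager-metrics.out
```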
Index: hadoop-common-project/hadoop-common/src/site/markdown/release/2.8.0/RELEASENOTES.2.8.0.md IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-common-project/hadoop-common/src/site/markdown/release/2.8.0/RELEASENOTES.2.8.0.md (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-common-project/hadoop-common/src/site/markdown/release/2.8.0/RELEASENOTES.2.8.0.md (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -1069,7 +1069,7 @@ --- -* [YARN-6177](https://issues.apache.org/jira/browse/YARN-6177) | *Major* | **Yarn client should exit with an informative error message if an incompatible Jersey library is used at client** +* [YARN-6177](https://issues.apache.org/jira/browse/YARN-6177) | *Major* | **YARN client should exit with an informative error message if an incompatible Jersey library is used at client** Let yarn client exit with an informative error message if an incompatible Jersey library is used from client side. Index: hadoop-common-project/hadoop-common/src/site/markdown/release/3.0.0-alpha4/RELEASENOTES.3.0.0-alpha4.md IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-common-project/hadoop-common/src/site/markdown/release/3.0.0-alpha4/RELEASENOTES.3.0.0-alpha4.md (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-common-project/hadoop-common/src/site/markdown/release/3.0.0-alpha4/RELEASENOTES.3.0.0-alpha4.md (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -108,7 +108,7 @@ --- -* [YARN-6177](https://issues.apache.org/jira/browse/YARN-6177) | *Major* | **Yarn client should exit with an informative error message if an incompatible Jersey library is used at client** +* [YARN-6177](https://issues.apache.org/jira/browse/YARN-6177) | *Major* | **YARN client should exit with an informative error message if an incompatible Jersey library is used at client** Let yarn client exit with an informative error message if an incompatible Jersey library is used from client side. Index: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -166,7 +166,7 @@ The mount table is read when the job is submitted to the cluster. The `XInclude` in `core-site.xml` is expanded at job submission time. This means that if the mount table are changed then the jobs need to be resubmitted. Due to this reason, we want to implement merge-mount which will greatly reduce the need to change mount tables. Further, we would like to read the mount tables via another mechanism that is initialized at job start time in the future. -6. **Will JobTracker (or Yarn’s Resource Manager) itself use the ViewFs?** +6. **Will JobTracker (or YARN’s Resource Manager) itself use the ViewFs?** No, it does not need to. Neither does the NodeManager. 
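To make the mount-table loading described in the ViewFs FAQ above concrete, a minimal sketch of how a mount table is typically pulled into `core-site.xml` via `XInclude` follows; the file name `mountTable.xml` is an assumption for illustration.

```xml
<!-- core-site.xml: include the mount table so it is expanded at job submission time. -->
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="mountTable.xml" />
</configuration>
```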
Index: hadoop-tools/hadoop-archive-logs/src/site/markdown/HadoopArchiveLogs.md IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-tools/hadoop-archive-logs/src/site/markdown/HadoopArchiveLogs.md (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-tools/hadoop-archive-logs/src/site/markdown/HadoopArchiveLogs.md (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -21,7 +21,7 @@ Overview -------- -For clusters with a lot of Yarn aggregated logs, it can be helpful to combine +For clusters with a lot of YARN aggregated logs, it can be helpful to combine them into hadoop archives in order to reduce the number of small files, and hence the stress on the NameNode. This tool provides an easy way to do this. Aggregated logs in hadoop archives can still be read by the Job History Server @@ -50,7 +50,7 @@ to be eligible (default: 20) -noProxy When specified, all processing will be done as the user running this command (or - the Yarn user if DefaultContainerExecutor + the YARN user if DefaultContainerExecutor is in use). When not specified, all processing will be done as the user who owns that application; if the user @@ -86,7 +86,7 @@ its aggregated log files with the resulting archive. The ``-noProxy`` option makes the tool process everything as the user who is -currently running it, or the Yarn user if DefaultContainerExecutor is in use. +currently running it, or the YARN user if DefaultContainerExecutor is in use. When not specified, all processing will be done by the user who owns that application; if the user running this command is not allowed to impersonate that user, it will fail. This is useful if you want an admin user to handle all Index: hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -12,10 +12,10 @@ limitations under the License. See accompanying LICENSE file. --> -Yarn Scheduler Load Simulator (SLS) +YARN Scheduler Load Simulator (SLS) =================================== -* [Yarn Scheduler Load Simulator (SLS)](#Yarn_Scheduler_Load_Simulator_SLS) +* [YARN Scheduler Load Simulator (SLS)](#Yarn_Scheduler_Load_Simulator_SLS) * [Overview](#Overview) * [Overview](#Overview) * [Goals](#Goals) @@ -39,11 +39,11 @@ ### Overview -The Yarn scheduler is a fertile area of interest with different implementations, e.g., Fifo, Capacity and Fair schedulers. Meanwhile, several optimizations are also made to improve scheduler performance for different scenarios and workload. Each scheduler algorithm has its own set of features, and drives scheduling decisions by many factors, such as fairness, capacity guarantee, resource availability, etc. It is very important to evaluate a scheduler algorithm very well before we deploy in a production cluster. Unfortunately, currently it is non-trivial to evaluate a scheduler algorithm. Evaluating in a real cluster is always time and cost consuming, and it is also very hard to find a large-enough cluster. Hence, a simulator which can predict how well a scheduler algorithm for some specific workload would be quite useful. 
+The YARN scheduler is a fertile area of interest with different implementations, e.g., Fifo, Capacity and Fair schedulers. Meanwhile, several optimizations are also made to improve scheduler performance for different scenarios and workload. Each scheduler algorithm has its own set of features, and drives scheduling decisions by many factors, such as fairness, capacity guarantee, resource availability, etc. It is very important to evaluate a scheduler algorithm very well before we deploy it in a production cluster. Unfortunately, currently it is non-trivial to evaluate a scheduler algorithm. Evaluating in a real cluster is always time and cost consuming, and it is also very hard to find a large-enough cluster. Hence, a simulator which can predict how well a scheduler algorithm works for some specific workload would be quite useful.

-The Yarn Scheduler Load Simulator (SLS) is such a tool, which can simulate large-scale Yarn clusters and application loads in a single machine.This simulator would be invaluable in furthering Yarn by providing a tool for researchers and developers to prototype new scheduler features and predict their behavior and performance with reasonable amount of confidence, thereby aiding rapid innovation.
+The YARN Scheduler Load Simulator (SLS) is such a tool, which can simulate large-scale YARN clusters and application loads in a single machine. This simulator would be invaluable in furthering YARN by providing a tool for researchers and developers to prototype new scheduler features and predict their behavior and performance with a reasonable amount of confidence, thereby aiding rapid innovation.

-The simulator will exercise the real Yarn `ResourceManager` removing the network factor by simulating `NodeManagers` and `ApplicationMasters` via handling and dispatching `NM`/`AMs` heartbeat events from within the same JVM. To keep tracking of scheduler behavior and performance, a scheduler wrapper will wrap the real scheduler.
+The simulator will exercise the real YARN `ResourceManager`, removing the network factor by simulating `NodeManagers` and `ApplicationMasters` via handling and dispatching `NM`/`AMs` heartbeat events from within the same JVM. To keep track of scheduler behavior and performance, a scheduler wrapper will wrap the real scheduler.

The size of the cluster and the application load can be loaded from configuration files, which are generated from job history files directly by adopting [Apache Rumen](../hadoop-rumen/Rumen.html).

@@ -74,7 +74,7 @@

![The architecture of the simulator](images/sls_arch.png)

-The simulator takes input of workload traces, or synthetic load distributions and generaters the cluster and applications information. For each NM and AM, the simulator builds a simulator to simulate their running. All NM/AM simulators run in a thread pool. The simulator reuses Yarn Resource Manager, and builds a wrapper out of the scheduler. The Scheduler Wrapper can track the scheduler behaviors and generates several logs, which are the outputs of the simulator and can be further analyzed.
+The simulator takes input of workload traces, or synthetic load distributions and generates the cluster and applications information. For each NM and AM, the simulator builds a simulator to simulate their running. All NM/AM simulators run in a thread pool. The simulator reuses the YARN Resource Manager, and builds a wrapper out of the scheduler. The Scheduler Wrapper can track the scheduler behaviors and generates several logs, which are the outputs of the simulator and can be further analyzed.
### Usecases

@@ -110,9 +110,9 @@

### Step 1: Configure Hadoop and the simulator

-Before we start, make sure Hadoop and the simulator are configured well. All configuration files for Hadoop and the simulator should be placed in directory `$HADOOP_ROOT/etc/hadoop`, where the `ResourceManager` and Yarn scheduler load their configurations. Directory `$HADOOP_ROOT/share/hadoop/tools/sls/sample-conf/` provides several example configurations, that can be used to start a demo.
+Before we start, make sure Hadoop and the simulator are configured well. All configuration files for Hadoop and the simulator should be placed in directory `$HADOOP_ROOT/etc/hadoop`, where the `ResourceManager` and YARN scheduler load their configurations. Directory `$HADOOP_ROOT/share/hadoop/tools/sls/sample-conf/` provides several example configurations that can be used to start a demo.

-For configuration of Hadoop and Yarn scheduler, users can refer to Yarn’s website ().
+For configuration of Hadoop and YARN scheduler, users can refer to YARN’s website ().

For the simulator, it loads configuration information from file `$HADOOP_ROOT/etc/hadoop/sls-runner.xml`.

@@ -244,7 +244,7 @@

Metrics
-------

-The Yarn Scheduler Load Simulator has integrated [Metrics](http://metrics.codahale.com/) to measure the behaviors of critical components and operations, including running applications and containers, cluster available resources, scheduler operation timecost, et al. If the switch `yarn.sls.runner.metrics.switch` is set `ON`, `Metrics` will run and output it logs in `--output-dir` directory specified by users. Users can track these information during simulator running, and can also analyze these logs after running to evaluate the scheduler performance.
+The YARN Scheduler Load Simulator has integrated [Metrics](http://metrics.codahale.com/) to measure the behaviors of critical components and operations, including running applications and containers, cluster available resources, scheduler operation time cost, etc. If the switch `yarn.sls.runner.metrics.switch` is set to `ON`, `Metrics` will run and write its logs to the `--output-dir` directory specified by users. Users can track this information while the simulator is running, and can also analyze these logs afterwards to evaluate the scheduler performance.

### Real-time Tracking

@@ -320,7 +320,7 @@

### Resources

-[YARN-1021](https://issues.apache.org/jira/browse/YARN-1021) is the main JIRA that introduces Yarn Scheduler Load Simulator to Hadoop Yarn project.
+[YARN-1021](https://issues.apache.org/jira/browse/YARN-1021) is the main JIRA that introduces the YARN Scheduler Load Simulator to the Hadoop YARN project.

[YARN-6363](https://issues.apache.org/jira/browse/YARN-6363) is the main JIRA that introduces the Synthetic Load Generator to SLS.
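As an illustration of the Metrics section above, the metrics switch can be turned on in the simulator configuration. This is a minimal sketch, assuming the property is placed in `$HADOOP_ROOT/etc/hadoop/sls-runner.xml` alongside the other simulator settings.

```xml
<!-- sls-runner.xml (sketch): enable metrics collection for a simulator run. -->
<configuration>
  <property>
    <name>yarn.sls.runner.metrics.switch</name>
    <value>ON</value>
  </property>
</configuration>
```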
### SLS JSON input file format Index: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -134,7 +134,7 @@ - This configures the HTTP endpoint for Yarn Daemons.The following + This configures the HTTP endpoint for YARN Daemons.The following values are supported: - HTTP_ONLY : Service is provided only on http - HTTPS_ONLY : Service is provided only on https @@ -1063,14 +1063,14 @@ DeletionService will delete the application's localized file directory and log directory. - To diagnose Yarn application problems, set this property's value large + To diagnose YARN application problems, set this property's value large enough (for example, to 600 = 10 minutes) to permit examination of these directories. After changing the property's value, you must restart the nodemanager in order for it to have an effect. - The roots of Yarn applications' work directories is configurable with + The roots of YARN applications' work directories is configurable with the yarn.nodemanager.local-dirs property (see below), and the roots - of the Yarn applications' log directories is configurable with the + of the YARN applications' log directories is configurable with the yarn.nodemanager.log-dirs property (see also below). yarn.nodemanager.delete.debug-delay-sec @@ -1510,28 +1510,45 @@ The cgroups hierarchy under which to place YARN proccesses (cannot contain commas). If yarn.nodemanager.linux-container-executor.cgroups.mount is false - (that is, if cgroups have been pre-configured) and the Yarn user has write + (that is, if cgroups have been pre-configured) and the YARN user has write access to the parent directory, then the directory will be created. - If the directory already exists, the administrator has to give Yarn + If the directory already exists, the administrator has to give YARN write permissions to it recursively. - Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler. + This property only applies when the LCE resources handler is set to + CgroupsLCEResourcesHandler. yarn.nodemanager.linux-container-executor.cgroups.hierarchy /hadoop-yarn Whether the LCE should attempt to mount cgroups if not found. - Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler. + This property only applies when the LCE resources handler is set to + CgroupsLCEResourcesHandler. + yarn.nodemanager.linux-container-executor.cgroups.mount false - Where the LCE should attempt to mount cgroups if not found. Common locations - include /sys/fs/cgroup and /cgroup; the default location can vary depending on the Linux - distribution in use. This path must exist before the NodeManager is launched. - Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler, and - yarn.nodemanager.linux-container-executor.cgroups.mount is true. + This property sets the path from which YARN will read the + CGroups configuration. YARN has built-in functionality to discover the + system CGroup mount paths, so use this property only if YARN's automatic + mount path discovery does not work. 
+ + The path specified by this property must exist before the NodeManager is + launched. + If yarn.nodemanager.linux-container-executor.cgroups.mount is set to true, + YARN will first try to mount the CGroups at the specified path before + reading them. + If yarn.nodemanager.linux-container-executor.cgroups.mount is set to + false, YARN will read the CGroups at the specified path. + If this property is empty, YARN tries to detect the CGroups location. + + Please refer to NodeManagerCgroups.html in the documentation for further + details. + This property only applies when the LCE resources handler is set to + CgroupsLCEResourcesHandler. + yarn.nodemanager.linux-container-executor.cgroups.mount-path Index: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandler.java IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandler.java (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandler.java (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -23,6 +23,9 @@ import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; +import java.util.HashSet; +import java.util.Set; + /** * Provides CGroups functionality. Implementations are expected to be * thread-safe @@ -54,6 +57,18 @@ String getName() { return name; } + + /** + * Get the list of valid cgroup names. + * @return The set of cgroup name strings + */ + public static Set getValidCGroups() { + HashSet validCgroups = new HashSet<>(); + for (CGroupController controller : CGroupController.values()) { + validCgroups.add(controller.getName()); + } + return validCgroups; + } } String CGROUP_FILE_TASKS = "tasks"; Index: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -83,7 +83,7 @@ * @param mtab mount file location * @throws ResourceHandlerException if initialization failed */ - public CGroupsHandlerImpl(Configuration conf, PrivilegedOperationExecutor + CGroupsHandlerImpl(Configuration conf, PrivilegedOperationExecutor privilegedOperationExecutor, String mtab) throws ResourceHandlerException { this.cGroupPrefix = conf.get(YarnConfiguration. 
@@ -115,7 +115,7 @@ * PrivilegedContainerOperations * @throws ResourceHandlerException if initialization failed */ - public CGroupsHandlerImpl(Configuration conf, PrivilegedOperationExecutor + CGroupsHandlerImpl(Configuration conf, PrivilegedOperationExecutor privilegedOperationExecutor) throws ResourceHandlerException { this(conf, privilegedOperationExecutor, MTAB_FILE); } @@ -142,11 +142,18 @@ // the same hierarchy will be mounted at each mount point with the same // subsystem set. - Map> newMtab; + Map> newMtab = null; Map cPaths; try { - // parse mtab - newMtab = parseMtab(mtabFile); + if (this.cGroupMountPath != null && !this.enableCGroupMount) { + newMtab = ResourceHandlerModule. + parseConfiguredCGroupPath(this.cGroupMountPath); + } + + if (newMtab == null) { + // parse mtab + newMtab = parseMtab(mtabFile); + } // find cgroup controller paths cPaths = initializeControllerPathsFromMtab(newMtab); @@ -203,10 +210,8 @@ throws IOException { Map> ret = new HashMap<>(); BufferedReader in = null; - HashSet validCgroups = new HashSet<>(); - for (CGroupController controller : CGroupController.values()) { - validCgroups.add(controller.getName()); - } + Set validCgroups = + CGroupsHandler.CGroupController.getValidCGroups(); try { FileInputStream fis = new FileInputStream(new File(mtab)); @@ -487,7 +492,8 @@ try (BufferedReader inl = new BufferedReader(new InputStreamReader(new FileInputStream(cgf + "/tasks"), "UTF-8"))) { - if ((str = inl.readLine()) != null) { + str = inl.readLine(); + if (str != null) { LOG.debug("First line in cgroup tasks file: " + cgf + " " + str); } } catch (IOException e) { Index: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/ResourceHandlerModule.java IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/ResourceHandlerModule.java (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/ResourceHandlerModule.java (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -31,6 +31,13 @@ import org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler; import org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler; +import java.io.File; +import java.io.IOException; +import java.util.Set; +import java.util.HashSet; +import java.util.Map; +import java.util.HashMap; +import java.util.Arrays; import java.util.ArrayList; import java.util.List; @@ -113,8 +120,8 @@ } private static TrafficControlBandwidthHandlerImpl - getTrafficControlBandwidthHandler(Configuration conf) - throws ResourceHandlerException { + getTrafficControlBandwidthHandler(Configuration conf) + throws ResourceHandlerException { if (conf.getBoolean(YarnConfiguration.NM_NETWORK_RESOURCE_ENABLED, YarnConfiguration.DEFAULT_NM_NETWORK_RESOURCE_ENABLED)) { if (trafficControlBandwidthHandler == null) { @@ -137,8 +144,8 @@ } public static OutboundBandwidthResourceHandler - getOutboundBandwidthResourceHandler(Configuration conf) - throws ResourceHandlerException { + getOutboundBandwidthResourceHandler(Configuration conf) + 
throws ResourceHandlerException { return getTrafficControlBandwidthHandler(conf); } @@ -176,7 +183,7 @@ } private static CGroupsMemoryResourceHandlerImpl - getCgroupsMemoryResourceHandler( + getCgroupsMemoryResourceHandler( Configuration conf) throws ResourceHandlerException { if (cGroupsMemoryResourceHandler == null) { synchronized (MemoryResourceHandler.class) { @@ -229,4 +236,45 @@ static void nullifyResourceHandlerChain() throws ResourceHandlerException { resourceHandlerChain = null; } + + /** + * If a cgroup mount directory is specified, it returns cgroup directories + * with valid names. + * The requirement is that each hierarchy has to be named with the comma + * separated names of subsystems supported. + * For example: /sys/fs/cgroup/cpu,cpuacct + * @param cgroupMountPath Root cgroup mount path (/sys/fs/cgroup in the + * example above) + * @return A path to cgroup subsystem set mapping in the same format as + * {@link CGroupsHandlerImpl#parseMtab(String)} + * @throws IOException if the specified directory cannot be listed + */ + public static Map> parseConfiguredCGroupPath( + String cgroupMountPath) throws IOException { + File cgroupDir = new File(cgroupMountPath); + File[] list = cgroupDir.listFiles(); + if (list == null) { + throw new IOException("Empty cgroup mount directory specified: " + + cgroupMountPath); + } + + Map> pathSubsystemMappings = new HashMap<>(); + Set validCGroups = + CGroupsHandler.CGroupController.getValidCGroups(); + for (File candidate: list) { + Set cgroupList = + new HashSet<>(Arrays.asList(candidate.getName().split(","))); + // Collect the valid subsystem names + cgroupList.retainAll(validCGroups); + if (!cgroupList.isEmpty()) { + if (candidate.isDirectory() && candidate.canWrite()) { + pathSubsystemMappings.put(candidate.getAbsolutePath(), cgroupList); + } else { + LOG.warn("The following cgroup is not a directory or it is not" + + " writable" + candidate.getAbsolutePath()); + } + } + } + return pathSubsystemMappings; + } } Index: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -27,6 +27,7 @@ import java.io.OutputStreamWriter; import java.io.PrintWriter; import java.io.Writer; +import java.util.Arrays; import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; @@ -39,7 +40,6 @@ import com.google.common.annotations.VisibleForTesting; -import com.google.common.collect.Sets; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; @@ -51,6 +51,8 @@ import org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor; import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperation; import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsCpuResourceHandlerImpl; +import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandler; +import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerModule; import org.apache.hadoop.yarn.util.Clock; import org.apache.hadoop.yarn.util.ResourceCalculatorPlugin; import org.apache.hadoop.yarn.util.SystemClock; @@ -87,11 +89,11 @@ private long deleteCgroupTimeout; private long deleteCgroupDelay; - // package private for testing purposes + @VisibleForTesting Clock clock; private float yarnProcessors; - int nodeVCores; + private int nodeVCores; public CgroupsLCEResourcesHandler() { this.controllerPaths = new HashMap(); @@ -132,8 +134,10 @@ this.strictResourceUsageMode = conf .getBoolean( - YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_STRICT_RESOURCE_USAGE, - YarnConfiguration.DEFAULT_NM_LINUX_CONTAINER_CGROUPS_STRICT_RESOURCE_USAGE); + YarnConfiguration + .NM_LINUX_CONTAINER_CGROUPS_STRICT_RESOURCE_USAGE, + YarnConfiguration + .DEFAULT_NM_LINUX_CONTAINER_CGROUPS_STRICT_RESOURCE_USAGE); int len = cgroupPrefix.length(); if (cgroupPrefix.charAt(len - 1) == '/') { @@ -169,8 +173,10 @@ if (systemProcessors != (int) yarnProcessors) { LOG.info("YARN containers restricted to " + yarnProcessors + " cores"); int[] limits = getOverallLimits(yarnProcessors); - updateCgroup(CONTROLLER_CPU, "", CPU_PERIOD_US, String.valueOf(limits[0])); - updateCgroup(CONTROLLER_CPU, "", CPU_QUOTA_US, String.valueOf(limits[1])); + updateCgroup(CONTROLLER_CPU, "", CPU_PERIOD_US, + String.valueOf(limits[0])); + updateCgroup(CONTROLLER_CPU, "", CPU_QUOTA_US, + String.valueOf(limits[1])); } else if (CGroupsCpuResourceHandlerImpl.cpuLimitsExist( pathForCgroup(CONTROLLER_CPU, ""))) { LOG.info("Removing CPU constraints for YARN containers."); @@ -178,8 +184,8 @@ } } - int[] getOverallLimits(float yarnProcessors) { - return CGroupsCpuResourceHandlerImpl.getOverallLimits(yarnProcessors); + int[] getOverallLimits(float yarnProcessorsArg) { + return CGroupsCpuResourceHandlerImpl.getOverallLimits(yarnProcessorsArg); } @@ -204,7 +210,7 @@ LOG.debug("createCgroup: " + path); } - if (! 
new File(path).mkdir()) { + if (!new File(path).mkdir()) { throw new IOException("Failed to create cgroup at " + path); } } @@ -251,7 +257,8 @@ try (BufferedReader inl = new BufferedReader(new InputStreamReader(new FileInputStream(cgf + "/tasks"), "UTF-8"))) { - if ((str = inl.readLine()) != null) { + str = inl.readLine(); + if (str != null) { LOG.debug("First line in cgroup tasks file: " + cgf + " " + str); } } catch (IOException e) { @@ -337,9 +344,9 @@ (containerVCores * yarnProcessors) / (float) nodeVCores; int[] limits = getOverallLimits(containerCPU); updateCgroup(CONTROLLER_CPU, containerName, CPU_PERIOD_US, - String.valueOf(limits[0])); + String.valueOf(limits[0])); updateCgroup(CONTROLLER_CPU, containerName, CPU_QUOTA_US, - String.valueOf(limits[1])); + String.valueOf(limits[1])); } } } @@ -400,6 +407,8 @@ private Map> parseMtab() throws IOException { Map> ret = new HashMap>(); BufferedReader in = null; + Set validCgroups = + CGroupsHandler.CGroupController.getValidCGroups(); try { FileInputStream fis = new FileInputStream(new File(getMtabFileName())); @@ -415,8 +424,11 @@ String options = m.group(3); if (type.equals(CGROUPS_FSTYPE)) { - HashSet value = Sets.newHashSet(options.split(",")); - ret.put(path, value); + Set cgroupList = + new HashSet<>(Arrays.asList(options.split(","))); + // Collect the valid subsystem names + cgroupList.retainAll(validCgroups); + ret.put(path, cgroupList); } } } @@ -448,7 +460,16 @@ private void initializeControllerPaths() throws IOException { String controllerPath; - Map> parsedMtab = parseMtab(); + Map> parsedMtab = null; + + if (this.cgroupMountPath != null && !this.cgroupMount) { + parsedMtab = ResourceHandlerModule. + parseConfiguredCGroupPath(this.cgroupMountPath); + } + + if (parsedMtab == null) { + parsedMtab = parseMtab(); + } // CPU Index: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsHandlerImpl.java IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsHandlerImpl.java (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsHandlerImpl.java (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -573,4 +573,29 @@ new File(new File(newMountPoint, "cpu"), this.hierarchy); assertTrue("Yarn cgroup should exist", hierarchyFile.exists()); } + + + @Test + public void testManualCgroupSetting() throws ResourceHandlerException { + YarnConfiguration conf = new YarnConfiguration(); + conf.set(YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_MOUNT_PATH, tmpPath); + conf.set(YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_HIERARCHY, + "/hadoop-yarn"); + File cpu = new File(new File(tmpPath, "cpuacct,cpu"), "/hadoop-yarn"); + + try { + Assert.assertTrue("temp dir should be created", cpu.mkdirs()); + + CGroupsHandlerImpl cGroupsHandler = new CGroupsHandlerImpl(conf, null); + cGroupsHandler.initializeCGroupController( + CGroupsHandler.CGroupController.CPU); + + Assert.assertEquals("CPU CGRoup path was not set", cpu.getAbsolutePath(), + new 
File(cGroupsHandler.getPathForCGroup( + CGroupsHandler.CGroupController.CPU, "")).getAbsolutePath()); + + } finally { + FileUtils.deleteQuietly(cpu); + } + } } \ No newline at end of file Index: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestCgroupsLCEResourcesHandler.java IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestCgroupsLCEResourcesHandler.java (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestCgroupsLCEResourcesHandler.java (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -41,6 +41,8 @@ import java.util.Set; import java.util.concurrent.CountDownLatch; +import static org.mockito.Mockito.when; + @Deprecated public class TestCgroupsLCEResourcesHandler { private static File cgroupDir = null; @@ -388,4 +390,33 @@ FileUtils.deleteQuietly(memory); } } + + @Test + public void testManualCgroupSetting() throws IOException { + CgroupsLCEResourcesHandler handler = new CgroupsLCEResourcesHandler(); + YarnConfiguration conf = new YarnConfiguration(); + conf.set(YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_MOUNT_PATH, + cgroupDir.getAbsolutePath()); + handler.setConf(conf); + File cpu = new File(new File(cgroupDir, "cpuacct,cpu"), "/hadoop-yarn"); + + try { + Assert.assertTrue("temp dir should be created", cpu.mkdirs()); + + final int numProcessors = 4; + ResourceCalculatorPlugin plugin = + Mockito.mock(ResourceCalculatorPlugin.class); + Mockito.doReturn(numProcessors).when(plugin).getNumProcessors(); + Mockito.doReturn(numProcessors).when(plugin).getNumCores(); + when(plugin.getNumProcessors()).thenReturn(8); + handler.init(null, plugin); + + Assert.assertEquals("CPU CGRoup path was not set", cpu.getParent(), + handler.getControllerPaths().get("cpu")); + + } finally { + FileUtils.deleteQuietly(cpu); + } + } + } Index: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/GracefulDecommission.md IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/GracefulDecommission.md (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/GracefulDecommission.md (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -13,7 +13,7 @@ --> -Graceful Decommission of Yarn Nodes +Graceful Decommission of YARN Nodes =============== * [Overview](#overview) @@ -29,19 +29,19 @@ Overview -------- -Yarn is scalable very easily: any new NodeManager could join to the configured ResourceManager and start to execute jobs. But to achieve full elasticity we need a decommissioning process which helps to remove existing nodes and down-scale the cluster. +YARN is scalable very easily: any new NodeManager could join to the configured ResourceManager and start to execute jobs. But to achieve full elasticity we need a decommissioning process which helps to remove existing nodes and down-scale the cluster. -Yarn Nodes could be decommissioned NORMAL or GRACEFUL. 
+YARN nodes can be decommissioned in NORMAL or GRACEFUL mode.

-Normal Decommission of Yarn Nodes means an immediate shutdown.
+Normal Decommission of YARN Nodes means an immediate shutdown.

-Graceful Decommission of Yarn Nodes is the mechanism to decommission NMs while minimize the impact to running applications. Once a node is in DECOMMISSIONING state, RM won't schedule new containers on it and will wait for running containers and applications to complete (or until decommissioning timeout exceeded) before transition the node into DECOMMISSIONED.
+Graceful Decommission of YARN Nodes is the mechanism to decommission NMs while minimizing the impact on running applications. Once a node is in the DECOMMISSIONING state, the RM won't schedule new containers on it and will wait for running containers and applications to complete (or until the decommissioning timeout is exceeded) before transitioning the node to DECOMMISSIONED.

## Quick start

To do a normal decommissioning:

-1. Start a Yarn cluster (with NodeManageres and ResourceManager)
+1. Start a YARN cluster (with NodeManagers and a ResourceManager)
2. Start a yarn job (for example with `yarn jar...` )
3. Add `yarn.resourcemanager.nodes.exclude-path` property to your `yarn-site.xml` (Note: you don't need to restart the ResourceManager)
4. Create a text file (the location is defined in the previous step) with one line which contains the name of a selected NodeManager

Index: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
--- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa)
+++ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e)
@@ -17,7 +17,7 @@

-CGroups is a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour. CGroups is a Linux kernel feature and was merged into kernel version 2.6.24. From a YARN perspective, this allows containers to be limited in their resource usage. A good example of this is CPU usage. Without CGroups, it becomes hard to limit container CPU usage. Currently, CGroups is only used for limiting CPU usage.
+CGroups is a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour. CGroups is a Linux kernel feature and was merged into kernel version 2.6.24. From a YARN perspective, this allows containers to be limited in their resource usage. A good example of this is CPU usage. Without CGroups, it becomes hard to limit container CPU usage.

CGroups Configuration
---------------------

@@ -30,9 +30,9 @@

|:---- |:---- |
| `yarn.nodemanager.container-executor.class` | This should be set to "org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor". CGroups is a Linux kernel feature and is exposed via the LinuxContainerExecutor. |
| `yarn.nodemanager.linux-container-executor.resources-handler.class` | This should be set to "org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler". Using the LinuxContainerExecutor doesn't force you to use CGroups. If you wish to use CGroups, the resource-handler-class must be set to CGroupsLCEResourceHandler.
| -| `yarn.nodemanager.linux-container-executor.cgroups.hierarchy` | The cgroups hierarchy under which to place YARN proccesses(cannot contain commas). If yarn.nodemanager.linux-container-executor.cgroups.mount is false (that is, if cgroups have been pre-configured) and the Yarn user has write access to the parent directory, then the directory will be created. If the directory already exists, the administrator has to give Yarn write permissions to it recursively. | +| `yarn.nodemanager.linux-container-executor.cgroups.hierarchy` | The cgroups hierarchy under which to place YARN proccesses(cannot contain commas). If yarn.nodemanager.linux-container-executor.cgroups.mount is false (that is, if cgroups have been pre-configured) and the YARN user has write access to the parent directory, then the directory will be created. If the directory already exists, the administrator has to give YARN write permissions to it recursively. | | `yarn.nodemanager.linux-container-executor.cgroups.mount` | Whether the LCE should attempt to mount cgroups if not found - can be true or false. | -| `yarn.nodemanager.linux-container-executor.cgroups.mount-path` | Where the LCE should attempt to mount cgroups if not found. Common locations include /sys/fs/cgroup and /cgroup; the default location can vary depending on the Linux distribution in use. This path must exist before the NodeManager is launched. Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler, and yarn.nodemanager.linux-container-executor.cgroups.mount is true. A point to note here is that the container-executor binary will try to mount the path specified + "/" + the subsystem. In our case, since we are trying to limit CPU the binary tries to mount the path specified + "/cpu" and that's the path it expects to exist. | +| `yarn.nodemanager.linux-container-executor.cgroups.mount-path` | Optional. Where CGroups are located. LCE will try to mount them here, if `yarn.nodemanager.linux-container-executor.cgroups.mount` is true. LCE will try to use CGroups from this location, if `yarn.nodemanager.linux-container-executor.cgroups.mount` is false. If specified, this path and its subdirectories (CGroup hierarchies) must exist and they should be readable and writable by YARN before the NodeManager is launched. See CGroups mount options below for details. | | `yarn.nodemanager.linux-container-executor.group` | The Unix group of the NodeManager. It should match the setting in "container-executor.cfg". This configuration is required for validating the secure access of the container-executor binary. | The following settings are related to limiting resource usage of YARN containers: @@ -42,6 +42,17 @@ | `yarn.nodemanager.resource.percentage-physical-cpu-limit` | This setting lets you limit the cpu usage of all YARN containers. It sets a hard upper limit on the cumulative CPU usage of the containers. For example, if set to 60, the combined CPU usage of all YARN containers will not exceed 60%. | | `yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage` | CGroups allows cpu usage limits to be hard or soft. When this setting is true, containers cannot use more CPU usage than allocated even if spare CPU is available. This ensures that containers can only use CPU that they were allocated. When set to false, containers can use spare CPU if available. 
It should be noted that irrespective of whether set to true or false, at no time can the combined CPU usage of all containers exceed the value specified in "yarn.nodemanager.resource.percentage-physical-cpu-limit". | +CGroups mount options +--------------------- + +YARN uses CGroups through a directory structure mounted into the file system by the kernel. There are three options to attach to CGroups. + +| Option | Description | +|:---- |:---- | +| Discover CGroups mounted already | This should be used on newer systems like RHEL7 or Ubuntu16 or if the administrator mounts CGroups before YARN starts. Set `yarn.nodemanager.linux-container-executor.cgroups.mount` to false and leave other settings set to their defaults. YARN will locate the mount points in `/proc/mounts`. Common locations include `/sys/fs/cgroup` and `/cgroup`. The default location can vary depending on the Linux distribution in use.| +| CGroups mounted by YARN | If the system does not have CGroups mounted or it is mounted to an inaccessible location then point `yarn.nodemanager.linux-container-executor.cgroups.mount-path` to an empty directory. Set `yarn.nodemanager.linux-container-executor.cgroups.mount` to true. A point to note here is that the container-executor binary will try to create and mount each subsystem as a subdirectory under this path. If `cpu` is already mounted somewhere with `cpuacct`, then the directory `cpu,cpuacct` will be created for the hierarchy.| +| CGroups mounted already or linked but not in `/proc/mounts` | If cgroups is accessible through lxcfs or simulated by another filesystem, then point `yarn.nodemanager.linux-container-executor.cgroups.mount-path` to your CGroups root directory. Set `yarn.nodemanager.linux-container-executor.cgroups.mount` to false. YARN tries to use this path first, before any CGroup mount point discovery. The path should have a subdirectory for each CGroup hierarchy named by the comma separated CGroup subsystems supported like `/cpu,cpuacct`. Valid subsystem names are `cpu, cpuacct, cpuset, memory, net_cls, blkio, freezer, devices`.| + CGroups and security -------------------- Index: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -56,7 +56,7 @@ * Under very rare circumstances, programmer may want to directly use the 3 protocols to implement an application. However, note that *such behaviors are no longer encouraged for general use cases*. -Writing a Simple Yarn Application +Writing a Simple YARN Application --------------------------------- ### Writing a simple Client @@ -574,4 +574,4 @@ Sample Code ----------- -Yarn distributed shell: in `hadoop-yarn-applications-distributedshell` project after you set up your development environment. +YARN distributed shell: in `hadoop-yarn-applications-distributedshell` project after you set up your development environment. 
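For the "Writing a simple Client" section above, a minimal, self-contained sketch of the client-side API is shown below. This is not the distributed-shell sample itself; it only asks the ResourceManager for a new application ID, and a real client would go on to fill in an `ApplicationSubmissionContext` and call `submitApplication()`.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class MinimalYarnClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();

    // Create and start the client that talks to the ResourceManager.
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(conf);
    yarnClient.start();

    // Ask the RM for a new application; a complete client would now build an
    // ApplicationSubmissionContext and submit it via submitApplication().
    YarnClientApplication app = yarnClient.createApplication();
    System.out.println("New application id: "
        + app.getNewApplicationResponse().getApplicationId());

    yarnClient.stop();
  }
}
```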
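Returning to the CGroups mount options table added to NodeManagerCgroups.md above, a minimal `yarn-site.xml` sketch for the third option (CGroups already mounted or linked at a path that is not listed in `/proc/mounts`) could look like this. The property names and classes come from the configuration table above; the mount point `/sys/fs/cgroup` is only an example.

```xml
<!-- Sketch only: use pre-mounted CGroups from an assumed location. -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
  <value>/sys/fs/cgroup</value>
</property>
```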
Index: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/yarn-registry.md IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/yarn-registry.md (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/yarn-registry.md (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -84,7 +84,7 @@ ## The binding problem Hadoop YARN allows applications to run on the Hadoop cluster. Some of these are -batch jobs or queries that can managed via Yarn’s existing API using its +batch jobs or queries that can managed via YARN’s existing API using its application ID. In addition YARN can deploy ong-lived services instances such a pool of Apache Tomcat web servers or an Apache HBase cluster. YARN will deploy them across the cluster depending on the individual each component requirements @@ -121,7 +121,7 @@ /services/yarn /services/oozie -Yarn-deployed services belonging to individual users. +YARN-deployed services belonging to individual users. /users/joe/org-apache-hbase/demo1 /users/joe/org-apache-hbase/demo1/components/regionserver1 @@ -148,7 +148,7 @@ ## Unsupported Registration use cases: -1. A short-lived Yarn application is registered automatically in the registry, +1. A short-lived YARN application is registered automatically in the registry, including all its containers. and unregistered when the job terminates. Short-lived applications with many containers will place excessive load on a registry. All YARN applications will be given the option of registering, but it @@ -259,7 +259,7 @@ namespace to be the root of the service registry ( default: `yarnRegistry`). On top this base implementation we build our registry service API and the -naming conventions that Yarn will use for its services. The registry will be +naming conventions that YARN will use for its services. The registry will be accessed by the registry API, not directly via ZK - ZK is just an implementation choice (although unlikely to change in the future). @@ -297,7 +297,7 @@ 6. Core services will be registered using the following convention: `/services/{servicename}` e.g. `/services/hdfs`. -7. Yarn services SHOULD be registered using the following convention: +7. YARN services SHOULD be registered using the following convention: /users/{username}/{serviceclass}/{instancename} @@ -823,8 +823,8 @@ ## Security The registry will allow a service instance can only be registered under the -path where it has permissions. Yarn will create directories with appropriate -permissions for users where Yarn deployed services can be registered by a user. +path where it has permissions. YARN will create directories with appropriate +permissions for users where YARN deployed services can be registered by a user. of the user account of the service instance. The admin will also create directories (such as `/services`) with appropriate permissions (where core Hadoop services can register themselves. 
Index: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md IDEA additional info: Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP <+>UTF-8 =================================================================== --- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md (revision 11ece0bda1f6e5dd9d0f828b7c29acacf6087baa) +++ hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md (revision dff41b51a752e7fdb7dfaa8dd949c591f10a099e) @@ -15,9 +15,9 @@ limitations under the License. --> -# Yarn UI +# YARN UI -The Yarn UI is an Ember based web-app that provides visualization of the applications running on the Apache Hadoop YARN framework. +The YARN UI is an Ember based web-app that provides visualization of the applications running on the Apache Hadoop YARN framework. ## Configurations @@ -49,14 +49,14 @@ **Warning: Do not edit the _package.json_ or _bower.json_ files manually. This could make them out-of-sync with the respective lock or shrinkwrap files.** -Yarn UI has replaced NPM with Yarn package manager. And hence Yarn would be used to manage dependencies defined in package.json. +YARN UI has replaced NPM with YARN package manager. And hence YARN would be used to manage dependencies defined in package.json. -* Please use the Yarn and Bower command-line tools to add new dependencies. And the tool version must be same as those defined in Prerequisites section. +* Please use the YARN and Bower command-line tools to add new dependencies. And the tool version must be same as those defined in Prerequisites section. * Once any dependency is added: * If it's in package.json. Make sure that the respective, and only those changes are reflected in yarn.lock file. * If it's in bower.json. Make sure that the respective, and only those changes are reflected in bower-shrinkwrap.json file. * Commands to add using CLI tools: - * Yarn: yarn add [package-name] + * YARN: yarn add [package-name] * Bower: bower install --save [package-name] ### Adding new routes (pages), controllers, components etc.