Details

    • Type: Sub-task
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: yarn
    • Labels:
      None

      Description

       1. Introduction

      Mounting files or directories from the host is one way of passing configuration and other information into a docker container.
      We could allow the user to set a list of mounts in the environment of ContainerLaunchContext (e.g. /dir1:/targetdir1,/dir2:/targetdir2).
       These would be mounted read-only to the specified target locations. This has been resolved in YARN-4595.

       2. Problem Definition

       But mounting arbitrary volumes into a Docker container can be a security risk.

       3. Possible Solutions

       One approach to providing safe mounts is to allow the cluster administrator to configure a set of parent directories as a whitelist of mountable directories.
       Add a property named yarn.nodemanager.volume-mounts.white-list; when the container executor performs its mount checks, only the whitelisted directories and their sub-directories can be mounted.
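
       For illustration only, a minimal sketch of the kind of check this implies, assuming the whitelist arrives as a comma-separated list of parent directories; the class and method names are made up here and this is not code from any of the attached patches:

         import java.nio.file.Path;
         import java.nio.file.Paths;
         import java.util.ArrayList;
         import java.util.List;

         // Illustrative sketch only -- not the implementation in any attached patch.
         public class MountWhitelistCheck {

           // Parse a comma-separated whitelist of parent directories, e.g. the value of the
           // proposed yarn.nodemanager.volume-mounts.white-list property.
           static List<Path> parseWhitelist(String commaSeparated) {
             List<Path> whitelist = new ArrayList<>();
             for (String entry : commaSeparated.split(",")) {
               String trimmed = entry.trim();
               if (!trimmed.isEmpty()) {
                 whitelist.add(Paths.get(trimmed).normalize().toAbsolutePath());
               }
             }
             return whitelist;
           }

           // A requested source is allowed only if it equals, or is nested under,
           // one of the whitelisted directories (after path normalization).
           static boolean isAllowed(String requestedSrc, List<Path> whitelist) {
             Path src = Paths.get(requestedSrc).normalize().toAbsolutePath();
             for (Path parent : whitelist) {
               if (src.startsWith(parent)) {
                 return true;
               }
             }
             return false;
           }

           public static void main(String[] args) {
             List<Path> whitelist = parseWhitelist("/data,/usr/share/hadoop");
             System.out.println(isAllowed("/data/sets/2017", whitelist));      // true: sub-directory of /data
             System.out.println(isAllowed("/data/../etc/passwd", whitelist));  // false: normalization escapes /data
           }
         }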

      1. YARN-5534.003.patch
        41 kB
        Shane Kumpf
      2. YARN-5534.002.patch
        4 kB
        luhuichun
      3. YARN-5534.001.patch
        4 kB
        luhuichun

        Issue Links

          Activity

          ebadger Eric Badger added a comment -

          Eric Yang, ah yes good point. I'll try and take a look at those unit tests. But yea, we can keep this open until that's resolved

          eyang Eric Yang added a comment -

           Eric Badger YARN-6623 is committed, but there seem to be some issues in the implementation: it worked on Ubuntu but not on CentOS. We need to monitor the development of YARN-7344 to determine if we are on the right path. Can we wait a few days before deciding on closing this JIRA? Thanks

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          -1 patch 0m 6s YARN-5534 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.



          Subsystem Report/Notes
          JIRA Issue YARN-5534
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12879707/YARN-5534.003.patch
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/17985/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          ebadger Eric Badger added a comment -

           I think that we can close this as it's been completely superseded by YARN-6623. Shane Kumpf, do you agree?

          eyang Eric Yang added a comment -

          Miklos Szegedi White list should be visible to all users who have access to the system.

          miklos.szegedi@cloudera.com Miklos Szegedi added a comment -

          Thank you, Eric Yang for sharing your thoughts. Sorry, I am confused. Are you suggesting to make the whitelist visible to more users or less visible?

          eyang Eric Yang added a comment -

           Miklos Szegedi I think core-site.xml makes the most sense to ensure both HDFS and YARN can agree on the same security setting, even though the HDFS service doesn't require knowledge of this today. The ideas of a global white list and job-specific white lists each have their own attractiveness.

           However, I think having a global white list in container-executor.cfg might be risky. If the information is leaked and the admin did not secure the white-listed mount points properly, then the system could be vulnerable to attack. For a white list, letting more eyeballs examine the configuration would make the system more secure. On the other hand, if a black list is to be implemented, then it might be advantageous to keep it in container-executor.cfg, readable by root only; basic security through obscurity can be applied with some level of effectiveness.

          miklos.szegedi@cloudera.com Miklos Szegedi added a comment -

          Thank you, Eric Yang for the comment. Can you please clarify where the user reads the whitelist from? yarn-site.xml? Would it be useful to have the whitelist in both Java and container-executor like Vinod Kumar Vavilapalli suggested above? The right location depends also on whether privileged containers are enabled or not, which is the preference of the administrator.

          eyang Eric Yang added a comment -

           Miklos Szegedi It's a cute perspective, but there might be usability issues. Today, it is possible to keep container-executor.cfg readable only by root and the yarn user. Authorized and banned users are then known only to the root and yarn users. This is similar to the sudoers file, which governs who has sudo rights.

           On the other hand, file system mount points need to be known by all users who would like to use them. It would be more convenient to give everyone read access to the file system mount point file, as with /etc/fstab.

           If the volume white list is mixed with user privilege controls, then we either lose some flexibility to keep banned users a secret or lose the ability to know what mount points can be used. For this reason, I prefer to keep the volume white list separate from container-executor.cfg, for separation of duties from a security point of view.
           However, a volume black list maintained in container-executor.cfg can make attacks more difficult.

           miklos.szegedi@cloudera.com Miklos Szegedi added a comment - edited

           Eric Yang I would approach this from the user's point of view. container-executor and container-executor.cfg should govern the rules for how the yarn user can get root access, or access it does not otherwise have. If the yarn user cannot access a directory, then mounting it should have to be whitelisted in container-executor.cfg.

           eyang Eric Yang added a comment - edited

           yarn-site.xml and core-site.xml are trusted configuration from the Hadoop server's point of view. Hadoop Kerberos and proxy user configuration are defined in those *.xml files. Without trusting those configurations, Hadoop security falls apart. There is the final keyword to lock configuration in place, so an overlay of Hadoop configuration cannot alter the configuration values. A volume white list in core-site.xml or yarn-site.xml is therefore secured. There should be very little configuration in the container-executor.cfg file, just enough to govern the uid and banned users. Keeping the rest of the logic in core-site.xml is preferred to ensure the logic is preprocessed as the yarn user before handing off to root for execution. YARN can act as a shielding user in pre-processing to make exploits more difficult.

          ebadger Eric Badger added a comment -

           For example (just made up), an admin may want to mount /data-volume into every container by some subset of users. container-executor.cfg should have a setting permitting the mounting of /data-volume, but yarn-site.xml should have a feature to mount it into every container for those users. Does that make sense?

          I still don't see why the overall mounting setting would be in container-executor.cfg while the user-specific setting would be in yarn-site.xml. If we're looking at this from a security perspective, the volume mount is either a potential attack vector or not. If it's not, then we don't really care whether anyone can mount it and then I would say we should just put everything in yarn-site.xml. If we assume that it is a potential attack vector, then we very much care that only certain users can mount that volume. In that case, I don't see why we would put that whitelist of users in yarn-site.xml, if we're also assuming that yarn-site.xml is potentially untrusted (I assume the reason we're putting things into container-executor.cfg is because it is only root read/writable).

          So basically my main points are:
          1. If yarn-site.xml is untrusted, then we can't put any configs with potential security-related consequences in there (e.g. which volumes are whitelisted)
          2. If yarn-site.xml is trusted, then I don't know why we need to move any of the configs into container-executor.cfg

          vvasudev Varun Vasudev added a comment -

           It's going to end up being a combination. Some settings have to be done in container-executor.cfg (like whitelisted volume mounts), and some will go into yarn-site.xml.

           For example (just made up), an admin may want to mount /data-volume into every container by some subset of users. container-executor.cfg should have a setting permitting the mounting of /data-volume, but yarn-site.xml should have a feature to mount it into every container for those users. Does that make sense?

          ebadger Eric Badger added a comment -

          I emailed Miklos Szegedi about this offline, but I'd like to get some additional perspectives here possibly from Varun Vasudev, Shane Kumpf, Vinod Kumar Vavilapalli, Daniel Templeton, Jason Lowe. My thoughts on the matter are in the email I sent to Miklos Szegedi below. The overall question is whether we should be putting the docker configs in yarn-site.xml, container-executor.cfg, some in each, or some/all in both. I would like to come to a consensus so that we can move forward on this JIRA and others.

          I'm a little confused about a few things here. First, putting docker properties in multiple places seems like a bad idea for the less than expert admin. They will see some configs in one place (yarn-site.xml or container-executor.cfg) and assume that those are all of the configs when really there's others in a different place. Maybe this is more of an inconvenience, but it doesn't make sense to me to have them in 2 different places.

          Second, I don't see why some properties should be protected under the veil of root via the container-executor.cfg but not others. In the current docker implementation, you get to specify the image that you want to use. I could easily put a setuid binary in there and get root in the container. There are constantly new exploits on how to get out of the container if you get root (assuming you're not using user namespace remapping, which we aren't). This could possibly be mitigated by dropping the SETUID capability for the container, but that's also a property in yarn-site.xml and not container-executor.cfg. So I don't see why the volume whitelist belongs in container-executor.cfg, but these other properties belong in yarn-site.xml. Seems like they should all belong in container-executor.cfg or none of them should.

          I'm not sure if this is the best place for discussion to occur since this topic is bigger than simply whitelisting volume mounts. If there's a better place, then we can move the discussion there.

          miklos.szegedi@cloudera.com Miklos Szegedi added a comment -

          Eric Badger, only the ones that need root access.

          ebadger Eric Badger added a comment -

          So is the assumption here that yarn-site.xml is untrusted and can be tampered with? If that's the case, then we need to add all of the docker properties to container-executor.cfg. Otherwise, the assumption would be that the attacker can set the docker capabilities, whether they can run as a privileged container, and the network that they use. Currently those are all set via yarn-site.xml.

           miklos.szegedi@cloudera.com Miklos Szegedi added a comment - edited

           Thank you, Shane Kumpf and Vinod Kumar Vavilapalli for the details. As Shane said, Java already knows the configuration; the alternative is launching the container and seeing it fail in C. If the system is sending so many invalid privileged requests that it affects system performance because of this, there is already something wrong with that system.
           However, one more thing. While having a general config to enable/disable privileged containers is nice, I think eventually admins will need to specify the users that should be allowed to elevate to privileged. This can probably be applied to the whitelist as well. Sorry for raising too many design questions late in the development.

          shanekumpf@gmail.com Shane Kumpf added a comment -

           From a usability standpoint, I have to agree with Miklos Szegedi; expecting admins to define the white list in two places is not ideal. If the two configs get out of sync, it will lead to surprising behavior. While I'm not a fan of the current direction of moving more and more functionality into container-executor, it seems there is no way around doing so with the current design. I will need to move all of the whitelist validation into container-executor to keep it in a single place. One pitfall of this approach is that we can no longer fail fast and must spawn the container-executor process before the validation occurs. If this is the consensus on how we need to handle the whitelist, I will start to rework the patch to move the configuration to container-executor.cfg and do all of the whitelist validation in container-executor.

          vinodkv Vinod Kumar Vavilapalli added a comment -

           In general I think this is redundant. Each feature should have only one config location, otherwise it affects usability (for the admins).

           The reason why we need it in both areas is because (a) the java land only reads yarn-site.xml and the C land only reads container-executor.cfg, and both need to know if a feature is enabled or not, and (b) the files have different ownerships - yarn user vs root.

          This is an existing pattern given the NM -> Container-Executor separation. Unrolling it would mostly mean forcing the java land also read container-executor.cfg. The opposite will likely not happen - C land reading multiple config files will increase the surface area.

          getCGroupsCpuResourceHandler(), where any of the config settings implied that the user needs a resource handler without any other config knob.

          getCGroupsCpuResourceHandler() works because all the cgroups heavy-lifting is done in the java land and so this split code problem doesn't exist there. There is only one privileged operation in the c land - setup of cgroups, which one can argue shouldn't be enabled by default either.

          miklos.szegedi@cloudera.com Miklos Szegedi added a comment -

           The config should be there in both the places - yarn-site.xml as well as container-executor.cfg - so that the java code can read from yarn-site.xml (if needed) and the C code from container-executor.cfg, and the C code can double-check what's coming in from the java land against what is there in container-executor.cfg, which is an official blessing by root.

          Vinod Kumar Vavilapalli, in general I think this is redundant. Each feature should have only one config location otherwise it affect usability (for the admins). Example: I actually like the way you and Varun solved getCGroupsCpuResourceHandler(), where any of the config settings implied that the user needs a resource handler without any other config knob.

          vinodkv Vinod Kumar Vavilapalli added a comment -

          It does look like YARN-6033 is very close. In that case, you can simply base this patch on top of the one at YARN-6033.

          vinodkv Vinod Kumar Vavilapalli added a comment -

          Quick question, should not white-list-volume-mounts be a setting in container-executor.cfg instead of yarn-site.xml?

           The config should be there in both the places - yarn-site.xml as well as container-executor.cfg - so that the java code can read from yarn-site.xml (if needed) and the C code from container-executor.cfg, and the C code can double-check what's coming in from the java land against what is there in container-executor.cfg, which is an official blessing by root.

          Once YARN-6033 is committed, I plan to rewrite it to do invocations via a config file and we can add the checks into the container-executor.cfg.

          if we check in this jira with yarn-site.xml as the location for the whitelist, we have to keep it backward compatible throughout the lifecycle of 3.0. I would wait with this jira until your container-executor changes get in.

           YARN-6033 simplifies the configuration management, and there is existing configuration outside of this patch that YARN-6033 should give a compatibility story for. So YARN-6033 doesn't need to be a blocker for this JIRA, methinks. If YARN-6033 also makes it into 3.0, which it should, the new config added in this JIRA can simply be removed.

          miklos.szegedi@cloudera.com Miklos Szegedi added a comment -

          Shane Kumpf, container-executor.cfg is only writable by root.
          Varun Vasudev, my only concern, is that if we check in this jira with yarn-site.xml as the location for the whitelist, we have to keep it backward compatible throughout the lifecycle of 3.0. I would wait with this jira until your container-executor changes get in.

          vvasudev Varun Vasudev added a comment -

          Thank you for the patch Shane Kumpf. Quick question, should not white-list-volume-mounts be a setting in container-executor.cfg instead of yarn-site.xml?

           Ideally it should be, but the current NM->container-executor interface doesn't allow for it. Once YARN-6033 is committed, I plan to rewrite it to do invocations via a config file and we can add the checks into the container-executor.cfg.

          shanekumpf@gmail.com Shane Kumpf added a comment -

          should not white-list-volume-mounts be a setting in container-executor.cfg instead of yarn-site.xml?

          Can you help me understand what the benefit would be? For the current localized resource mounts, the checking is handled in the container runtime.

          miklos.szegedi@cloudera.com Miklos Szegedi added a comment -

          Thank you for the patch Shane Kumpf. Quick question, should not white-list-volume-mounts be a setting in container-executor.cfg instead of yarn-site.xml?

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 20s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
                trunk Compile Tests
          0 mvndep 0m 41s Maven dependency ordering for branch
          +1 mvninstall 14m 50s trunk passed
          +1 compile 9m 46s trunk passed
          +1 checkstyle 1m 0s trunk passed
          +1 mvnsite 1m 34s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          -1 findbugs 0m 55s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings.
          +1 javadoc 1m 4s trunk passed
                Patch Compile Tests
          0 mvndep 0m 10s Maven dependency ordering for patch
          +1 mvninstall 1m 0s the patch passed
          +1 compile 5m 53s the patch passed
          +1 javac 5m 53s the patch passed
          -0 checkstyle 1m 0s hadoop-yarn-project/hadoop-yarn: The patch generated 7 new + 225 unchanged - 2 fixed = 232 total (was 227)
          -1 mvnsite 0m 17s hadoop-yarn-site in the patch failed.
          +1 whitespace 0m 0s The patch has no whitespace issues.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site
          +1 findbugs 2m 8s the patch passed
          +1 javadoc 1m 2s the patch passed
                Other Tests
          -1 unit 0m 30s hadoop-yarn-api in the patch failed.
          +1 unit 13m 26s hadoop-yarn-server-nodemanager in the patch passed.
          +1 unit 0m 16s hadoop-yarn-site in the patch passed.
          +1 asflicense 0m 30s The patch does not generate ASF License warnings.
          66m 22s



          Reason Tests
          Failed junit tests hadoop.yarn.conf.TestYarnConfigurationFields



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue YARN-5534
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12879707/YARN-5534.003.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux ccbf448dbbe4 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 3e23415
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-YARN-Build/16628/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/16628/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
          mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/16628/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-site.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/16628/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16628/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: hadoop-yarn-project/hadoop-yarn
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/16628/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          ebadger Eric Badger added a comment -

          IMO, I think that feature might be better suited as a separate patch though, since it will essentially bypass the whitelist.

          I'm ok with it being a separate patch. It's fundamentally different since it doesn't depend on user input, while the whitelist volumes would. So I think that makes sense. And I'd be happy to work on that patch if that's what we decide to do.

          shanekumpf@gmail.com Shane Kumpf added a comment -

          Thanks Eric Badger and Daniel Templeton for the feedback.

          I was thinking of the current code where we are bind-mounting "/sys/fs/cgroup" for every container.

          Part of the point of the mount whitelist is so we can remove the hard coded /sys/fs/cgroup mount. That really doesn't apply to all containers, for instance CentOS 6, and actually introduces odd behavior on systems with many cores.

          For my use case, we would always want to bind mount "/var/run/nscd" so that users can do lookups inside of the container and utilize the host's configs and cache. With the current state of affairs over in YARN-4266, if we enter the container as a UID:GID pair, MRAppMaster will fail if we don't bind-mount "/var/run/nscd".

          I think we could solve the need above through documentation, but I can understand the convenience of having an auto bind mount list. IMO, I think that feature might be better suited as a separate patch though, since it will essentially bypass the whitelist.

          ebadger Eric Badger added a comment -

          Can you help me understand the use case here? While there are mounts that will be commonly needed by containers, I'm not sure of any bind mounts that every container will require.

          I was thinking of the current code where we are bind-mounting "/sys/fs/cgroup" for every container. For my use case, we would always want to bind mount "/var/run/nscd" so that users can do lookups inside of the container and utilize the host's configs and cache. With the current state of affairs over in YARN-4266, if we enter the container as a UID:GID pair, MRAppMaster will fail if we don't bind-mount "/var/run/nscd".

          Given that these mounts are read-only and wholly at the discretion of the admin, I don't see that it should be much of a risk.

          I think that I agree with this. The mounts have to be provided by the admin, so if they have malicious content in them, that's on them.

          templedf Daniel Templeton added a comment -

          I agree with the opt-in model guarded by the admin-defined whitelist. I also fail to see the use case for admin-enforced mounts. The nature of a container is that it's inscrutable by the system, so there's no telling what's in there or whether any given mount point makes any sense.

          Given that these mounts are read-only and wholly at the discretion of the admin, I don't see that it should be much of a risk. The main use case for the feature is to make the Hadoop directory mountable by the container, and I see no risk there. As long as we clearly document the risks in the feature docs, I don't see the need to add training wheels to try to keep admins from shooting themselves in the foot.

          shanekumpf@gmail.com Shane Kumpf added a comment -

          So you're proposing having a whitelist of volumes that can be bind-mounted that is defined by the NM and then have the user supply a list of volumes that need to be a subset of that whitelist?

           That is correct. The user will opt in to the bind mounts they require, and those bind mounts must be in the whitelist (or must be localized resources) for the operation to succeed.

          What about volumes that the NM always wants to mount regardless of the user?

           Can you help me understand the use case here? While there are mounts that will be commonly needed by containers, I'm not sure of any bind mounts that every container will require. I'd prefer an opt-in model so we don't needlessly expose host artifacts when they aren't required. However, it wouldn't be very difficult to add this feature, so let me know and I can work to add it.

          My question is whether they can leverage these mount points to gain root in the container if minimal capabilities (aka not SETUID/SETGID/etc.) are given.

          Great questions. I agree it is possible for them to shoot themselves in the foot, but I don't believe that adding support for bind mounts opens up additional risk with regard to overriding libraries and binaries. Avoiding privileged containers and limiting capabilities is use case dependent, but best practices should be followed to limit the attack surface.

           Having said that, it seems there could be a need for admins to be able to control the destination mount path within the container. However, the implementation becomes less straightforward for localized resources/distributed cache. Currently we support arbitrary destination paths within the container for localized resources. Consider the hbase container use case, where hbase-site.xml is localized and the hbase processes in the container expect hbase-site.xml to be in /etc/hbase/conf. The admin doesn't know the full path to the localized resources up front, so it wouldn't be possible for the admin to define these localized resources in the whitelist. We could possibly address this through special syntax (e.g. $$LOCALIZED_PATH$$/hbase-site.xml:/etc/hbase/conf/hbase-site.xml:ro) if this is a concern. Thoughts?
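
           Purely to illustrate the placeholder syntax floated in the comment above (no agreed design; the class name and example paths are made up), a minimal sketch of expanding such a token against the container's localized resource directory, known only at launch time:

             // Illustrative sketch of the proposed $$LOCALIZED_PATH$$ placeholder expansion.
             public class LocalizedPathPlaceholder {

               static final String PLACEHOLDER = "$$LOCALIZED_PATH$$";

               // Replace the placeholder with the container's actual localization directory.
               static String expand(String whitelistEntry, String localizedDir) {
                 return whitelistEntry.replace(PLACEHOLDER, localizedDir);
               }

               public static void main(String[] args) {
                 String entry = "$$LOCALIZED_PATH$$/hbase-site.xml:/etc/hbase/conf/hbase-site.xml:ro";
                 // The localization directory below is a made-up example path.
                 System.out.println(expand(entry, "/grid/nm-local-dir/usercache/alice/appcache/app_01/filecache/10"));
               }
             }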

          ebadger Eric Badger added a comment -

          The admin will define a comma separated list of <src>:<mode> (ro or rw) mounts, the requesting user will supply <src>:<dest>:<mode> - mode must be equal to or lesser than the admin defined mode (i.e. admin defines mount as rw, user can bind mount as rw OR ro).

          I'm not sure I understand this correctly. Let me know if I have this right. So you're proposing having a whitelist of volumes that can be bind-mounted that is defined by the NM and then have the user supply a list of volumes that need to be a subset of that whitelist? What about volumes that the NM always wants to mount regardless of the user?

           One question here, does anyone feel there is value in allowing the admin to restrict the destination mount point within the container?

          Well they could certainly shoot themselves in the foot pretty easily by mounting over an important directory within the image (e.g. /bin), but I'm not sure if that will ever lead to anything that could prove malicious. Maybe a possibility is that they overwrite /bin with their mount that has a bunch of crafted malicious binaries. Though I'm not sure how they would get the malicious binaries in the src volume on the node. And also, I'm not sure if this is anything different/worse than putting a setuid binary in the distributed cache. Or another possibility would be overwriting glibc with a malicious version. Basically allowing arbitrary mount points allows the user to overwrite things owned by root, which makes me a little uneasy. My question is whether they can leverage these mount points to gain root in the container if minimal capabilities (aka not SETUID/SETGID/etc.) are given.
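
           A minimal sketch of the mode rule quoted at the top of the comment above, assuming admin entries of the form <src>:<mode> and user requests of the form <src>:<dest>:<mode>; the class and method names are invented for illustration and this is not the actual patch implementation:

             import java.util.HashMap;
             import java.util.Map;

             // Illustrative sketch only: the user-requested mode must be equal to or
             // more restrictive than the admin-defined mode for that source.
             public class MountModeCheck {

               // Parse admin entries, e.g. "/data:rw,/etc/krb5.conf:ro" -> {/data=rw, /etc/krb5.conf=ro}.
               static Map<String, String> parseAdminMounts(String spec) {
                 Map<String, String> mounts = new HashMap<>();
                 for (String entry : spec.split(",")) {
                   String[] parts = entry.trim().split(":"); // <src>:<mode>
                   mounts.put(parts[0], parts[1]);
                 }
                 return mounts;
               }

               // "ro" is allowed under an "rw" or "ro" grant; "rw" requires an "rw" grant.
               static boolean modeAllowed(String adminMode, String requestedMode) {
                 return adminMode.equals("rw") || requestedMode.equals("ro");
               }

               // Validate a user request of the form <src>:<dest>:<mode>.
               static boolean requestAllowed(String request, Map<String, String> adminMounts) {
                 String[] parts = request.split(":"); // src, dest, mode
                 String adminMode = adminMounts.get(parts[0]);
                 return adminMode != null && modeAllowed(adminMode, parts[2]);
               }

               public static void main(String[] args) {
                 Map<String, String> admin = parseAdminMounts("/data:rw,/etc/krb5.conf:ro");
                 System.out.println(requestAllowed("/data:/input:ro", admin));                   // true
                 System.out.println(requestAllowed("/etc/krb5.conf:/etc/krb5.conf:rw", admin));  // false: ro-only grant
               }
             }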

          templedf Daniel Templeton added a comment -

          I don't see any need to restrict the mount point in the container.

          Hide
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 21s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
                trunk Compile Tests
          0 mvndep 0m 40s Maven dependency ordering for branch
          +1 mvninstall 14m 48s trunk passed
          +1 compile 9m 31s trunk passed
          +1 checkstyle 0m 54s trunk passed
          +1 mvnsite 1m 10s trunk passed
          -1 findbugs 0m 51s hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings.
          +1 javadoc 0m 49s trunk passed
                Patch Compile Tests
          0 mvndep 0m 10s Maven dependency ordering for patch
          -1 mvninstall 0m 18s hadoop-yarn-server-nodemanager in the patch failed.
          -1 compile 0m 57s hadoop-yarn in the patch failed.
          -1 javac 0m 57s hadoop-yarn in the patch failed.
          -0 checkstyle 0m 47s hadoop-yarn-project/hadoop-yarn: The patch generated 4 new + 210 unchanged - 0 fixed = 214 total (was 210)
          -1 mvnsite 0m 19s hadoop-yarn-server-nodemanager in the patch failed.
          +1 whitespace 0m 0s The patch has no whitespace issues.
          -1 findbugs 0m 17s hadoop-yarn-server-nodemanager in the patch failed.
          +1 javadoc 0m 35s the patch passed
                Other Tests
          -1 unit 0m 26s hadoop-yarn-api in the patch failed.
          -1 unit 0m 19s hadoop-yarn-server-nodemanager in the patch failed.
          +1 asflicense 0m 22s The patch does not generate ASF License warnings.
          44m 4s



          Reason Tests
          Failed junit tests hadoop.yarn.conf.TestYarnConfigurationFields



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue YARN-5534
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12842850/YARN-5534.002.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux df770f559e03 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / b0e78ae
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-YARN-Build/16460/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
          mvninstall https://builds.apache.org/job/PreCommit-YARN-Build/16460/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
          compile https://builds.apache.org/job/PreCommit-YARN-Build/16460/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn.txt
          javac https://builds.apache.org/job/PreCommit-YARN-Build/16460/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn.txt
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/16460/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
          mvnsite https://builds.apache.org/job/PreCommit-YARN-Build/16460/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
          findbugs https://builds.apache.org/job/PreCommit-YARN-Build/16460/artifact/patchprocess/patch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/16460/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/16460/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/16460/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/16460/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Hide
          shanekumpf@gmail.com Shane Kumpf added a comment -

          Eric Badger - sorry for the delay here. I'm actively working on this.

          Couple of comments on the approach:

          1. YARN-4595 addressed read-only mounts for local resources. I'm planning to consolidate the mount whitelist and local resource mounts into a single ENV variable.
          2. Local resources will be implicitly added to the whitelist in read-only mode.
          3. There is currently an issue with multiple mounts and MapReduce due to how environment variables are parsed. See YARN-6830.
          4. The admin will define a comma separated list of <src>:<mode> (ro or rw) mounts, the requesting user will supply <src>:<dest>:<mode> - mode must be equal to or lesser than the admin defined mode (i.e. admin defines mount as rw, user can bind mount as rw OR ro).

          One question here, does anyone feel there is value in allowing the admin to restrict the destination mount point within the container? I can't think of a use case for this, and expect most admins would likely just wildcard the field for all the mounts. Currently, the plan for the admin-supplied whitelist does not include restricting the destination within the container.
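
          To make the proposed format concrete, here is a minimal, purely illustrative Java sketch of how an admin-defined list of <src>:<mode> entries might be checked against a user-requested <src>:<dest>:<mode> mount. The class, method names, and parsing details are assumptions for illustration only and are not taken from any attached patch; whether sources underneath a whitelisted directory should be accepted at all is itself still under discussion in this thread.

            import java.io.File;
            import java.io.IOException;
            import java.util.HashMap;
            import java.util.Map;

            public final class MountWhitelistSketch {

              /** Parse the admin list once, e.g. "/usr/lib/hadoop:ro, /data:rw". */
              static Map<String, String> parseAdminWhitelist(String configValue)
                  throws IOException {
                Map<String, String> allowed = new HashMap<>();
                for (String entry : configValue.split("\\s*,\\s*")) {
                  String[] parts = entry.split(":");
                  if (parts.length != 2) {
                    continue; // ignore malformed entries in this sketch
                  }
                  // Canonicalize so the prefix check below is not fooled by ".." or symlinks.
                  allowed.put(new File(parts[0]).getCanonicalPath(), parts[1]);
                }
                return allowed;
              }

              /** Validate a user-requested "<src>:<dest>:<mode>" mount against the whitelist. */
              static boolean isAllowed(Map<String, String> allowed, String userMount)
                  throws IOException {
                String[] parts = userMount.split(":");
                if (parts.length != 3) {
                  return false;
                }
                String src = new File(parts[0]).getCanonicalPath();
                String requestedMode = parts[2];
                for (Map.Entry<String, String> e : allowed.entrySet()) {
                  // The source must equal a whitelisted directory or live underneath it.
                  if (src.equals(e.getKey())
                      || src.startsWith(e.getKey() + File.separator)) {
                    // Requested mode must be equal to or lesser than the admin mode.
                    return requestedMode.equals("ro")
                        || (requestedMode.equals("rw") && e.getValue().equals("rw"));
                  }
                }
                return false;
              }
            }

          With an admin list of /usr/lib/hadoop:ro,/data:rw, a request of /data/input:/input:ro or /data/input:/input:rw would pass in this sketch, while /etc:/etc:ro would be rejected.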

          Hide
          ebadger Eric Badger added a comment -

          Any update on this?

          Hide
          shanekumpf@gmail.com Shane Kumpf added a comment -

          Thanks, luhuichun!

          Hide
          luhuichun luhuichun added a comment -

          Shane Kumpf - OK, that's fine with me.

          Hide
          shanekumpf@gmail.com Shane Kumpf added a comment -

          luhuichun Zhankun Tang - We're close on this one. Would you like me to take the lead on this and get it wrapped up? Thanks!

          Hide
          templedf Daniel Templeton added a comment - - edited

          Thanks for updating the patch, luhuichun. The new patch matches more with what I had in mind. There are still a couple of issues:

          • In YarnConfiguration, you should add a Javadoc comment for the new property
          • In DLCR.isArbitraryMount(), instead of the for loop, you should use a foreach.
          • DLCR.isArbitraryMount() always returns false:
                File child = new File(mount);
                for (int i = 0; i < whiteList.length; i++){
                  File parent = new File(mount);
                  if (isSubDirectory(parent, child)){
                    return false;
                  }
                }

            It's always true that parent.equals(child), so isSubDirectory() will always return true. (A corrected sketch follows this list.)

          • You should parse the white list property once and store it instead of doing it every time the method is called
          • Instead of using String.split(), you could use Pattern.split() to allow for whitespace, making it a little more user friendly
          • Look at http://stackoverflow.com/questions/26530445/compare-directories-to-check-if-one-is-sub-directory-of-another for ideas of how to do the subdirectory check more efficiently. I like the Set idea.
          • In DLCR.isSubdirectory(), the naming around child, parent, and parentFile is pretty confusing.
          • In DLCR.isSubdirectory(), printing the stack trace is a bad idea. Do something more useful. At a minimum, log the exception instead of printing the stack trace to stderr.
          • You should have a space before the curly brace throughout, e.g.
              if (parent.equals(parentFile)){  
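
          For illustration, a minimal sketch of roughly what the corrected logic could look like once the points above are applied: the whitelist is parsed once (tolerating whitespace via Pattern.split), the loop is a foreach whose parent comes from the whitelist entry rather than from mount, and canonicalization failures are logged instead of printed. The class name, logger, and overall structure are assumptions for illustration, not code from the attached patches.

            import java.io.File;
            import java.io.IOException;
            import java.util.HashSet;
            import java.util.Set;
            import java.util.regex.Pattern;
            import org.slf4j.Logger;
            import org.slf4j.LoggerFactory;

            class WhitelistMountCheck {
              private static final Logger LOG =
                  LoggerFactory.getLogger(WhitelistMountCheck.class);
              private final Set<File> whiteListDirs = new HashSet<>();

              // Parse the property once (e.g. at runtime initialization), tolerating whitespace.
              WhitelistMountCheck(String whiteListProperty) {
                for (String dir
                    : Pattern.compile("\\s*,\\s*").split(whiteListProperty.trim())) {
                  whiteListDirs.add(new File(dir));
                }
              }

              // A mount is "arbitrary" only if it is not under any whitelisted directory.
              boolean isArbitraryMount(String mount) {
                File child = new File(mount);
                for (File parent : whiteListDirs) { // foreach; parent comes from the whitelist
                  if (isSubDirectory(parent, child)) {
                    return false;
                  }
                }
                return true;
              }

              private boolean isSubDirectory(File parent, File child) {
                try {
                  File canonicalParent = parent.getCanonicalFile();
                  for (File f = child.getCanonicalFile(); f != null; f = f.getParentFile()) {
                    if (canonicalParent.equals(f)) {
                      return true;
                    }
                  }
                } catch (IOException e) {
                  LOG.warn("Could not canonicalize " + parent + " or " + child, e);
                }
                return false;
              }
            }
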
          Hide
          shanekumpf@gmail.com Shane Kumpf added a comment -

          Thanks for the patch luhuichun!

          I agree with Daniel Templeton. YARN-4595 only allows for mounting localized resources, which isn't flexible enough for what we need here. We'd like to avoid reopening issues such as YARN-5042 (mounting /sys/fs/cgroup in every container) every time a similar need comes up. Another example would be /dev/urandom, which is commonly mounted into containers that generate keys.

          The current implementation is moving towards allowing subdirectories under a whitelisted mount to be mounted into the Docker container. What is the use case for allowing subdirectories vs. forcing the user-supplied mount to match the whitelist entry exactly?

          Here are some items to address in the future patch:

          1)

          +
          +  public static final String NM_WHITE_LIST_VOLUME_MOUNT =
          +          NM_PREFIX + "white-list-volume-mount";
          +
          

          The configuration should be under the DOCKER_CONTAINER_RUNTIME_PREFIX.

          2)

                   if (!path.isAbsolute()) {
                     throw new ContainerExecutionException("Mount must be absolute: " +
          -              mount);
          +                  mount);
                   }
                   if (Files.isSymbolicLink(path)) {
                     throw new ContainerExecutionException("Mount cannot be a symlink: " +
          -              mount);
          +                  mount);
          

          Can you fix the formatting changes here?

          3)

          +  private boolean isSubDirectory(File parent, File child){
          +    try {
          +      parent = parent.getCanonicalFile();
          +      child = child.getCanonicalFile();
          +      File parentFile = child;
          +      while (parentFile != null){
          +        if (parent.equals(parentFile)){
          +          return true;
          +        }
          +        parentFile = parentFile.getParentFile();
          +      }
          +    } catch (IOException e) {
          +      e.printStackTrace();
          +    }
          +    return false;
             }
          

          Assuming we need subdirectories, I would expect a subdirectory check has been implemented elsewhere in the code base and doesn't need to be defined again, but I didn't find it in my non-exhaustive search. Perhaps someone else can chime in if they know of one. Otherwise, maybe it would be good to add this to a utility class vs leaving it in the runtime?
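
          If the check were pulled into a utility class, one possible shape - an assumption, not an existing Hadoop utility - is a short java.nio based helper:

            import java.io.IOException;
            import java.nio.file.Path;
            import java.nio.file.Paths;

            final class MountPathUtils {
              private MountPathUtils() {
              }

              /**
               * Returns true if child equals parent or lives underneath it. Both paths must
               * exist; toRealPath() resolves symlinks and ".." so the comparison cannot be
               * bypassed by a crafted path.
               */
              static boolean isUnder(String parent, String child) throws IOException {
                Path parentReal = Paths.get(parent).toRealPath();
                Path childReal = Paths.get(child).toRealPath();
                return childReal.startsWith(parentReal);
              }
            }

          Because Path.startsWith compares whole path segments, /sys/fs/cgroup/cpu would be accepted under a whitelisted /sys/fs/cgroup while /sys/fs/cgroup-evil would not.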

          Hide
          templedf Daniel Templeton added a comment -

          validateMount() already rejects anything that isn't a localized resource path. What this patch does is reject anything that's not also whitelisted, i.e. to accept a mount, it must be both a localized resource path and whitelisted. Because YARN-5298 already mounts all localized resource paths, YARN-4595 and this patch don't accomplish much.

          Hide
          luhuichun luhuichun added a comment -

          Yes, Daniel. YARN-4595 and YARN-5298 only mount localized directories, so my idea is to allow additional directories from a whitelist, which would be helpful for some applications. What do you mean by "I'm not sure this JIRA will actually change anything." - do you mean it's redundant?

          Hide
          templedf Daniel Templeton added a comment - - edited

          Thanks for posting the patch, luhuichun. Sorry for taking so long to get around to reviewing it. I apparently also misread the issue description the first time.

          Given that the current volume mounts only allow mounting directories from the set of localized files, I'm not sure additional white listing is all that useful. And given that YARN-5298 already mounts all the localized directories, I'm not sure this JIRA will actually change anything.

          What I originally thought I read, and what I think would be useful, is allowing arbitrary volume mounts from a whitelist, not just mounting localized resources. For example, if I'm going to use a Docker image to execute MR jobs, I have to install Hadoop in that image. When I upgrade my cluster, I then have to upgrade or recreate all my Docker images. If the Hadoop directories were mountable, I could let YARN mount them from the host machine and not have to worry about it.

          Hide
          vvasudev Varun Vasudev added a comment -

          luhuichun - can you please address the issues in the Jenkins report -
          1) Please add some unit tests for the patch
          2) Please address the failing unit test

          Thanks!

          Hide
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 19s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          0 mvndep 0m 53s Maven dependency ordering for branch
          +1 mvninstall 6m 58s trunk passed
          +1 compile 6m 40s trunk passed
          +1 checkstyle 0m 52s trunk passed
          +1 mvnsite 1m 15s trunk passed
          +1 mvneclipse 0m 45s trunk passed
          +1 findbugs 2m 10s trunk passed
          +1 javadoc 0m 57s trunk passed
          0 mvndep 0m 10s Maven dependency ordering for patch
          +1 mvninstall 0m 47s the patch passed
          +1 compile 4m 59s the patch passed
          +1 javac 4m 59s the patch passed
          -0 checkstyle 0m 49s hadoop-yarn-project/hadoop-yarn: The patch generated 3 new + 210 unchanged - 0 fixed = 213 total (was 210)
          +1 mvnsite 1m 14s the patch passed
          +1 mvneclipse 0m 45s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 25s the patch passed
          +1 javadoc 0m 55s the patch passed
          -1 unit 0m 36s hadoop-yarn-api in the patch failed.
          +1 unit 15m 46s hadoop-yarn-server-nodemanager in the patch passed.
          +1 asflicense 0m 39s The patch does not generate ASF License warnings.
          58m 25s



          Reason Tests
          Failed junit tests hadoop.yarn.conf.TestYarnConfigurationFields



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:e809691
          JIRA Issue YARN-5534
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12836189/YARN-5534.001.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 3abe45fdbaac 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / f38a6d0
          Default Java 1.8.0_101
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-YARN-Build/13820/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
          unit https://builds.apache.org/job/PreCommit-YARN-Build/13820/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
          Test Results https://builds.apache.org/job/PreCommit-YARN-Build/13820/testReport/
          modules C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn
          Console output https://builds.apache.org/job/PreCommit-YARN-Build/13820/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Hide
          luhuichun luhuichun added a comment -

          Sidharta Seethana Varun Vasudev
          Hide
          templedf Daniel Templeton added a comment -

          A good use case for this is mounting in the Hadoop directories so that they don't have to be built into the container. Another use case is mounting in the local tool chain.


            People

            • Assignee:
              shanekumpf@gmail.com Shane Kumpf
              Reporter:
              luhuichun luhuichun
            • Votes:
              0
              Watchers:
              14
