Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.0-alpha
    • Fix Version/s: 2.0.2-alpha
    • Component/s: security
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      HDFS/MR web UIs don't work over HTTPS; there are places where 'http://' is hardcoded.

      1. HADOOP-8581.patch
        52 kB
        Alejandro Abdelnur
      2. HADOOP-8581.patch
        52 kB
        Alejandro Abdelnur
      3. HADOOP-8581.patch
        51 kB
        Alejandro Abdelnur
      4. HADOOP-8581.patch
        51 kB
        Alejandro Abdelnur
      5. HADOOP-8581.patch
        51 kB
        Alejandro Abdelnur
      6. HADOOP-8581.patch
        51 kB
        Alejandro Abdelnur
      7. HADOOP-8581.patch
        54 kB
        Alejandro Abdelnur

          Activity

          Vinod Kumar Vavilapalli added a comment -

          I suppose we've already gone past the 'why revert' reasoning.

          +1 for the proposal, it simplifies things a great deal.

          Suresh Srinivas, maybe add the proposal to the JIRA description?

          Vinod Kumar Vavilapalli, What problem are you facing with the HttpConfig class?

          HttpConfig depending on statics is a real pain as we saw in the previous patches in YARN.

          Omkar Vinit Joshi, AMs should be able to work with secure web UIs; this is a current limitation. For example, if YARN issues an SSL certificate for the AM on the fly (using a local certificate authority), then seeds it in the distributed cache for the AM, then YARN can trust that certificate, and the proxy server would decrypt and re-encrypt using a fixed certificate which is used for the user-facing endpoint.

          It could work, but I will have to think more. In any case, please file a ticket under YARN-1280 for tracking.

          Alejandro Abdelnur added a comment -

          Suresh Srinivas, thanks for summarizing, LGTM. One thing you forgot to mention is the use of default ports: different ones will be used for HTTP and HTTPS, which, after checking a couple of things, I'm OK with. Thanks again.

          Omkar Vinit Joshi, AMs should be able to work with secure web UIs; this is a current limitation. For example, if YARN issues an SSL certificate for the AM on the fly (using a local certificate authority), then seeds it in the distributed cache for the AM, then YARN can trust that certificate, and the proxy server would decrypt and re-encrypt using a fixed certificate which is used for the user-facing endpoint.

          Omkar Vinit Joshi added a comment -

          YARN and Job history server will have <project>.http.policy. This can be set to HTTP_ONLY or HTTPS_ONLY.

          Yes, YARN behavior is this way, so it should be OK. HTTP_AND_HTTPS is not currently supported in YARN.

          The AM should never have HTTPS because today in the RM we serve the AM's web content via the proxy server, and the proxy server can never trust the AM's certificate even if one is issued.

          Suresh Srinivas added a comment - - edited

          I spoke to Alejandro Abdelnur. Here is the summary (tucu, correct me if I missed anything):

          Decisions:

          • Add support for SSL per project instead of a single global configuration
          • Add support for the HTTP_ONLY, HTTPS_ONLY, and HTTP_AND_HTTPS policies

          Proposed changes:

          1. YARN and Job history server will have <project>.http.policy. This can be set to HTTP_ONLY or HTTPS_ONLY.
          2. HDFS will have hdfs.http.policy. This can be set to HTTP_ONLY, HTTPS_ONLY, and HTTP_AND_HTTPS
          3. hadoop.ssl.enable will be deprecated.
          4. When new configuration options are used, the old configurations are ignored with a warning.

          Migration paths:

          1. HDFS: for installations using hadoop.https.enable=true, the configuration will be mapped to hdfs.http.policy=HTTP_AND_HTTPS.
          2. Installations using hadoop.ssl.enabled=true: this will map, across the project, to the policy HTTPS_ONLY. One incompatibility for such installations is that the configured https port will be used for the namenode and datanode, instead of the configured http port where https is currently started.

          Future:

          • Add support for all the policies for YARN, Job history server, AM, and HDFS.
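          To make the mapping above concrete, here is a minimal, hypothetical sketch of the resolution order this summary describes. The helper class and warning text are illustrative, not the actual Hadoop implementation; the key names are taken from the comment above.

              import org.apache.hadoop.conf.Configuration;

              public class HttpPolicyResolver {
                public enum Policy { HTTP_ONLY, HTTPS_ONLY, HTTP_AND_HTTPS }

                /** Resolve the effective HDFS policy, honoring the legacy keys. */
                public static Policy resolveHdfsPolicy(Configuration conf) {
                  String policy = conf.get("hdfs.http.policy"); // new-style key
                  if (policy != null) {
                    // The new configuration wins; old ones are ignored with a warning.
                    if (conf.get("hadoop.ssl.enabled") != null) {
                      System.err.println("WARN: hadoop.ssl.enabled is deprecated and "
                          + "ignored because hdfs.http.policy is set");
                    }
                    return Policy.valueOf(policy);
                  }
                  if (conf.getBoolean("hadoop.ssl.enabled", false)) {
                    return Policy.HTTPS_ONLY;     // project-wide SSL maps to HTTPS_ONLY
                  }
                  if (conf.getBoolean("hadoop.https.enable", false)) {
                    return Policy.HTTP_AND_HTTPS; // legacy flag maps as described above
                  }
                  return Policy.HTTP_ONLY;        // default
                }
              }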
          Suresh Srinivas added a comment -

          Happy to jump on the phone to chat about this. Following are some answers.

          Alejandro Abdelnur, it is important to resolve this quickly. I have a few patches already in progress. Let's have a phone call; I will get in touch with you over email.

          Alejandro Abdelnur added a comment -

          Suresh,

          Happy to jump on the phone to chat about this. Following are some answers.

          > A lot of standard services support secure and non-secure ports. A deployment might choose to support both http and https. Depending on what an application is accessing the service for, an app can choose secure or insecure access.

          Serving the same content over insecure/secure endpoints without any control does not make sense, and I would say we should prevent it because it would give users a false sense of security.

          > Currently uses http port for https

          I don't see this as a reason for removal. This is exactly what the hadoop.ssl.enable property aims to do: make sure all HTTP traffic goes over SSL (HTTPS).

          > This configuration is not backward compatible and is in conflict with the existing configuration by adding multiple ways to do the same thing.

          Unless I'm missing something, previously you could use SSL only for httpfs; that was the reason for the secure port.

          Also, you can still set dfs.https.enable without setting hadoop.ssl.enable. This is the old behavior. How is this backwards incompatible?

          > Per project control to enforce policy is required instead of one global flag.

          This is a great improvement; however, I don't see reverting this JIRA as a requirement for it.

          > We want to support both http and https. With redirect from http to https options,
          migration to the new setting does not require the applications to change the URL they are currently using.

          This is an improvement too and it can be done without reverting this JIRA.

          Suresh Srinivas added a comment -

          OK, reading things again, HDFS-5271 is a documentation issue now.

          I fail to understand. With hadoop.ssl.enabled, port 50070, which is configured as the http port, becomes the https port. How is this just a documentation issue?

          Serving the same content over HTTP and HTTPS seems unnecessary.

          A lot of standard services support secure and non-secure ports. A deployment might choose to support both http and https. Depending on what an application is accessing the service for, an app can choose secure or insecure access. The proposed solution gives the flexibility of supporting both http and https, and in cases where an admin wants to allow only https access, that is also possible.

          While browsers do this automatically, if I recall correctly Java does not follow redirects from HTTP to HTTPS. This may be an issue for fsimage and webhdfs.

          There are other tools that can handle the redirection. This gives an opportunity for such tools to continue to work even when the server changes from http to https, without requiring a URL change.

          Sounds good, but I would rather use http or https as the value than numbers.

          We did discuss using names instead of numbers. I prefer numbers to long strings that describe http, http and https, http redirect to https, etc. But if others feel strings are better, I am okay.

          If we remove it from 2.2, what does that exactly mean? What functionality do we lose?

          Most of the functionality added by this change already existed for hdfs. The only thing we lose is enforcing https-only access.

          What else is a problem that would justify a revert?

          I thought I covered it earlier. Here it is again for convenience:

          hadoop.ssl.enable property will be removed. The reasons for this are:

          • Currently uses http port for https
          • This configuration is not backward compatible and is in conflict with the existing configuration by adding multiple ways to do the same thing.
          • Per project control to enforce policy is required instead of one global flag.
          • We want to support both http and https. With redirect from http to https options, migration to the new setting does not require the applications to change the URL they are currently using.
          Alejandro Abdelnur added a comment -

          OK, reading things again, HDFS-5271 is a documentation issue now.

          What else is a problem that would justify a revert?

          Vinod Kumar Vavilapalli, What problem are you facing with the HttpConfig class?

          Alejandro Abdelnur added a comment -

          BTW, regarding the problem reported in HDFS-5271, what is the exact concern? That there are 2 HTTPS endpoints? Or is there anything broken?

          Alejandro Abdelnur added a comment -

          Apologies for the delay getting back on this. The overall approach seems reasonable; a few things, though:

          allow access to both HTTPS and HTTP

          Serving the same content over HTTP and HTTPS seems unnecessary. And if set by mistake, it could give a false sense of security to someone who intended to set https only.

          If we are talking about serving webpages over HTTP and webhdfs/fsimage over HTTPS then it makes sense.

          But this means we'll have to explicitly configure each servlet to be served over the correct transport only (HTTP or HTTPS). And given how servlets are added to HttpServer today, this will be a careful task to ensure nothing ends up wrongly served on both transport endpoints.

          redirecting from http to https

          While browsers do this automatically, if I recall correctly Java does not follow redirects from HTTP to HTTPS. This may be an issue for fsimage and webhdfs.
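          For reference, a small standalone illustration of the behavior being described: HttpURLConnection will not cross protocols when following redirects, so a client has to follow the Location header by hand. The host name and path here are made up.

              import java.net.HttpURLConnection;
              import java.net.URL;

              public class CrossProtocolRedirect {
                public static void main(String[] args) throws Exception {
                  // Java will not follow a redirect that changes protocol
                  // (http -> https), even with follow-redirects enabled.
                  URL url = new URL("http://namenode.example.com:50070/webhdfs/v1/?op=LISTSTATUS");
                  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                  conn.setInstanceFollowRedirects(true); // no effect across protocols
                  int status = conn.getResponseCode();
                  if (status == HttpURLConnection.HTTP_MOVED_PERM
                      || status == HttpURLConnection.HTTP_MOVED_TEMP) {
                    String location = conn.getHeaderField("Location"); // follow manually
                    conn.disconnect();
                    conn = (HttpURLConnection) new URL(location).openConnection();
                  }
                  System.out.println("Final status: " + conn.getResponseCode());
                }
              }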

          • <project>.http.policy

          Sounds good, but I would rather use http or https as the value than numbers.

          Also, we'll have to refactor HttpServer to take the <service> prefix as a parameter (I would use service rather than project).

          If we remove it from 2.2, what does that exactly mean? What functionality do we lose?

          Suresh Srinivas added a comment -

          I had a quick discussion with Jing Zhao, Sanjay Radia, Haohui Mai and Vinod Kumar Vavilapalli. Here is the proposal:

          Use cases

          1. Admin must be able to configure both http and https port per project
          2. Admin must be able to enable the following access:
            • Only allow access to http and not https
            • Only allow access to https and not http
            • Allow access to both https and http
            • Allow access to only https, with http redirecting to https. That is, enable both http and https and redirect http to https, so that all access is granted only over https, per project. This is important for backward compatibility.

          Solution

          1. Every project must have separate configuration for http and https for all the daemons that have http server support.
          2. Every project must have default port numbers for http and https. These should be used when the configuration does not specify http/https port numbers.
          3. Every project must support a configuration for <project>.http.policy. Value 0 means support only http. Value 1 means support only https. Value 2 means support both http and https. Value 3 means support both http and https, and http redirects to https. If not specified, this value is defaulted to 0.
          4. hadoop.ssl.enable property will be removed. The reasons for this are:
            • Currently uses http port for https
            • This configuration is not backward compatible and is in conflict with the existing configuration by adding multiple ways to do the same thing.
            • Per project control to enforce policy is required instead of one global flag.
            • We want to support both http and https. With redirect from http to https options, migration to the new setting does not require the applications to change the URL they are currently using.

          Backward compatibility analysis for hdfs

          1. HDFS already supports http and https ports for namenode and datanode. TBD add config names.
          2. HDFS uses dfs.https.enable to enable https listener. For backward compatibility, this maps to the proposed configuration as follows:
            • If dfs.https.enable == false, then the dfs.https.policy will be set to 0.
            • If dfs.https.enable == true, then the dfs.https.policy will be set to 1.

          Backward compatibility for users using hadoop.ssl.enable=true

          • Note that this cannot be supported in a backward-compatible manner, because currently this flag causes incorrect behavior where the http port is used for https.

          Changes

          1. Remove hadoop.ssl.enable
          2. Per project make the proposed solution changes
          3. Change HttpServer to not read configuration itself; instead, it should use the arguments passed to it. Each application using the http server determines how to read its own configuration and starts the http server appropriately (a sketch follows below).

          I propose removing the hadoop.ssl.enable flag in 2.2. The rest of the changes can be done in a backward-compatible way and can come in 2.3. Thoughts?
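          As a rough illustration of item 3 under "Changes": a hypothetical builder-style API in which the application passes explicit arguments, and the http server never reads Configuration itself. All names here are made up for illustration; the real HttpServer interface would need its own design.

              // Hypothetical sketch only, not the actual HttpServer API.
              HttpServer server = new HttpServer.Builder()
                  .setName("namenode")
                  .setBindAddress("0.0.0.0")
                  .setHttpPort(50070)    // used for policies 0, 2, 3
                  .setHttpsPort(50470)   // used for policies 1, 2, 3
                  .setPolicy(policy)     // resolved by the application itself
                  .build();
              server.start();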

          Vinod Kumar Vavilapalli added a comment -

          The other related problem is the use of the HttpConfig APIs. It is causing unbearable pain. Ideally, HttpServer should take in a config key that tells it whether SSL is enabled or not.

          Vinod Kumar Vavilapalli added a comment -

          We started building on this in YARN, but we can change. We need to decide on a few things.

          Today, there is

          • a common hadoop.ssl.enabled flag as added in the patch
          • a hdfs specific dfs.https.enabled flag.
          • We haven't added a new config in YARN assuming that we can just use the common flag.

          We could deprecate the dfs flag in favor of the common flag, but that's a decision to be made.

          Irrespective of the above decision, I think that if an https-specific port is configured, then the NN/DN should pass the same setting to HttpServer. Today, what happens is that the NN/DN explicitly call addSslListener() and start https on the user-configured port. HttpServer doesn't know this; it depends on the hadoop.ssl.enabled flag and starts https on the regular http port as well.

          One more choice to make is what to do with regular http if the user configures an https port. I think it makes sense to redirect traffic from http to https, so that the user clearly knows they are talking https AND on a different port.
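          Such a redirect could be implemented with a plain servlet filter on the http connector. A minimal sketch, assuming a fixed https port; this is illustrative, not the actual Hadoop code.

              import java.io.IOException;
              import javax.servlet.*;
              import javax.servlet.http.HttpServletRequest;
              import javax.servlet.http.HttpServletResponse;

              public class HttpsRedirectFilter implements Filter {
                private static final int HTTPS_PORT = 50470; // assumed configured port

                @Override public void init(FilterConfig cfg) {}
                @Override public void destroy() {}

                @Override
                public void doFilter(ServletRequest req, ServletResponse res,
                    FilterChain chain) throws IOException, ServletException {
                  HttpServletRequest r = (HttpServletRequest) req;
                  HttpServletResponse s = (HttpServletResponse) res;
                  if (!r.isSecure()) {
                    // Rebuild the URL on the https port and send the caller there.
                    StringBuilder to = new StringBuilder("https://")
                        .append(r.getServerName()).append(':').append(HTTPS_PORT)
                        .append(r.getRequestURI());
                    if (r.getQueryString() != null) {
                      to.append('?').append(r.getQueryString());
                    }
                    s.sendRedirect(to.toString());
                  } else {
                    chain.doFilter(req, res);
                  }
                }
              }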

          Suresh Srinivas added a comment -

          Alejandro Abdelnur, thanks for the quick response. We have some bandwidth to work on this. Let's decide on the approach, and others can pitch in with help.

          Alejandro Abdelnur added a comment -

          Suresh Srinivas, I'll look into this today, thx.

          Suresh Srinivas added a comment -

          BTW, this should have been marked incompatible, because if you set this configuration to true, the older configurations become unnecessary.

          Suresh Srinivas added a comment -

          Alejandro Abdelnur, the addition of this change has made the previously existing solution unworkable.

          In the past we had the http and https ports separately configurable. With this change, the same port is used for http or https. This creates multiple ways to configure the https functionality. See the comment: https://issues.apache.org/jira/browse/HDFS-5271?focusedCommentId=13780581&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13780581

          There are also YARN JIRAs associated with this.

          Here is my proposal:

          • Retain the older http and https configuration
          • If a setup only wants to support https, http can redirect to https port

          This needs to happen quickly. If I do not hear back, I will revert this change and add back the older config support with redirect.

          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #1167 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1167/)
          HADOOP-8581 Amendment to CHANGES.txt setting right JIRA number, add support for HTTPS to the web UIs. (tucu) (Revision 1372644)

          Result = FAILURE
          tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372644
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #1135 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1135/)
          HADOOP-8581 Amendment to CHANGES.txt setting right JIRA number, add support for HTTPS to the web UIs. (tucu) (Revision 1372644)

          Result = FAILURE
          tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372644
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #2640 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2640/)
          HADOOP-8581 Amendment to CHANGES.txt setting right JIRA number, add support for HTTPS to the web UIs. (tucu) (Revision 1372644)

          Result = SUCCESS
          tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372644
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Common-trunk-Commit #2575 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2575/)
          HADOOP-8581 Amendment to CHANGES.txt setting right JIRA number, add support for HTTPS to the web UIs. (tucu) (Revision 1372644)

          Result = SUCCESS
          tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372644
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk-Commit #2597 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2597/)
          HADOOP-8581 Amendment to CHANGES.txt setting right JIRA number, add support for HTTPS to the web UIs. (tucu) (Revision 1372644)

          Result = FAILURE
          tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372644
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          Alejandro Abdelnur added a comment -

          Committed to branch-2 (made a typo in the JIRA commit messages for both trunk and branch-2: used HADOOP-8681 instead of HADOOP-8581; missing git amends).

          Alejandro Abdelnur added a comment -

          Committed to trunk. Looking into branch-2, as it seems a JIRA touching HttpServer didn't make it there yet and the merge does not apply cleanly.

          Alejandro Abdelnur added a comment -

          Test failures seem unrelated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12540104/HADOOP-8581.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:

          org.apache.hadoop.hdfs.TestDatanodeBlockScanner
          org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1277//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1277//console

          This message is automatically generated.

          Alejandro Abdelnur added a comment -

          Adding the following comment to the testcase regarding #4 above:

          
              //we do this trick because the MR AppMaster is started in another VM and
              //the HttpServer configuration is not loaded from the job.xml but from the
              //site.xml files in the classpath
          
          Aaron T. Myers added a comment -

          The new patch addresses all your comments except for the generation of the core-site.xml. The MR AM needs that info coming from the core-site.xml, not from the job.xml. And the MR AM is started in a separate VM, so it cannot be set from the testcase bootstrap of the minicluster.

          Got it. Makes sense. Maybe add a comment in the test to that effect?

          The latest patch looks good to me. +1 pending Jenkins.

          Alejandro Abdelnur added a comment -

          @atm, thx for the review. The new patch addresses all your comments except for the generation of the core-site.xml. The MR AM needs that info coming from the core-site.xml, not from the job.xml. And the MR AM is started in a separate VM, so it cannot be set from the testcase bootstrap of the minicluster.

          Built and installed a pseudo cluster configured for SSL, and verified pages work over SSL for all services.

          Aaron T. Myers added a comment -

          Patch looks pretty good to me, Tucu. Just a few small comments:

          1. Per our coding conventions, I don't think that HttpConfig#SSL_ENABLED should be all caps.
          2. In the HttpServer constructor, move the .setHost and .setPort to after the if/else:
            if (...) {
              ...
              sslListener.setHost(bindAddress);
              sslListener.setPort(port);
              listener = sslListener;
            } else {
              listener = createBaseListener(conf);
              listener.setHost(bindAddress);
              listener.setPort(port);
            }
            
          3. In the core-default.xml description, take out the word "it" and change "webuis" to "web UIs":
            +    Whether to use SSL for the HTTP endpoints. If set to true, it the
            +    NameNode, DataNode, ResourceManager, NodeManager, HistoryServer and
            +    MapReduceAppMaster webuis will be served over HTTPS instead HTTP.
            
          4. Rather than go through the headache of writing out a core-default.xml containing the appropriate SSL config, how about just adding a setSslEnabledForTesting static function to HttpConfig?
          5. Considering that every place you call HttpConfig#getScheme you immediately append "://", maybe just append that in HttpConfig#getScheme? Or perhaps have a HttpConfig#getPrefix which returns HttpConfig#getScheme() + "://"? (See the sketch after this list.)
          6. I think you inadvertently incorrectly changed the indentation in HostUtil#getTaskLogUrl to be 4 spaces instead of 2.
          7. There are some inadvertent and unnecessary whitespace changes in RMAppAttemptImpl.
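          A sketch of what item 5 could look like, purely illustrative (not the actual HttpConfig source):

              public class HttpConfig {
                private static boolean sslEnabled = false;

                public static String getScheme() {
                  return sslEnabled ? "https" : "http";
                }

                // Suggested convenience: scheme plus separator in one call,
                // so call sites cannot forget the "://".
                public static String getSchemePrefix() {
                  return getScheme() + "://";
                }
              }

              // Call sites would then read, e.g.:
              //   String url = HttpConfig.getSchemePrefix() + host + ":" + port + path;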
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12539884/HADOOP-8581.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:

          org.apache.hadoop.hdfs.TestDatanodeBlockScanner
          org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1267//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1267//console

          This message is automatically generated.

          Alejandro Abdelnur added a comment -

          Patch rebased to trunk (after the YARN move).

          Alejandro Abdelnur added a comment -

          Canceling the patch, as I have to rebase it due to the YARN move.

          Alejandro Abdelnur added a comment -

          Jenkins test-patch for some weird reason keeps ignoring this patch. I just ran test-patch locally; following is the result:

              +1 @author.  The patch does not contain any @author tags.
          
              +1 tests included.  The patch appears to include 1 new or modified test files.
          
              +1 javac.  The applied patch does not increase the total number of javac compiler warnings.
          
              +1 javadoc.  The javadoc tool did not generate any warning messages.
          
              +1 eclipse:eclipse.  The patch built with eclipse:eclipse.
          
              -1 findbugs.  The patch appears to introduce 4 new Findbugs (version 1.3.9) warnings.
          
              +1 release audit.  The applied patch does not increase the total number of release audit warnings.
          

          The 4 findbugs warnings are unrelated.

          Alejandro Abdelnur added a comment -

          Re-uploading to see if Jenkins takes notice.

          Alejandro Abdelnur added a comment -

          @suresh, thanks for your detailed review. Attached is a patch incorporating all your feedback.

          • Splitting into different patches: as ATM noted, cross-project patches work; this patch was not applying because HADOOP-8644 was not yet committed.

          I've also reverted using ${hadoop.ssl.enabled} as the default value for the mapreduce mapreduce.shuffle.ssl.enabled property. With this change there is no pre-assumption that enabling SSL for the web UIs enables SSL for encrypted shuffle.
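          For context, Hadoop's Configuration expands ${...} references when a value is read, which is how one property can default to another. A standalone illustration, not tied to the patch:

              import org.apache.hadoop.conf.Configuration;

              public class VariableExpansionDemo {
                public static void main(String[] args) {
                  Configuration conf = new Configuration(false);
                  conf.set("hadoop.ssl.enabled", "true");
                  // The value is a reference; Configuration.get() expands it on read.
                  conf.set("mapreduce.shuffle.ssl.enabled", "${hadoop.ssl.enabled}");
                  System.out.println(conf.get("mapreduce.shuffle.ssl.enabled")); // prints: true
                }
              }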

          Suresh Srinivas added a comment -

          Why can't Jenkins use it? Cross-project patches should work now.

          That is good! I was not aware of it.

          Aaron T. Myers added a comment -

          I prefer splitting this into separate patches instead of one single patch that Jenkins cannot use.

          Why can't Jenkins use it? Cross-project patches should work now.

          Suresh Srinivas added a comment -

          Early comments.

          I prefer splitting this into separate patches instead of one single patch that Jenkins cannot use.

          1. There are unnecessary whitespace changes (e.g., WebAppProxyServlet.java). Indentation in some places is incorrect as well (4 spaces instead of 2).
          2. core-site.xml - typo "SSL for for the HTTP". Can you please add a more detailed description for the new parameter?
          3. HttpServer.java - please do not turn the checked exception GeneralSecurityException into an RTE. Perhaps you could throw it as an IOException.
          4. Add brief comments to TestSSLHttpServer.java.
          5. Not sure you needed to make getTaskLogsUrl() non-static.
          Tom White added a comment -

          +1 looks good to me.

          Alejandro Abdelnur added a comment -

          Applying the patch to the latest trunk from the Git mirror, it applies just fine (SVN seems down at the moment, so I cannot try there).

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12538965/HADOOP-8581.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          -1 javac. The patch appears to cause the build to fail.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1245//console

          This message is automatically generated.

          Alejandro Abdelnur added a comment -

          Preview patch; documentation is missing. Tested manually that the web UIs for MR/YARN & HDFS work over HTTPS. This patch requires HADOOP-8644.


            People

            • Assignee:
              Alejandro Abdelnur
              Reporter:
              Alejandro Abdelnur
            • Votes:
              0
            • Watchers:
              17