Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.6.0
    • Fix Version/s: 3.0.0-alpha2
    • Component/s: kms
    • Labels:
      None
    • Target Version/s:
    • Hadoop Flags:
      Incompatible change, Reviewed
    • Release Note:

      The following environment variables are deprecated. Set the corresponding
      configuration properties instead.

      Environment Variable | Configuration Property | Configuration File
      -------------------------|------------------------------|--------------------
      KMS_HTTP_PORT | hadoop.kms.http.port | kms-site.xml
      KMS_MAX_HTTP_HEADER_SIZE | hadoop.http.max.request.header.size and hadoop.http.max.response.header.size | kms-site.xml
      KMS_MAX_THREADS | hadoop.http.max.threads | kms-site.xml
      KMS_SSL_ENABLED | hadoop.kms.ssl.enabled | kms-site.xml
      KMS_SSL_KEYSTORE_FILE | ssl.server.keystore.location | ssl-server.xml
      KMS_SSL_KEYSTORE_PASS | ssl.server.keystore.password | ssl-server.xml
      KMS_TEMP | hadoop.http.temp.dir | kms-site.xml
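      For example, a deployment that previously exported KMS_HTTP_PORT would now set the corresponding property in kms-site.xml (the port value below is illustrative, not a documented default):

```xml
<!-- kms-site.xml: replaces the KMS_HTTP_PORT environment variable.
     The port value shown is illustrative only. -->
<configuration>
  <property>
    <name>hadoop.kms.http.port</name>
    <value>9600</value>
  </property>
</configuration>
```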

      The following default HTTP services have been added:

      Name | Description
      -------------------|------------------------------------
      /conf | Display configuration properties
      /jmx | Java JMX management interface
      /logLevel | Get or set log level per class
      /logs | Display log files
      /stacks | Display JVM stacks
      /static/index.html | The static home page

      The JMX path has been changed from /kms/jmx to /jmx.
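      A sketch of the path change for clients that fetch JMX metrics (the host and port below are placeholders, not values set by this change):

```shell
#!/bin/sh
# Host and port are placeholders; substitute your KMS address.
KMS=http://kms.example.com:9600
# Old path (Tomcat webapp context): ${KMS}/kms/jmx
# New path (HttpServer2 root):      ${KMS}/jmx
echo "curl -s ${KMS}/jmx"
```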

      Script kms.sh has been deprecated; use `hadoop kms` instead. The new scripts are based on the Hadoop shell scripting framework. `hadoop daemonlog` is supported. SSL configurations are read from ssl-server.xml.

      Description

      The Tomcat 6 we are using will reach EOL at the end of 2017. While there are other good options, I would propose switching to Jetty 9 for the following reasons:

    • Easier migration. Both Tomcat and Jetty are servlet containers, so we don't have to change client code much. Switching to JAX-RS would require more work.
      • Well established.
      • Good performance and scalability.

      Other alternatives:

      • Jersey + Grizzly
      • Tomcat 8

      Your opinions will be greatly appreciated.

      1. HADOOP-13597.001.patch
        82 kB
        John Zhuge
      2. HADOOP-13597.002.patch
        97 kB
        John Zhuge
      3. HADOOP-13597.003.patch
        102 kB
        John Zhuge
      4. HADOOP-13597.004.patch
        107 kB
        John Zhuge
      5. HADOOP-13597.005.patch
        110 kB
        John Zhuge
      6. HADOOP-13597.006.patch
        115 kB
        John Zhuge
      7. HADOOP-13597.007.patch
        116 kB
        John Zhuge

        Issue Links

          Activity

          xiaochen Xiao Chen added a comment -

          Thanks for creating this John.
          +1 to the idea, based on discussions from HADOOP-10075 so far.

          stevel@apache.org Steve Loughran added a comment -

          I'd prefer making the leap to Jersey/Grizzly.

          jzhuge John Zhuge added a comment -

          Bob Hansen commented in HADOOP-10860:

          Haohui Mai converted the DN side of webhdfs to use Netty for performance and stability. He may have some experience to share.

          andrew.wang Andrew Wang added a comment -

          Is this change and the HTTPFS switch incompatible? I'm guessing yes, since it changes the configuration for users. If so, please set the "Incompatible" flags for tracking.

          jzhuge John Zhuge added a comment -

          Starting work based on Robert's patch 011 for HADOOP-10075.

          Will use embedded Jetty. Migrate all configs to a single kms-site.xml, including Tomcat port and SSL settings currently in server.xml and ssl-server.xml.

          Will use HttpServer2.

          jzhuge John Zhuge added a comment -

          Alejandro Abdelnur and Andrew Wang, since MiniKMS is also based on embedded Jetty, is it possible to replace it with the full KMS once KMS is switched to embedded Jetty?

          andrew.wang Andrew Wang added a comment -

          Hi John, could you respond to my earlier question about if this change is incompatible? If so, this is a blocker for 3.0.0-beta1, and we should mark it as such.

          since MiniKMS is also based on embedded Jetty, it is possible to replace it with full KMS once KMS is switched to embedded Jetty?

          Seems fine to me.

          jzhuge John Zhuge added a comment -

          Agreed, it is an incompatible change.

          jzhuge John Zhuge added a comment - - edited

          Patch 001

          • KMSHttpServer based on HttpServer2
          • Redirect MiniKMS to KMSHttpServer so that all KMS unit tests exercise KMSHttpServer
          • Add kms-default.xml
          • Add Jetty properties including SSL properties
          • Convert hadoop-kms from war to jar
          • Rewrite kms.sh to use Hadoop shell script framework
          • Obsolete KMSJMXServlet. HttpServer2 uses compatible org.apache.hadoop.jmx.JMXJsonServlet with more features.
          • Obsolete HTTP admin port for the Tomcat Manager GUI which does not seem to work anyway
          • Obsolete kms.sh version that prints Tomcat version

          TESTING DONE

          • All hadoop-kms unit tests. MiniKMS equals full KMS.
          • Non-secure REST APIs
          • Non-secure “hadoop key” commands
          • SSL REST APIs
          • kms.sh run/start/stop/status
          • JMX works
          • /logs works

          TODO

          • Set HTTP request header size by env KMS_MAX_HTTP_HEADER_SIZE
          • Add static web content /index.html
          • More ad-hoc testing
          • Integration testing
          • Update docs: index.md.vm

          TODO in new JIRAs:

          • Integrate with Hadoop SSL server configuration
          • Full SSL server configuration: includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
          • Design common Http server configuration. Common properties in “<module>-site.xml” with config prefix, e.g., “hadoop.kms.”.
          • Design HttpServer2 configuration-based builder
          • Share web apps code in Common, HDFS, and YARN

          My private branch: https://github.com/jzhuge/hadoop/tree/HADOOP-13597.001

          aw Allen Wittenauer added a comment -

          Hooray! This is really great.

          Rewrite kms.sh to use Hadoop shell script framework

          I didn't have any specific feedback about this bit (quick pass; didn't see anything obvious).

          One of the big goals I had for the rewrite was to get sbin out of the direct path for administrators. With that in mind, I wonder if this is the time to fix kms to be less of an outlier.

          One choice would be integrate it into bin/hadoop. (probably via shell profile a la the bits in hadoop-tools). Another, less drastic option would be just to move sbin/kms.sh to bin/kms. In either case, sbin/kms.sh just becomes a wrapper.

          Anyway, food for thought.

          jzhuge John Zhuge added a comment -

          Allen Wittenauer I was about to ping you on the rewrite, thanks for the quick feedback!

          It seems more natural to move it into a sub-command of bin/hadoop.

          Also OK to move sbin/kms.sh to bin/kms, though I find it a little awkward for this kind of script, which has a single implicit subcommand, unlike the hadoop/hdfs/yarn scripts.

          jzhuge John Zhuge added a comment - - edited

          Map sbin/kms.sh to bin/hadoop kms:

          Old Command | New Command
          --------------|---------------------------
          kms.sh run | hadoop kms
          kms.sh start | hadoop --daemon start kms
          kms.sh stop | hadoop --daemon stop kms
          kms.sh status | hadoop --daemon status kms
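          The mapping can be sketched as a small shell helper (hypothetical, purely illustrative; the real wrapper is sbin/kms.sh):

```shell
#!/bin/sh
# Hypothetical helper that prints the new-style command for each
# legacy kms.sh subcommand. Illustrative only, not part of the patch.
map_kms_cmd() {
  case "$1" in
    run)               echo "hadoop kms" ;;
    start|stop|status) echo "hadoop --daemon $1 kms" ;;
    *)                 echo "usage: map_kms_cmd run|start|stop|status" >&2; return 1 ;;
  esac
}

map_kms_cmd start   # prints: hadoop --daemon start kms
```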
          hadoopqa Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 14s Docker mode activated.
          +1 @author 0m 1s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 17s Maven dependency ordering for branch
          +1 mvninstall 6m 51s trunk passed
          +1 compile 9m 42s trunk passed
          +1 checkstyle 1m 37s trunk passed
          +1 mvnsite 1m 58s trunk passed
          +1 mvneclipse 1m 10s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-assemblies
          +1 findbugs 2m 6s trunk passed
          +1 javadoc 1m 45s trunk passed
          0 mvndep 0m 20s Maven dependency ordering for patch
          +1 mvninstall 1m 14s the patch passed
          +1 compile 9m 21s the patch passed
          +1 javac 9m 21s the patch passed
          +1 checkstyle 1m 43s root: The patch generated 0 new + 57 unchanged - 5 fixed = 57 total (was 62)
          +1 mvnsite 2m 11s the patch passed
          +1 mvneclipse 1m 24s the patch passed
          +1 shellcheck 0m 16s The patch generated 0 new + 566 unchanged - 1 fixed = 566 total (was 567)
          +1 shelldocs 0m 24s The patch generated 0 new + 344 unchanged - 2 fixed = 344 total (was 346)
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 6s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-assemblies
          +1 findbugs 2m 30s the patch passed
          +1 javadoc 1m 59s the patch passed
          +1 unit 0m 27s hadoop-assemblies in the patch passed.
          +1 unit 10m 51s hadoop-common in the patch passed.
          +1 unit 2m 29s hadoop-kms in the patch passed.
          +1 asflicense 0m 45s The patch does not generate ASF License warnings.
          86m 17s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13597
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12840798/HADOOP-13597.001.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle shellcheck shelldocs
          uname Linux bee2064e3e35 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 47ca9e2
          Default Java 1.8.0_111
          shellcheck v0.4.5
          findbugs v3.0.0
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11155/testReport/
          modules C: hadoop-assemblies hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11155/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          xiaochen Xiao Chen added a comment -

          Thanks for working on this John Zhuge.

          Some review comments aside from your todos:

          • Suggest to name the new config key names XXXX_KEY, to be more consistent with HDFS/Common way.
          • Nice that we're loading env vars for compatibility. It'll make upgrades less painful. OTOH, could you add some text to the "Loaded env" logs to indicate that this is a deprecated mechanism and that users should use configuration files instead?
          • kms-default.xml has some duplicate key names (hadoop.kms.authentication.signer.secret.provider.zookeeper.XXX).
          • This exists before your patch, but good to fix along: kms-default.xml has some escaped chars (") in configuration.
          • hadoop.kms.authentication.signer.secret.provider.zookeeper.auth.type defaulted to none before. We should keep that unchanged. The kerberos default value also doesn't work; it should be sasl
          • We should add basic testing to KMSHttpServer
          • Had a quick run; kms.log seems to record every request/response. By default this shouldn't be logged.
          • After a clean install kms.log appears to be at TRACE level. Suggest making it INFO.
          jzhuge John Zhuge added a comment -

          Great comments!

          Suggest to name the new config key names XXXX_KEY

          In that case, we will have a mixed naming style in KMSConfiguration. Is that OK? The new properties are much fewer than the old ones, especially after I move the SSL properties to ssl-server.xml.

          We should add basic testing to KMSHttpServer

          KMSHttpServer is called by MiniKMS, so all its methods are exercised by the KMS unit tests. Should I add tests to ensure the legacy env variables are still supported?

          jzhuge John Zhuge added a comment -

          The HDFS/Common way of key naming makes sense; I will switch, and leave the old keys alone.

          xiaochen Xiao Chen added a comment -

          Thanks John.

          Yes agreed. Let's keep this jira focused so new configs only. The old ones aren't public so can be changed in another jira.

          re. KMSHttpServer testing, I was thinking to make the most basic checks so if something breaks it will be obvious. If it's all covered by MiniKMS and can be easily figured out, fine by me then.

          jzhuge John Zhuge added a comment -

          Patch 002

          • Add “hadoop kms” sub-command. kms.sh is now just a wrapper.
          • Read SSL configuration from ssl-server.xml
          • Put common SSL config keys in SSLConfig
          • Put common HTTP config keys in HttpConfig
          • Support all deprecated environment variables and give warning of deprecation
          • Enhanced web page /static/index.html, not /index.html due to HttpServer2 limitation

          TESTING DONE

          • Run “hadoop key list/create/delete/roll” in non-secure and SSL setup
          • All KMS unit tests that actually exercise the full-blown KMS
          • Script: hadoop kms, hadoop --daemon start|status|stop kms
          • Script: kms.sh run|start|status|stop
          • /kms/jmx, /kms/logLevel, /kms/conf, /kms/stack, /logs, and /static

          TODO

          • Integration testing
          • Update docs: index.md.vm

          TODO in new JIRAs:

          • Full SSL server configuration: includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
          • Share web apps code in Common, HDFS, and YARN
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 13s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 1m 48s Maven dependency ordering for branch
          +1 mvninstall 7m 36s trunk passed
          +1 compile 10m 0s trunk passed
          +1 checkstyle 1m 45s trunk passed
          +1 mvnsite 2m 1s trunk passed
          +1 mvneclipse 1m 10s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-assemblies
          +1 findbugs 2m 10s trunk passed
          +1 javadoc 1m 47s trunk passed
          0 mvndep 0m 21s Maven dependency ordering for patch
          +1 mvninstall 1m 22s the patch passed
          +1 compile 10m 29s the patch passed
          +1 javac 10m 29s the patch passed
          +1 checkstyle 2m 1s root: The patch generated 0 new + 236 unchanged - 9 fixed = 236 total (was 245)
          +1 mvnsite 2m 19s the patch passed
          +1 mvneclipse 1m 25s the patch passed
          -1 shellcheck 0m 17s The patch generated 5 new + 559 unchanged - 8 fixed = 564 total (was 567)
          +1 shelldocs 0m 22s The patch generated 0 new + 342 unchanged - 4 fixed = 342 total (was 346)
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 7s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-assemblies
          +1 findbugs 2m 42s the patch passed
          +1 javadoc 2m 11s the patch passed
          +1 unit 0m 28s hadoop-assemblies in the patch passed.
          +1 unit 9m 38s hadoop-common in the patch passed.
          +1 unit 2m 31s hadoop-kms in the patch passed.
          +1 asflicense 0m 48s The patch does not generate ASF License warnings.
          90m 6s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13597
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12841424/HADOOP-13597.002.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle shellcheck shelldocs
          uname Linux 8c7df46be33e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / c87b3a4
          Default Java 1.8.0_111
          shellcheck v0.4.5
          findbugs v3.0.0
          shellcheck https://builds.apache.org/job/PreCommit-HADOOP-Build/11181/artifact/patchprocess/diff-patch-shellcheck.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11181/testReport/
          modules C: hadoop-assemblies hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11181/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Show
          hadoopqa Hadoop QA added a comment - -1 overall Vote Subsystem Runtime Comment 0 reexec 0m 13s Docker mode activated. +1 @author 0m 0s The patch does not contain any @author tags. +1 test4tests 0m 0s The patch appears to include 1 new or modified test files. 0 mvndep 1m 48s Maven dependency ordering for branch +1 mvninstall 7m 36s trunk passed +1 compile 10m 0s trunk passed +1 checkstyle 1m 45s trunk passed +1 mvnsite 2m 1s trunk passed +1 mvneclipse 1m 10s trunk passed 0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-assemblies +1 findbugs 2m 10s trunk passed +1 javadoc 1m 47s trunk passed 0 mvndep 0m 21s Maven dependency ordering for patch +1 mvninstall 1m 22s the patch passed +1 compile 10m 29s the patch passed +1 javac 10m 29s the patch passed +1 checkstyle 2m 1s root: The patch generated 0 new + 236 unchanged - 9 fixed = 236 total (was 245) +1 mvnsite 2m 19s the patch passed +1 mvneclipse 1m 25s the patch passed -1 shellcheck 0m 17s The patch generated 5 new + 559 unchanged - 8 fixed = 564 total (was 567) +1 shelldocs 0m 22s The patch generated 0 new + 342 unchanged - 4 fixed = 342 total (was 346) +1 whitespace 0m 0s The patch has no whitespace issues. +1 xml 0m 7s The patch has no ill-formed XML file. 0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-assemblies +1 findbugs 2m 42s the patch passed +1 javadoc 2m 11s the patch passed +1 unit 0m 28s hadoop-assemblies in the patch passed. +1 unit 9m 38s hadoop-common in the patch passed. +1 unit 2m 31s hadoop-kms in the patch passed. +1 asflicense 0m 48s The patch does not generate ASF License warnings. 
90m 6s Subsystem Report/Notes Docker Image:yetus/hadoop:a9ad5d6 JIRA Issue HADOOP-13597 JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12841424/HADOOP-13597.002.patch Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle shellcheck shelldocs uname Linux 8c7df46be33e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux Build tool maven Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh git revision trunk / c87b3a4 Default Java 1.8.0_111 shellcheck v0.4.5 findbugs v3.0.0 shellcheck https://builds.apache.org/job/PreCommit-HADOOP-Build/11181/artifact/patchprocess/diff-patch-shellcheck.txt Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11181/testReport/ modules C: hadoop-assemblies hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: . Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11181/console Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org This message was automatically generated.
          jzhuge John Zhuge added a comment -

          Private branch for Patch 002.
          jzhuge John Zhuge added a comment - edited

          Allen Wittenauer I found a mistake in hadoop-kms.sh for Patch 002: everything is in hadoop_subcommand_kms. It should be:

          if [[ "${HADOOP_SHELL_EXECNAME}" = hadoop ]]; then
            hadoop_add_profile kms
            hadoop_add_subcommand "kms" "run KMS, the Key Management Server"
          fi
          
          function _kms_hadoop_init
          {
            # init variables
          }
          
          function hadoop_subcommand_kms
          {
            # Called by bin/hadoop to provide subcommand case statement if any
            HADOOP_SUBCMD_SUPPORTDAEMONIZATION=true
            HADOOP_CLASSNAME=org.apache.hadoop.crypto.key.kms.server.KMSHttpServer
          }
          
          function _kms_hadoop_finalize
          {
            # Called in finalize phase, all env vars are settled
            hadoop_add_param HADOOP_OPTS "-Dkms.config.dir=" \
              "-Dkms.config.dir=${HADOOP_CONF_DIR}"
            hadoop_add_param HADOOP_OPTS "-Dkms.log.dir=" \
              "-Dkms.log.dir=${HADOOP_LOG_DIR}"
          }
          

          The 3 functions are called in this order:

          1. _kms_hadoop_init
          2. hadoop_subcommand_kms
          3. _kms_hadoop_finalize
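
          The three-phase ordering above can be sketched as a hypothetical driver. This is an illustration only: run_subcommand and the stub function bodies are invented here, and the real bin/hadoop dispatch logic is considerably more involved.

          ```shell
          # Hypothetical driver illustrating the three-phase call order
          # described above; not the actual bin/hadoop dispatch code.
          _kms_hadoop_init()      { KMS_INIT_DONE=true; }       # stub: init variables
          hadoop_subcommand_kms() { HADOOP_SUBCMD_SUPPORTDAEMONIZATION=true; }
          _kms_hadoop_finalize()  { KMS_FINALIZE_DONE=true; }   # stub: finalize phase

          run_subcommand()
          {
            local subcmd=$1
            # 1. init phase: set up variables
            declare -F "_${subcmd}_hadoop_init" >/dev/null && "_${subcmd}_hadoop_init"
            # 2. subcommand case statement, provided by the shell profile
            declare -F "hadoop_subcommand_${subcmd}" >/dev/null && "hadoop_subcommand_${subcmd}"
            # 3. finalize phase: all env vars are settled by now
            declare -F "_${subcmd}_hadoop_finalize" >/dev/null && "_${subcmd}_hadoop_finalize"
          }

          run_subcommand kms
          ```

          Each hook is optional; `declare -F` simply skips phases the profile does not define.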
          aw Allen Wittenauer added a comment -

          I'm sort of surprised that the "all in one" doesn't work, other than -e should probably be -x. It's almost certainly safer and doesn't have nearly as many side effects. Anything in particular that is breaking that I could help with?

          jzhuge John Zhuge added a comment - edited

          hadoop_add_param HADOOP_OPTS "-Dkms.config.dir=" ... may be called too early. Since hadoop_subcommand_opts is called between hadoop_subcommand_kms and _kms_hadoop_finalize, in the all-in-one approach, HADOOP_KMS_OPTS=-Dkms.config.dir=.. would not have taken effect.

          jzhuge John Zhuge added a comment -

          I was wrong. The test case HADOOP_KMS_OPTS=-Dkms.config.dir=.. bin/hadoop kms works in both approaches, though for different reasons:

          • All in one: hadoop_subcommand_kms adds -Dkms.config.dir first, then HADOOP_KMS_OPTS is appended to HADOOP_OPTS. The later -Dkms.config.dir on HADOOP_OPTS takes effect.
          • Multiple handlers: HADOOP_KMS_OPTS is appended to HADOOP_OPTS first, then _kms_hadoop_finalize calls hadoop_add_param HADOOP_OPTS "-Dkms.config.dir=", which checks for the existence of the string "-Dkms.config.dir=" and decides not to add the param.
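
          The dedup behavior that makes the second case work can be sketched with a simplified hadoop_add_param. This is an approximation for illustration, not the verbatim function from hadoop-functions.sh:

          ```shell
          # Simplified sketch of hadoop_add_param's dedup behavior: append the
          # value only when the check string is not already present in the var.
          hadoop_add_param()
          {
            local envvar=$1
            local check=$2
            local value=$3
            if [[ ! "${!envvar}" =~ ${check} ]]; then
              eval "${envvar}=\"${!envvar} ${value}\""
            fi
          }

          # -Dkms.config.dir= is already present, so this call is a no-op and
          # the user-supplied value wins.
          HADOOP_OPTS="-Dkms.config.dir=/etc/kms"
          hadoop_add_param HADOOP_OPTS "-Dkms.config.dir=" "-Dkms.config.dir=/default"
          echo "${HADOOP_OPTS}"
          ```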
          jzhuge John Zhuge added a comment - edited

          We could have used the Decorator Pattern to design a wrapper that logs configuration access:

          interface ConfigurationAccess {
            String get(String name);
            int getInt(String name, int defaultValue);
          }
          class Configuration implements ConfigurationAccess {
          ...
          }
          class LoggedConfigurationAccess implements ConfigurationAccess {
            private final ConfigurationAccess conf;
            private final Log log;
            LoggedConfigurationAccess(ConfigurationAccess conf, Log log) {
              this.conf = conf;
              this.log = log;
            }
            public String get(String name) {
              String value = conf.get(name);
              log.info(..., name, value);
              return value;
            }
            public int getInt(String name, int defaultValue) {
              int value = conf.getInt(name, defaultValue);
              log.info(..., name, value, defaultValue);
              return value;
            }
          }
          

          A little downside, though: LoggedConfigurationAccess#getInt will log 2 messages because Configuration#getInt calls LoggedConfigurationAccess#get.

          jzhuge John Zhuge added a comment -

          Another approach:

          class AccessLoggingConfiguration extends Configuration {
            private final Log log;
            AccessLoggingConfiguration(Configuration conf, Log log) {
              super(conf);
              this.log = log;
            }
            public String get(String name) {
              String value = super.get(name);
              log.info(..., name, value);
              return value;
            }
            public int getInt(String name, int defaultValue) {
              int value = super.getInt(name, defaultValue);
              log.info(..., name, value, defaultValue);
              return value;
            }
          }
          
          aw Allen Wittenauer added a comment -

          Oh, I see what you were concerned about.

          Looking closer at the code, I'd change the all-in-one to actually do

           hadoop_add_param HADOOP_KMS_OPTS "-Dkms.config.dir=" ...
           hadoop_add_param HADOOP_KMS_OPTS "-Dkms.log.dir=" ...
          

          since that's much more likely the place where kms.config values get set. Then the whole scenario becomes a non-issue. There's also precedent for just doing something like:

            HADOOP_KMS_OPTS=${HADOOP_KMS_OPTS:-"-Dfoo"}
          

          (see hdfs-config.sh)

          but I'm less of a fan of that. (We should probably undo those and use the hadoop_add_param trick. It's cleaner, more practical, and does a better job of actually protecting the user from missing defaults.)

          Also, be aware of the rather severe consequences that

            hadoop_deprecate_envvar KMS_CONFIG HADOOP_CONF_DIR
            hadoop_deprecate_envvar KMS_LOG HADOOP_LOG_DIR
          

          has. If KMS_CONFIG was never an analog to HADOOP_CONF_DIR in branch-2, I'd be tempted to just throw a manual warning about it rather than go through deprecation.

          jzhuge John Zhuge added a comment -

          Patch 003

          • Add dependencySets to pom.xml
          • Add shell script hadoop_using_envvar and hadoop_mkdir
          • Extend hadoop_deprecate_envvar
          • Enhance AccessLoggingConfiguration to inherit from Configuration
          • Move the configuration keys out of SSLConfig and HttpConfig
          • HttpServer2 reads tempDirectory and header sizes from configuration instead of builder

          TESTING DONE

          • Run "hadoop key list/create/delete/roll" in non-secure and SSL setups
          • All KMS unit tests that actually exercise the full-blown KMS
          • Script: hadoop kms, hadoop --daemon start|status|stop kms
          • Script: kms.sh run|start|status|stop
          • /kms/jmx, /kms/logLevel, /kms/conf, /kms/stacks, /logs, and /static

          TODO

          • Update docs: index.md.vm

          TODO in new JIRAs:

          • Full SSL server configuration: includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
          • Share web apps code in Common, HDFS, and YARN
          jzhuge John Zhuge added a comment -

          My private branch for 003: https://github.com/jzhuge/hadoop/tree/HADOOP-13597.003
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 13s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 1m 39s Maven dependency ordering for branch
          +1 mvninstall 6m 53s trunk passed
          +1 compile 9m 35s trunk passed
          +1 checkstyle 1m 40s trunk passed
          +1 mvnsite 1m 56s trunk passed
          +1 mvneclipse 1m 8s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-assemblies
          +1 findbugs 2m 5s trunk passed
          +1 javadoc 1m 44s trunk passed
          0 mvndep 0m 20s Maven dependency ordering for patch
          +1 mvninstall 1m 13s the patch passed
          +1 compile 9m 17s the patch passed
          +1 javac 9m 17s the patch passed
          +1 checkstyle 1m 44s root: The patch generated 0 new + 233 unchanged - 8 fixed = 233 total (was 241)
          +1 mvnsite 2m 10s the patch passed
          +1 mvneclipse 1m 22s the patch passed
          -1 shellcheck 0m 16s The patch generated 1 new + 568 unchanged - 8 fixed = 569 total (was 576)
          +1 shelldocs 0m 23s The patch generated 0 new + 372 unchanged - 4 fixed = 372 total (was 376)
          +1 whitespace 0m 1s The patch has no whitespace issues.
          +1 xml 0m 6s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-assemblies
          +1 findbugs 2m 30s the patch passed
          +1 javadoc 1m 58s the patch passed
          +1 unit 0m 28s hadoop-assemblies in the patch passed.
          +1 unit 9m 50s hadoop-common in the patch passed.
          +1 unit 2m 27s hadoop-kms in the patch passed.
          +1 asflicense 0m 45s The patch does not generate ASF License warnings.
          86m 13s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13597
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12842284/HADOOP-13597.003.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml shellcheck shelldocs findbugs checkstyle
          uname Linux 9910c5794984 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9ef89ed
          Default Java 1.8.0_111
          shellcheck v0.4.5
          findbugs v3.0.0
          shellcheck https://builds.apache.org/job/PreCommit-HADOOP-Build/11214/artifact/patchprocess/diff-patch-shellcheck.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11214/testReport/
          modules C: hadoop-assemblies hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11214/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          jojochuang Wei-Chiu Chuang added a comment -

          Hi John Zhuge, thanks for working on this big patch!

          • There's one bit that might cause confusion in deployment. The fact that the keystore password could come from an environment variable, a configuration file, or credential files (via Configuration#getPassword) makes me feel a bit uneasy. If the password comes from a credential file, it will also need ProviderUtils.excludeIncompatibleCredentialProviders in order to trim credential files on HdfsFileSystems.
          • It seems the KMS server is not Kerberized. You might want to construct a HttpServer2 object with extra options to enable Kerberos:
            new HttpServer2.Builder().setSecurityEnabled(true)
                      .setUsernameConfKey(PRINCIPAL)
                      .setKeytabConfKey(KEYTAB);
            
          • When KMSWebServer starts/stops, it should print corresponding message using StringUtils.startupShutdownMessage. This will make supporters' life easier.
          • I think you cannot assume the admin user is user.name=kms when accessing servlets such as jmx, logLevel, etc. Also, make sure access permission is guarded properly when accessing these servlets.
          • I am not sure how existing KMS handles truststore and its password, but I think you might be missing something in the new KMS when handling truststore and its password.
          • The keystore password comes from the configuration key hadoop.security.keystore.java-keystore-provider.password-file. If I understand ConfigRedactor correctly, it doesn't seem to redact this specific configuration key. Could you double-check?
          • In Configuration#getPasswordString, please log the name if there's an IOException. Also, should it catch IOException and return null? If it looks for a password but is unable to find one, would it be easier to let the exception be thrown? Otherwise it could be a troubleshooting nightmare, I imagine.
          jzhuge John Zhuge added a comment -

          Thanks Wei-Chiu Chuang for the review!

          There's one bit that might cause confusion in deployment. The fact that keystore password could come from either environment variable, from configuration file or credential files (via Configuration#getPassword) make me feel a bit uneasy. If the password comes from a credential file, it will also need to ProviderUtils.excludeIncompatibleCredentialProviders in order to trim credential files on HdfsFileSystems.

          Very good point! I will start a wider discussion. As a precaution, I could revert to the existing KMS approach, which does not consult a credential provider, and file a separate JIRA to integrate with the credential provider.

          It seems the KMS server is not Kerberized. You might want to construct a HttpServer2 object with extra options to enable Kerberos:

          KMS uses the KMSAuthenticationFilter specified in web.xml instead of the generic AuthenticationFilter used by HttpServer2.

          When KMSWebServer starts/stops, it should print corresponding message using StringUtils.startupShutdownMessage. This will make supporters' life easier.

          Will do.

          I think you can not assume the admin user is user.name=kms when accessing the servlets such as jmx, loglevel, etc. Also, need to make sure access permission is guarded properly accessing these servlets.

          I don't think the existing KMS supports an admin user. Will add this feature.

          I am not sure how existing KMS handles truststore and its password, but I think you might be missing something in the new KMS when handling truststore and its password.

          Truststore password is obsoleted by HADOOP-13864.

          The keystore password comes from configuration key hadoop.security.keystore.java-keystore-provider.password-file. If I understand ConfigRedact correctly it doesn't seem to redact this specific configuration key to me. Could you double check?

          This key is for the password file, not the password, so it does not have to be redacted.

          In Configuration#getPasswordString, please print name if there's an IOException to log. Also, should it catch IOException and return null? If it looks for a password but is unable to, would it be easier to let the exception be thrown? It could be a troubleshooting nightmare I imagine.

          Good point. getPasswordString seems unnecessary. I will remove it.

          aw Allen Wittenauer added a comment -
          +  if [[ -z "${oldval}" ]]; then
          +    return
          

          This is an even worse side effect since it means that we don't promote old values into new ones when the vars have been renamed. (There are a lot of them!)

          Also, where are the unit tests for the bash functions you added?

          jzhuge John Zhuge added a comment -

          Allen Wittenauer I tried to preserve exactly the same behavior as the existing hadoop_deprecate_envvar when newvar is not empty. Isn't -z the reverse of -n? Did I miss something?

          The current code:

          function hadoop_deprecate_envvar
          {
            local oldvar=$1
            local newvar=$2
            local oldval=${!oldvar}
            local newval=${!newvar}
          
            if [[ -n "${oldval}" ]]; then          <<<<<<<====
              hadoop_error "WARNING: ${oldvar} has been replaced by ${newvar}. Using value of ${oldvar}."
              # shellcheck disable=SC2086
              eval ${newvar}=\"${oldval}\"
          
              # shellcheck disable=SC2086
              newval=${oldval}
          
              # shellcheck disable=SC2086
              eval ${newvar}=\"${newval}\"
            fi
          }
          
          jzhuge John Zhuge added a comment - edited

          However, I do realize that the last elif condition can be omitted and changed to else:

            if [[ -z "${oldval}" ]]; then
              return
            elif [[ -z "${newvar}" ]]; then
              hadoop_error "WARNING: ${oldvar} has been deprecated."
            elif [[ -n "${oldval}" && -n "${newvar}" ]]; then
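
          Put together, the refactored function might look like the sketch below. This is not the committed version: hadoop_error is stubbed for illustration, and the body merely combines the snippets discussed above.

          ```shell
          # Sketch of the refactored hadoop_deprecate_envvar with the final
          # elif collapsed to a plain else; hadoop_error stubbed for illustration.
          hadoop_error() { echo "$*" >&2; }

          hadoop_deprecate_envvar()
          {
            local oldvar=$1
            local newvar=$2
            local oldval=${!oldvar}
            if [[ -z "${oldval}" ]]; then
              # old var unset: nothing to warn about, nothing to promote
              return
            elif [[ -z "${newvar}" ]]; then
              # deprecated with no replacement
              hadoop_error "WARNING: ${oldvar} has been deprecated."
            else
              # promote the old value into the new variable
              hadoop_error "WARNING: ${oldvar} has been replaced by ${newvar}. Using value of ${oldvar}."
              eval "${newvar}=\"${oldval}\""
            fi
          }
          ```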
          
          rkanter Robert Kanter added a comment -

          A few additional comments:

          1. I don't know if getPasswordString is a good idea. Won't that just make things confusing for users? They try to set a password in the config, but it ends up being null (probably an NPE?) instead of throwing an IOE about not finding the password. The latter would make the problem clearer.
          2. Should we mark HttpServer2#HTTP_MAX_THREADS as @deprecated?
          3. Not your doing, but there's a typo in an HttpServer2 comment: explicitly destroy the secrete provider
          jzhuge John Zhuge added a comment -

          Thanks Robert for the review!

          Fixed 2 and 3.

          1. I will post a simplified Configuration#getPasswordString in the next patch. It may still return null, though, for several reasons: 1) to be consistent with getPassword; 2) some passwords retrieved are simply stored somewhere and may never be used at all; if they are accessed, an NPE is an ok indicator; 3) HttpServer2/SSLFactory callers can handle null passwords.

          xiaochen Xiao Chen added a comment -

          Thanks John for revving and others for reviewing.

          My comments below:

          • HttpServer2: I don't understand the skipSecretProvider. Could you explain and add some comments/javadocs? Is this related to the AuthenticationFilter and its secret provider startup tricks?
          • HttpServer2: Possible to have createHttpsChannelConnector call createHttpChannelConnector first, to reduce duplicate code? The httpConfig.setSecureScheme(HTTPS_SCHEME); line seems reasonable to move inside the method too.
          • I found the AccessLoggingConfiguration class naming confusing. Looking at the class javadoc didn't help much either; I only figured it out after looking at the code usage. I can't find a good replacement in my vocabulary (appreciate it if anyone else has better naming), but we should state in the javadoc: 1. this is a configuration object that logs each access; 2. it redacts sensitive information. Actually, maybe this class should compose a Configuration rather than inherit from it? At least whoever uses it later doesn't have to figure out which methods are overrides, etc. (BTW, @Override annotations are currently missing on all methods, and set seems to be missing a super.set.)
          • KMSWebServer: Nit - I think Hadoop code mostly puts a static import on one line and ignores the 80-char rule.
          • KMSWebServer: Totally theoretical, but it may be good to also have an isAlive method, and probably add a waitActive-ish method to the MiniKMS, so interested tests can call that and reduce flakiness due to start-up races.
          • Searching for tomcat, I still see several lingering references: AuthenticationFilter's var name isInitializedByTomcat, CommandsManual.md and SecureMode.md, KMS's doc index.md.vm, and some code comments, etc.
          • For the passwords, I agree with Robert on supportability. However, I also see similar code in DFSUtil (loadSslConfToHttpServerBuilder and getPassword). Were these copied over? We should at least move that to a common util and avoid this level of duplication. That would probably spare us from changing Configuration, adding a wrapper util instead. Or, per Wei-Chiu's suggestion, maybe it's not needed any more. More javadocs appreciated here too, as to why such a method is needed.
          • Didn't see an answer to Allen's ask about unit tests. (Take a look at hadoop-common-project/hadoop-common/src/test/scripts if you're wondering how that's done.)
          • Nit: kms-site.xml follows the other -site.xmls in having a comment line "put site-specific...", which is good. Please follow them more closely and put this line before the <configuration> element.
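          For reference, the conventional -site.xml layout being asked for here puts the comment before the <configuration> element (the comment wording follows the stock Hadoop template; the property shown is just an illustrative example):

```xml
<?xml version="1.0"?>
<!--
  Put site-specific property overrides in this file.
-->
<configuration>
  <property>
    <name>hadoop.kms.http.port</name>
    <value>9600</value>
  </property>
</configuration>
```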
          jzhuge John Zhuge added a comment -

          Thanks a lot, Xiao!

          HttpServer2: I don't understand the skipSecretProvider. Could you explain and add some comments/javadocs? Is this related to the AuthenticationFilter and its secret provider startup tricks?

          Remove it. Don't recall why I added it.

          HttpServer2: Possible to have createHttpsChannelConnector call createHttpChannelConnector first, to reduce duplicate code? The httpConfig.setSecureScheme(HTTPS_SCHEME); line seems reasonable to move inside the method too.

          Fixed.

          I found the AccessLoggingConfiguration class naming confusing. Looking at the class javadoc didn't help much either; I only figured it out after looking at the code usage. I can't find a good replacement in my vocabulary (appreciate it if anyone else has better naming), but we should state in the javadoc: 1. this is a configuration object that logs each access; 2. it redacts sensitive information. Actually, maybe this class should compose a Configuration rather than inherit from it? At least whoever uses it later doesn't have to figure out which methods are overrides, etc. (BTW, @Override annotations are currently missing on all methods, and set seems to be missing a super.set.)

          Fixed. Is ConfigurationWithLogging a better name? This class is extremely useful during development. Now that it has served its purpose, I am OK with removing it.
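          For illustration only, the composition variant suggested above could look roughly like the toy sketch below. The Map-backed store stands in for Hadoop's Configuration, and the redaction rule is deliberately simplified; none of this is the actual patch code.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy sketch of the composition approach: wrap a config store rather than
 * subclass Configuration, logging (with redaction) on every access.
 */
public class LoggingConfig {
    private final Map<String, String> conf = new HashMap<>();  // composed, not inherited

    // Simplified stand-in for real sensitive-key redaction.
    private static String redact(String name, String value) {
        return name.toLowerCase().contains("password") ? "<redacted>" : value;
    }

    public String get(String name) {
        String value = conf.get(name);
        System.out.println("Got " + name + " = '" + redact(name, value) + "'");
        return value;
    }

    public void set(String name, String value) {
        System.out.println("Set " + name + " to '" + redact(name, value) + "'");
        conf.put(name, value);  // no super.set to forget
    }

    public static void main(String[] args) {
        LoggingConfig conf = new LoggingConfig();
        conf.set("hadoop.kms.http.port", "9600");
        conf.set("ssl.server.keystore.password", "secret");
        conf.get("hadoop.kms.http.port");
    }
}
```

          With composition, every public method is plainly part of the wrapper's own API, so there is no override bookkeeping and no risk of missing a super call.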

          KMSWebServer: Nit - I think Hadoop code mostly puts a static import on one line and ignores the 80-char rule.

          Fixed.

          KMSWebServer: Totally theoretical, but it may be good to also have an isAlive method, and probably add a waitActive-ish method to the MiniKMS, so interested tests can call that and reduce flakiness due to start-up races.

          Good idea. Can we create a follow-up JIRA to add it later when we modify the flaky tests? Otherwise, the new method is unused and untested.

          Searching for tomcat, I still see several lingering references: AuthenticationFilter's var name isInitializedByTomcat, CommandsManual.md and SecureMode.md, KMS's doc index.md.vm, and some code comments, etc.

          TODO: update the docs. Investigating AuthenticationFilter#isInitializedByTomcat.

          For the passwords, I agree with Robert on supportability. However, I also see similar code in DFSUtil (loadSslConfToHttpServerBuilder and getPassword). Were these copied over? We should at least move that to a common util and avoid this level of duplication. That would probably spare us from changing Configuration, adding a wrapper util instead. Or, per Wei-Chiu's suggestion, maybe it's not needed any more. More javadocs appreciated here too, as to why such a method is needed.

          I will start a separate thread. There is quite a bit of duplicate code in HDFS, YARN, and AWS.

          Didn't see an answer to Allen's ask about unit tests. (Take a look at hadoop-common-project/hadoop-common/src/test/scripts if you're wondering how that's done.)

          Fixed.

          Nit: kms-site.xml follows the other -site.xmls in having a comment line "put site-specific...", which is good. Please follow them more closely and put this line before the <configuration> element.

          Fixed.

          jzhuge John Zhuge added a comment -

          These are getPassword variations:

          • S3AUtils#lookupPassword(Configuration conf, String key, String defVal)
          • LdapGroupsMapping#getPassword(Configuration conf, String alias, String defaultPass)
          • FileBasedKeyStoresFactory#getPassword(Configuration conf, String alias, String defaultPass)
          • DFSUtil#getPassword(Configuration conf, String alias)
          • WebAppUtils#getPassword(Configuration conf, String alias)

          These are various features of these wrappers:

          • Return String type. They all do that. What was the reason for Configuration#getPassword to return char[]? A char[] is hard to work with, especially given the possibility of null.
          • Default value. Some APIs take a default value, thus avoiding a null return.
          • Swallow the IOE from Configuration#getPassword and return the default value or null.
          • Re-throw the IOE from Configuration#getPassword with an extra message.
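          A shared wrapper covering those features might look like the sketch below. To keep it self-contained, the PasswordSource interface stands in for Configuration#getPassword; the method name and behavior (String return, default value, swallowed IOE) follow the variations listed above, but this is an illustration, not the patch code.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class PasswordUtil {

    /** Stand-in for Configuration#getPassword, so the sketch compiles on its own. */
    interface PasswordSource {
        char[] getPassword(String alias) throws IOException;
    }

    /**
     * Common-util wrapper sketch: converts char[] to String, falls back to a
     * default value on a missing alias, and swallows the IOE. A stricter
     * variant could re-throw the IOE with extra context instead.
     */
    static String getPasswordString(PasswordSource conf, String alias, String defVal) {
        try {
            char[] pass = conf.getPassword(alias);
            return pass != null ? new String(pass) : defVal;
        } catch (IOException e) {
            return defVal;
        }
    }

    public static void main(String[] args) {
        Map<String, char[]> store = new HashMap<>();
        store.put("ssl.server.keystore.password", "secret".toCharArray());
        PasswordSource src = store::get;
        // Found alias returns its value; missing alias falls back to the default.
        System.out.println(getPasswordString(src, "ssl.server.keystore.password", "changeit"));
        System.out.println(getPasswordString(src, "missing.alias", "changeit"));
    }
}
```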
          jzhuge John Zhuge added a comment -

          Patch 004

          • Add unit tests for shell scripts
          • Add HttpServer2 Builder API setSSLConf to provide an alternative to keyStore/keyPassword/trustStore APIs.
          • Revert skipSecretProvider and setContextPath
          • Revert Configuration#getPasswordString
          • Remove AuthenticationFilter#isInitializedByTomcat and related code
          • Rename AccessLoggingConfiguration to ConfigurationWithLogging

          TESTING DONE

          • hadoop key list/create/delete/roll in non-secure and SSL setup
          • hadoop kms
          • hadoop --daemon start|status|stop kms
          • kms.sh run|start|status|stop
          • hadoop daemonlog
          • KMS unit tests
          • Shell script unit tests
          • /jmx, /logLevel, /conf, /stacks, /logs, and /static/index.html

          TODO

          • Update docs: index.md.vm, CommandsManual.md, and SecureMode.md

          TODO in new JIRAs:

          • HADOOP-13875: Full SSL server configuration: includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
          • Share web apps code in Common, HDFS, and YARN

          Private branch: https://github.com/jzhuge/hadoop/tree/HADOOP-13597.004

          jzhuge John Zhuge added a comment -

          Here is a link to some tests I use: https://github.com/jzhuge/kms-httpfs-test.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 12s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 4 new or modified test files.
          0 mvndep 0m 17s Maven dependency ordering for branch
          +1 mvninstall 6m 49s trunk passed
          +1 compile 9m 34s trunk passed
          +1 checkstyle 1m 38s trunk passed
          +1 mvnsite 2m 27s trunk passed
          +1 mvneclipse 1m 32s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-assemblies
          +1 findbugs 2m 40s trunk passed
          +1 javadoc 2m 9s trunk passed
          0 mvndep 0m 20s Maven dependency ordering for patch
          +1 mvninstall 1m 32s the patch passed
          +1 compile 9m 16s the patch passed
          -1 javac 9m 16s root generated 4 new + 709 unchanged - 0 fixed = 713 total (was 709)
          -0 checkstyle 1m 42s root: The patch generated 1 new + 98 unchanged - 6 fixed = 99 total (was 104)
          +1 mvnsite 2m 44s the patch passed
          +1 mvneclipse 1m 50s the patch passed
          +1 shellcheck 0m 14s The patch generated 0 new + 568 unchanged - 8 fixed = 568 total (was 576)
          +1 shelldocs 0m 22s The patch generated 0 new + 372 unchanged - 4 fixed = 372 total (was 376)
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 6s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-assemblies
          +1 findbugs 3m 13s the patch passed
          +1 javadoc 2m 28s the patch passed
          +1 unit 0m 28s hadoop-assemblies in the patch passed.
          -1 unit 3m 42s hadoop-auth in the patch failed.
          -1 unit 8m 9s hadoop-common in the patch failed.
          -1 unit 2m 8s hadoop-kms in the patch failed.
          +1 asflicense 0m 44s The patch does not generate ASF License warnings.
          91m 5s



          Reason Tests
          Failed junit tests hadoop.security.authentication.client.TestPseudoAuthenticator
            hadoop.security.authentication.server.TestAuthenticationFilter
            hadoop.security.authentication.client.TestKerberosAuthenticator
            hadoop.security.token.delegation.web.TestWebDelegationToken
            hadoop.crypto.key.kms.server.TestKMSWithZK
            hadoop.crypto.key.kms.server.TestKMS



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13597
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12842695/HADOOP-13597.004.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle shellcheck shelldocs
          uname Linux 4c70ac35f5e4 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 4c38f11
          Default Java 1.8.0_111
          shellcheck v0.4.5
          findbugs v3.0.0
          javac https://builds.apache.org/job/PreCommit-HADOOP-Build/11242/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/11242/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/11242/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-auth.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/11242/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/11242/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-kms.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11242/testReport/
          modules C: hadoop-assemblies hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11242/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          aw Allen Wittenauer added a comment -

          OK, I misread oldvar as newvar in the patch file. I'm not a fan of the change since it just increases the complexity of the code and the run time when oldvar is in use.

          jzhuge John Zhuge added a comment -

          Agree with your complexity concern. I initially created a separate API but decided to multiplex hadoop_deprecate_envvar because it is such a good name. Any suggestion for the separate API? How about hadoop_retire_envvar?
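          A hypothetical sketch of what such a function might look like, following the style of the existing hadoop-functions.sh helpers (the function name and message wording are illustrative, not part of any patch):

```shell
#!/usr/bin/env bash

# Hypothetical hadoop_retire_envvar: warn that a removed env var is now
# ignored, without mapping it to a replacement variable the way
# hadoop_deprecate_envvar does.
hadoop_retire_envvar() {
  local oldvar=$1
  # ${!oldvar} is bash indirect expansion: the value of the variable
  # whose name is stored in oldvar.
  if [[ -n "${!oldvar}" ]]; then
    echo "WARNING: ${oldvar} has been removed and is ignored." >&2
  fi
}

# Example: warns only if the retired variable is actually set.
KMS_ADMIN_PORT=16001
hadoop_retire_envvar KMS_ADMIN_PORT
```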

          aw Allen Wittenauer added a comment - - edited
          +  hadoop_deprecate_envvar CATALINA_OUT
          +  hadoop_deprecate_envvar CATALINA_PID
          +  hadoop_deprecate_envvar KMS_ADMIN_PORT
          +  hadoop_deprecate_envvar KMS_CATALINA_HOME
          +  hadoop_deprecate_envvar KMS_SSL_TRUSTSTORE_PASS
          

          We don't do this anywhere in the scripts. Instead, this is documented in the release notes. It's just extra console noise otherwise.

          +  hadoop_using_envvar KMS_HOME
          

          This doesn't appear to have actually been configurable by users. I don't see a reason to add it now.

          +  hadoop_using_envvar KMS_HTTP_PORT
          +  hadoop_using_envvar KMS_LOG
          +  hadoop_using_envvar KMS_MAX_HTTP_HEADER_SIZE
          +  hadoop_using_envvar KMS_MAX_THREADS
          +  hadoop_using_envvar KMS_SSL_ENABLED
          +  hadoop_using_envvar KMS_SSL_KEYSTORE_FILE
          +  hadoop_using_envvar KMS_SSL_KEYSTORE_PASS
          +  hadoop_using_envvar KMS_TEMP
          

          I know that branch-2 spit out a bunch of stuff, but it always felt wrong. Is this actually valuable to anyone who isn't a developer? Would --debug be a better fit here? It seems like a lot of console noise that is probably more appropriate for a log file.

          +  hadoop_using_envvar KMS_SSL_KEYSTORE_PASS
          

          Is this actually printing a password to the screen?!? Is there any chance we can switch this to being read from a file? env vars are exposed in /proc on some OSes...

          hadoop_mkdir
          

          We have a bunch of places where this same construct is being used. We should probably replace all of them if we're going to add a function to do it.

          FWIW, I definitely prefer the single function for handling kms. So Much Better. (And I'm really ecstatic about dropping kms-config.sh, etc.)

          jzhuge John Zhuge added a comment -

          Thanks for the review.

          Will remove hadoop_deprecate_envvar and document the deprecated envvars in Release Notes.

          Will switch hadoop_using_envvar to use hadoop_debug, and will skip password envvars.

          Will change other similar places to use hadoop_mkdir.

          jzhuge John Zhuge added a comment -

          We should keep that unchanged. The kerberos default value also doesn't work; it should be sasl.

          Xiao Chen I don't understand. Could you check Patch 004? Thanks.

          jzhuge John Zhuge added a comment -

          I think it is fixed:

            <property>
              <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.auth.type</name>
              <value>none</value>
              <description>
                The Zookeeper authentication type, 'none' (default) or 'sasl' (Kerberos).
              </description>
            </property>
          
          jzhuge John Zhuge added a comment -

          Patch 005

          • Set HttpServer2.Builder#authFilterConfigurationPrefix to integrate with HttpServer2’s secret provider
          • Rename AuthenticationFilter#isInitializedByTomcat to destroySecretProvider
          • Update docs: index.md.vm and CommandsManual.md

          TESTING DONE

          • hadoop key list/create/delete/roll in non-secure and SSL setup
          • hadoop --daemon start|stop kms
          • kms.sh run
          • hadoop daemonlog
          • KMS unit tests
          • dist-test for module hadoop-common and hadoop-hdfs: http://dist-test.cloudera.org/job?job_id=hadoop.jzhuge.1481879379.20252, 10 unrelated test failures.
          • Shell script unit tests
          • /jmx, /logLevel, /conf, /stacks, /logs, and /static/index.html

          TODO in new JIRAs:

          • Update doc CredentialProviderAPI.md and SecureMode.md for both KMS and HttpFS
          • HADOOP-13875: Full SSL server configuration: includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
          • Share web apps code in Common, HDFS, and YARN

          Allen Wittenauer, Xiao Chen, Wei-Chiu Chuang, Robert Kanter, please review. I believe all review comments are either addressed or to be followed up in future JIRAs. Let me know otherwise.

          xiaochen Xiao Chen added a comment -

          Thanks for the new rev, John Zhuge. Feels pretty close to me; all comments seem to be addressed.

          I only have the following book-keeping comments; other than that, LGTM.

          • ConfigurationWithLogging.java: the javadoc should mention redaction.
          • kms doc: we should still mention that 9600 is the default port in KMS Configuration.
          • kms doc: the envvars (e.g. KMS_MAX_THREADS) need documentation.
          • kms doc: /jmx, /logLevel, /conf, /stacks, /logs, and /static/index.html need documentation.
          • The linked 'breaks' HADOOP-13872 KMS JMX exception seems to be fixed by this. Could you confirm, and if so, close that one out? I don't see any link option close to 'fixed by', but I think 'relates to' with a comment should be clear enough.

          It's a fairly big patch; I'd appreciate it if other reviewers could take a look.

          jzhuge John Zhuge added a comment - - edited

          Patch 006

          • Xiao's comments

          TESTING DONE

          • Automated regression tests for KMS https://github.com/jzhuge/hadoop-regression-tests in unsecure and ssl mode
          • View docs
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          0 mvndep 0m 18s Maven dependency ordering for branch
          +1 mvninstall 13m 44s trunk passed
          +1 compile 11m 13s trunk passed
          +1 checkstyle 1m 41s trunk passed
          +1 mvnsite 2m 36s trunk passed
          +1 mvneclipse 1m 38s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-assemblies
          +1 findbugs 2m 43s trunk passed
          +1 javadoc 2m 16s trunk passed
          0 mvndep 0m 19s Maven dependency ordering for patch
          +1 mvninstall 1m 36s the patch passed
          +1 compile 10m 31s the patch passed
          -1 javac 10m 31s root generated 4 new + 690 unchanged - 0 fixed = 694 total (was 690)
          -0 checkstyle 1m 56s root: The patch generated 1 new + 98 unchanged - 6 fixed = 99 total (was 104)
          +1 mvnsite 3m 35s the patch passed
          +1 mvneclipse 1m 51s the patch passed
          +1 shellcheck 0m 17s The patch generated 0 new + 566 unchanged - 9 fixed = 566 total (was 575)
          +1 shelldocs 0m 24s The patch generated 0 new + 372 unchanged - 4 fixed = 372 total (was 376)
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 7s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-assemblies
          +1 findbugs 3m 41s the patch passed
          +1 javadoc 2m 38s the patch passed
          +1 unit 0m 29s hadoop-assemblies in the patch passed.
          +1 unit 3m 53s hadoop-auth in the patch passed.
          +1 unit 11m 2s hadoop-common in the patch passed.
          +1 unit 2m 37s hadoop-kms in the patch passed.
          +1 asflicense 0m 53s The patch does not generate ASF License warnings.
          107m 29s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13597
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12844463/HADOOP-13597.006.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle shellcheck shelldocs
          uname Linux 6fc5958eea33 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 22befbd
          Default Java 1.8.0_111
          shellcheck v0.4.5
          findbugs v3.0.0
          javac https://builds.apache.org/job/PreCommit-HADOOP-Build/11312/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/11312/artifact/patchprocess/diff-checkstyle-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11312/testReport/
          modules C: hadoop-assemblies hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11312/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          jzhuge John Zhuge added a comment -

          Puzzled by the "ConfigurationWithLogging.java:0:: Missing package-info.java file." checkstyle warning. Didn't see the issue in previous patches.

          The javac deprecation warnings can be ignored; I do not want to touch too much code in this patch.

          xiaochen Xiao Chen added a comment -

          Thanks John Zhuge for revving. I think the patch is generally ready. Considering the holiday season, we should wait a while for feedback from others.
          I'll go ahead and review the httpfs one first and check back later next week. Hopefully we can get this in right at the start of the new year, if not earlier.

          jojochuang Wei-Chiu Chuang added a comment -

          Thanks John Zhuge and Xiao Chen for driving this forward. The patch looks good to me.

          There's one nit in doc CommandsManual.md though:

          However, the command does not support KMS server, because its web interface is based on Tomcat, which does not support the servlet.

          This statement will no longer be valid after this change.

          xiaochen Xiao Chen added a comment -

          Thanks Wei-Chiu Chuang for the review. Good catch. Searching for Tomcat, I can also see that SecureMode.md mentions it; that is probably better changed in the httpfs JIRA, though, since that goes in later. There is also one reference to Tomcat in hadoop-common-project/hadoop-kms/dev-support/findbugsExcludeFile.xml, which can be updated here.

          +1 pending those 2 nits though.

          Since it's a fairly big and impactful patch, will let it float a few days in case other watchers have comments. Plan to commit by Thursday if no objections.

          jzhuge John Zhuge added a comment -

          Patch 007

          • Update CommandsManual.md and findbugsExcludeFile.xml

          TESTING DONE

          • KMS Bats regression tests https://github.com/jzhuge/hadoop-regression-tests in insecure and ssl mode
          • Verify CommandsManual.html
          • /jmx, /logLevel, /conf, /stacks, /logs, and /static/index.html
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 13s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          0 mvndep 1m 47s Maven dependency ordering for branch
          +1 mvninstall 12m 31s trunk passed
          +1 compile 9m 40s trunk passed
          +1 checkstyle 1m 37s trunk passed
          +1 mvnsite 2m 23s trunk passed
          +1 mvneclipse 1m 31s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-assemblies
          +1 findbugs 2m 40s trunk passed
          +1 javadoc 2m 10s trunk passed
          0 mvndep 0m 20s Maven dependency ordering for patch
          +1 mvninstall 1m 32s the patch passed
          +1 compile 9m 23s the patch passed
          -1 javac 9m 23s root generated 4 new + 690 unchanged - 0 fixed = 694 total (was 690)
          -0 checkstyle 1m 42s root: The patch generated 1 new + 98 unchanged - 6 fixed = 99 total (was 104)
          +1 mvnsite 3m 9s the patch passed
          +1 mvneclipse 1m 51s the patch passed
          +1 shellcheck 0m 16s The patch generated 0 new + 566 unchanged - 9 fixed = 566 total (was 575)
          +1 shelldocs 0m 23s The patch generated 0 new + 372 unchanged - 4 fixed = 372 total (was 376)
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 7s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-assemblies
          +1 findbugs 3m 25s the patch passed
          +1 javadoc 2m 38s the patch passed
          +1 unit 0m 27s hadoop-assemblies in the patch passed.
          +1 unit 3m 47s hadoop-auth in the patch passed.
          +1 unit 10m 14s hadoop-common in the patch passed.
          +1 unit 2m 28s hadoop-kms in the patch passed.
          +1 asflicense 0m 46s The patch does not generate ASF License warnings.
          102m 9s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HADOOP-13597
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12845487/HADOOP-13597.007.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle shellcheck shelldocs
          uname Linux 9481063edf8a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / e49e0a6
          Default Java 1.8.0_111
          shellcheck v0.4.5
          findbugs v3.0.0
          javac https://builds.apache.org/job/PreCommit-HADOOP-Build/11351/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/11351/artifact/patchprocess/diff-checkstyle-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11351/testReport/
          modules C: hadoop-assemblies hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11351/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          xiaochen Xiao Chen added a comment -

          Had a final pass of the patch, +1.

          Verified the built documentation looks good.
          Verified no perf regression on my local Mac, in a simple non-SSL test, using ApacheBench:

          ab -n 10000 -c 10 -T "application/json" -p postfile "http://IP:9600/kms/v1/keyversion/k1%408/_eek?eek_op=decrypt&user.name=whatever"

          before
          This is ApacheBench, Version 2.3 <$Revision: 1748469 $>
          Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
          Licensed to The Apache Software Foundation, http://www.apache.org/
          
          Benchmarking 172.16.3.181 (be patient)
          Completed 1000 requests
          Completed 2000 requests
          Completed 3000 requests
          Completed 4000 requests
          Completed 5000 requests
          Completed 6000 requests
          Completed 7000 requests
          Completed 8000 requests
          Completed 9000 requests
          Completed 10000 requests
          Finished 10000 requests
          
          
          Server Software:        Apache-Coyote/1.1
          Server Hostname:        172.16.3.181
          Server Port:            9600
          
          Document Path:          /kms/v1/keyversion/k1%408/_eek?eek_op=decrypt&user.name=whatever
          Document Length:        84 bytes
          
          Concurrency Level:      10
          Time taken for tests:   1.881 seconds
          Complete requests:      10000
          Failed requests:        0
          Total transferred:      3540000 bytes
          Total body sent:        2940000
          HTML transferred:       840000 bytes
          Requests per second:    5316.11 [#/sec] (mean)
          Time per request:       1.881 [ms] (mean)
          Time per request:       0.188 [ms] (mean, across all concurrent requests)
          Transfer rate:          1837.79 [Kbytes/sec] received
                                  1526.30 kb/s sent
                                  3364.10 kb/s total
          
          Connection Times (ms)
                        min  mean[+/-sd] median   max
          Connect:        0    0   0.3      0      17
          Processing:     1    1   1.5      1      49
          Waiting:        0    1   1.4      1      49
          Total:          1    2   1.5      2      49
          
          Percentage of the requests served within a certain time (ms)
            50%      2
            66%      2
            75%      2
            80%      2
            90%      3
            95%      3
            98%      5
            99%      7
           100%     49 (longest request)
          
          after
          xiao-MBP:Downloads xiao$ ab -n 5000 -c 10 -T "application/json" -p postfile  "http://172.16.3.181:9600/kms/v1/keyversion/k1%408/_eek?eek_op=decrypt&user.name=whatever"
          This is ApacheBench, Version 2.3 <$Revision: 1748469 $>
          Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
          Licensed to The Apache Software Foundation, http://www.apache.org/
          
          Benchmarking 172.16.3.181 (be patient)
          Completed 500 requests
          Completed 1000 requests
          Completed 1500 requests
          Completed 2000 requests
          Completed 2500 requests
          Completed 3000 requests
          Completed 3500 requests
          Completed 4000 requests
          Completed 4500 requests
          Completed 5000 requests
          Finished 5000 requests
          
          
          Server Software:        Jetty(9.3.11.v20160721)
          Server Hostname:        172.16.3.181
          Server Port:            9600
          
          Document Path:          /kms/v1/keyversion/k1%408/_eek?eek_op=decrypt&user.name=whatever
          Document Length:        84 bytes
          
          Concurrency Level:      10
          Time taken for tests:   0.949 seconds
          Complete requests:      5000
          Failed requests:        0
          Total transferred:      2315000 bytes
          Total body sent:        1475000
          HTML transferred:       420000 bytes
          Requests per second:    5267.81 [#/sec] (mean)
          Time per request:       1.898 [ms] (mean)
          Time per request:       0.190 [ms] (mean, across all concurrent requests)
          Transfer rate:          2381.83 [Kbytes/sec] received
                                  1517.58 kb/s sent
                                  3899.41 kb/s total
          
          Connection Times (ms)
                        min  mean[+/-sd] median   max
          Connect:        0    1   0.3      0       3
          Processing:     1    1   0.8      1      12
          Waiting:        0    1   0.7      1      11
          Total:          1    2   0.9      2      12
          ERROR: The median and mean for the initial connection time are more than twice the standard
                 deviation apart. These results are NOT reliable.
          
          Percentage of the requests served within a certain time (ms)
            50%      2
            66%      2
            75%      2
            80%      2
            90%      3
            95%      3
            98%      4
            99%      4
           100%     12 (longest request)
          

          (Tried 10000, but it seems to hang around 6k requests for a while; my guess is it's because of some changed defaults. IMO, as long as the latency is at the same level, we can always tune the threads as needed.)
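
           As a side note on the thread tuning mentioned above: per the release note, the thread count is now set via configuration rather than the deprecated KMS_MAX_THREADS environment variable. A minimal kms-site.xml sketch (the value is illustrative, not a recommendation):

```xml
<!-- kms-site.xml: replaces the deprecated KMS_MAX_THREADS env var.
     The value below is illustrative; size it to your workload. -->
<property>
  <name>hadoop.http.max.threads</name>
  <value>1000</value>
</property>
```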

          Committing this. Great work here, John Zhuge! Do you mind filing a follow-on to fix the javac warnings?

          xiaochen Xiao Chen added a comment -

          Committed to trunk.

          Thanks John Zhuge for the contribution, and all reviewers for the discussion and review!

          John, could you also add a release note to the jira?

          jzhuge John Zhuge added a comment -

          Filed HADOOP-13955 to follow up. Will add release notes.

          Thanks a lot Xiao Chen for the review and commit. Thanks Allen Wittenauer, Wei-Chiu Chuang, Andrew Wang, and Steve Loughran for reviews and discussions.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11078 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11078/)
          HADOOP-13597. Switch KMS from Tomcat to Jetty. Contributed by John (xiao: rev 5d182949badb2eb80393de7ba3838102d006488b)

          • (add) hadoop-common-project/hadoop-kms/src/site/configuration.xsl
          • (delete) hadoop-common-project/hadoop-kms/src/main/tomcat/ssl-server.xml.conf
          • (edit) hadoop-common-project/hadoop-kms/src/main/conf/kms-log4j.properties
          • (delete) hadoop-common-project/hadoop-kms/src/main/tomcat/logging.properties
          • (delete) hadoop-common-project/hadoop-kms/src/main/tomcat/ROOT/index.html
          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
          • (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
          • (add) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebServer.java
          • (add) hadoop-common-project/hadoop-kms/src/main/resources/webapps/static/index.html
          • (edit) hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/MiniKMS.java
          • (edit) hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
          • (edit) hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
          • (edit) hadoop-common-project/hadoop-kms/src/main/conf/kms-site.xml
          • (add) hadoop-common-project/hadoop-kms/src/main/resources/kms-default.xml
          • (edit) hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh
          • (delete) hadoop-common-project/hadoop-kms/src/main/libexec/kms-config.sh
          • (delete) hadoop-common-project/hadoop-kms/src/main/tomcat/server.xml
          • (edit) hadoop-common-project/hadoop-kms/dev-support/findbugsExcludeFile.xml
          • (delete) hadoop-common-project/hadoop-kms/src/main/tomcat/ROOT/WEB-INF/web.xml
          • (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSConfiguration.java
          • (edit) hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
          • (delete) hadoop-common-project/hadoop-kms/src/main/webapp/WEB-INF/web.xml
          • (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
          • (add) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ConfigurationWithLogging.java
          • (add) hadoop-common-project/hadoop-kms/src/main/resources/webapps/kms/WEB-INF/web.xml
          • (add) hadoop-common-project/hadoop-common/src/test/scripts/hadoop_using_envvar.bats
          • (add) hadoop-common-project/hadoop-kms/src/main/libexec/shellprofile.d/hadoop-kms.sh
          • (add) hadoop-common-project/hadoop-common/src/test/scripts/hadoop_mkdir.bats
          • (edit) hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
          • (delete) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJMXServlet.java
          • (edit) hadoop-common-project/hadoop-kms/pom.xml
          • (edit) hadoop-assemblies/src/main/resources/assemblies/hadoop-kms-dist.xml
          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
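
          As context for the Tomcat-to-Jetty switch committed above: the new KMSWebServer reads its HTTP settings from kms-site.xml instead of Tomcat's server.xml, per this issue's release note. A minimal sketch follows; the property names come from the release note, but the values shown here are illustrative, not authoritative defaults.

```xml
<!-- kms-site.xml: illustrative sketch; property names per the HADOOP-13597 release note -->
<configuration>
  <!-- Replaces the deprecated KMS_HTTP_PORT environment variable -->
  <property>
    <name>hadoop.kms.http.port</name>
    <value>9600</value>
  </property>
  <!-- Replaces KMS_SSL_ENABLED; keystore location/password now live in ssl-server.xml -->
  <property>
    <name>hadoop.kms.ssl.enabled</name>
    <value>false</value>
  </property>
  <!-- Replaces KMS_MAX_THREADS -->
  <property>
    <name>hadoop.http.max.threads</name>
    <value>1000</value>
  </property>
</configuration>
```

          With these properties set, the server is started via `hadoop kms` (kms.sh being deprecated), as described in the release note.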

            People

            • Assignee: jzhuge John Zhuge
            • Reporter: jzhuge John Zhuge
            • Votes: 0
            • Watchers: 17