Hive / HIVE-4911

Enable QOP configuration for Hive Server 2 thrift transport

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.12.0
    • Component/s: None
    • Labels: None
    • Release Note:
      This patch adds a feature to enable integrity protection and confidentiality protection (beyond just the default of authentication) for communication between the Hive JDBC driver and Hive Server 2. You can use the SASL (http://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) QOP property (http://docs.oracle.com/javase/7/docs/api/javax/security/sasl/Sasl.html#QOP) to configure this.

      - This applies only when Kerberos is used for authenticating the HS2 client (JDBC/ODBC application) with HS2.
      - hive.server2.thrift.sasl.qop in hive-site.xml has to be set to one of the valid QOP values ('auth', 'auth-int' or 'auth-conf').
      - Specify sasl.qop in the sessionconf part of your JDBC Hive connection string, e.g. jdbc:hive2://hostname/dbname;sasl.qop=auth-int

      This also adds SASL QOP protection for metastore client-server communication. You can enable it using the Hadoop configuration parameter hadoop.rpc.protection.
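      For illustration, a minimal client-side sketch of the above (the host, port, database, and Kerberos principal are placeholders, not values from this issue; sasl.qop in the URL must match the server's hive.server2.thrift.sasl.qop):

          import java.sql.Connection;
          import java.sql.DriverManager;

          public class QopConnectionSketch {
            public static void main(String[] args) throws Exception {
              Class.forName("org.apache.hive.jdbc.HiveDriver"); // HiveServer2 JDBC driver
              // 'auth-conf' requests confidentiality (encryption) on the wire.
              String url = "jdbc:hive2://hs2host:10000/dbname;"
                  + "principal=hive/_HOST@EXAMPLE.COM;sasl.qop=auth-conf";
              try (Connection conn = DriverManager.getConnection(url)) {
                System.out.println("Connected with SASL QOP auth-conf");
              }
            }
          }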


      Description

      The QoP for Hive Server 2 should be configurable to enable encryption. A new configuration parameter, "hive.server2.thrift.sasl.qop", should be exposed. This would give greater control in configuring the Hive Server 2 service.

      1. HIVE-4911-trunk-0.patch
        17 kB
        Arup Malakar
      2. HIVE-4911-trunk-1.patch
        24 kB
        Arup Malakar
      3. HIVE-4911-trunk-2.patch
        25 kB
        Arup Malakar
      4. 20-build-temp-change.patch
        1 kB
        Thejas M Nair
      5. HIVE-4911-trunk-3.patch
        22 kB
        Arup Malakar
      6. 20-build-temp-change-1.patch
        3 kB
        Arup Malakar

        Issue Links

          Activity

          Ashutosh Chauhan added a comment -

          This issue has been fixed and released as part of 0.12 release. If you find further issues, please create a new jira and link it to this one.

          Arup Malakar added a comment -

          I thought I would add the performance numbers I have seen here, for reference. In my testing I observed that with auth-conf the time taken to transfer data is ~2.3 times the time it takes without encryption. In my test I had a table of size 1GB, and I did a "select *" on the table using the JDBC driver, once with encryption and once without encryption.

          Time taken:

          • No encryption: ~9 minutes
          • Encryption: ~20 minutes

          I was wondering if anyone has experience with SASL encryption, and whether it is possible to tune any JVM/SASL settings to bring down this time. I am also interested in understanding whether it is advisable to use a different crypto provider than the default one that ships with the JDK. If this much overhead is to be expected with encryption, I would like to know that too. I am using a patched version of hive-10 with Hive Server 2 on hadoop 23/jdk 1.7/RHEL 5.

          PS: This comment is a repost of a mail I sent out to the hive-dev mailing list.
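
          For anyone trying to reproduce these numbers, a minimal sketch of such a timing test (the connection URL, principal, and table name are placeholders, not values from this issue):

              import java.sql.Connection;
              import java.sql.DriverManager;
              import java.sql.ResultSet;
              import java.sql.Statement;

              public class QopScanTimer {
                public static void main(String[] args) throws Exception {
                  Class.forName("org.apache.hive.jdbc.HiveDriver");
                  // args[0] is "auth" (no encryption) or "auth-conf" (encryption).
                  String url = "jdbc:hive2://hs2host:10000/dbname;"
                      + "principal=hive/_HOST@EXAMPLE.COM;sasl.qop=" + args[0];
                  long start = System.nanoTime();
                  long rows = 0;
                  try (Connection conn = DriverManager.getConnection(url);
                       Statement stmt = conn.createStatement();
                       ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                    while (rs.next()) {
                      rows++; // drain the result set so every row crosses the wire
                    }
                  }
                  System.out.printf("qop=%s rows=%d elapsed=%.1fs%n",
                      args[0], rows, (System.nanoTime() - start) / 1e9);
                }
              }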

          Thejas M Nair added a comment -

          Also updated the wiki for HS2 - https://cwiki.apache.org/confluence/display/Hive/Setting+up+HiveServer2
          Thejas M Nair added a comment -

          Adding release notes.

          Hudson added a comment -

          SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #122 (See https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/122/)
          HIVE-4911 : Enable QOP configuration for Hive Server 2 thrift transport (Arup Malakar via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1512010)

          • /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
          • /hive/trunk/conf/hive-default.xml.template
          • /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
          • /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
          • /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
          • /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
          • /hive/trunk/service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java
          • /hive/trunk/service/src/java/org/apache/hive/service/auth/KerberosSaslHelper.java
          • /hive/trunk/service/src/java/org/apache/hive/service/auth/SaslQOP.java
          • /hive/trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge20S.java
          • /hive/trunk/shims/src/common-secure/test/org/apache/hadoop/hive/thrift/TestHadoop20SAuthBridge.java
          • /hive/trunk/shims/src/common/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge.java
          Hudson added a comment -

          FAILURE: Integrated in Hive-trunk-hadoop2-ptest #51 (See https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/51/)
          HIVE-4911 : Enable QOP configuration for Hive Server 2 thrift transport (Arup Malakar via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1512010)

          • /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
          • /hive/trunk/conf/hive-default.xml.template
          • /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
          • /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
          • /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
          • /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
          • /hive/trunk/service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java
          • /hive/trunk/service/src/java/org/apache/hive/service/auth/KerberosSaslHelper.java
          • /hive/trunk/service/src/java/org/apache/hive/service/auth/SaslQOP.java
          • /hive/trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge20S.java
          • /hive/trunk/shims/src/common-secure/test/org/apache/hadoop/hive/thrift/TestHadoop20SAuthBridge.java
          • /hive/trunk/shims/src/common/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge.java
          Hudson added a comment -

          FAILURE: Integrated in Hive-trunk-hadoop2 #345 (See https://builds.apache.org/job/Hive-trunk-hadoop2/345/)
          HIVE-4911 : Enable QOP configuration for Hive Server 2 thrift transport (Arup Malakar via Ashutosh Chauhan) (hashutosh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1512010)

          • /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
          • /hive/trunk/conf/hive-default.xml.template
          • /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
          • /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
          • /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
          • /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
          • /hive/trunk/service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java
          • /hive/trunk/service/src/java/org/apache/hive/service/auth/KerberosSaslHelper.java
          • /hive/trunk/service/src/java/org/apache/hive/service/auth/SaslQOP.java
          • /hive/trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge20S.java
          • /hive/trunk/shims/src/common-secure/test/org/apache/hadoop/hive/thrift/TestHadoop20SAuthBridge.java
          • /hive/trunk/shims/src/common/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge.java
          Arup Malakar added a comment -

          Thanks, Ashutosh Chauhan.
          Ashutosh Chauhan added a comment -

          Committed to trunk. Thanks, Arup!

          Arup Malakar added a comment -

          Ashutosh Chauhan That is correct. The 20-build* patches are temporary patches I used to build against 20 until HIVE-4991 is committed.

          Ashutosh Chauhan added a comment -

          Arup Malakar HIVE-4911-trunk-3.patch is the patch in its entirety. We don't need anything else, right?

          Ashutosh Chauhan added a comment -

          +1 LGTM

          Thejas M Nair added a comment -

          Looks good to me. +1

          Arup Malakar added a comment -

          I used 20-build-temp-change-1.patch to compile against 20.

          Thejas M Nair Let me know if you have any comments.

          Arup Malakar added a comment -

          Thanks Thejas M Nair for confirming that the build is broken for 20. I was wondering if something was wrong in my environment. I will update the patch so that it applies cleanly on trunk.

          Thejas M Nair added a comment -

          Ok, looks like the hive build is broken right now for 0.20. I have created HIVE-4991 to track that.
          To verify that your changes work with hadoop 0.20, you can use this patch (20-build-temp-change.patch) over your patch.
          HIVE-4991 can be addressed separately.

          Your latest patch is not applying cleanly on the latest hive trunk because of recent commits; looks like some minor update is required.

          Arup Malakar added a comment -

          For the above comment, I meant that it errors out when compiled against hadoop 20. I used the following command:

          ant clean package  -Dhadoop.mr.rev=20

          It compiles fine with hadoop 23.

          Arup Malakar added a comment -

          New changes:

          1. Incorporated sasl.qop renaming of param
          2. Moved getHadoopSaslProperties to HadoopThriftAuthBridge

          I can't get it to compile with the following arguments, though. The classes I changed compile fine, but SessionState.java complains:

          compile:
               [echo] Project: ql
              [javac] Compiling 898 source files to /Users/malakar/code/oss/hive/build/ql/classes
              [javac] /Users/malakar/code/oss/hive/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java:35: package org.apache.commons.io does not exist
              [javac] import org.apache.commons.io.FileUtils;
              [javac]                             ^
              [javac] /Users/malakar/code/oss/hive/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java:743: cannot find symbol
              [javac] symbol  : variable FileUtils
              [javac] location: class org.apache.hadoop.hive.ql.session.SessionState
              [javac]         FileUtils.deleteDirectory(resourceDir);
              [javac]         ^
              [javac] Note: Some input files use or override a deprecated API.
              [javac] Note: Recompile with -Xlint:deprecation for details.
              [javac] Note: Some input files use unchecked or unsafe operations.
              [javac] Note: Recompile with -Xlint:unchecked for details.
              [javac] 2 errors
          
          Joey Echeverria added a comment -

          I'll give it a try and let you know how I get on.

          Thejas M Nair added a comment -

          Arup Malakar I have responded to your comment about the "auth" param name in the jdbc connection string.
          I think the refactoring that you have done to add MetaStoreUtils.getMetaStoreSaslProperties(conf) is a good idea.
          As you pointed out, using SaslRpcServer is likely to give compilation issues with 0.20. Looks like that will need to go into the hadoop shims classes. Can you ensure that you are able to build with hadoop 0.20?

          I think it may be a good idea to expose another setting for MS as well rather than piggybacking on hadoop.rpc.protection. That would give finer control on the deployment.

          I think it is better to not increase complexity by adding more configs, unless there is really a use case for it.

          Joey Echeverria With the new patch, QOP for HMS should work with hadoop.rpc.protection being set. Do you want to try it out?

          Arup Malakar added a comment -

          Brock Noland The reason I implemented fromString() instead of using valueOf() is that I wanted to use the strings auth, auth-int, auth-conf in the configuration file "hive-site.xml", since using these strings for QoP is well understood and can be seen in various online documentation: http://docs.oracle.com/javase/jndi/tutorial/ldap/security/sasl.html.
          But I also wanted to follow enum naming conventions and use the enum names AUTH, AUTH_INT and AUTH_CONF. Given that these two are different, I couldn't use valueOf() and ended up implementing a fromString() method (sketched below).
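
          A minimal sketch of that approach (the patch's actual SaslQOP class may differ; the field and method shapes here are assumptions for illustration):

              public enum SaslQOP {
                AUTH("auth"),           // authentication only
                AUTH_INT("auth-int"),   // authentication + integrity protection
                AUTH_CONF("auth-conf"); // authentication + integrity + confidentiality

                public final String saslQop; // hyphenated string used in config files

                SaslQOP(String saslQop) {
                  this.saslQop = saslQop;
                }

                // valueOf() only matches the constant names (AUTH_INT, ...), so a
                // custom lookup maps the hyphenated configuration strings instead.
                public static SaslQOP fromString(String str) {
                  for (SaslQOP qop : values()) {
                    if (qop.saslQop.equalsIgnoreCase(str)) {
                      return qop;
                    }
                  }
                  throw new IllegalArgumentException("Unknown SASL QOP: " + str);
                }
              }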

          Thejas M Nair Thanks for quickly reviewing it. I have incorporated all the comments except one. I still need to test the hadoop-at-higher-security-level warning log message part, though. Will give it a try when I get the chance.

          I think it may be a good idea to expose another setting for MS as well rather than piggybacking on hadoop.rpc.protection. That would give finer control on the deployment. The recent changes are geared towards making it easier. But as Thejas has mentioned, that change could be discussed in a separate JIRA.

          Thejas M Nair added a comment -

          Arup Malakar I added some review comments on the review board link.

          +1 for having a separate config flag that enables the QOP for hive server2. The HS2 -> client connection is usually more vulnerable compared to the network traffic within a hadoop cluster, as the HS2 client is likely to be connecting over a corporate-wide network.

          Brock Noland The patch would not work for HMS; that would need some more changes. (I added a comment about that in the review.) But I am not sure if that needs to be part of the same jira.

          I don't think it makes sense to use the same config param to set the SASL QOP level for the metastore. Should we just use hadoop.rpc.protection for that, as it is usually considered 'inside the cluster' (as opposed to HS2, which is like a 'gateway server')?

          Brock Noland added a comment -

          Arup,

          Does this work for both HS2 and HMS?

          Also, in regards to SaslQOP, is there a reason you don't use valueOf() as opposed to implementing fromString()?

          Chris Drome added a comment -

          Brock Noland, I marked this patch as superseding HIVE-4225. HIVE-4225 only addresses the fact that HS2 was ignoring the hadoop.rpc.protection setting. The major limitation of HIVE-4225 is that it applies the QOP setting to both external and internal connections.

          HIVE-4911 improves upon this by allowing separate configuration of external and internal connections. An example of where this is important is when the HS2 client connection must be encrypted, but the connection between HS2 and JT/NN does not require encryption.

          Arup Malakar added a comment -

          Brock Noland, HIVE-4225 proposes a way to configure QoP for the Hive Server 2 thrift service. But it uses the SaslRpcServer object to determine what QoP to use, and SaslRpcServer reads this configuration from the parameter hadoop.rpc.protection, as can be seen in: https://svn.apache.org/repos/asf/hadoop/common/branches/HADOOP-6685/src/java/org/apache/hadoop/security/SaslRpcServer.java

            public static void init(Configuration conf) {
              QualityOfProtection saslQOP = QualityOfProtection.AUTHENTICATION;
              String rpcProtection = conf.get("hadoop.rpc.protection",
                  QualityOfProtection.AUTHENTICATION.name().toLowerCase());
              if (QualityOfProtection.INTEGRITY.name().toLowerCase()
                  .equals(rpcProtection)) {
                saslQOP = QualityOfProtection.INTEGRITY;
              } else if (QualityOfProtection.PRIVACY.name().toLowerCase().equals(
                  rpcProtection)) {
                saslQOP = QualityOfProtection.PRIVACY;
              }
              
              SASL_PROPS.put(Sasl.QOP, saslQOP.getSaslQop());
              SASL_PROPS.put(Sasl.SERVER_AUTH, "true");
            }
          

          I believe the hadoop.rpc.protection configuration shouldn't dictate what QoP hive server 2 uses. The QoP of Hive Server 2 should instead be exposed via a new Hive Server 2-specific setting. That way, either can change independently of the other.

          Brock Noland added a comment -

          Arup, thanks for the patch! Could you give some details on why you superseded HIVE-4225?

          Arup Malakar added a comment -

          Review: https://reviews.apache.org/r/12824/

            People

            • Assignee:
              Arup Malakar
            • Reporter:
              Arup Malakar
            • Votes:
              0
            • Watchers:
              11
