Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Cannot Reproduce
    • Affects Version/s: 2.2.0
    • Fix Version/s: None
    • Component/s: ha, journal-node, security
    • Labels:

      Description

      When HA is implemented with QJM and using Kerberos, it's not possible to enable wire encryption of data.
      If the property hadoop.rpc.protection is set to anything other than authentication, it doesn't work properly, failing with the error:

      ERROR security.UserGroupInformation: PriviledgedActionException as:principal@REALM (auth:KERBEROS) cause:javax.security.sasl.SaslException: No common protection layer between client and server

      With NFS as shared storage, everything works like a charm.
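For context, wire encryption for Hadoop RPC is enabled via hadoop.rpc.protection in core-site.xml, and the value must match on every NameNode and JournalNode. A hedged sketch of the relevant fragment (value names per the Hadoop documentation; "privacy" is the setting that enables encryption):

```xml
<!-- core-site.xml: must be identical on all NNs and JNs -->
<property>
  <name>hadoop.rpc.protection</name>
  <!-- authentication | integrity | privacy (privacy = wire encryption) -->
  <value>privacy</value>
</property>
```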

      1. namenode.xml
        67 kB
        Juan Carlos Fernandez
      2. journal.xml
        66 kB
        Juan Carlos Fernandez
      3. ssl-server.xml
        2 kB
        Juan Carlos Fernandez
      4. ssl-client.xml
        2 kB
        Juan Carlos Fernandez
      5. jaas.conf
        0.5 kB
        Juan Carlos Fernandez
      6. hdfs-site.xml
        7 kB
        Juan Carlos Fernandez
      7. core-site.xml
        3 kB
        Juan Carlos Fernandez

        Activity

        Jing Zhao added a comment -

        Hi Juan Carlos Fernandez, I'm not a security expert. But looks like the error msg "No common protection layer between client and server" is caused by different saslQOP values on server (JournalNode) and client (NameNode) of the RPC protocol. Have you set the same value of "hadoop.rpc.protection" (e.g., auth-conf) in your NNs and all the JournalNodes?
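The mapping from hadoop.rpc.protection to SASL QOP values, and why mismatched settings produce "No common protection layer", can be sketched as follows. This is a simplified illustration of the negotiation, not Hadoop's actual code:

```python
# Simplified sketch: Hadoop maps hadoop.rpc.protection values to SASL QOP
# tokens, and client and server must agree on one.
QOP_MAP = {
    "authentication": "auth",   # authentication only, no protection
    "integrity": "auth-int",    # integrity-protected messages
    "privacy": "auth-conf",     # integrity + encryption (wire encryption)
}

def negotiate(client_protection, server_protection):
    """Return the QOP both sides support, or None -- in which case SASL
    fails with 'No common protection layer between client and server'."""
    client_qop = QOP_MAP[client_protection]
    server_qop = QOP_MAP[server_protection]
    return client_qop if client_qop == server_qop else None

# NameNode (RPC client) configured for privacy while a JournalNode (RPC
# server) effectively uses 'authentication' -> no common layer, the
# reported error.
print(negotiate("privacy", "authentication"))  # None
print(negotiate("privacy", "privacy"))         # auth-conf
```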

        Juan Carlos Fernandez added a comment -

        Jing Zhao
        In my configuration I'm running JNs on host1, host2, and host3, and NNs on host1 and host2, sharing core-site.xml and hdfs-site.xml. So there is only one hadoop.rpc.protection value (and it only works with authentication). I'd also add that it works perfectly with NFS instead of QJM, which tells us everything is working except the JNs.

        Haohui Mai added a comment -

        There are multiple tests covering this feature (i.e., wire encryption over RPC), so I believe this might be a configuration error.

        Can you post your configuration?

        Suresh Srinivas added a comment -

        Juan Carlos Fernandez, please provide the information required for verifying if this is indeed a bug. I will close this jira after a week or so, if information required is not posted to the jira.

        Juan Carlos Fernandez added a comment -

        Also, in my bash_profile I have export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.auth.login.config=/opt/hadoop/etc/hadoop/jaas.conf"

        Do you need more configurations or environment parameters?

        Suresh Srinivas added a comment -

        Haohui Mai, can you please comment on this issue?

        Haohui Mai added a comment -

        Juan Carlos Fernandez, can you post your configuration for both the journalnode and the namenode? You can get the configuration by accessing http://<host-port>/conf.
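One way to compare the effective value on each daemon is to fetch /conf from both and extract the property. A minimal sketch, where the embedded sample XML stands in for what http://<host-port>/conf would return:

```python
import xml.etree.ElementTree as ET

def get_property(conf_xml, name):
    """Extract a property value from a Hadoop /conf XML dump."""
    root = ET.fromstring(conf_xml)
    for prop in root.iter("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

# Sample /conf output (in practice, fetch this from each NN and JN and
# compare the values).
sample = """<configuration>
  <property><name>hadoop.rpc.protection</name><value>privacy</value></property>
</configuration>"""

print(get_property(sample, "hadoop.rpc.protection"))  # privacy
```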

        Juan Carlos Fernandez added a comment -

        I can't access any HTTP/HTTPS URL because it fails at start time; I've attached the XML files to the issue.

        Juan Carlos Fernandez added a comment -

        Config files from the URL.

        Juan Carlos Fernandez added a comment -

        Sorry Haohui Mai, I was able to do it by accessing the URL before the namenodes go down; the journalnodes don't go down. It was my mistake.

        Juan Carlos Fernandez added a comment -

        Why has this been marked as closed? I wasn't able to run QJM + SSL.

        Harsh J added a comment -

        This seems to be working just fine on my 2.5.0 cluster. Both JN and NN have the same hadoop.rpc.protection configs, which avoids the error.

        Unless you're still facing this Juan Carlos Fernandez, I'd propose we close this as 'Cannot Reproduce'.


          People

          • Assignee:
            Unassigned
            Reporter:
            Juan Carlos Fernandez
          • Votes:
            0
            Watchers:
            10

            Dates

            • Created:
              Updated:
              Resolved:
