Hadoop HDFS
HDFS-2856

Fix block protocol so that Datanodes don't require root or jsvc

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0, 2.4.0
    • Fix Version/s: 2.6.0
    • Component/s: datanode, security
    • Labels: None
    • Target Version/s:
    • Hadoop Flags: Reviewed
    • Release Note:
      SASL now can be used to secure the DataTransferProtocol, which transfers file block content between HDFS clients and DataNodes. In this configuration, it is no longer required for secured clusters to start the DataNode as root and bind to privileged ports.

      Description

      Since we send the block tokens unencrypted to the datanode, we currently start the datanode as root using jsvc and get a secure (< 1024) port.

      If we have the datanode generate a nonce and send it on the connection, and the client sends back an HMAC of the nonce instead of the block token, no secrets are revealed. Thus, we wouldn't require a secure port, and would not require root or jsvc.
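
      As a hedged illustration of this idea (the class and method names are hypothetical, and HmacSHA1 is an assumed key choice, not the eventual implementation), the challenge-response could be sketched in Java like this:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.SecureRandom;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        /** Sketch of the proposed nonce/HMAC exchange; not the committed design. */
        public class NonceHandshakeSketch {

          /** Datanode side: generate a random challenge for a new connection. */
          static byte[] generateNonce() {
            byte[] nonce = new byte[16];
            new SecureRandom().nextBytes(nonce);
            return nonce;
          }

          /** Client side: prove knowledge of the block token secret without sending it. */
          static byte[] hmacOfNonce(byte[] secret, byte[] nonce) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(secret, "HmacSHA1"));
            return mac.doFinal(nonce);
          }

          /** Datanode side: recompute and compare in constant time. */
          static boolean verify(byte[] secret, byte[] nonce, byte[] clientHmac)
              throws Exception {
            return MessageDigest.isEqual(hmacOfNonce(secret, nonce), clientHmac);
          }

          public static void main(String[] args) throws Exception {
            byte[] secret = "shared-block-token-secret".getBytes(StandardCharsets.UTF_8);
            byte[] nonce = generateNonce();
            System.out.println("verified: " + verify(secret, nonce, hmacOfNonce(secret, nonce)));
          }
        }

      Because only the HMAC crosses the wire, a process squatting on the port learns the nonce and the digest, but not the token itself.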

      1. HDFS-2856.7.patch
        150 kB
        Chris Nauroth
      2. HDFS-2856-branch-2.7.patch
        157 kB
        Chris Nauroth
      3. HDFS-2856.6.patch
        150 kB
        Chris Nauroth
      4. HDFS-2856.5.patch
        150 kB
        Chris Nauroth
      5. HDFS-2856.4.patch
        149 kB
        Chris Nauroth
      6. HDFS-2856-Test-Plan-1.pdf
        154 kB
        Chris Nauroth
      7. HDFS-2856.3.patch
        147 kB
        Chris Nauroth
      8. HDFS-2856.2.patch
        143 kB
        Chris Nauroth
      9. HDFS-2856.1.patch
        137 kB
        Chris Nauroth
      10. HDFS-2856.prototype.patch
        32 kB
        Chris Nauroth
      11. Datanode-Security-Design.pdf
        95 kB
        Chris Nauroth
      12. Datanode-Security-Design.pdf
        95 kB
        Chris Nauroth
      13. Datanode-Security-Design.pdf
        93 kB
        Chris Nauroth

        Issue Links

          Activity

          Ram Marti added a comment -

          I am not sure this is quite correct.
          Let us recall the original issue:
          The authenticated datanode process that binds to the port has crashed.

          • A task that has been launched by a malicious user and is running on the data node monitors for the crash, binds to that
            port, and receives the data and the block access token.
          • Until the block token expires (configurable, but it defaults to 10 hours), it can use that token to access data on other data
            nodes.

          This may be fixed by what you propose above. But consider the write case: the client sends the data (unencrypted), and this data is available to the process listening on that port.

          I think the only way you can remove this restriction is if you enable integrity and encryption on the channel.

          Todd Lipcon added a comment -

          Or handshake at the beginning of a write – we already have the DN send back a BlockOpResponseProto. We can authenticate the DN there before the client sends any private data.

          Devaraj Das added a comment -

          We considered this option back when we were trying to secure the datanode protocols. The problem with this approach is the increased number of round trips that the handshake would introduce at every hop in the write pipeline. We hadn't benchmarked this, though.

          Todd Lipcon added a comment -

          We do already wait for a BlockOpResponseProto before beginning to stream data as it is... I suppose it would preclude a future optimization where we start streaming data before getting a response, but that's not an optimization in effect today.

          Owen O'Malley added a comment -

          We should move the protocol to be a handshake-based one. Devaraj, the place where we rejected the handshake was the shuffle where the number of small connections is very high.

          Suresh Srinivas added a comment -

          As this requires protocol changes between client and datanode, it is good to get this into 2.0.5 beta, to ensure wire compatibility.

          Suresh Srinivas added a comment -

          Marking this as blocker for 2.0.5 beta.

          Chris Nauroth added a comment -

          I'm attaching a design document for establishing authentication of the datanode to the client. Feedback is welcome. I'm also reassigning the issue to myself.

          Owen O'Malley added a comment -

          Chris, can you update the document with a read path?

          We should also include the current timestamp from the client to the datanode (both directly and in the hmac) to make replay attacks harder.

          Chris Nauroth added a comment -

          Thanks, Owen. Here is a new version of the design doc.

          Chris, can you update the document with a read path?

          The preliminary handshake is the same, so I didn't clone all of that information to discuss readBlock. Instead, I added a statement that other operations like readBlock are similar in steps 1-6. If you still prefer to see a detailed section dedicated to readBlock, let me know, and I'll add it.

          We should also include the current timestamp from the client to the datanode (both directly and in the hmac) to make replay attacks harder.

          Good idea. I've changed step 3 to include client timestamp in the arguments and the calculation of client digest. I've changed step 4 so that the datanode checks client timestamp is within a threshold. I've changed step 5 to include client timestamp in calculation of server digest.

          Dilli Arumugam added a comment -

          Would suggest adding a config parameter on DataNode to define acceptable time skew on the client timestamp.

          Chris Nauroth added a comment -

          Uploading a new version of the design document with 2 changes:

          1. Mentioned that timestamp threshold is configurable. (Thank you, Dilli.)
          2. Stated more clearly on page 1 that the existing connection between datanode and namenode is already authenticated via Kerberos before giving the block key to the datanode. Therefore, if the datanode proves to the client that it has the block key, then the client knows that the datanode has authenticated. (Thank you, Sanjay.)
          Todd Lipcon added a comment -

          One question about this new protocol – it relies on the client and server addresses to prevent MITM type attacks. But many nodes are multi-homed, and in the case of cross-cluster communication there may even be NAT or SOCKS proxies in the way. Given that, a client may not know its own address (as seen by the datanode), and the address that the client is using to speak to the DN may not be the same one the DN has bound to.

          Instead, can we just use the DatanodeID and port of the target DN? This would still prevent a man-in-the-middle where the request is forwarded to a different DN. I'm not sure what value is provided by including the client's address in the digest.

          Dilli Arumugam added a comment -

          Not sure whether the following problem should be addressed outside the scope of this bug, but it seems related and looks like a more serious security problem.

          In the WebHDFS world, the client submits a DelegationToken to the DataNode.

          If we fix the current problem but have WebHDFS on, we have a bigger security problem.

          Aaron T. Myers added a comment -

          Thanks a lot for working on this issue, Chris. Two questions for you:

          1. In steps 5 and 6 of the proposed protocol, I think you may need to do an 's/block key/block access token/g'. As you have it currently, if the server digest returned by the DN is based on the block key directly, the client will not be able to recompute/verify the returned server digest, since the client does not know the block key. However, the client does know the block access token, and a properly authenticated DN will be able to recompute the block access token based on the block key it shares with the NN.
           2. Did you consider at all scrapping our custom authentication protocol and instead switching to using straight SASL DIGEST-MD5 for the DataTransferProtocol? This is roughly what I did to add support for encrypting the DataTransferProtocol in HDFS-3637.
          Suresh Srinivas added a comment -

          The changes for this jira need to be backward compatible. Given that, marking the priority as Major instead of Blocker.

          Chris Nauroth added a comment -

          Thanks for the comments, everyone. Let's discuss the SASL point first, because it could shift the design and make the specific questions about the proposed protocol change irrelevant.

          Did you consider at all scrapping our custom authentication protocol and instead switching to using straight SASL DIGEST-MD5 for the DataTransferProtocol?

          Thanks for pointing out HDFS-3637. After further review of that patch, I see how we can iterate on that. I think it also has some benefits over the proposal that I posted: 1) consistency with authentication in the rest of the codebase, and 2) enabling encryption would defeat a man-in-the-middle attack without causing harm to intermediate proxy deployments like source address validation might cause. I'd like to explore the SASL solution further.

          The only potential downside I see is that if we ever pipeline multiple operations over a single connection, then we'd need to renegotiate SASL per operation, because the authorization decision may be different per block. This doesn't seem like an insurmountable problem though.

          I have a question about the compatibility impact of HDFS-3637. I see that an upgraded client can talk to an old cluster, and an old client can talk to an upgraded cluster if encryption is off. It looks like if it's an upgraded cluster and encryption is on, then DataXceiver will not run operations sent from unencrypted client connections, including connections initiated from an old client. This implies that all clients must be upgraded before it's safe to turn on encryption in the cluster. Do I understand correctly? If so, can we relax this logic a bit to allow for compatibility of an old client connected to an upgraded cluster with SASL on? The design doc proposed checking whether or not the datanode port is < 1024, and if so, then allow the old connection. The thinking here is that anyone continuing to run on a port < 1024 must still have a component that hasn't upgraded, so therefore it needs to support the old connection. Once datanode has been reconfigured to run on a port >= 1024, then all non-encrypted connections can be rejected.

          Also, I wasn't sure about how the HDFS-3637 patch impacts compatibility for inter-datanode connections. Is it possible to have a mix of old and upgraded datanodes running, some with encryption on and some with encryption off, or does it require a coordinated push to turn on encryption across the whole cluster?

          We wanted to be conscious of backwards compatibility with this change, particularly for a rolling upgrade scenario.

          Daryn Sharp added a comment -

          I haven't digested the whole jira, but want to request more info about:

          The only potential downside I see is that if we ever pipeline multiple operations over a single connection, then we'd need to renegotiate SASL per operation, because the authorization decision may be different per block

          I've made some RPCv9 changes to allow the future possibility to multiplex connections. Will multiplexing help with this jira's use case? If so, SASL negotiation per operation should not be necessary as negotiation will occur per virtual stream.

          Chris Nauroth added a comment -

          Will multiplexing help with this jira's use case?

          My comment referred to the fact that block-level operations, like readBlock and writeBlock, require a unique authorization decision per block, using a different block access token for each one. If multiple readBlock/writeBlock calls were pipelined over a single connection, then we'd need to check authorization on each one. If authorization for DataTransferProtocol is moving fully to SASL, then this implies to me that we would need to renegotiate SASL at the start of each block-level operation.

          I don't see a way for multiplexing to help with this problem, because there would still be the problem that we don't know what block the client requested until we start inspecting the front of the message. I haven't followed the RPCv9 changes closely though, so if I'm misunderstanding, please let me know. Thanks, Daryn.

          Chris Nauroth added a comment -

          It's been a while since we've discussed this one, so here is a recap. We (the names listed in the design doc) proposed introducing challenge-response authentication on DataTransferProtocol based on exchanging a digest calculated using the block access token as a shared secret. This would establish mutual authentication between client and DataNode before tokens were exchanged, and thus it would remove the requirement to launch as root and bind to a privileged port. There were a few rounds of feedback discussing exactly which pieces of data to feed into the digest calculation. Aaron T. Myers also suggested folding this into the SASL handshake he had implemented for DataTransferProtocol encryption in HDFS-3637.

          I'm attaching a prototype patch. This is not intended to be committed. It's just a high-level demonstration intended to revive discussion on this issue.

          The suggestion to fold this into the SASL handshake makes sense, because we can rely on the existing DIGEST-MD5 mechanism to handle verifying the digests. This means the scope of this issue is about adding support for the full range of SASL QOPs on DataTransferProtocol. We already support auth-conf, and now we need to add support for auth and auth-int.

          The patch demonstrates this by hacking on the existing DataTransferEncryptor code. I changed the configured QOP to auth and changed the password calculation to use the block access token's password + the target DataNode's UUID + a client-supplied request timestamp. I tested this manually end-to-end. (I needed to set dfs.encrypt.data.transfer to true to trigger the code, but it's not really encrypting.) I ran tcpdump while reading a file, and I confirmed that the SASL negotiation is using auth for the QOP, no cipher parameter (so no encryption), and the block content is passed unencrypted on the wire.
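
          For reference, here is a minimal, hedged sketch of requesting the auth QOP from the standard javax.security.sasl DIGEST-MD5 mechanism (the identity, password value, and protocol/server names are illustrative assumptions, not the prototype's actual wiring):

            import java.util.HashMap;
            import java.util.Map;
            import javax.security.auth.callback.Callback;
            import javax.security.auth.callback.CallbackHandler;
            import javax.security.auth.callback.NameCallback;
            import javax.security.auth.callback.PasswordCallback;
            import javax.security.sasl.RealmCallback;
            import javax.security.sasl.Sasl;
            import javax.security.sasl.SaslClient;

            public class QopAuthSketch {
              public static void main(String[] args) throws Exception {
                Map<String, String> props = new HashMap<>();
                // Request authentication only: no integrity (auth-int) or privacy (auth-conf).
                props.put(Sasl.QOP, "auth");

                CallbackHandler handler = callbacks -> {
                  for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback) {
                      ((NameCallback) cb).setName("hdfs-client");  // illustrative identity
                    } else if (cb instanceof PasswordCallback) {
                      // The prototype derives this from the block access token's password.
                      ((PasswordCallback) cb).setPassword("token-derived".toCharArray());
                    } else if (cb instanceof RealmCallback) {
                      RealmCallback rc = (RealmCallback) cb;
                      rc.setText(rc.getDefaultText() != null ? rc.getDefaultText() : "default");
                    }
                  }
                };

                // Protocol and server name are placeholders for this sketch.
                SaslClient client = Sasl.createSaslClient(
                    new String[] {"DIGEST-MD5"}, null, "hdfs", "0", props, handler);
                System.out.println("mechanism: " + client.getMechanismName());
              }
            }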

          Early feedback is welcome. There is still a lot of work remaining: renegotiating SASL between multiple block ops with different tokens, reconciling all of this code against the existing HDFS-3637 code, actually removing the privileged port restriction, and automated tests.

          Aaron T. Myers added a comment -

          This makes a lot of sense to me, Chris, and I think is a substantially simpler direction to go. Thanks for looking into this.

          One question for you before we proceed much further - correct me if I'm wrong, but I don't think this change can be done in a compatible fashion that would allow rolling upgrades. If I'm correct that that's the case, seems like we should target this issue to be fixed in 3.0.0.

          Chris Nauroth added a comment -

          I think we can achieve compatibility on the 2.x line by having the client decide the correct protocol. The client can make this decision based on observing a few things in its runtime environment:

          1. Datanode address port - We know that existing secured data nodes are on a privileged port, and future secured data nodes that don't start as root will be on a non-privileged port.
           2. dfs.data.transfer.protection - I propose adding this as a new configuration property for setting the desired SASL QOP on DataTransferProtocol. Its values would have the same syntax as the existing hadoop.rpc.protection property. (See the example configuration snippet after this list.)
          3. dfs.encrypt.data.transfer - We must maintain the existing behavior for deployments that have turned this on. In addition to using SASL with the auth-conf QOP, this property also requires use of an NN-issued encryption key and imposes strict enforcement that all connections must be encrypted. Effectively, this property must supersede dfs.data.transfer.protection and cause rejection of SASL attempts that use any QOP other than auth-conf.
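
          As a hedged example of the settings involved, shown via Hadoop's Configuration API as a test would set them (production deployments would put these in hdfs-site.xml; the port value is an arbitrary non-privileged example, and the protection value mirrors hadoop.rpc.protection syntax: authentication, integrity, or privacy):

            import org.apache.hadoop.conf.Configuration;

            /** Hedged example of the client/DN settings this proposal introduces. */
            public class SaslDataTransferConfigExample {
              public static void main(String[] args) {
                Configuration conf = new Configuration();
                // Proposed property; value syntax mirrors hadoop.rpc.protection.
                conf.set("dfs.data.transfer.protection", "authentication");
                // A non-privileged port, so no root or jsvc is needed to bind.
                conf.set("dfs.datanode.address", "0.0.0.0:10019");
                System.out.println(conf.get("dfs.data.transfer.protection"));
              }
            }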

          Using that information, pseudo-code for protocol selection in the client would be:

          if security is on
            if datanode port < 1024
              if dfs.encrypt.data.transfer is on
                use encrypted SASL handshake (HDFS-3637)
              else
                do not use SASL
            else
              if dfs.encrypt.data.transfer is on
                use encrypted SASL handshake (HDFS-3637)
              else if dfs.data.transfer.protection defined
                use general SASL handshake (HDFS-2856)
              else
                error - secured connection on non-privileged port without SASL not possible
          else
            do not use SASL
          

          From an upgrade perspective, existing deployments that don't mind sticking with a privileged port can just keep running as usual, because the protocol would keep working the same way it works today. For existing deployments that want to stop using a privileged port and switch to a non-privileged port, it's more complex. First, they'll need to deploy the code update everywhere. Then, they'll need to restart datanodes to pick up 2 configuration changes simultaneously: 1) switch the port number and 2) set dfs.data.transfer.protection. While this is happening, you could have a mix of datanodes in the cluster running in different modes: some with a privileged port and some with a non-privileged port. This is OK, because the client-side logic above knows how to negotiate the correct protocol on a per-DN basis.

          One thing that would be impossible under this scheme is using a privileged port in combination with the new SASL handshake. The whole motivation for this change is to prevent the need for root access though, so I think this is an acceptable limitation.

          The most recent version of the design document talks about upgrading the DATA_TRANSFER_VERSION. I now believe this isn't necessary. Old clients can keep using the existing protocol version. New clients can trigger the new behavior based on dfs.data.transfer.protection, so a new protocol version isn't necessary. I need to refresh the design doc.

          I believe all of the above fits into our compatibility policies.

          Chris Nauroth added a comment -

          I'm uploading a patch that implements the ideas described in the past several comments. I'm still in progress on more tests and several TODOs, but any feedback at this point is welcome. Pinging Owen O'Malley, Larry McCay, Jitendra Nath Pandey and Aaron T. Myers for potential feedback.

          It's a big patch. I did a lot of refactoring to avoid code duplication between the general-purpose SASL flow and our existing specialized encrypted SASL flow. If this is too cumbersome to review at once, then I can split some of the refactorings into separate patches on request.

          Summary of changes:

          • DataTransferEncryptor: I deleted this class. The code has been refactored into various new classes in a new org.apache.hadoop.hdfs.protocol.datatransfer.sasl sub-package. The presence of the word "encrypt" in this class name would have been potentially misleading, because we're now allowing DataTransferProtocol to support a quality of protection different from auth-conf.
          • SaslDataTransferClient: This class now implements the client side of SASL negotiation, whether using the general-purpose SASL handshake or our existing specialized encrypted handshake. This class is called by the HDFS client and also by the DataNode when acting as a client to another DataNode. The logic for deciding whether or not to do a SASL handshake, and if so which kind of handshake, has become somewhat complex. By encapsulating it behind this class, we avoid repeating that logic at multiple points in the rest of the code.
          • SaslDataTransferServer: This class now implements the server side of SASL negotiation. This is only called by the DataNode when receiving new connections. Similar to the above, this is a single point for encapsulating the logic of deciding which SASL handshake to use.
          • DataTransferSaslUtil: This contains various helper functions needed by the SASL classes.
          • Various classes of the HDFS client and the DataNode have mechanical changes to wire in the new SASL classes and call them.
          • DataNode#checkSecureConfig: This is a new method for checking whether the DataNode is starting in an acceptable secure configuration, either via privileged ports or by configuring SASL.
          • hdfs-default.xml: I added documentation of the new properties for configuring SASL on DataTransferProtocol.
          • TestSaslDataTransfer: This is a new test that runs an embedded KDC, starts a secured cluster and demonstrates that a client can request any of the 3 QOPs.

          Here are a few discussion points I'd like to bring up:

          • Our discussion up to this point has focused on the privileged port for DataTransferProtocol. There is also the HTTP port to consider. My thinking on this is that use of the new SASL configuration on a non-privileged port is only acceptable if the configuration also uses SPNEGO for HTTP authentication. If it was using token-based auth, then we'd be back to the same problem of sending secret block access tokens to an unauthenticated process. (See TODO comment in DataNode#checkSecureConfig.) My understanding is that SPNEGO establishes mutual authentication, so checking for this ought to work fine. I'd love if someone could confirm that independently.
          • Previously, I mentioned renegotiating SASL between multiple block operations. On further reflection, I no longer think this is necessary. The initial SASL handshake establishes authentication of the server. For subsequent operations on the same connection/underlying socket, I expect authentication of the remote process wouldn't change. The privileged port check was intended to protect against an attacker binding to the data transfer port after a DataNode process stops. For an existing previously authenticated socket, we know that it's still connected to the same process, so I don't think we need to renegotiate SASL. Thoughts?
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12647301/HDFS-2856.1.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 4 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7002//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7002//console

          This message is automatically generated.

          Chris Nauroth added a comment -

          The failure in TestPipelinesFailover appears to be unrelated. I can't repro.

          Daryn Sharp added a comment -

          Chris asked that I take a look, so I'll try to review this week.

          Chris Nauroth added a comment -

          I'm attaching v2 of the patch. This has the following changes since last time:

          1. Added new tests for the balancer with SASL on DataTransferProtocol. This helped me find a bug with the balancer passing around the incorrect datanode ID (the source instead of the destination), so I fixed that.
          2. Removed TODO for inclusion of block pool ID in the SASL handshake. I already include the token identifier, which contains the block pool ID as a component, so it's not necessary to add block pool ID again.
          3. Removed the client-generated timestamp from the SASL handshake. The original intention of the timestamp was to make it harder for a man in the middle to replay the message. The server side would have checked elapsed time since the timestamp and rejected the request if it was beyond a threshold. However, the SASL DIGEST-MD5 handshake already protects against this, because the server initiates a random challenge at the start of any new connection. It's highly likely that the challenge will be unique across different connection attempts, and thus a replayed message is highly likely to be rejected. The timestamp wouldn't provide any additional benefit.
          4. Removed datanode ID from the SASL handshake. This had been intended to protect against a man in the middle rerouting a message to a different datanode. As described above, SASL DIGEST-MD5 already protects against this, because the server issues a different challenge on each connection attempt. The datanode ID wouldn't provide any additional benefit.
          5. Added code in DataNode#checkSecureConfig to check that when SASL is used on DataTransferProtocol, SSL must also be used on HTTP. Plain HTTP wouldn't be safe, because the client could write a delegation token query parameter onto the socket without any authentication of the server. By requiring SSL, we enforce that the server is authenticated before sending the delegation token.
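
          A hedged sketch of the shape of that startup check (the simplified signature and boolean parameters are hypothetical stand-ins for the patch's real resource checks):

            /** Simplified sketch of the DataNode startup check described above. */
            public class CheckSecureConfigSketch {
              static void checkSecureConfig(boolean securityEnabled, boolean privilegedPorts,
                  String saslProtection, boolean httpsOnly) {
                if (!securityEnabled) {
                  return;              // unsecured cluster: nothing to enforce
                }
                if (privilegedPorts) {
                  return;              // classic root/jsvc deployment remains valid
                }
                if (saslProtection != null && httpsOnly) {
                  return;              // SASL on DataTransferProtocol plus SSL on HTTP
                }
                throw new RuntimeException(
                    "Cannot start secure DataNode without privileged resources or SASL and SSL");
              }

              public static void main(String[] args) {
                checkSecureConfig(true, false, "authentication", true);  // SASL + SSL: accepted
                System.out.println("configuration accepted");
              }
            }
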
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12651284/HDFS-2856.2.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 5 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7167//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7167//console

          This message is automatically generated.

          Chris Nauroth added a comment -

          I'm attaching patch v3 and notifying Jitendra Nath Pandey, since I know he had started reviewing the patch.

          The new revision fixes a bug I found: a client configured with SASL on DataTransferProtocol would not have been able to communicate with an unsecured cluster. This is a supported use case for things like distcp from a secured source to an unsecured destination. The code change to fix this is in SaslDataTransferClient#send. It only allows fallback to an unsecured connection if the configuration property ipc.client.fallback-to-simple-auth-allowed has been set. This is consistent with other RPC client code.

          I also improved the tests in TestSaslDataTransfer.
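
          For reference, a hedged example of the client-side switch consulted there (normally set in the client's core-site.xml; it defaults to false):

            import org.apache.hadoop.conf.Configuration;

            /** Hedged example: opting a client into fallback to unsecured clusters. */
            public class FallbackConfigExample {
              public static void main(String[] args) {
                Configuration conf = new Configuration();
                conf.setBoolean("ipc.client.fallback-to-simple-auth-allowed", true);
                System.out.println(
                    conf.getBoolean("ipc.client.fallback-to-simple-auth-allowed", false));
              }
            }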

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12651766/HDFS-2856.3.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 5 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7196//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7196//console

          This message is automatically generated.

          Chris Nauroth added a comment -

          I'm attaching a test plan for this patch. Many of the concerns in this patch are difficult to cover in unit testing, so I'd like to share the additional system testing planned to achieve greater coverage of the changes.

          Chris Nauroth added a comment -

          I'm attaching patch v4. Compared to v3, the only new changes are in tests. I'm now using the retry property introduced earlier today in HADOOP-10747 to work around the problem of the KDC identifying simultaneous connections as a replay attack. All code related to managing the MiniKdc is now in an abstract SaslDataTransferTestCase class, so that individual tests don't need to repeat the code. I added a few more failure tests to TestSaslDataTransfer.
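
          As a rough sketch, the shared fixture amounts to the following (class, path, and principal names here are illustrative, not the actual test code):

import java.io.File;
import java.util.Properties;
import org.apache.hadoop.minikdc.MiniKdc;

// Rough sketch of a shared MiniKdc fixture; illustrative only.
public abstract class MiniKdcFixtureSketch {
  protected static MiniKdc kdc;
  protected static File keytab;

  public static void startKdc(File workDir) throws Exception {
    Properties kdcConf = MiniKdc.createConf();
    kdc = new MiniKdc(kdcConf, workDir);
    kdc.start();
    keytab = new File(workDir, "test.keytab");
    // One keytab holding the principals the tests log in with.
    kdc.createPrincipal(keytab, "hdfs/localhost", "HTTP/localhost");
  }

  public static void stopKdc() {
    if (kdc != null) {
      kdc.stop();
    }
  }
}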

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12652338/HDFS-2856.4.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 7 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7229//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7229//console

          This message is automatically generated.

          Chris Nauroth added a comment -

          The test failures are unrelated. TestPipelinesFailover has been failing intermittently on other unrelated patches. TestBalancerWithSaslDataTransfer reruns tests from TestBalancer under secure configuration, and TestBalancer also has experienced intermittent failures lately.

          However, reviewing logs from the test runs made me notice that MiniDFSCluster was printing a bogus warning about failure to bind to a privileged port, which isn't relevant when SASL is configured on DataTransferProtocol. This could cause confusion for people running the tests in the future, so I'm attaching patch v5 with a minor change in MiniDFSCluster to suppress the bogus log messages.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12652461/HDFS-2856.5.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 7 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7234//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7234//console

          This message is automatically generated.

          Jitendra Nath Pandey added a comment -
          • For the specialized encrypted handshake, it seems the encryption key is obtained from the namenode via RPC for every block. That makes two RPC calls to the namenode for every new block written. For a given file, the key should be the same and could be obtained only once?
          • getEncryptedStreams doesn't use the access token. IMO the user and the password should be derived from the access token rather than the key.
          • It might make sense to define the defaults for the new configuration variables in hdfs-default and/or as constants. It helps in code reading at times.
          • Log.debug should be wrapped inside an if (Log.isDebugEnabled()) condition.
          • checkTrustAndSend obtains a new encryption key irrespective of the QOP needed. I believe the encryption key is needed only for the specialized encryption case.
          • The SaslDataTransferClient object in NameNodeConnector.java seems out of place; NameNodeConnector is supposed to encapsulate only namenode connections. Can we avoid the saslClient in this class?
          • RemotePeerFactory.java: Javadoc needs update.
          • Minor nit: checkTrustAndSend returns null to indicate a skipped handshake, which every caller then has to check for. It could just return the same stream pair instead.
          Chris Nauroth added a comment -

          Jitendra, thank you for taking a look at this patch.

          ...it seems the encryption key is obtained from the namenode via RPC for every block...

          Actually, we cache the encryption key so that we don't need to keep repeating that RPC. (This is true on current trunk and with my patch too.) The key retrieval is now wrapped behind the DataEncryptionKeyFactory interface. There are two implementors of this: the DFSClient itself and the NameNodeConnector used by the balancer. In both of those classes, if you look at the newDataEncryptionKey method, you'll see that they lazily fetch a key and cache it until the key expires.
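
          The pattern is roughly the following (getKeyFromNameNode() is a hypothetical stand-in for the actual RPC):

import org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey;

// Sketch of the lazy fetch-and-cache pattern described above.
public abstract class CachingKeyFactorySketch {
  private DataEncryptionKey cached;

  public synchronized DataEncryptionKey newDataEncryptionKey() {
    // Go back to the NameNode only when there is no key yet
    // or the cached one has expired.
    if (cached == null || cached.expiryDate < System.currentTimeMillis()) {
      cached = getKeyFromNameNode();
    }
    return cached;
  }

  // Hypothetical stand-in for the ClientProtocol#getDataEncryptionKey() RPC.
  protected abstract DataEncryptionKey getKeyFromNameNode();
}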

          getEncryptedStreams doesn't use the access token. IMO the user and the password should be derived from the access token rather than the key.

          Thanks for catching that. This is a private method, so I can easily remove access token from the signature. We can't change the user/password calculation for the encrypted case now without breaking compatibility.

          It might make sense to define the defaults for the new configuration variables in hdfs-default and/or as constants. It helps in code reading at times.

          The patch documents the new properties dfs.data.transfer.protection and dfs.data.transfer.saslproperties.resolver.class in hdfs-default.xml. The default values are set to empty/undefined. I think this is what we want, because it's an opt-in feature. Let me know if you had any other configuration properties in mind.
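
          For example, opting in programmatically is equivalent to setting the property in hdfs-site.xml (a sketch; authentication, integrity, and privacy are the valid protection levels, and leaving the property unset keeps the feature off):

import org.apache.hadoop.conf.Configuration;

// Sketch of enabling SASL on DataTransferProtocol via configuration.
public class SaslOptInSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.data.transfer.protection", "authentication");
    System.out.println(conf.get("dfs.data.transfer.protection", "<unset>"));
  }
}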

          Log.debug should be wrapped inside an if (Log.isDebugEnabled()) condition.

          The new classes use slf4j. (There was some discussion on mailing lists a few months ago about starting to use this library in new classes.) With slf4j, it's no longer necessary to check isDebugEnabled. slf4j accepts string substitution variables using varargs, and it checks the log level internally first before doing any string concatenation. Explicitly checking isDebugEnabled wouldn't provide any performance benefit.
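
          A quick contrast of the two styles (illustrative names):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Slf4jStyleSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(Slf4jStyleSketch.class);

  void handshakeComplete(String peer, String qop) {
    // commons-logging style: a guard is needed to avoid building the
    // message string when DEBUG is off.
    if (LOG.isDebugEnabled()) {
      LOG.debug("SASL handshake with " + peer + ", qop = " + qop);
    }

    // slf4j style: the {} placeholders are substituted only if DEBUG is
    // enabled, so no guard is required.
    LOG.debug("SASL handshake with {}, qop = {}", peer, qop);
  }
}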

          checkTrustAndSend obtains a new encryption key irrespective of the QOP needed. I believe the encryption key is needed only for the specialized encryption case.

          The 2 implementations of DataEncryptionKeyFactory mentioned above only retrieve an encryption key if encryption is enabled (NameNode is configured with dfs.encrypt.data.transfer=true). For a deployment configured with SASL on DataTransferProtocol, this will be false, so it won't actually get a key. I'll put a comment in SaslDataTransferClient to clarify this.

          The SaslDataTransferClient object in NameNodeConnector.java seems out of place; NameNodeConnector is supposed to encapsulate only namenode connections. Can we avoid the saslClient in this class?

          Yeah, what was I thinking there? This is needed by the balancer for its DataNode communication when it needs to move blocks. Let me see if I can move it right into the Balancer class.

          RemotePeerFactory.java: Javadoc needs update.

          Will do. Thanks for the catch.

          Minor nit: checkTrustAndSend returns null to indicate a skipped handshake, which every caller then has to check for. It could just return the same stream pair instead.

          I actually need to use null as a sentinel value. In peerSend, I need to know whether or not a SASL handshake was performed, and if so, wrap the peer in an instance of EncryptedPeer (which would be better named SaslPeer at this point, but we can refactor that later). If I always returned a non-null IOStreamPair, I wouldn't be able to make that distinction.
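
          In toy form (Streams and PeerSketch are stand-ins, not the HDFS classes):

// Toy illustration of null-as-sentinel.
class Streams {}

class PeerSketch {
  // Returns null when no SASL handshake was performed.
  Streams checkTrustAndSend(boolean trusted) {
    return trusted ? null : new Streams();
  }

  boolean send(boolean trusted) {
    Streams saslStreams = checkTrustAndSend(trusted);
    // A non-null result means a handshake happened and the peer must be
    // wrapped; returning the original streams instead of null would make
    // the two cases indistinguishable here.
    return saslStreams != null;
  }
}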

          I'll get to work on a new revision that incorporates your feedback. Thanks again!

          Chris Nauroth added a comment -

          Here is patch version 6. This incorporates Jitendra's feedback as I described in my last comment.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12652910/HDFS-2856.6.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 7 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer
          org.apache.hadoop.hdfs.server.balancer.TestBalancer

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7240//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7240//console

          This message is automatically generated.

          Chris Nauroth added a comment -

          The test failures in the last run are not caused by the patch. As I mentioned in an earlier comment, TestBalancer has been flaky lately, and TestBalancerWithSaslDataTransfer is reusing test code from that suite.

          Jitendra Nath Pandey added a comment -

          +1 for the patch.

          Haohui Mai added a comment -

          Tested the latest patch on a secure cluster running under Mac OS X. Worked as expected.

          One comment: the DN continues to start if dfs.block.access.token.enable is false (the default). Maybe it is better to bail out instead, as WebHDFS won't work in this configuration.

          Chris Nauroth added a comment -

          One comment: the DN continues to start if dfs.block.access.token.enable is false (the default). Maybe it is better to bail out instead, as WebHDFS won't work in this configuration.

          Yes, the NameNode logs an error (which is all too easily ignored), but proceeds with startup. The DataNode doesn't even log an error. This is an existing issue unrelated to the current patch, so I filed a new issue to discuss it: HDFS-6666.
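
          A fail-fast startup check along the lines Haohui suggests might look like this (hypothetical; the actual behavior is under discussion in HDFS-6666):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical fail-fast check; not the actual DataNode code.
public class BlockTokenCheckSketch {
  static void checkBlockTokenConfig(Configuration conf) {
    if (UserGroupInformation.isSecurityEnabled()
        && !conf.getBoolean("dfs.block.access.token.enable", false)) {
      throw new IllegalStateException(
          "dfs.block.access.token.enable must be true on a secure cluster");
    }
  }
}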

          Chris Nauroth added a comment -

          I'm attaching a v7 trunk patch, and also a v7 branch-2 patch. I noticed that the trunk patch won't quite work for branch-2, because branch-2 still has the legacy web UI code, and that includes some logic for getting a BlockReader, which is impacted by this patch.

          The only change since last time in the trunk patch is in DataTransferSaslUtil#getPeerAddress. I made this method more resilient when parsing the format of the address string, after I saw a test failure on branch-2.
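
          Roughly, the tolerant parsing accepts both the "hostname/1.2.3.4:port" and bare "/1.2.3.4:port" forms a peer can report (a sketch for IPv4-style strings only; the actual patch logic may differ):

import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch of tolerant peer-address parsing; illustrative only.
public class PeerAddressSketch {
  static InetAddress parse(String remoteAddress) throws UnknownHostException {
    String addr = remoteAddress.split(":")[0]; // drop the port, if any
    int slash = addr.indexOf('/');
    if (slash >= 0) {
      addr = addr.substring(slash + 1);        // drop any "hostname/" prefix
    }
    return InetAddress.getByName(addr);
  }
}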

          In the branch-2 patch, the only incremental differences are in JspHelper and DatanodeJspHelper.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12655337/HDFS-2856.7.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 7 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
          org.apache.hadoop.hdfs.web.TestWebHDFSXAttr

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7331//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7331//console

          This message is automatically generated.

          Chris Nauroth added a comment -

          The most recent round of test failures look unrelated. I can't repro locally.

          Jitendra Nath Pandey added a comment -

          +1 for the trunk patch as well as the branch-2 patch.

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #5877 (See https://builds.apache.org/job/Hadoop-trunk-Commit/5877/)
          HDFS-2856. Fix block protocol so that Datanodes don't require root or jsvc. Contributed by Chris Nauroth. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1610474)

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferEncryptor.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/InvalidMagicNumberException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithSaslDataTransfer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
          Chris Nauroth added a comment -

          Jitendra, thank you for the code reviews. I've committed this to trunk and branch-2. Thank you also to the numerous contributors who offered feedback along the way.

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #613 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/613/)
          HDFS-2856. Fix block protocol so that Datanodes don't require root or jsvc. Contributed by Chris Nauroth. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1610474)

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferEncryptor.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/InvalidMagicNumberException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithSaslDataTransfer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #1805 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1805/)
          HDFS-2856. Fix block protocol so that Datanodes don't require root or jsvc. Contributed by Chris Nauroth. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1610474)

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferEncryptor.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/InvalidMagicNumberException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithSaslDataTransfer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1832 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1832/)
          HDFS-2856. Fix block protocol so that Datanodes don't require root or jsvc. Contributed by Chris Nauroth. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1610474)

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferEncryptor.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataEncryptionKeyFactory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/InvalidMagicNumberException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslParticipant.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithSaslDataTransfer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
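
          The file list above shows where the feature lands: the new classes under protocol/datatransfer/sasl (SaslDataTransferClient and SaslDataTransferServer) implement the handshake, and hdfs-default.xml gains the operator-facing switch. As a minimal sketch of the resulting hdfs-site.xml for a SASL-secured DataNode (the quality-of-protection value and the port number here are illustrative choices, not requirements of the patch):

              <!-- Enable SASL on DataTransferProtocol; accepted values are
                   authentication, integrity, and privacy. -->
              <property>
                <name>dfs.data.transfer.protection</name>
                <value>authentication</value>
              </property>
              <!-- With SASL enabled, the DataNode no longer needs a privileged
                   (below 1024) port, so root/jsvc startup can be dropped.
                   Port 10019 is an example value. -->
              <property>
                <name>dfs.datanode.address</name>
                <value>0.0.0.0:10019</value>
              </property>
              <!-- The DataNode web endpoint likewise must not require a
                   privileged port; HTTPS-only is the companion setting. -->
              <property>
                <name>dfs.http.policy</name>
                <value>HTTPS_ONLY</value>
              </property>

          With this configuration in place, the DataNode can be started as an ordinary user, which is the point of the change described in this issue.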

            People

            • Assignee: Chris Nauroth
            • Reporter: Owen O'Malley
            • Votes: 0
            • Watchers: 37
