Hadoop Common / HADOOP-7510

Tokens should use original hostname provided instead of ip

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.23.3, 2.0.2-alpha
    • Component/s: security
    • Labels:
      None

      Description

      Tokens currently store the ip:port of the remote server. This precludes tokens from being used after a host's ip is changed. Tokens should store the hostname used to make the RPC connection. This will enable new processes to use their existing tokens.
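      The idea in the description can be sketched in a few lines. This is an illustrative helper only – it is not Hadoop's actual code, the class and method names are invented, and it assumes Java 7+ for getHostString:

```java
import java.net.InetSocketAddress;

public class TokenServiceSketch {
    // Preserve exactly what the caller dialed -- hostname or ip literal --
    // so a token's service string survives a change of the server's ip.
    static String buildService(InetSocketAddress addr) {
        // getHostString (Java 7+) never attempts a reverse DNS lookup
        return addr.getHostString() + ":" + addr.getPort();
    }

    public static void main(String[] args) {
        // createUnresolved also skips the forward DNS lookup
        System.out.println(buildService(
            InetSocketAddress.createUnresolved("nn.example.com", 8020)));
        System.out.println(buildService(
            InetSocketAddress.createUnresolved("10.0.0.1", 8020)));
    }
}
```

      A token whose service is "nn.example.com:8020" keeps working after the host's ip changes, whereas one stored as "10.0.0.1:8020" would not.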

      1. HADOOP-7510.branch-0.23.patch
        5 kB
        Daryn Sharp
      2. HADOOP-7510.trunk.patch
        6 kB
        Daryn Sharp
      3. HADOOP-7510-12.patch
        76 kB
        Daryn Sharp
      4. HADOOP-7510-11.patch
        74 kB
        Daryn Sharp
      5. HADOOP-7510-10.patch
        70 kB
        Daryn Sharp
      6. HADOOP-7510-9.patch
        56 kB
        Daryn Sharp
      7. HADOOP-7510-8.patch
        42 kB
        Daryn Sharp
      8. HADOOP-7510-6.patch
        43 kB
        Daryn Sharp
      9. HADOOP-7510-5.patch
        42 kB
        Daryn Sharp
      10. HADOOP-7510-4.patch
        24 kB
        Daryn Sharp
      11. HADOOP-7510-3.patch
        22 kB
        Daryn Sharp
      12. HADOOP-7510-2.patch
        22 kB
        Daryn Sharp
      13. HADOOP-7510.patch
        34 kB
        Daryn Sharp

        Issue Links

          Activity

          Jitendra Nath Pandey added a comment -

          There are a few considerations to be careful about:
          1) The hostname in the service could be a vip name. The token selector is used in ipc.Client, which has the InetSocketAddress of the remote server. How do we make sure we are matching the right hostnames? One way to address it is to get the ip address from the hostname in the service and use that for matching, but that needs a DNS lookup.
          2) A DNS lookup in the token selector would be invoked for every connection using token authentication.

          Daryn Sharp added a comment -

          Yes, thank you for the considerations. You'll be glad to know that I have already carefully considered those concerns. I found a means of ensuring that an InetSocketAddress is constructed such that getHostName will return the exact host (whether hostname, cname, ip, etc.) used to instantiate the object. That will allow ipc.Client to correctly match the tokens. DNS re-lookups are prevented, which is important because:

          1. If a cname is used, then the resolved ip may point to a hostname that, if resolved again, will return a different ip. E.g., nn.domain is a cname for either nn1.domain or nn2.domain. The site may toggle the cname, but ipc.Client will not reconnect to the proper host, i.e. cname (nn.domain) -> ip -> hostname (nn1.domain).
          2. If an exact ip is used, the ip may not resolve to a hostname, which will cause failures. Or the user explicitly wants to connect to only that ip – one example may be for testing, or because DNS is fouled. Using a DNS lookup may cause the client to unexpectedly connect to a different ip.
          Kihwal Lee added a comment -

          Please note that the test in HADOOP-7492 will have to be modified, if this patch goes into trunk.

          Jitendra Nath Pandey added a comment -

          > URI uri = hasScheme ? URI.create(target) : URI.create("host://"+target);
          If hasScheme is false, the host is used as a scheme? Is it a typo?

          > NetUtils#makeSocketAddr
          InetSocketAddress#getHostName javadoc states that hostname part of the address is returned. How can we assume that an IP address will be returned even if it is resolved?

          > DistributedFileSystem.java
          Please don't remove the deprecated method. I think deprecated methods should be removed only in major releases.

          Daryn Sharp added a comment -

          > URI uri = hasScheme ? URI.create(target) : URI.create("host://"+target);
          If hasScheme is false, the host is used as a scheme? Is it a typo?

          The scheme "host" (not a variable) is used as a dummy scheme to allow the authority to subsequently be parsed out.
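          The dummy-scheme parse can be tried standalone (the target value here is a placeholder):

```java
import java.net.URI;

public class DummySchemeParse {
    public static void main(String[] args) {
        String target = "nn.example.com:8020";
        boolean hasScheme = target.contains("://");
        // "host" is a throwaway scheme; it only exists so that URI treats
        // everything after "://" as an authority it can split for us
        URI uri = hasScheme ? URI.create(target) : URI.create("host://" + target);
        System.out.println(uri.getHost()); // nn.example.com
        System.out.println(uri.getPort()); // 8020
    }
}
```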

          > NetUtils#makeSocketAddr
          InetSocketAddress#getHostName javadoc states that hostname part of the address is returned. How can we assume that an IP address will be returned even if it is resolved?

          InetSocketAddress#getHostName returns the hostname within the embedded InetAddress. If instantiated with an ip, the InetAddress has a null hostname which causes the ip to be resolved when the hostname is requested. We don't want this behavior. If InetSocketAddress is instantiated with an ip, then the ip must be returned so the reconnect uses the ip and not the host from a reverse lookup.

          The trick is to instantiate an InetAddress with an exact hostname (whether ip or host) and its ip – this prevents any DNS lookups when getHostName is invoked. Then instantiate an InetSocketAddress with that InetAddress. Now getHostName is locked down to return the given hostname (whether ip or host).

          Here is the relevant code:

          // Pin the exact host string (hostname or ip literal) alongside the
          // already-resolved address bytes; getHostName then never needs a
          // reverse DNS lookup.
          iaddr = InetAddress.getByAddress(hostname, iaddr.getAddress());
          socketAddr = new InetSocketAddress(iaddr, port);
          

          > DistributedFileSystem.java
          Please don't remove the deprecated method. I think deprecated methods should be removed only in major releases.

          No problem, I'll add it back.

          Jitendra Nath Pandey added a comment -

          > The trick is to instantiate an InetAddress with an exact hostname (whether ip or host) and it's ip – this prevents
          > any dns lookups when getHostName is invoked.
          This seems to be fragile because the InetSocketAddress API doesn't guarantee that a DNS lookup will not be invoked.

          I had a discussion with Owen, and there seems to be a compatibility issue as well. The new tokens will not work with old clients. For example, a new client obtains a token and sets the service to hostname:port. Now this token is sent over hftp and is finally used at an old client; the old client will not be able to use this token.

          Daryn Sharp added a comment -

          As far as I can tell, there isn't a compat issue with any permutation of new/old. Hftp requests appear to be handled as follows:

          1. The client requests a token from the NN. The NN will return a token with the service in its format (ip:port or host:port).
          2. The client doesn't care about the NN's service format. It resets the token's service to the format it uses.
          3. To read a file, the client sends the token to the NN. The NN doesn't care about the token's service format. It just redirects the client to a DN with the unchanged token.
          4. The DN doesn't care about the client's service format either. The DN resets the token's service to its format, and then the DFSClient connects to the NN using its format. The connection succeeds since the format is equal between the connection addr and the token's service.

          Am I overlooking something?

          Daryn Sharp added a comment -

          Sorry, forgot to address the other point:

          This seems to be fragile because the InetSocketAddress API doesn't guarantee that a DNS lookup will not be invoked.

          I researched this very thoroughly:

          • InetSocketAddress does not perform any lookups of its own. It delegates to the contained InetAddress.
          • An InetAddress only resolves an ip if the host field is null.
          • If the InetAddress is instantiated with an explicit host string and ip, then the host field is not null, so it will just return that host. No lookup occurs.
          • Hence, instantiating an InetSocketAddress with an explicitly constructed InetAddress carrying both host and ip guarantees that getHostName will not perform a lookup.
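          The bullet points above can be checked with a self-contained snippet; the host names are placeholders, and no network access is needed:

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

public class PinnedHostDemo {
    public static void main(String[] args) throws UnknownHostException {
        byte[] ip = {10, 0, 0, 1};

        // The host field is populated explicitly, so the contained
        // InetAddress never has a null host it would need to resolve.
        InetAddress byName = InetAddress.getByAddress("nn.example.com", ip);
        System.out.println(new InetSocketAddress(byName, 8020).getHostName()); // nn.example.com

        // The same trick pins an ip literal: pass the ip string as the host.
        InetAddress byIp = InetAddress.getByAddress("10.0.0.1", ip);
        System.out.println(new InetSocketAddress(byIp, 8020).getHostName()); // 10.0.0.1
    }
}
```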
          Jitendra Nath Pandey added a comment -

          > Hftp requests appear to be handled..
          Agreed that Hftp should not have an issue because DN resets the service.
          However, there could be a use case where we run into this issue. For example: a job is launched on a cluster running the new version, and some of its tasks submit a job on a cluster running an older version. I admit this is a contrived use case and may not exist anywhere. But that makes me worried that we might end up breaking something.

          Can we take the following approach? (Thanks to Owen and Suresh!)

          1. Don't change the service in the token; keep it ip:port.
          2. Cache a map in the TokenSelectors which maps ipNew to ipOld. A cache entry can be purged after a token lifetime.
          3. The token selector matches the new ip; if that doesn't work, it also tries the old ip, if one exists.

          The cache will have an entry only if there is an ip failover, otherwise the TokenSelectors will behave exactly as they are doing today. Another plus is that tokens don't change at all.
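          For concreteness, such a selector-side fallback map might look roughly like this. This is a hypothetical sketch of the proposal only – the class and method names are invented and do not come from any attached patch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IpFailoverCache {
    // current server ip -> ip recorded in already-issued token services
    private final Map<String, String> newToOldIp = new ConcurrentHashMap<>();

    // populated when a failover is detected; entries would be purged
    // after roughly one token lifetime
    void recordFailover(String newIp, String oldIp) {
        newToOldIp.put(newIp, oldIp);
    }

    // The selector tries the connection's ip first; if no token matched,
    // it retries with the pre-failover ip, when one has been recorded.
    String fallbackIp(String currentIp) {
        return newToOldIp.getOrDefault(currentIp, currentIp);
    }
}
```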

          Daryn Sharp added a comment -

          Actually, the contrived case isn't a problem because any permutation of old/new works. Everything that gets a token immediately stomps the service to its format. Both sides always ignore what the other side set.

          The suggested approach won't handle all use cases. One problem with the current implementation, but fixed by this patch, is that I can't specify an exact ip for a host and always have that ip used. The ip will be resolved to a host, and the host resolved to an ip. The problem is that ip->host->ip may not return the same ip!

          With the static TokenSelector cache, there are issues with how to handle multiple ip changes. The cache lookup will have to deal with circular loops. There would also need to be something like reference counting to expire the cache. Multiple tokens may be relying on the mappings being maintained in the TokenSelector.

          Certain code expects to be able to connect to the value in the service field. There would need to be a mapping that maintained the token ip to the original host or ip, and if a host, re-resolve the host to its current ip.

          Unless I'm misunderstanding the proposal, it sounds much more complicated than this patch?

          Jitendra Nath Pandey added a comment -

          > Everything that gets a token immediately stomps the service to its format. Both sides always ignore what the other
          > side set.
          I don't think this will work in this case. The token is actually obtained by the first jobClient only. After that it is just passed along in the credentials. The jobClient at the task submitting a new job will not (it can't) get a new token. The jobClient at the task in fact will not find a token for the other namenode and will not be able to issue one either. The relevant section of the code I am referring to is TokenCache#obtainTokensForNamenodesInternal.

          The proposed scheme works only if it can be determined at the client side that a failover has occurred and TokenSelector can be updated with the mapping.

          > With the static TokenSelector cache, there's issues with how to handle multiple ip changes. The cache lookup will
          > have to deal with circular loops. There would also need to be something like reference counting to expire the
          > cache. Multiple tokens may be relying on the mappings being maintained in the TokenSelector.
          Multiple ip changes in the lifetime of a single token are very rare. We should be ok if we can handle one ip change. Just look up once and use the mapping if present. The cache can expire after a token lifetime; no need to maintain a reference count.
          I think the patch can be kept pretty simple if we just handle one ip change in the lifetime of a token. Another enhancement here could be to stamp the token with the new service after each renewal and expire the cache after the renewal time of the token.

          Daryn Sharp added a comment -

          I think there's a critical flaw with trying to maintain a mapping that I forgot to mention. Kihwal and I explored this approach about a month ago, but abandoned it due to too many problems. The mapping works iff a process is running when the ip changes. Example:

          • JT acquires the tokens with a given ip. The ip changes at some point during the job. The JT is a long running process so it could detect the change and maintain a mapping.
          • Tasks that are running prior to the ip change could detect and use their own mapping.
          • New tasks spawned after the ip change will fail. They are not aware of the ip change, so they cannot find the token in their cache with the old ip.

          If you have an elegant approach to solve this issue, we can discuss further issues with a mapping.

          Daryn Sharp added a comment -

          Simplified the patch. Perhaps it looked more complicated because I added documentation and other stuff to the files I touched to remove all warnings. I had also updated a few places to use more appropriate apis. I've now stripped the patch down to the minimum.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12493362/HADOOP-7510-2.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 15 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/146//console

          This message is automatically generated.

          Daryn Sharp added a comment -

          Fixed an infinite loop in dfs getDT. Passes smoke tests; will run the full suite overnight. Please review and/or address prior questions so I may have a chance to remedy issues before kicking off the long test run.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12493381/HADOOP-7510-3.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 15 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/147//console

          This message is automatically generated.

          Jitendra Nath Pandey added a comment -

          > New tasks spawned after the ip change will fail.
          Good point. The alternative approach doesn't work.

          My concerns with the posted patch:
          1) The use case mentioned earlier (the contrived one) doesn't work.
          2) The assumption about InetSocketAddress#getHostName is not documented API behavior. Different Java implementations could do it differently.
          3) This is a risky change.

          A safer approach, in my opinion, will be to make this change in a major release (e.g. 23) where some backward compatibility constraints can probably be relaxed. That will allow us a better fix using standard java api.

          Daryn Sharp added a comment -

          To address your concerns:

          1) You can use "mapreduce.job.hdfs-servers" to specify a list of paths for which to get delegation tokens on the remote cluster.
          2) The behavior is pretty well defined in the javadocs for InetAddress and InetSocketAddress.
          3) True, but are the other 205 changes not risky? The append code is risky, the sync code is risky, the block receiver changes are risky, the lease renewal changes are risky, etc. These are arguably more likely to fail in subtle ways, whereas if the tokens don't match, the system flat out breaks.

          I'm not sure I understand why you think I'm not using standard java apis. The calls are all documented and the behavior is standard. I'm open to other means of determining exactly what string was used to instantiate the InetSocketAddress. In Java 7, it's called getHostString().

          How about if I add a config option to control whether the new behavior in this patch is enabled?

          Daryn Sharp added a comment -

          Added a config param. Will add additional unit tests if there is no strong opposition to the config param.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12493686/HADOOP-7510-4.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 15 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/149//console

          This message is automatically generated.

          Jitendra Nath Pandey added a comment -

          > How about if I add a config option to control whether the new behavior in this patch is enabled.
          This sounds ok. hostname should be added to token-service only if config is enabled. It should be off by default. Those who don't want ip-failover will not run into any surprises.

          1) mapreduce.job.hdfs-servers is indeed used in this use case, and that is how the delegation tokens for all the namenodes involved will be obtained by the jobClient when the first job is submitted. But the problem is still there, because the first jobClient is a new-version client and it puts the hostname in the token. When a task of this first job tries to submit another job to a cluster running an earlier version, it can't get a new token issued because it doesn't have any kerberos credentials for the user. The existing tokens must be passed along.
          2) The InetSocketAddress#getHostName javadoc states that it "returns the hostname part of the address." It doesn't say it would be the same name/ip with which it was instantiated. In fact, the Java 7 documentation states that it may trigger a reverse lookup. getHostString looks like a new API added in Java 7.
          3) I agree those changes have risks too.

          > I'm not sure I understand why you think I'm not using standard java apis. The calls are all documented and the
          > behavior is standard. I'm open to other means of determining exactly what string was used to instantiate the
          > InetSocketAddress.
          All I meant was that the assumption about the API may not work in all java implementations. In Java 7, getHostString does guarantee no reverse lookup. Unfortunately, that API is not in java 6.
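The getHostString() behavior under discussion can be sketched as follows. This is only an illustration, not code from the patch; the host name "nn.example.com" is hypothetical. On Java 7+, getHostString() returns exactly the string the address was created with and never triggers a reverse DNS lookup, whereas getHostName() may resolve:

```java
import java.net.InetSocketAddress;

public class HostStringDemo {
    public static void main(String[] args) {
        // createUnresolved performs no forward lookup either, so this is
        // safe to run without any DNS entry for the (made-up) hostname.
        InetSocketAddress addr =
            InetSocketAddress.createUnresolved("nn.example.com", 8020);

        // Java 7+: returns the literal string used at construction time,
        // with no reverse name service lookup.
        String host = addr.getHostString();
        if (!"nn.example.com".equals(host)) {
            throw new AssertionError("expected original hostname, got " + host);
        }
        System.out.println(host);
    }
}
```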

          Daryn Sharp added a comment -

          This is now more academic than anything:

          1. I meant when the task creates the new job conf for the remote job it will launch, it should set the mapreduce.job.hdfs-servers key so the remote job tracker will acquire the tokens. I.e., they won't be passed along. I think the user has to kinit on the other side irrespective of my change, so the remote JT should be able to get the tokens? This shouldn't be a deal breaker since you said it's a contrived use case?
          2. Not to beat a dead horse: The InetAddress docs are clear about if/when a reverse lookup occurs, namely when only one arg (host or ip) is given. Give both and there's never a lookup. Typically an InetSocketAddress is instantiated with just a host or an ip, so it instantiates an InetAddress with one arg. Since InetSocketAddress delegates to InetAddress, lookups will occur. However, when an InetSocketAddress is instantiated with an InetAddress that was instantiated with both host and ip, no lookups will occur due to the delegation. I have checked the code of multiple versions and vendor flavors of java, and they all behave in this manner.
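The two-arg InetAddress behavior described above can be demonstrated with a small sketch. The host name and ip below are made up; per the InetAddress.getByAddress(String, byte[]) javadoc, no name service is consulted, so getHostName() simply returns the supplied string:

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

public class NoLookupDemo {
    public static void main(String[] args) {
        try {
            // Both host and ip supplied: the name service is never checked.
            byte[] ip = {10, 0, 0, 1};
            InetAddress inet = InetAddress.getByAddress("nn.example.com", ip);

            // The socket address delegates to the InetAddress, so no
            // forward or reverse lookup happens here either.
            InetSocketAddress sock = new InetSocketAddress(inet, 8020);
            if (!"nn.example.com".equals(sock.getHostName())) {
                throw new AssertionError("unexpected reverse lookup");
            }
            System.out.println(sock.getHostName());
        } catch (UnknownHostException e) {
            // Only thrown for an illegal-length address array.
            throw new RuntimeException(e);
        }
    }
}
```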

          I added the config param last night, so would you please review the changes to see if they are satisfactory?

          Kihwal Lee added a comment -

          I've investigated the semantics and the actual implementations of the InetSocketAddress API and I see no uncertainties or confusion regarding the particular semantics of the methods Daryn is utilizing.

          Jitendra Nath Pandey added a comment -

          The patch needs to be updated, because its conflicting after MAPREDUCE-2764 commit.

          Daryn Sharp added a comment -

          Need to write more tests and re-run full test suite, but it's currently passing the commit tests. Please review while I work on tests.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12494637/HADOOP-7510-5.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 12 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/187//console

          This message is automatically generated.

          Daryn Sharp added a comment -

          Very minor changes resulting from code review with Nathan Roberts, Kihwal Lee, and Robert Evans.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12494674/HADOOP-7510-6.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 12 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/192//console

          This message is automatically generated.

          Jitendra Nath Pandey added a comment -

          It would be good to put on the jira the reasons behind some of the design choices in NetUtils.java
          1) The usage of ResolverConfiguration.
          2) The different handling of hosts with a dot in the end and those containing dot in the middle.

          HftpFileSystem.java: Why was getCanonicalServiceName removed?

          DFSClient.java:
          The change of createRPCNamenode call to createNamenode changes retry policy and exception policy. This change is not required.

          DistributedFileSystem.java:
          The patch will change a bit with latest commit in HADOOP-7625, the service is not being set in DFS anymore.

          JobClient.java:
          What is the need to change from proxy to JobClient?

          Jitendra Nath Pandey added a comment -

          Owen,
          Can you please take a look at the patch, particularly the changes in HftpFileSystem in the context of MAPREDUCE-7624 changes?

          Daryn Sharp added a comment -

          Fully qualified hostnames:

          • The reason for using fully qualified hostnames in the token service is that every machine in a cluster may not have the same resolv.conf domain search path. This is highly likely for the client host submitting the job versus the cluster itself. Clusters spanning different networks are also likely to have different domain search paths.
          • If a short name is used then host resolution may either fail or resolve to a different host depending on the machine performing the resolution. This may result in jobs failing if the task cannot find a token for the connection.
          • Short hostnames may result in multiple tokens being acquired for the same machine because "host" and "host.domain.com" appear to be different. Ex. the user acquires token using fetchdt with host "host". Paths in a job are "hdfs://host.domain.com". The JT will unnecessarily acquire another token.
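The duplicate-token problem from the last bullet can be illustrated with a toy sketch. A token cache keyed by "host:port" treats "host" and "host.domain.com" as distinct services even when they name the same machine; all names here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class TokenKeyDemo {
    public static void main(String[] args) {
        // Stand-in for a credentials map keyed by token service string.
        Map<String, String> tokenCache = new HashMap<>();

        // User fetched a token using the short hostname (e.g. via fetchdt).
        tokenCache.put("host:8020", "token-from-fetchdt");

        // A job path referring to the same namenode by its fully qualified
        // name misses the cache, so a second token would be acquired.
        String lookup = tokenCache.get("host.domain.com:8020");
        if (lookup != null) {
            throw new AssertionError("short and qualified names should not match");
        }
        System.out.println("cache miss for host.domain.com:8020 -> duplicate token");
    }
}
```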

          Use of ResolverConfiguration (most is documented in the code):

          • InetSocketAddress provides no api to get the fully qualified host with the search domain.
          • getCanonicalHostname always returns the A record, which cannot be used if a CNAME was given. Otherwise, ip changes will not be detected. Ex. "nn" is a CNAME to "nn1". The CNAME is flipped to "nn2". If getCanonicalHostname() is used, it will return "nn1", therefore the client will not switch to "nn2" as expected.
          • I originally attempted to use getAllByName on the resolved ip, and then prefix matching against the hosts returned. Java only records the A record (canonical name) for an ip. Same problem as prior point.
          • The method of resolving matches the same standard that unix and java use to resolve unqualified hostnames. I even confirmed via packet capture.

          HftpFileSystem

          • getCanonicalServiceName was doing the same thing as the FileSystem base class. Removed it due to unnecessary redundancy which will cause maintenance issues if the default implementation changes.

          DFSClient

          • We need to renew/cancel tokens with the same configuration used to get the token.
          • Current implementation calls createRPCNamenode which unnecessarily forces RPC and eschews the RetryProxy. createNamenode abstracts both of these details.
          • Exception/retry policies appear to be changed only for file creation, thus not an issue.
          • Looking deeper, actually need to instantiate DFSClient to get the configuration timeout/retry for socket connects. Also tags the client with the job id for easier debugging.

          DistributedFileSystem

          • You're right. I'll need to mergeup and evaluate any impact. My initial reaction is we are pushing the token renewal far too deep into the stack. Completely bypassing the filesystem is preventing the renewal from using an identically configured DFSClient as used to get the token. I'll investigate.

          JobClient

          • Same reasons as DFSClient.
          • Should be using an identically configured client as used to acquire the token.
          • Should not assume RPC was used to acquire the token. The client abstracts the underlying protocol.

          Thanks for taking time to review this important feature!

          Owen O'Malley added a comment -

          I agree with Jitendra about removing the change to the higher level objects (JobClient and NameNode) instead of the rpc proxies.

          Remove both of the FIXME comments, since neither is relevant.

          Don't set the service in DistributedFileSystem, since the tokens from DFSClient must be equivalent, which also means you don't need the new namenode field.

          Daryn Sharp added a comment -

          Would you please provide a counter-argument to my points on why JobClient and DFSClient should be used instead of the lowest level rpc methods?

          Jitendra Nath Pandey added a comment -

          HftpFileSystem.java:
          getCanonicalServiceName in this case uses hftpUri, instead of getUri. I think the difference is that hftpUri uses https port, while getUri uses nnAddr.getPort(). It doesn't seem to be identical to the default implementation in FileSystem.

          DFSClient.java:
          > We need to renew/cancel tokens with the same configuration used to get the token.
          This is not a requirement, particularly because delegation tokens are usually obtained at the client, while they are renewed at JT. We can't expect same configurations.

          1. Current implementation calls createRPCNamenode which unnecessarily forces RPC and eschews the RetryProxy. createNamenode abstracts both of these details.
          2. Exception/retry policies appear to be changed only for file creation, thus not an issue.
          3. Looking deeper, actually need to instantiate DFSClient to get the configuration timeout/retry for socket connects. Also tags the client with the job id for easier debugging.

          This is not relevant to this jira. We should discuss about it in a different jira, if something needs to be fixed here.

          JobClient.java
          Lets just focus on what needs to be fixed for this jira.

          Daryn Sharp added a comment -

          HftpFileSystem

          getCanonicalServiceName in this case uses hftpUri, instead of getUri.

          Yes, the prior code's getCanonicalServiceName did use hftpUri. However, getUri() returns hftpUri, so the default implementation of getCanonicalServiceName is sufficient. Am I overlooking a detail?

          I think the difference is that hftpUri uses https port, while getUri uses nnAddr.getPort(). It doesn't seem to be identical to the default implementation in FileSystem.

          No, hftpUri uses nnAddr.getPort(). The nnHttpUrl contains the https port. The FileSystem api expects FileSystem.getUri(fs.getUri(), conf) to return the same object, which I believe it does in this case.

          DFSClient/JobClient

          We need to renew/cancel tokens with the same configuration used to get the token.

          This is not a requirement, particularly because delegation tokens are usually obtained at the client, while they are renewed at JT. We can't expect same configurations.

          Maybe I'm overlooking something, but aren't the client and the JT operating with the same jobConf? The revamp of token renewal changed the behavior/instantiation of the client to omit settings by calling a lower-level method that used to be indirectly invoked by the client ctor. The ctor is performing configuration that is now lost.

          Lets just focus on what needs to be fixed for this jira.

          I am concerned about the impact on reconnection, hence why I felt it is relevant to this jira. I am only trying to instantiate the client exactly as before to minimize unexpected risk.

          All said, I'll investigate the impact of the settings that token renewal is omitting. If they have no impact on the reliability of reconnect and renew/cancel, I'll consider leaving it as-is, even though I feel it was not a good design decision to sacrifice previous flexibility under the assumption it's not needed for a particular use case.

          Daryn Sharp added a comment -

          Please let me know if the client instantiations are the only remaining issue. If not, would you please post the issues so the next patch might be the last. Thanks.

          Jitendra Nath Pandey added a comment -

          Following open issues in my opinion, if I am not missing something:
          1) Client instantiations.
          2) createRPCNamenode changed to createNamenode in DFSClient.
          3) New namenode field in DistributedFileSystem, which is redundant given that DFSClient also has a similar field. That means leaving the setting of service in DFSClient.
          4) Remove FIXMEs.

          Daryn Sharp added a comment -

          I believe I have addressed the listed issues. I also tightened up the sanity checks and exceptions for the building of a service name. Special-cased the loopback address name so it didn't turn into something valid yet silly: "localhost.your-default-domain".

          Will add more tests, but please review.
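The qualification rules discussed in this thread (interior dot means already qualified, trailing dot marks a fully qualified name, loopback is never qualified, and short names get the search domain appended) can be sketched as below. This is a hypothetical illustration, not the patch code, and "example.com" is a placeholder search domain:

```java
public class QualifyDemo {
    // Toy version of hostname qualification; searchDomain stands in for
    // the first entry of the resolver's domain search path.
    static String qualify(String host, String searchDomain) {
        if (host.equals("localhost")) {
            return host;                                  // loopback: never qualify
        }
        if (host.endsWith(".")) {
            return host.substring(0, host.length() - 1);  // trailing dot: already FQ
        }
        if (host.contains(".")) {
            return host;                                  // interior dot: assume qualified
        }
        return host + "." + searchDomain;                 // short name: append domain
    }

    public static void main(String[] args) {
        if (!qualify("localhost", "example.com").equals("localhost"))
            throw new AssertionError("loopback must not be qualified");
        if (!qualify("nn.example.com.", "example.com").equals("nn.example.com"))
            throw new AssertionError("trailing dot should be stripped");
        if (!qualify("nn", "example.com").equals("nn.example.com"))
            throw new AssertionError("short name should be qualified");
        System.out.println("ok");
    }
}
```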

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12495101/HADOOP-7510-8.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 12 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/203//console

          This message is automatically generated.

          Jitendra Nath Pandey added a comment -

          Please remove the rack related change in MiniDFSCluster. That doesn't seem necessary. It will be surprising if some tests break without that change, and might indicate some other bug.

          Patch looks good to me otherwise.

          Daryn Sharp added a comment -

          Added a bunch of tests. To adequately test the resolver changes, I had to turn it into an object. Added many tests to verify the token service is set & decoded properly. Updated the sasl rpc test to verify that the token's service is honoring the config setting.

          Will run full test suite overnight and post results.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12495286/HADOOP-7510-9.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 18 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/211//console

          This message is automatically generated.

          Daryn Sharp added a comment -

          test-patch:

          [exec] +1 overall.
          [exec]
          [exec] +1 @author. The patch does not contain any @author tags.
          [exec]
          [exec] +1 tests included. The patch appears to include 18 new or modified tests.
          [exec]
          [exec] +1 javadoc. The javadoc tool did not generate any warning messages.
          [exec]
          [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings.
          [exec]
          [exec] +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          Failed tests which I believe are "known failures":

          security/TestMapredGroupMappingServiceRefresh.java
          security/TestRefreshUserMappings.java
          security/authentication/client/TestKerberosAuthenticator.java
          security/authentication/server/TestKerberosAuthenticationHandler.java
          security/authentication/util/TestKerberosName.java
          tools/TestDelegationTokenFetcher.java

          I'm just performing a little more QE testing. If it passes, I think the patch is ready to go.

          Jitendra Nath Pandey added a comment -

          > Please remove the rack related change in MiniDFSCluster. That doesn't seem necessary. It will be surprising
          > if some tests break without that change, and might indicate some other bug.
          Please update the patch to address this.

          Jitendra Nath Pandey added a comment -

          Please also rebase the patch against the latest state of the branch, because the HADOOP-7661 commit might conflict with this one. HADOOP-7661 requires that getCanonicalServiceName is not removed from HftpFileSystem.

          TestDelegationTokenFetcher is passing in my dev setup. Please make sure it is not broken by the patch.

          Daryn Sharp added a comment -

          Removed changes to MiniDFSCluster as requested, although I have concerns about having it hardcode the old token service format.

          Removed the trailing dot from rooted hostnames due to a conflict with Kerberos host equivalence.

          Fixed hftp/hsftp to return the correct uri with the correct port, but to return a canonical service with the secure port (which strays from other filesystems).

          Modified the getCanonicalService changes to be compatible with expectations of the TokenCache.

          Even more tests! Will perform more QE tests on a secure cluster.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12496291/HADOOP-7510-10.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 24 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/226//console

          This message is automatically generated.

          Robert Joseph Evans added a comment -

          I read through the match and it looks good to me.

          Robert Joseph Evans added a comment -

          Sorry s/match/patch/g

          Jitendra Nath Pandey added a comment -

          > Fixed hftp/hsftp to return the correct uri with the correct port, but return a canonical service with the
          > secure port (which strays from other filesystems).
          What problems did you run into that compelled you to use the https port for the service in the hftp file system? Earlier, and for hdfs, it uses the rpc port. The rpc port is used because eventually it is the rpc port that is passed to the TokenSelector to select a token (from ipc.Client). Was this tested with a secure hftp setup, or is there a different call flow that I am missing?

          > Modified the getCanonicalService changes to be compatible with expectations of the TokenCache.
          This method returns null for all the file systems that don't have a valid authority. Why is this change required? This is a public API; I am uncomfortable modifying it to return null for all file systems except hdfs and hftp. TokenCache should work fine if a string is returned; it has been working fine with strings so far.

          Daryn Sharp added a comment -

          What problems did you run into that compelled you to use the https port for the service in the hftp file system? Earlier, and for hdfs, it uses the rpc port. The rpc port is used because eventually it is the rpc port that is passed to the TokenSelector to select a token (from ipc.Client). Was this tested with a secure hftp setup, or is there a different call flow that I am missing?

          The way it worked before this change is that getUri() returned the wrong port, i.e. the https port instead of the http port. Thus if you tried newFs = FileSystem.get(hftpFs.getUri(), conf), you would get a filesystem that wouldn't work because it would try to talk http on the https port.

          My prior patch changed the uri to return the correct port. Unfortunately, the token renewal was relying on the broken port behavior in the uri (https instead of http) so that getCanonicalService would return the https port when it extracted the authority from the uri. My last change returns the correct port in uri, and the https port for the service.
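          To make the port mismatch concrete, here is a minimal, self-contained Java sketch. This is illustrative only, not actual Hadoop code; the class, method names, hostname, and port numbers are hypothetical stand-ins for the http/https listeners.

```java
import java.net.URI;

// Illustrative only: models a filesystem whose getUri() used to advertise
// the https port even though clients speak http on that scheme.
public class UriPortMismatch {
    static final int HTTP_PORT = 50070;   // hypothetical http listener
    static final int HTTPS_PORT = 50470;  // hypothetical https listener

    // Old behavior: the URI carried the https port.
    public static URI brokenGetUri(String host) {
        return URI.create("hftp://" + host + ":" + HTTPS_PORT);
    }

    // Fixed behavior: the URI carries the port the transport actually uses.
    public static URI fixedGetUri(String host) {
        return URI.create("hftp://" + host + ":" + HTTP_PORT);
    }

    public static void main(String[] args) {
        // A client re-created from the broken URI would talk http on the https port.
        System.out.println(brokenGetUri("nn.example.com").getPort() == HTTP_PORT); // false
        System.out.println(fixedGetUri("nn.example.com").getPort() == HTTP_PORT);  // true
    }
}
```

          A new filesystem instance built from the broken URI would connect to the https listener while speaking http, which is the failure mode described above.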

          Nothing changed with regard to rpc. A hdfs token, whether obtained over rpc or http, used to be universal and could be used by either transport. The token renewal change made it so a token acquired over rpc can be used with hftp, but a token obtained over hftp cannot be used for rpc. Hftp looks for either a hftp or rpc token, makes a copy of the token and resets the copy's type to rpc. This copy is then serialized into hftp requests. The renewer requires the unaltered hftp token to contain the https address. None of this behavior was changed.

          Modified the getCanonicalService changes to be compatible with expectations of the TokenCache.

          This method returns null for all the file systems that don't have a valid authority. Why is this change required?

          Your change in HADOOP-7661 undid the agreed upon change in HADOOP-7602. I simply changed it back.

          To summarize: The TokenCache is the only user of getCanonicalServiceName. The cache expects the value to be the token's service. Until just recently, the default behavior for getCanonicalServiceName was to encode the authority of the fs's uri into a service. If the uri had no authority, and thus lacked tokens, it would return junk values like ":0". No external filesystems that relied on this behavior could have possibly produced a working token.

          Earlier, you were very concerned about the risk of returning an empty string instead of ":0" for filesystems with no authority. On HADOOP-7602, the agreement was to return null instead of ":0" and have the token cache skip the filesystem.
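          The rule being described can be sketched in plain Java. This is a hedged illustration only, not the real SecurityUtil.buildDTServiceName (which also handles ip-vs-hostname resolution per config); the class and method names here are invented.

```java
import java.net.URI;

// Sketch of the agreed-upon behavior (per the HADOOP-7602 discussion):
// derive a token service from a filesystem URI, returning null (instead of
// junk like ":0") when the URI has no authority, so the token cache can
// skip that filesystem.
public class TokenServiceSketch {
    public static String serviceFor(URI uri, int defaultPort) {
        if (uri.getHost() == null) {
            return null;  // no authority -> no tokens -> cache skips it
        }
        int port = (uri.getPort() != -1) ? uri.getPort() : defaultPort;
        return uri.getHost() + ":" + port;
    }

    public static void main(String[] args) {
        System.out.println(serviceFor(URI.create("hdfs://nn.example.com:8020/"), 8020)); // nn.example.com:8020
        System.out.println(serviceFor(URI.create("file:///tmp"), 0));                    // null
    }
}
```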

          This is a public API, I am uncomfortable modifying it to return null for all file systems except hdfs, hftp.

          That is an incorrect reading of the code. The default is to return a token service for any filesystem that contains an authority (like before). Null is not returned for all other filesystems – null is only returned when the filesystem has no authority, per HADOOP-7602.

          If you are concerned about changing a public api, I'm not sure why completely changing the semantics of getCanonicalServiceName is not a cause for concern. It's sometimes a token service (as the method name implies), and sometimes a uri. That's inconsistent, very risky, and incompatible with the token cache's expectation of it being a service. Just because it "works" doesn't mean it's right to abuse the api.

          Kihwal Lee added a comment -

          I finished reviewing the code and thought it was good, but after seeing Jitendra's comment I decided to wait for Daryn's response. Having read his justification, I think it is quite reasonable. So I feel good about this patch again.

          I've been following the discussion around delegation token. It seems there are some areas that can be improved and there are many ways of doing them. I think this patch is a reasonable compromise.

          Jitendra Nath Pandey added a comment -

          This jira is about setting the hostname in the token service. Shouldn't the ports in HftpFileSystem remain unchanged as far as this patch is concerned? Please file a different jira to fix the "wrong" ports.

          > Your change in HADOOP-7661 undid the agreed upon change in HADOOP-7602. I simply changed it back.
          Unfortunately, the HADOOP-7602 fix was incomplete: it fixed only HarFileSystem.
          It was later discovered that the problem was more general, and HADOOP-7602 didn't fix the NPE for other file systems.
          HADOOP-7661 fixed it for all file systems that didn't have a valid authority. I think returning null for all file systems except Hdfs and Hftp is not a good idea. This api is used only in TokenCache in hadoop, but it is a public API (unfortunately).

          > If you are concerned about changing a public api, I'm not sure why completely changing the semantics of
          > getCanonicalServiceName is not a cause for concern.
          Before HADOOP-7661, the javadoc for this method said "@return a URI string that uniquely identifies this file system". Given that, returning the URI by default is not really a big change and is not an "abuse". In fact, now I am convinced that HarFileSystem should also be changed back to return the URI instead of null, but that should be addressed in a different jira.

          Daryn Sharp added a comment -

          This jira is about setting the hostname in the token service. Shouldn't the ports in HftpFileSystem remain unchanged as far as this patch is concerned? Please file a different jira to fix the "wrong" ports.

          This change has been in the patch for quite some time, and it was not an objection as of 9/16. Why is this now an issue?

          But it was discovered that problem was more general and HADOOP-7602 didn't fix NPE for other file systems.

          Please re-examine the changes in this patch. I have already fixed the root issue involving the NPE, as per our discussions during HADOOP-7602.

          I think returning null for all file systems except Hdfs and Hftp is not a good idea.

          Please re-examine the code and re-read my previous comment. That's not what it's doing.

          Before HADOOP-7661, javadoc for this method said "@return a URI string that uniquely identifies this file system".

          Unfortunately, the documentation was wrong. Without my patch, the TokenCache uses this method to stomp on the service in a token. That's not going to work. Given the prior concern over changing it at all, I'm honestly shocked that you want to completely change its semantics. getCanonicalServiceName is for getting a token's service. getUri() is for getting the uri. As much as I may have wanted the service to be a uri, that was nixed during the renewal changes.

          In fact, now I am convinced that HarFileSystem should also be changed back to return URI instead of null.

          What benefit would that provide?

          Jitendra Nath Pandey added a comment -

          I don't really agree with the approach this patch takes regarding getCanonicalServiceName, and I think some of the changes should rather have been handled in separate jiras. But I will not block it.
          Given the size of the patch and some of the fundamental changes it makes, I recommend that another committer take a brief look at the patch before committing.

          Suresh Srinivas added a comment -

          Jitendra, I will take a look at the patch. Here are some preliminary comments. There were a lot of comments in this and related jiras, which required a lot of time to catch up on.

          1. NetUtils.java
            • throwIllegalAuthorityArgument - A separate method just for throwing an exception is unnecessary. Could you please just throw the exception instead of calling a method?
            • Why is createSocketAddr() change needed in this patch? If it is really needed -
              • Instead of "host://" please use "dummyscheme://"
              • createSocketAddr - Method looks much better now. But why is it calling makeSocketAddr()? This method does not have any requirement that makeSocketAddr() is trying to address, right?
            • Please add class javadoc to HostResolver
            • Please make setHostResolver package private static instead of protected static. protected static does not make sense, and what you need is package private for test access. The same holds for SecurityUtil#setTokenServiceUseIp()
            • HostResolver#getInetAddressByName() seems unnecessary. You can just use InetAddress.getByName()
            • HostResolver#getByName() javadoc mentions getCanonicalHostName(), not sure what that is. If it is FileSystem#getCanonicalServiceName(), there is no need to reference FileSystem into NetUtils.
          2. It may be a good idea to move HostResolver, which uses Sun-specific libraries, to a separate file.
          3. HftpFileSystem.java - indentation of some of the methods is off (due to tab).

          Will post the comments based on reviewing the rest of the patch soon.

          Suresh Srinivas added a comment -
          1. FileSystem#getCanonicalServiceName()
            • "The token will not attempt to acquire tokens if the service is null." Needs rewording.
            • getCanonicalServiceName() is poorly named and we are tightly coupling token semantics with it. I know this is backward incompatible, but given that you have changed the semantics to return the token service name, should the method name change to getTokenServiceName(), or perhaps add a new method leaving the existing method as it is?
            • There is a link to buildDTServiceName(). It returns a null service name if the authority is null. Yet getCanonicalServiceName() is saying that if a file system does not implement tokens, it returns null. Does that mean any file system that has a null authority does not support tokens? This is somewhat taking the cache implementation that we have and trying to define the API behavior from it.
            • "@see {@link SecurityUtil#buildDTServiceName(URI, int)}" should be "@see SecurityUtil#buildDTServiceName(URI, int)"

          2. HftpFileSystem.java - minor - You just need one copy of HftpDelegationTokenSelector and HDFSDelegationTo
          3. SecurityUtil.java
            • getTokenServiceAddr(), buildDTServiceName(), buildTokenService() should use makeSocketAddress()?
          4. Why is NetUtils#getConnectAddress() changed to use makeSocketAddress()?
          5. DFSClient.java
            • Datanodes make use of block tokens and not delegation tokens. Do you need to call makeSocketAddr() for the datanode proxy?
          6. MiniDFSCluster
            • Please add detail about config param use_ip in "//NOTE: the following is only true if use_ip=true"
          7. HftpFileSystem.java
            • Get rid of TODO: from the comment
            • selectHdfsDelegationToken and selectHftpDelegationToken unnecessarily declare throws IOException
          8. TestNetUtils
            • There is no need to catch UnknownHostException and ignore it in a couple of places

          Additional question:
          How does the token cache functionality work given an IP address in the URI? A token with a hostname-based service that already exists in the token cache will not match a URI that has an IP address, right? If so, that is changed behavior from the previous implementation.
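          The concern can be sketched in plain Java. This is illustrative only; Hadoop's token selector does the real matching, and the class, method names, hostname, and IP below are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: token selection is effectively an exact string match
// on the token's service, so a token stored under a hostname-based service
// is not found when the lookup key is derived from an IP address.
public class ServiceMatchSketch {
    // Hypothetical stand-in for a token selector's cache lookup.
    public static boolean selectToken(Map<String, String> tokens, String service) {
        return tokens.containsKey(service);
    }

    public static void main(String[] args) {
        Map<String, String> tokens = new HashMap<>();
        tokens.put("nn.example.com:8020", "token-bytes"); // hostname-based service

        System.out.println(selectToken(tokens, "nn.example.com:8020")); // true
        System.out.println(selectToken(tokens, "10.0.0.1:8020"));       // false: IP-based key misses
    }
}
```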

          Daryn Sharp added a comment -

          I'm going through all the thoughtful comments, but I'd like to clarify this one in particular since it's been mentioned a few times:

          getCanonicalServiceName() is poorly named and we are tightly coupling token semantics with it. I know this is backward incompatible, but given that you have changed the semantics to return the token service name

          This patch does not change the semantics of getCanonicalServiceName. It always returned a token service until HADOOP-7661. Jitendra's change altered the semantics to sometimes return a service and sometimes return a uri. This patch was updated to change it back.

          Suresh Srinivas added a comment -

          This patch does not change the semantics of getCanonicalServiceName. It always returned a token service until HADOOP-7661. Jitendra's change altered the semantics to sometimes return a service and sometimes return a uri. This patch was updated to change it back.

          Daryn, I do not want this to distract us from making quick progress on this. However, I have to disagree here. Here is what the HADOOP-7661 change made:

             /**
          -   * Get a canonical name for this file system.
          -   * @return a URI string that uniquely identifies this file system
          +   * Get a canonical name for this file system. It returns the uri of the file
          +   * system unless overridden by a FileSystem implementation. File Systems with
          +   * a valid authority can choose to return host:port or ip:port.
          +   * 
          +   * @return A string that uniquely identifies this file system
              */
             public String getCanonicalServiceName() {
          -    return SecurityUtil.buildDTServiceName(getUri(), getDefaultPort());
          +    return getUri().toString();
             }
          

          Looking at the javadoc of the method, there is nothing that talks about the token service (only the implementation does). All the javadoc says is that it returns a URI. Given a generic and vague method name like that, you could add any description to it and it would seem okay. Granted, getCanonicalServiceName() was added for tokens. However, that intent could have been made really clear by a name such as getTokenServiceName(). Some of the disagreements about this method, in my opinion, stem from its poor naming.

          We are compounding that by defining it in terms of the implementation details of the token cache (such as: if the authority is null, then getCanonicalServiceName returns null, etc.).

          That said, I would like to focus on getting this patch done and not drag out these discussions.
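          The HADOOP-7661 diff quoted above boils down to two different string forms. As a minimal, hedged sketch (plain Java, not the actual Hadoop FileSystem/SecurityUtil code; the class and method names here are purely illustrative), the pre-7661 form reduces the URI to an authority-based host:port service, while the post-7661 form returns the whole URI:

```java
import java.net.URI;

// Illustrative sketch only -- not Hadoop code. Contrasts the two return
// styles discussed above for a FileSystem's canonical service name.
class CanonicalNameSketch {
    // Pre-HADOOP-7661 style: reduce the URI to host:port, applying the
    // filesystem's default port when the URI omits one (roughly the intent
    // of SecurityUtil.buildDTServiceName, which keys the token service).
    static String serviceStyle(URI uri, int defaultPort) {
        int port = (uri.getPort() == -1) ? defaultPort : uri.getPort();
        return uri.getHost() + ":" + port;
    }

    // Post-HADOOP-7661 style: return the URI string itself.
    static String uriStyle(URI uri) {
        return uri.toString();
    }
}
```

For hdfs://nn.example.com with a default port of 8020, the first form yields an authority-based service key while the second yields the full URI string, which is why callers that treated the return value as a token service key saw different behavior after 7661.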

          Suresh Srinivas added a comment -

          Daryn, I want your opinion on this - should we turn off the HostResolver functionality based on the use_ip configuration? That way, a system that does not use host:port in the token service remains largely unaffected.
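          The use_ip switch under discussion can be sketched as follows (a hedged, self-contained illustration in plain Java; the real logic lives in SecurityUtil, and this method body is an assumption, not the actual implementation): with use_ip=true the token service is keyed by the resolved address, and with use_ip=false by the hostname the client actually used, so the token survives a server ip change:

```java
import java.net.InetSocketAddress;

// Hedged sketch of the behavior being discussed -- not the real SecurityUtil.
class TokenServiceSketch {
    static String buildTokenService(InetSocketAddress addr, boolean useIp) {
        // Legacy behavior keys the token by ip:port; the opt-in behavior
        // keys it by the hostname the client used. getHostString() returns
        // the original hostname without triggering a reverse DNS lookup.
        String host = (useIp && addr.getAddress() != null)
            ? addr.getAddress().getHostAddress()
            : addr.getHostString();
        return host + ":" + addr.getPort();
    }
}
```

Note that an unresolved address carries no ip at all, so the sketch falls back to the hostname in that case; this mirrors the concern in the thread about avoiding extra DNS lookups on every connection.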

          Nathan Roberts added a comment -

          On trunk, we should definitely look at fixing up getCanonicalServiceName(). This method (whether it be its name, its javadoc, or its use) has caused significant confusion, which is an obvious sign it's an interface that needs some rethinking.

          Daryn Sharp added a comment -

          Good question. I had considered disabling the custom resolver based on use_ip, but it provides additional benefits that I think may be useful in general. The javadocs that I'm adding detail these differences from the standard resolver. After I post the patch, you can help me decide if the resolver should be conditionally enabled.

          I don't want to belabor the getCanonicalServiceName issue either. I completely agree that the method is poorly named (although it does contain "Service" in its name) and the javadocs are wrong. However, if there is any confusion: I did not implement this method and it has existed for quite a while. Since this is a sustaining release, I feel more comfortable leaving it as it was for now.

          The TokenCache uses the return value to "assume" it knows how the Credentials is keying the tokens. Credentials doesn't even use the method. This false coupling appears to be the only reason the method exists. My renewal patch contained a number of beneficial improvements, including one that negates the need for the TokenCache to have any intimate knowledge of the internal implementation of Credentials. I'll file a jira against trunk to discuss correcting this behavior, in which case the getCanonicalServiceName method need not even exist...
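          The coupling described above can be made concrete with a toy sketch (hypothetical names, plain Java; the real Credentials class keys tokens differently, and this is only a model of the failure mode): a lookup keyed by service string succeeds only if the caller derives exactly the same key, which is why a token stored under ip:port becomes unusable once callers start asking with host:port:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of service-keyed token storage -- hypothetical, not Hadoop code.
class CredentialsSketch {
    private final Map<String, String> tokens = new HashMap<>();

    void addToken(String service, String token) {
        tokens.put(service, token);
    }

    // Returns null when the requested key differs from the stored one,
    // e.g. stored as "10.0.0.1:8020" but requested as "nn.example.com:8020".
    String getToken(String service) {
        return tokens.get(service);
    }
}
```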

          Daryn Sharp added a comment -

          Address Suresh's comments, as clarified by a discussion with him. Please let me know if I missed anything. I am kicking off the full test suite to run tonight.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12496584/HADOOP-7510-11.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 24 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/234//console

          This message is automatically generated.

          Suresh Srinivas added a comment -

          Daryn, I have some minor comments:

          1. The following comments are also there because the hostResolver functionality is needed only when use_ip is enabled, even if it is safe otherwise. You could make a case otherwise; until then there is no need for the comments below.
            +  // although this setting isn't strictly related to this class, it will
            +  // short out the custom host resolution until it can be proven to be safe
            
            +   * NOTE: although this resolver is of general utility, it is only used
            +   *       if hadoop.security.token.service.use_ip=false 
            
          2. Please remove "Intended only for temporary use by NetUtils. Do not use.". You are just providing access to the config item, where this comment is not necessary.
          Suresh Srinivas added a comment -

          Please run test-patch and attach the results and also please comment on results of unit tests.

          Daryn Sharp added a comment -

          Update comments. Update unit tests for failures due to conditionalizing the qualified host resolver. Will post test-patch and test results ASAP.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12496794/HADOOP-7510-12.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 24 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/238//console

          This message is automatically generated.

          Daryn Sharp added a comment -

          Tests pass, running test-patch...

          Suresh Srinivas added a comment -

          +1 for the patch - will commit once test-patch completes

          Daryn Sharp added a comment -

          [exec] +1 overall.
          [exec]
          [exec] +1 @author. The patch does not contain any @author tags.
          [exec]
          [exec] +1 tests included. The patch appears to include 24 new or modified tests.
          [exec]
          [exec] +1 javadoc. The javadoc tool did not generate any warning messages.
          [exec]
          [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings.
          [exec]
          [exec] +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
          [exec]

          Matt Foley added a comment -

          Suresh also merged to 0.20-security, but typo'ed the bug number:
          ------------------------------------------------------------------------
          r1176646 | suresh | 2011-09-27 15:21:14 -0700 (Tue, 27 Sep 2011) | 2 lines
          Merging change r1176645 for HADOOP-7515 from 0.20.205
          ------------------------------------------------------------------------

          Matt Foley added a comment -

          Daryn, please port this to trunk now that the kinks are worked out. Thanks.

          Todd Lipcon added a comment -

          Any news on the trunk forward-port?

          Suresh Srinivas added a comment -

          This is blocked by MR-2764. Should be done hopefully this week.

          Tsz Wo Nicholas Sze added a comment -

          This issue is no longer blocked. MAPREDUCE-2764 was committed to 0.23 and trunk. Note that HDFS-2385 is blocked by this.

          Daryn Sharp added a comment -

          Merge is being worked on, but delayed by 205 issues that should be resolved soon.

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #1572 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1572/)
          HADOOP-7808. Port HADOOP-7510 - Add configurable option to use original hostname in token instead of IP to allow server IP change. Contributed by Daryn Sharp.

          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1227737
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/spi/Util.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemCanonicalization.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/MiniRPCBenchmark.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/NetUtilsTestResolver.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java
          Hudson added a comment -

          Integrated in Hadoop-Common-trunk-Commit #1500 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1500/)
          HADOOP-7808. Port HADOOP-7510 - Add configurable option to use original hostname in token instead of IP to allow server IP change. Contributed by Daryn Sharp.

          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1227737
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/spi/Util.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemCanonicalization.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/MiniRPCBenchmark.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/NetUtilsTestResolver.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk-Commit #1520 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1520/)
          HADOOP-7808. Port HADOOP-7510 - Add configurable option to use original hostname in token instead of IP to allow server IP change. Contributed by Daryn Sharp.

          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1227737
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/spi/Util.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemCanonicalization.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/MiniRPCBenchmark.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/NetUtilsTestResolver.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #917 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/917/)
          HADOOP-7808. Port HADOOP-7510 - Add configurable option to use original hostname in token instead of IP to allow server IP change. Contributed by Daryn Sharp.

          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1227737
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/spi/Util.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemCanonicalization.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/MiniRPCBenchmark.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/NetUtilsTestResolver.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #950 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/950/)
          HADOOP-7808. Port HADOOP-7510 - Add configurable option to use original hostname in token instead of IP to allow server IP change. Contributed by Daryn Sharp.

          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1227737
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics/spi/Util.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/Servers.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemCanonicalization.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/MiniRPCBenchmark.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/NetUtilsTestResolver.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java
          Harsh J added a comment -

          Please fix this bit in HftpFileSystem as well, if still relevant:

              //TODO: un-comment the following once HDFS-7510 is committed. 
          //    return getConf().getInt(DFSConfigKeys.DFS_NAMENODE_HTTP_PORT_KEY,
          //        DFSConfigKeys.DFS_NAMENODE_HTTP_PORT_DEFAULT);
            }
          
          Kihwal Lee added a comment -

          Please fix this bit in HftpFileSystem as well, if still relevant:

          It is being worked on in a separate Jira, HDFS-2784

          Eli Collins added a comment -

          Looks like this still needs to be merged to 23.

          Daryn Sharp added a comment -

          Yes, this is actively being worked on.

          Daryn Sharp added a comment -

Taking out of Patch Available until the other patches for trunk, etc. are ready.

          Daryn Sharp added a comment -

          The bulk of the 205 changes were brought in when hftp and webhdfs token changes were merged to trunk. This patch includes:

          • the most crucial change to IPC's selection of the token
          • NetUtils.getConnectAddress will substitute the local host name for the wildcard address, instead of a hardcoded loopback reference. Token services are sometimes based on this value, so returning the loopback by default makes no sense.
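The NetUtils.getConnectAddress change described above can be sketched roughly as follows. This is a minimal illustration, not the actual Hadoop implementation; the class name ConnectAddressSketch and its fallback behavior are hypothetical:

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

public class ConnectAddressSketch {
    /**
     * If the server bound to the wildcard address (0.0.0.0), substitute the
     * local host name instead of a hardcoded loopback reference, since token
     * services may be derived from the returned address.
     */
    public static InetSocketAddress getConnectAddress(InetSocketAddress bindAddr) {
        InetAddress addr = bindAddr.getAddress();
        if (addr == null || !addr.isAnyLocalAddress()) {
            return bindAddr; // already a concrete (or unresolved) address
        }
        try {
            String host = InetAddress.getLocalHost().getHostName();
            return new InetSocketAddress(host, bindAddr.getPort());
        } catch (UnknownHostException e) {
            // Hypothetical fallback: keep the wildcard address rather than
            // inventing a loopback reference.
            return bindAddr;
        }
    }
}
```

A token whose service string is built from this address then carries a usable host name, so it can still be matched after the server's IP changes.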
          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12521695/HADOOP-7510.trunk.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in .

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/833//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/833//console

          This message is automatically generated.

          Daryn Sharp added a comment -

          For branch-0.23, had to fix a conflict with the removal of SuppressWarnings, and the test file wasn't merging correctly.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12522265/HADOOP-7510.branch-0.23.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/845//console

          This message is automatically generated.

          Daryn Sharp added a comment -

          The last failure is because Jenkins tried to apply the branch-0.23 version of the patch to trunk.

          Robert Joseph Evans added a comment -

          The latest set of patches look good to me. They are fairly small and straightforward. +1

          Robert Joseph Evans added a comment -

          Thanks Daryn, I put this into trunk, branch-2, and branch-0.23

          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk-Commit #2079 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2079/)
          HADOOP-7510. Tokens should use original hostname provided instead of ip (Daryn Sharp via bobby) (Revision 1325500)

          Result = SUCCESS
          bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1325500
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
          Hudson added a comment -

          Integrated in Hadoop-Common-trunk-Commit #2065 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2065/)
          HADOOP-7510. Tokens should use original hostname provided instead of ip (Daryn Sharp via bobby) (Revision 1325500)

          Result = SUCCESS
          bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1325500
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #2139 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2139/)
          HADOOP-7510. Tokens should use original hostname provided instead of ip (Daryn Sharp via bobby) (Revision 1325500)

          Result = SUCCESS
          bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1325500
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-0.23-Build #226 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/226/)
          HADOOP-7510. Tokens should use original hostname provided instead of ip (Daryn Sharp via bobby) (Revision 1325505)

          Result = SUCCESS
          bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1325505
          Files :

          • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #1013 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1013/)
          HADOOP-7510. Tokens should use original hostname provided instead of ip (Daryn Sharp via bobby) (Revision 1325500)

          Result = FAILURE
          bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1325500
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #1048 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1048/)
          HADOOP-7510. Tokens should use original hostname provided instead of ip (Daryn Sharp via bobby) (Revision 1325500)

          Result = SUCCESS
          bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1325500
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java

            People

            • Assignee: Daryn Sharp
            • Reporter: Daryn Sharp
            • Votes: 0
            • Watchers: 12