Hadoop HDFS
HDFS-3148

The client should be able to use multiple local interfaces for data transfer

    Details

    • Hadoop Flags:
      Reviewed

      Description

      HDFS-3147 covers using multiple interfaces on the server (Datanode) side. Clients should also be able to utilize multiple local interfaces for outbound connections instead of always using the interface for the local hostname. This can be accomplished with a new configuration parameter (dfs.client.local.interfaces) that accepts a list of interfaces the client should use. Acceptable configuration values are the same as for the dfs.datanode.available.interfaces parameter. The client binds its socket to a specific interface, which enables outbound traffic to use that interface. Binding the client socket to a specific address is not by itself sufficient to ensure egress traffic uses that interface: e.g. if multiple interfaces are on the same subnet, the host requires IP rules that use the source address (which bind sets) to select the destination interface. The SO_BINDTODEVICE socket option could be used to select a specific interface for the connection instead; however, it requires JNI (it is not in Java's SocketOptions) and root access, which we don't want to require clients to have.

      Like HDFS-3147, the client can use multiple local interfaces for data transfer. Since clients already cache their connections to DNs, choosing a local interface at random seems like a good policy. Users can also pin a specific client to a specific interface by specifying just that interface in dfs.client.local.interfaces.

      This change was discussed in HADOOP-6210 a while back, and is useful independently of the other HDFS-3140 changes.
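The bind-before-connect technique the description relies on can be sketched in plain Java (a minimal illustration, not the actual DFSClient code; the loopback server here just stands in for a Datanode, and 127.0.0.1 stands in for a real local interface address):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class BindExample {
    // Bind the client socket to a chosen local address (port 0 = any free port),
    // then connect; outbound traffic for this connection carries that source address.
    static Socket connectFrom(InetSocketAddress localAddr,
                              InetSocketAddress remoteAddr) throws IOException {
        Socket s = new Socket();
        s.bind(localAddr);            // selects the source IP for this connection
        s.connect(remoteAddr, 10000); // 10s connect timeout
        return s;
    }

    public static void main(String[] args) throws IOException {
        // A loopback server stands in for a remote Datanode.
        try (ServerSocket server = new ServerSocket(0)) {
            InetSocketAddress local = new InetSocketAddress("127.0.0.1", 0);
            InetSocketAddress remote =
                new InetSocketAddress("127.0.0.1", server.getLocalPort());
            try (Socket s = connectFrom(local, remote)) {
                System.out.println(s.getLocalAddress().getHostAddress());
            }
        }
    }
}
```

Note that, as the description says, bind() only fixes the source address; whether egress actually leaves via the corresponding interface still depends on the host's routing configuration.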

      1. hdfs-3148.txt
        11 kB
        Eli Collins
      2. hdfs-3148.txt
        11 kB
        Eli Collins
      3. hdfs-3148.txt
        10 kB
        Eli Collins
      4. hdfs-3148-b1.txt
        21 kB
        Eli Collins
      5. hdfs-3148-b1.txt
        20 kB
        Eli Collins

        Issue Links

          Activity

          Eli Collins added a comment -

          Patch attached (for trunk). Depends on HADOOP-8210.

          Eli Collins added a comment -

          Patch for branch-1 attached. Aside from the new tests I also tested:

          • Verify multiple interface names (e.g. eth3, wlan2) work
          • Verify IPs for these interfaces work
          • Verify specifying 172.29.20.0/20 matches 172.29.20.38 and 172.29.20.39
          • TestFTPFileSystem for sanity; we only use commons-net for FTPFileSystem
          Todd Lipcon added a comment -
          • I think it makes more sense to make getLocalInterfaceAddrs static, and take localInterfaces as a parameter.

          +  public static final String  DFS_CLIENT_LOCAL_INTERFACES = "dfs.client.local.interfaces";
          

          Move this higher in the file, near the other DFS_CLIENT configs


          +    final int idx = r.nextInt(localInterfaceAddrs.length);
          +    final SocketAddress addr = localInterfaceAddrs[idx];
          +    if (LOG.isDebugEnabled()) {
          +      LOG.debug("Using local interface " + localInterfaces[idx] + " " + addr);
          

          This doesn't seem right, since localInterfaces and localInterfaceAddrs may have different lengths – a given configured local interface could have multiple addrs in the localInterfaceAddrs list.

          This brings up another question: if a NIC has multiple IPs, should it be weighted in the load balancing based on the number of IPs assigned? That doesn't seem right.

          Maybe the right solution to both of these issues is to actually require that the list of addresses decided upon has at most one IP corresponding to each device?

          Another possibility is that you could change the member variable to a MultiMap<String, SocketAddress> – first randomly choose a key from the map, and then randomly choose among that key's values. My hunch is this would give the right behavior most of the time.
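Todd's two-level selection idea could look roughly like the following sketch, using a plain Map of lists rather than an actual MultiMap class (the interface names and addresses in the usage are made up):

```java
import java.net.SocketAddress;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class InterfacePicker {
    // Map each configured interface name to the addresses found on it.
    private final Map<String, List<SocketAddress>> addrsByIface;
    private final Random rand = new Random();

    public InterfacePicker(Map<String, List<SocketAddress>> addrsByIface) {
        this.addrsByIface = addrsByIface;
    }

    // First pick an interface uniformly at random, then pick one of its
    // addresses, so a NIC with many IPs is not over-weighted in the
    // load balancing relative to a NIC with a single IP.
    public SocketAddress pick() {
        List<String> ifaces = new ArrayList<>(addrsByIface.keySet());
        String iface = ifaces.get(rand.nextInt(ifaces.size()));
        List<SocketAddress> addrs = addrsByIface.get(iface);
        return addrs.get(rand.nextInt(addrs.size()));
    }
}
```

With this structure each device gets equal weight regardless of how many addresses are assigned to it, which addresses the weighting concern raised above.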


          +  <description>A comma separate list of network interface names to use
          +    for data transfer between the client and datanodes. When creating
          

          typo: comma separate*d* list

          Eli Collins added a comment -

          Updated patch:

          • Makes getLocalInterfaceAddrs static
          • Didn't move the DFSConfigKeys entry; the section of client keys farther up is for keys which have defaults (this one does not), so it's as high as it can go given that section and the HA section
          • Good catch w.r.t. the debug log; fixed
          • Fixed typo in hdfs-default.xml
          • Fixed a bug in getLocalInterfaceAddrs where it wasn't filtering sub-interface IPs in the IP range case and added a better method-level comment

          W.r.t. multiple IPs, we only add one address for each interface. The user can specify a list of IP addresses, IP ranges, or interface names. If they specify a raw IP, we use it verbatim; e.g. if they specify the IP of a host-level bond we just use the IP of the bond and ignore the fact that there may be sub-IPs, because that's taken care of by the host. I.e. we use just one IP. If they specify an IP range we use all the interface IPs (and not sub-interface IPs) that match the range. If they specify an interface name we use just the IP of that interface and not its sub-interfaces. If we didn't use java.net.preferIPv4Stack we could get multiple addresses here (a v4 and a v6 one), so v6 will have to be handled here as well whenever we add support.

          In an earlier version I had precondition checks in NetUtils#addMatchingAddrs and DFSClient#getLocalInterfaceAddrs to check the 1:1 mapping, but it got convoluted. Similarly, it's not an error if the configured number of interfaces differs from the actual number, as we allow specifying a range which will match multiple interfaces. Given that we log both the configured set and the actual set, we can tell if the ones used match our expectations.
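The range matching Eli describes (e.g. 172.29.20.0/20 matching 172.29.20.38) can be illustrated with a small IPv4 CIDR check; this is a standalone sketch, not the actual NetUtils implementation:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class CidrMatch {
    // True if addr falls inside the IPv4 range given as "a.b.c.d/prefix".
    static boolean inRange(String cidr, String addr) throws UnknownHostException {
        String[] parts = cidr.split("/");
        int prefix = Integer.parseInt(parts[1]);
        int net = toInt(InetAddress.getByName(parts[0]));
        int ip = toInt(InetAddress.getByName(addr));
        // A /prefix netmask is the top `prefix` bits set; /0 matches everything.
        int mask = prefix == 0 ? 0 : -1 << (32 - prefix);
        return (net & mask) == (ip & mask);
    }

    // Pack the 4 address bytes into a big-endian int.
    private static int toInt(InetAddress a) {
        byte[] b = a.getAddress();
        return ((b[0] & 0xFF) << 24) | ((b[1] & 0xFF) << 16)
             | ((b[2] & 0xFF) << 8) | (b[3] & 0xFF);
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(inRange("172.29.20.0/20", "172.29.20.38")); // true
        System.out.println(inRange("172.29.20.0/20", "172.29.32.1"));  // false
    }
}
```

172.29.20.0/20 covers 172.29.16.0 through 172.29.31.255, so both test IPs from the earlier comment (172.29.20.38 and .39) match.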

          Eli Collins added a comment -

          Updated branch-1 patch. Also incorporates the feedback from HADOOP-8210 that applies to it.

          Daryn Sharp added a comment -

          Are you sure this actually works? Based on a little research, binding to a specific address does not bypass the routing table. Apparently it only sets the source IP but still follows the routing policy of selecting the interface with the most specific match based on netmask, else the default route.

          If you haven't already, please packet sniff an interface to verify the behavior.

          Eli Collins added a comment -

          Yes; I tested on a machine with 4 interfaces that traffic flows out all 4. W.r.t. your second comment, see the following section from the design doc, which covers this explicitly:

          2.5 Enabling Clients to Use Multiple Local Interfaces

          So far we’ve discussed using multiple interfaces on the server side. Clients should also be able to utilize multiple local interfaces for outbound connections instead of always using the interface for the local hostname. For example, Client 1 in the above diagram can use both i1 and i2 to connect to other workers in the cluster. This can be accomplished with a configuration parameter that accepts a list of interfaces the client should use (same configuration as the previously discussed options). The client binds its socket to a specific interface, which will enable outbound traffic to use that interface. If both interfaces are on the same subnet the host requires IP rules that use the source address (which bind sets) to select the destination interface. The SO_BINDTODEVICE socket option could be used to select a specific interface for the connection instead, however it requires root access, which clients may not have. Clients can use these interfaces as they see fit. For example, an HDFS client where connections to Datanodes are cached, selecting an interface at random or based on load makes sense.

          Daryn Sharp added a comment -

          Interesting. Were the 4 interfaces all configured for the same subnet? Traffic will always come back on the interface for the bound IP, but outgoing is supposedly subject to the standard routing table. It seems people are using iptables contortions to force traffic out the interface for a bound IP. If the interfaces are multi-homed to different subnets, does outbound traffic go out the correct interface?

          Eli Collins added a comment -

          Yes; per my comment above, if all interfaces are on the same subnet then you need to use ip(8) to add an IP rule so the destination interface is determined by the source address (which is why ip route add lets you specify a "src" field).
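For readers unfamiliar with source-based routing, the kind of ip(8) setup being referred to looks roughly like this; the interface names, addresses, subnet, and gateway below are entirely made up, and the exact rules depend on the host's network layout:

```shell
# Hypothetical example: eth2 and eth3 both on 172.29.16.0/20.
# Give each source address its own routing table so egress follows bind():
ip rule add from 172.29.20.38 table 2
ip rule add from 172.29.20.39 table 3
ip route add 172.29.16.0/20 dev eth2 src 172.29.20.38 table 2
ip route add 172.29.16.0/20 dev eth3 src 172.29.20.39 table 3
ip route add default via 172.29.16.1 dev eth2 table 2
ip route add default via 172.29.16.1 dev eth3 table 3
```

With rules like these, a socket bound to 172.29.20.39 has its traffic routed out eth3 rather than whichever interface the default table would pick.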

          Todd Lipcon added a comment -

          +1, looks good.

          Can you please file another subtask of HDFS-3140 to add documentation about how to configure this, and how it interacts with source-based routing tables? This is a nice feature, so we need to make sure it's well documented, or else no one will be able to use it.

          Todd Lipcon added a comment -

          (btw, please wait for a green Hudson run before committing)

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #2045 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2045/)
          HADOOP-8210. Common side of HDFS-3148: The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308457)

          Result = SUCCESS
          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308457
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/DNS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          Hudson added a comment -

          Integrated in Hadoop-Common-trunk-Commit #1970 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1970/)
          HADOOP-8210. Common side of HDFS-3148: The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308457)

          Result = SUCCESS
          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308457
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/DNS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          Eli Collins added a comment -

          Updated patch; same as before, it just fixes a small javadoc issue (no space after a @param). Kicking Jenkins now that the common side is in.

          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk-Commit #1982 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1982/)
          HADOOP-8210. Common side of HDFS-3148: The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308457)

          Result = FAILURE
          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308457
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/DNS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12521013/hdfs-3148.txt
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The patch appears to cause tar ant target to fail.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed the unit tests build

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2155//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2155//console

          This message is automatically generated.

          Eli Collins added a comment -

          Looks like the common-side changes hadn't made it over yet. Kicking Jenkins again.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12521013/hdfs-3148.txt
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2161//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2161//console

          This message is automatically generated.

          Eli Collins added a comment -

          The TestPipelinesFailover timeout is unrelated; it passes for me locally. I ran ant test on the branch-1 patch again as well.

          Eli Collins added a comment -

          Thanks Todd! I've committed this.

          Hudson added a comment -

          Integrated in Hadoop-Common-trunk-Commit #1975 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1975/)
          HDFS-3148. The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308617)
          HDFS-3148. The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308614)

          Result = SUCCESS
          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308617
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java

          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308614
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/permission/TestStickyBit.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/FileAppendTest4.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend4.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileConcurrentReader.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreationDelete.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPipelines.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadWhileWriting.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRenameWhileOpen.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestDatanodeRestart.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #2049 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2049/)
          HDFS-3148. The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308617)
          HDFS-3148. The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308614)

          Result = SUCCESS
          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308617
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java

          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308614
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/permission/TestStickyBit.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/FileAppendTest4.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend4.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileConcurrentReader.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreationDelete.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPipelines.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadWhileWriting.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRenameWhileOpen.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestDatanodeRestart.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk-Commit #1987 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1987/)
          HDFS-3148. The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308617)
          HDFS-3148. The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308614)

          Result = SUCCESS
          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308617
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java

          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308614
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/permission/TestStickyBit.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/FileAppendTest4.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend4.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileConcurrentReader.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreationDelete.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPipelines.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadWhileWriting.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRenameWhileOpen.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestDatanodeRestart.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #1004 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1004/)
          HDFS-3148. The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308617)
          HDFS-3148. The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308614)
          HADOOP-8210. Common side of HDFS-3148: The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308457)

          Result = FAILURE
          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308617
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java

          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308614
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/permission/TestStickyBit.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/FileAppendTest4.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend4.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileConcurrentReader.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreationDelete.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPipelines.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadWhileWriting.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRenameWhileOpen.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestDatanodeRestart.java

          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308457
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/DNS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
          Suresh Srinivas added a comment -

          Hey guys, can you do this work in a separate branch as well? There is too much going on to catch up on. I have not had time to look into the proposal, and my feeling is: is this complexity worth adding? Though I have not had time to think about how much complexity this feature actually adds.

          Also, is Daryn's concern addressed?

          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #1039 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1039/)
          HDFS-3148. The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308617)
          HDFS-3148. The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308614)
          HADOOP-8210. Common side of HDFS-3148: The client should be able to use multiple local interfaces for data transfer. Contributed by Eli Collins (Revision 1308457)

          Result = FAILURE
          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308617
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java

          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308614
          Files :

          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/permission/TestStickyBit.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/FileAppendTest4.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientProtocolForPipelineRecovery.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend2.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend4.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileConcurrentReader.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreationDelete.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPipelines.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadWhileWriting.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRenameWhileOpen.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestDatanodeRestart.java

          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308457
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/DNS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
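
          The NetUtils.java change above supplies the Common-side plumbing for HDFS-3148: binding the client's socket to a chosen local address before connecting, so the kernel's source-address routing selects the egress interface. The snippet below is a minimal illustrative sketch of that binding technique only, not the actual Hadoop implementation; it uses the loopback address as a stand-in for one of the addresses a user would list in dfs.client.local.interfaces.

          ```java
          import java.io.IOException;
          import java.net.InetAddress;
          import java.net.InetSocketAddress;
          import java.net.Socket;

          public class LocalInterfaceBindDemo {
              /**
               * Bind an unconnected client socket to a specific local address.
               * Binding before connect() is what lets source-based routing rules
               * on the host choose the outbound interface for the connection.
               */
              public static InetAddress bindToLocalAddress(InetAddress local) throws IOException {
                  Socket s = new Socket();                  // unconnected socket
                  s.bind(new InetSocketAddress(local, 0));  // ephemeral local port
                  InetAddress bound = s.getLocalAddress();  // record before closing
                  s.close();
                  return bound;
              }

              public static void main(String[] args) throws IOException {
                  // Loopback stands in for a dfs.client.local.interfaces entry.
                  InetAddress lo = InetAddress.getLoopbackAddress();
                  System.out.println(bindToLocalAddress(lo).getHostAddress());
              }
          }
          ```

          As the description notes, bind() alone is not always sufficient: when multiple interfaces share a subnet, the host additionally needs IP rules keyed on the source address to steer egress traffic.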
          Daryn Sharp added a comment -

          > Also, is Daryn's concern addressed?

          I believe so. Part of the confusion was that I didn't fully comprehend Eli's earlier responses. Todd made a great point that we need to ensure we have really good documentation for this feature. It's going to require system-level configuration to work correctly.
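
          For reference, the "system-level configuration" alluded to here is Linux source-based policy routing: binding the client socket only picks a source address, and (as the issue description notes) the host still needs IP rules that map that source address to an egress interface. A hypothetical sketch, assuming two interfaces eth1/eth2 with illustrative addresses 10.0.1.10 and 10.0.2.10 on the same /16:

          ```shell
          # Hypothetical addresses/interfaces for illustration only.
          # One routing table per interface, selected by the source address
          # that the client's bind(2) chose:
          ip route add 10.0.0.0/16 dev eth1 src 10.0.1.10 table 101
          ip rule  add from 10.0.1.10 table 101

          ip route add 10.0.0.0/16 dev eth2 src 10.0.2.10 table 102
          ip rule  add from 10.0.2.10 table 102
          ```

          Without rules like these, traffic bound to either address may still leave via the default route's interface.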

          Eli Collins added a comment -

          Hey Suresh,

          This feature is actually independent of all the other HDFS-3140 sub-tasks, and of multihoming in general, and therefore does not require any further jiras. It covers using multiple interfaces on the client side; the others are all about using multiple interfaces on the server side. These can be used independently, e.g. it's just as valuable to use multiple local interfaces on the client side even if you don't use multihoming on the server side. Happy to pull it out to its own top-level jira if that's clearer. Ditto, lemme know if you think the other HDFS-3140 jiras should be in a branch. Just enabling multihoming requires HDFS-3146 and HDFS-3147, and a branch for a couple of jiras felt like overkill. Much of the work has been in the cleanup of DatanodeID and friends.

          Thanks,
          Eli

          Suresh Srinivas added a comment -

          Eli, given it might be a few jiras, I agree a branch might be overkill. I will try and make time when patches on the other multihoming jiras become available.

          Eli Collins added a comment -

          I ended up creating an HDFS-3148 branch. HDFS-3218 tracks the server side issue btw.

          Eli Collins added a comment -

          Oops, meant to say HDFS-3140 branch.

          Sanjay Radia added a comment -

          Eli, is this motivated by use cases where the clients are outside of the Hadoop cluster? Generally, if inside the Hadoop cluster, you
          don't want this flexibility. Could you please expand on the use case with some more details? Since you are pushing this new
          config parameter to branch-1, I assume that you see immediate customer use cases?
          BTW, is this motivated by HADOOP-8198's use case 2 (multiple network domains)?

          Eli Collins added a comment -

          Sanjay, good questions.
          This is motivated by a use case where the client is outside the Hadoop cluster, specifically a system co-located with the Hadoop cluster where individual hosts have strong connectivity, e.g. integration with a DB that has multiple high-bandwidth interfaces to use for data import/export. This patch has been tested on a system with 4 dual-port InfiniBand cards; Hadoop clients running on this host can use the available bandwidth when accessing data on the Hadoop cluster. The Hadoop client in this case is configured with 4 interfaces (each representing a bond of the two ports). The co-located DB use case is mentioned in the design doc, but not explicitly in section 2.5; I'll update it.
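
          A client configured this way would carry something like the following in its hdfs-site.xml. The interface names are illustrative; per the issue description the parameter accepts the same kinds of values as dfs.datanode.available.interfaces, and listing a single entry pins the client to one interface. (Comma separation is assumed here, as with other list-valued Hadoop settings.)

          ```xml
          <!-- Illustrative interface names; adjust to the local host. -->
          <property>
            <name>dfs.client.local.interfaces</name>
            <value>bond0,bond1,bond2,bond3</value>
          </property>
          ```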

          Eli Collins added a comment -

          Unparented this jira from HDFS-3140 since this feature is independent.

          Jing Zhao added a comment -

          @Eli, a branch-1 code-related question.

          I saw TestFileCreation fail on my Mac due to the following code:

            /** Same test but the client should use DN hostname instead of IPs */
            public void testFileCreationByHostname() throws IOException {
              assumeTrue(System.getProperty("os.name").startsWith("Linux"));
              ....
            }
          

          assumeTrue works only for JUnit 4 tests. I have created jira HDFS-3966 to fix this issue.
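
          For context, a sketch of why this bites: in branch-1, TestFileCreation extends the JUnit 3 TestCase, and under JUnit 3 the exception thrown by org.junit.Assume.assumeTrue is treated like any other exception, i.e. a test error rather than a skip. A JUnit-3-safe guard returns early instead. The class and method names below are hypothetical, for illustration only:

          ```java
          // Hypothetical standalone illustration of the guard pattern; in a real
          // JUnit 3 TestCase the early return would sit at the top of the
          // Linux-only test method.
          public class OsGuardExample {

              /** True when running on Linux, mirroring the check in the failing test. */
              static boolean isLinux() {
                  return System.getProperty("os.name").startsWith("Linux");
              }

              public static void main(String[] args) {
                  if (!isLinux()) {
                      // JUnit 3 has no skip mechanism, so the test must bail out
                      // quietly; throwing (as assumeTrue does) registers an error.
                      System.out.println("Skipping: Linux-only test");
                      return;
                  }
                  System.out.println("Running Linux-only test body");
              }
          }
          ```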

          Eli Collins added a comment -

          Thanks Jing, I'll review.

          Matt Foley added a comment -

          Closed upon release of Hadoop-1.1.0.


            People

            • Assignee:
              Eli Collins
              Reporter:
              Eli Collins
            • Votes:
              0
              Watchers:
              7
