Hadoop Common
  HADOOP-10127

Add ipc.client.connect.retry.interval to control the frequency of connection retries

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.2.0
    • Fix Version/s: 2.3.0
    • Component/s: ipc
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    • Target Version/s:

      Description

      Currently, ipc.Client attempts to connect to the server every 1 second. It would be nice to make this configurable to be able to connect more or less frequently. Changing the number of retries alone is not granular enough.
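
      As a purely illustrative sketch (not part of the issue itself), tuning the proposed knob through the standard org.apache.hadoop.conf.Configuration API might look like this; the property values are example numbers:

        import org.apache.hadoop.conf.Configuration;

        public class RetryIntervalExample {
            public static void main(String[] args) {
                Configuration conf = new Configuration();
                // Retry the same server less often: keep the default 10
                // attempts, but wait 5 s between them instead of the
                // hardcoded 1 s.
                conf.setInt("ipc.client.connect.max.retries", 10);
                conf.setInt("ipc.client.connect.retry.interval", 5000);
                System.out.println(conf.getInt("ipc.client.connect.retry.interval", 1000));
            }
        }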

        Attachments

        • hadoop-10127-1.patch (3 kB, Karthik Kambatla)

        Issue Links

          Activity

          Karthik Kambatla added a comment -

          Straightforward patch that makes the retry interval configurable. Tested manually in a YARN HA cluster with configured failover. Sample output below:

          kasha@keka:~/install/rmha-tests/hadoop-3.0.0-SNAPSHOT$ sudo -u root bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar pi -Dipc.client.connect.max.retries=20 -Dipc.client.connect.retry.interval=50 2 10
          Number of Maps  = 2
          Samples per Map = 10
          13/11/25 17:47:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
          Wrote input for Map #0
          Wrote input for Map #1
          Starting Job
          13/11/25 17:47:22 INFO ipc.Client: Retrying connect to server: rm-ha-2.ent.cloudera.com/10.20.195.36:23140. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=20, sleepTime=50 MILLISECONDS)
          13/11/25 17:47:22 INFO ipc.Client: Retrying connect to server: rm-ha-2.ent.cloudera.com/10.20.195.36:23140. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=20, sleepTime=50 MILLISECONDS)
          13/11/25 17:47:22 INFO ipc.Client: Retrying connect to server: rm-ha-2.ent.cloudera.com/10.20.195.36:23140. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=20, sleepTime=50 MILLISECONDS)
          13/11/25 17:47:22 INFO ipc.Client: Retrying connect to server: rm-ha-2.ent.cloudera.com/10.20.195.36:23140. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=20, sleepTime=50 MILLISECONDS)
          13/11/25 17:47:22 INFO ipc.Client: Retrying connect to server: rm-ha-2.ent.cloudera.com/10.20.195.36:23140. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=20, sleepTime=50 MILLISECONDS)
          
          
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12615768/hadoop-10127-1.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/3313//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/3313//console

          This message is automatically generated.

          Steve Loughran added a comment -

          Doesn't the retry policy mechanism let you do this with things like timed intervals, exponential backoff, etc.? You set a short # of retries in the client and hand it off to the policy to react to failures?

          Karthik Kambatla added a comment -

          ipc.Client does use RetryPolicies#retryUpToMaximumCountWithFixedSleep, but the fixed-sleep value passed to this policy is hardcoded to 1 second. Without a config, we'll have to hardcode the sleep duration to some value (maybe 1 second) irrespective of the policy used.

          Higher-level clients (e.g. NM -> RM) use ipc.Client internally, and hence adding configs at the higher level doesn't help either.
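
          For reference, a minimal sketch (with assumed key names and defaults, mirroring the description above) of what making that sleep configurable could look like:

            import java.util.concurrent.TimeUnit;
            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.io.retry.RetryPolicies;
            import org.apache.hadoop.io.retry.RetryPolicy;

            public class ConnectRetrySketch {
                public static void main(String[] args) {
                    Configuration conf = new Configuration();
                    // Before the patch the sleep was fixed:
                    // retryUpToMaximumCountWithFixedSleep(maxRetries, 1, TimeUnit.SECONDS)
                    int maxRetries = conf.getInt("ipc.client.connect.max.retries", 10);
                    int intervalMs = conf.getInt("ipc.client.connect.retry.interval", 1000);
                    RetryPolicy connectPolicy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
                        maxRetries, intervalMs, TimeUnit.MILLISECONDS);
                    System.out.println(connectPolicy);
                }
            }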

          Karthik Kambatla added a comment -

          You set a short # of retries in the client, and hand it off to the policy to react to failures?

          Setting the # of retries to 1 still only lets us retry after 1 second. I think there is merit in allowing retries even more frequently than that.

          Steve Loughran added a comment -

          Which clients are you thinking of here?
          What we need to avoid is overload on any failover/restart operation in very large clusters, where the scenarios are:

          1. A master service fails, failover begins, and all the worker nodes in the cluster generate large numbers of connect requests to the successor.
          2. A cluster power-restart event where all started nodes/clients start hitting the booting services in near-perfect sync. I've seen this with flash-based devices where the boot time is constant for all nodes - it's why jitter is important, and why clock-based & time-since-boot jitter needs an extra bit of randomness.
          3. A server taken offline explicitly with heavy client load coming in from outside. Here, the more clients that block retrying connection requests, the more pending calls build up, so the server ends up receiving a massive multiple of the normal working load the moment it goes live.
          4. More than one of the above problems at once. This is what led to the infamous Facebook HDFS cascade failure - and hence why NN heartbeats now come in on a different RPC port from DFS client operations.

          Shrink the retry time and the load generated against starting/failing-over endpoints can increase massively. That doesn't mean it shouldn't be allowed - just that you need to understand that special problems arise at a few thousand servers, and plan for them.

          If it really is NM->RM calls you are worried about, then perhaps, rather than changing the general IPC client, this is a good time to impose a better retry policy there; exponential backoff with jitter is what I'd propose. The initial delay could be small, but it would back off fast if the cluster was down for any length of time.
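
          (A rough, self-contained illustration of that kind of policy; all names and constants below are invented for the example:)

            import java.util.concurrent.ThreadLocalRandom;

            public class BackoffWithJitter {
                static final long BASE_DELAY_MS = 100;    // small initial delay
                static final long MAX_DELAY_MS = 30_000;  // cap on backoff growth

                // Exponential backoff with "full" jitter: the ceiling doubles
                // each attempt, and the actual sleep is drawn uniformly below
                // it, so clients that failed in sync spread their retries out.
                static long delayMs(int attempt) {
                    long ceiling = Math.min(MAX_DELAY_MS, BASE_DELAY_MS << Math.min(attempt, 16));
                    return ThreadLocalRandom.current().nextLong(ceiling + 1);
                }

                public static void main(String[] args) {
                    for (int attempt = 0; attempt < 8; attempt++) {
                        System.out.println("attempt " + attempt + " -> sleep " + delayMs(attempt) + " ms");
                    }
                }
            }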

          Karthik Kambatla added a comment -

          Thanks Steve Loughran for clarifying the potential issues arising from setting a higher frequency for retries.

          The context for this is indeed YARN-1028 - ConfiguredFailoverProxy for RM failover. In an HA setting where the second RM is the active one, with the current default for ipc.client.connect.max.retries (10), clients / AMs / NMs retry the first RM for 10 seconds (10 attempts at the hardcoded 1-second interval) before trying the second RM. This leads to a significant performance hit. The delay in clients failing over can be mitigated by setting ipc.client.connect.max.retries to 1, but I thought there might be merit in connecting to the same RM multiple times (> 1) before trying the other one. Hence the proposal to allow a shorter retry interval - e.g. try connecting to the same RM twice with a delay of half a second before failing over, as in the sketch below.
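
          (A hypothetical pair of settings restating that arithmetic; the values are examples, not recommendations:)

            import org.apache.hadoop.conf.Configuration;

            public class FastFailoverSettings {
                public static void main(String[] args) {
                    Configuration conf = new Configuration();
                    // Two attempts 500 ms apart: worst case ~1 s spent on the
                    // dead RM before failing over, versus 10 x 1000 ms = 10 s
                    // with the defaults.
                    conf.setInt("ipc.client.connect.max.retries", 2);
                    conf.setInt("ipc.client.connect.retry.interval", 500);
                }
            }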

          If it really is NM->RM calls you are worried about, then perhaps rather than make changes to the general IPC client, this is a good time to impose a better retry policy here, where exponential backoff with jitter is what I'd propose.

          Even if we improve the retry policy in {Client|Server}RMProxy, the ipc.Client delay of 10 seconds to fail over still exists. What do you think of making the general Client dumb enough to try connecting only once and let the higher layers take care of the actual retry policies? I know that would be a significant change, but worth making?

          Sandy Ryza added a comment -

          What do you think of making the general Client dumb enough to try connecting only once and let the higher layers take care of the actual retry policies? I know that would be a significant change, but worth making?

          If we want different behavior for different IPC users, we don't necessarily need to dumb down/change the general Client. YARN clients could have their own yarn.ipc.client.connect.retry.interval property, which they would use to set ipc.client.connect.retry.interval for the RM clients they create.
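
          (A sketch of that indirection; yarn.ipc.client.connect.retry.interval is the proposed name here, not an existing key:)

            import org.apache.hadoop.conf.Configuration;

            public class YarnIpcOverride {
                // Copy the YARN-specific setting, if any, into the generic IPC
                // key on the conf used for RM connections; HDFS and other IPC
                // users keep the global default.
                static Configuration rmClientConf(Configuration base) {
                    Configuration conf = new Configuration(base);
                    conf.setInt("ipc.client.connect.retry.interval",
                        base.getInt("yarn.ipc.client.connect.retry.interval", 1000));
                    return conf;
                }
            }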

          Steve Loughran added a comment -

          A YARN-specific retry interval would stop the patch from impacting HDFS or your own apps, so it would have fewer side effects. The NM could maybe also be set up to do more backoff & jitter, as the simultaneous load of many NMs is guaranteed to scale with cluster size.

          Karthik Kambatla added a comment -

          Sandy Ryza and Steve Loughran, thanks for the discussion. I agree with you both on how to handle this.

          I propose the following:

          1. Add ipc.client.connect.retry.interval in this JIRA, so the knob is available to be played with in YARN.
          2. YARN-1028 would implement ConfiguredFailoverProxyProvider with the not-so-nice 10-second delay to fail over for all entities connecting to the RM.
          3. YARN-1460 would define YARN-specific ipc-client configs that the clients use, and add exponential backoff and jitter at least for NM -> RM.

          With that plan, what do you think of the current patch?

          Sandy Ryza added a comment -

          That makes sense to me. The patch LGTM except for one thing: we should include the unit (ms) in the config name.

          Karthik Kambatla added a comment -

          Not having the time unit in the name is consistent with other similar IPC time configs here: https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java#L176

          I think we should leave it alone.

          Sandy Ryza added a comment -

          Ah, you're right, that's too bad. In that case I'm +1. Steve Loughran, what do you think?

          Steve Loughran added a comment -

          looks good to me, +1

          Sandy Ryza added a comment -

          I just committed this. Thanks Karthik.

          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-trunk-Commit #4821 (See https://builds.apache.org/job/Hadoop-trunk-Commit/4821/)
          HADOOP-10127. Add ipc.client.connect.retry.interval to control the frequency of connection retries (Karthik Kambatla via Sandy Ryza) (sandy: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547626)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #411 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/411/)
          HADOOP-10127. Add ipc.client.connect.retry.interval to control the frequency of connection retries (Karthik Kambatla via Sandy Ryza) (sandy: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547626)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #1628 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1628/)
          HADOOP-10127. Add ipc.client.connect.retry.interval to control the frequency of connection retries (Karthik Kambatla via Sandy Ryza) (sandy: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547626)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Hdfs-trunk #1602 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1602/)
          HADOOP-10127. Add ipc.client.connect.retry.interval to control the frequency of connection retries (Karthik Kambatla via Sandy Ryza) (sandy: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1547626)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml

            People

            • Assignee:
              Karthik Kambatla
            • Reporter:
              Karthik Kambatla
            • Votes:
              0
            • Watchers:
              5

              Dates

              • Created:
              • Updated:
              • Resolved:
