HDFS-5016 (Hadoop HDFS)

Deadlock in pipeline recovery causes Datanode to be marked dead

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.1.0-beta
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      In testing some failure scenarios for HBase MTTR, we have been simulating node failures by firewalling nodes (all communication ports firewalled except ssh's port). We have noticed that when a (data)node is firewalled, we lose certain other datanodes - those that were involved in some communication with the firewalled node before the latter was firewalled. Will attach jstack output from one of the lost datanodes; the heartbeating thread seems to be locked up.
      This JIRA tracks a fix for the problem.

      Attachments

      1. jstack1.txt (213 kB, Devaraj Das)
      2. HDFS-5016.patch (2 kB, Suresh Srinivas)
      3. HDFS-5016.3.patch (17 kB, Suresh Srinivas)
      4. HDFS-5016.2.patch (16 kB, Suresh Srinivas)
      5. HDFS-5016.1.patch (2 kB, Suresh Srinivas)


          Activity

          Devaraj Das added a comment -

          The jstack output from one of the lost DNs.

          Suresh Srinivas added a comment -

          Based on the thread dump, the following code path causes the issue (this code corresponds to current branch-2.1.0-beta):

          1. A block is being recovered, which interrupts the current writer thread (receiving the block) at FsDatasetImpl.recoverRbw(FsDatasetImpl.java:738).
            • This holds the FsDatasetImpl lock and calls writer.join() at ReplicaInPipeline.stopWriter(ReplicaInPipeline.java:157).
          2. The writer thread is interrupted. It in turn interrupts the responder thread and calls join() on the responder at BlockReceiver.receiveBlock(BlockReceiver.java:709).
          3. The responder thread is stuck doing a flush on the socket to write the response to the node that has been firewalled.
            • The flush cannot be interrupted.
            • Socket write timeouts cannot be enabled (in Java, only socket read timeouts can be set).

          To summarize: the responder thread is stuck in the flush call, the writer thread is stuck calling join() on the responder thread, and FSDataset recoverRbw is holding the FSDataset lock, stuck waiting on join() for the writer thread. Since the FSDataset lock is held, which is crucial for the datanode, the heartbeat thread and the data transceiver threads are blocked waiting on the FSDataset lock.

          Here is a simple patch that adds timeouts to the join call. Devaraj, can you see if this fixes the issue you are seeing?
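
          For illustration, here is a minimal sketch of the bounded-join idea; the method shape and the timeout value are illustrative assumptions, not the attached patch itself:

              import java.io.IOException;

              // Hypothetical sketch: bound the join so a stuck thread cannot keep
              // the FSDataset lock held forever.
              class StopThreadSketch {
                static void stopThread(Thread t, long timeoutMillis) throws IOException {
                  if (t == null || !t.isAlive()) {
                    return;                 // nothing to stop
                  }
                  t.interrupt();            // frees a blocked read; a blocked flush stays stuck
                  try {
                    t.join(timeoutMillis);  // bounded wait instead of an unbounded join()
                  } catch (InterruptedException e) {
                    throw new IOException("Interrupted while waiting for thread " + t, e);
                  }
                  // If t is still alive here, the caller can log and move on, or fail
                  // the operation (the option debated below).
                }
              }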

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12593439/HDFS-5016.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4702//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4702//console

          This message is automatically generated.

          Tsz Wo Nicholas Sze added a comment -

          I think it should throw an IOException in both cases. It will then fail receiveBlock(..) and stopWriter() (i.e. block recovery). Otherwise, these operations may fail silently.

          Tsz Wo Nicholas Sze added a comment -

          > ... throw an IOException in both cases. ...

          I mean: throw an IOException on timeout.

          Suresh Srinivas added a comment -

          I think throwing an IOException seems unnecessary. As I described earlier, in this case the writer can be interrupted successfully, since it is reading from the socket and a socket timeout can be configured for reads. So a join() called on the writer thread should not block. That is not the case with the responder, which is writing to a socket.

          Since the writer can be interrupted successfully, no more block data is received, and hence no further change can occur to the block.
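
          As a small illustration of the read/write asymmetry described above (a sketch using plain java.net sockets rather than HDFS's stream wrappers; the timeout value is arbitrary):

              import java.io.IOException;
              import java.net.Socket;

              class SocketTimeoutAsymmetry {
                static void demo(Socket s) throws IOException {
                  // Reads can be bounded: a blocked read() throws
                  // SocketTimeoutException once the timeout elapses, so an
                  // interrupted reader thread can exit promptly.
                  s.setSoTimeout(60000);
                  // There is no equivalent setter for writes: a write()/flush()
                  // blocked on an unreachable peer has no timeout, which is why
                  // the responder thread gets stuck.
                  s.getOutputStream().flush();
                }
              }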

          Tsz Wo Nicholas Sze added a comment -

          > ..., in this case the writer can be interrupted successfully, ...

          But could there be other cases where the writer is stuck for different reasons? Then it may continue writing to the block later on.

          Suresh Srinivas added a comment -

          > But could there be other cases where the writer is stuck for different reasons? Then it may continue writing to the block later on.

          The only legitimate reasons I can think of are lock-related issues, as we have seen in this case, or socket-related ones. Given we have a socket timeout, it should not hang forever.

          What is the impact of throwing an IOException on the replica recovery, and what impact would the client see because of it?

          Tsz Wo Nicholas Sze added a comment -

          Like other failure cases, the corresponding operation (block recovery/write block/replace block) will fail. The client will retry the operation (possibly with a different datanode). The major impact is that the wait-time will increase.

          Kihwal Lee added a comment -

          We need to guarantee that no unintended data modification occurs after block recoveries. If a writer could not be stopped right away, then either 1) it has to stop writing when it unblocks, or 2) the block shouldn't be considered recovered.

          I think Suresh is saying 1) will happen and Nicholas is saying 2) will be safer. I agree with Suresh for this particular scenario, but am not 100% sure about all possible cases. E.g., if a writer is in the middle of a slow disk write, it can continue to write; as a result, data on disk can get modified after a successful recoverRbw(). I would prefer failing block recovery after the timeout.

          Suresh Srinivas added a comment -

          I am fine with throwing an IOException. I will post an updated patch once Devaraj is done with testing.

          Devaraj Das added a comment -

          Happy to say that with a variant of this patch (that is, throwing an exception when the join times out), HDFS has been running pretty stably.

          Suresh Srinivas added a comment -

          Updated patch to throw IOException.

          Devaraj Das added a comment -

          +1
          (The only minor nit is that we should change the hardcoded 60 seconds to something derived from a related configuration setting.)

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12594084/HDFS-5016.1.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4729//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4729//console

          This message is automatically generated.

          Todd Lipcon added a comment -

          Is this a duplicate of HDFS-4851? Seems similar, if not the same.

          Andrew Wang added a comment -

          I agree with Todd, this looks like the same deadlock (and basically the same fix) as what we have at HDFS-4851.

          Tsz Wo Nicholas Sze added a comment -

          > ... the only minor nit is that we should change the hardcoded 60 seconds to something derived from a related configuration setting ...

          The timeout period could depend on dfs.datanode.socket.write.timeout. Its default is HdfsServerConstants.WRITE_TIMEOUT, i.e. 8 minutes.

          Suresh Srinivas added a comment -

          I considered that and decided against it. That is a very long time to hold the FSDataset lock and block all other threads. I recommend adding a static constant DATANODE_XCIEVER_STOP_TIMEOUT. If there is a need, we can make this configurable in the future. Thoughts?

          Sent from a mobile device

          Nicolas Liochon added a comment -

          Nice analysis. Like Devaraj, I think the timeout should be configurable from day 1.
          What will happen to the writer thread in this scenario?

          Tsz Wo Nicholas Sze added a comment -

          > ... I think the timeout should be configurable from day 1.

          Making it configurable sounds good.

          Suresh Srinivas added a comment -

          > I agree with Todd, this looks like the same deadlock (and basically the same fix) as what we have at HDFS-4851.

          This patch is slightly different in that it adds a timeout for the writer thread as well. I prefer to get this in as soon as possible, with the timeout configurable (I am going to post a patch in a couple of minutes), given this is marked as a release blocker (and rightfully so).

          Let's either close HDFS-4851 as a duplicate or, if you want some of the changes from it, do that as part of HDFS-4851.

          Suresh Srinivas added a comment -

          The new patch adds an undocumented configuration for the xceiver thread stop timeout. I also liked the way HDFS-4851 captured the stack trace; I have folded an equivalent change into this patch.
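
          To make the shape of that change concrete, here is a hedged sketch combining a configurable stop timeout with stack-trace capture; the configuration key, its 60-second default, and the surrounding class are assumptions for illustration, not the committed patch:

              import java.io.IOException;
              import org.apache.commons.logging.Log;
              import org.apache.commons.logging.LogFactory;
              import org.apache.hadoop.conf.Configuration;
              import org.apache.hadoop.util.StringUtils;

              // Sketch only: bound the stop of an xceiver-side thread, and on
              // timeout log the stuck thread's stack and fail the operation
              // instead of deadlocking.
              class XceiverStopSketch {
                private static final Log LOG = LogFactory.getLog(XceiverStopSketch.class);

                static void stopOrFail(Thread t, Configuration conf) throws IOException {
                  // Assumed key name and default; the real setting is undocumented.
                  long timeoutMs =
                      conf.getLong("dfs.datanode.xceiver.stop.timeout.millis", 60000L);
                  t.interrupt();
                  try {
                    t.join(timeoutMs);
                  } catch (InterruptedException e) {
                    throw new IOException("Interrupted while stopping thread " + t, e);
                  }
                  if (t.isAlive()) {
                    final String msg = "Join on thread " + t + " timed out";
                    LOG.warn(msg + "\n" + StringUtils.getStackTrace(t));
                    throw new IOException(msg);
                  }
                }
              }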

          Andrew Wang added a comment -

          Patch looks good, and I agree we can just close out HDFS-4851 after this one goes in.

          Some nitty comments:

          + final String msg = "Join on writer thread timedout "

          + String msg = "Responder thread join timedout\n"

          Needs a space in "timed out".

                  if (writer.isAlive()) {
                    final String msg = "Join on writer thread timedout "
                        + writer.toString() + "\n" + StringUtils.getStackTrace(writer);
                    DataNode.LOG.warn(msg);
                    throw new IOException(msg);
          

          We probably don't want to stick the entire stack trace in the IOException msg. Same for BlockReceiver.

          It's somewhat ambiguous from the log message whose name and trace we're printing here; could we instead say "Timeout while stopping writer thread <thread name>:" followed by the stack trace?

          For BlockReceiver, how about "Timeout while aborting responder thread <thread name>:" for consistency? Unsure if you wanted to put the thread name here, since it's missing right now.

          Suresh Srinivas added a comment -

          > We probably don't want to stick the entire stack trace in the IOException msg. Same for BlockReceiver.

          Can you explain why it is not a good idea? Note that this issue should happen quite rarely.

          Regarding your other comment, do you want something like this?

                      String msg = "Join on responder thread " + responder
                          + " timed out\n" + StringUtils.getStackTrace(responder);
          
          Andrew Wang added a comment -

          > Can you explain why it is not a good idea?

          I think it'll be confusing when this gets printed in the log: first the writer's stack gets WARN-logged; then, when the exception gets caught and printed, we'll see the writer's stack again (since it's part of the exception msg) and then the waiter's stack after that. Kind of spew-y, and it makes it look like the exception was thrown from the writer, since its stack comes first when the exception is printed.

          FWIW, we saw this triggering daily on a customer cluster. Not that common, but not that rare either.

          > do you want something like this?

          Sure, that works.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12594301/HDFS-5016.2.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4733//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4733//console

          This message is automatically generated.

          Suresh Srinivas added a comment -

          Updated patch to address the comments.

          Andrew Wang added a comment -

          +1 LGTM. It'd be great if you dupe HDFS-4851 too when you resolve this one.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12594319/HDFS-5016.3.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4735//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4735//console

          This message is automatically generated.

          Suresh Srinivas added a comment -

          I committed the patch to trunk, branch-2 and branch-2.1. Thank you Devaraj for reporting the issue. Thank you Andrew for the review.


            People

            • Assignee: Suresh Srinivas
            • Reporter: Devaraj Das
            • Votes: 0
            • Watchers: 14
