Hadoop HDFS / HDFS-3436

Adding new datanode to existing pipeline fails in case of Append/Recovery

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.0-alpha, 3.0.0, 0.23.8
    • Fix Version/s: 2.0.2-alpha
    • Component/s: datanode
    • Labels:
      None
    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      Scenario:
      =========

      1. Cluster with 4 DataNodes.
      2. Wrote a file to 3 DNs: DN1->DN2->DN3.
      3. Stopped DN3.
      Now appending to the file fails because addDatanode2ExistingPipeline fails (a hypothetical repro sketch follows).
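      The sketch below is hypothetical; it is written against the MiniDFSCluster test harness, and the stopped datanode index is an assumption (a real test would resolve the pipeline order from the block locations).

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hdfs.HdfsConfiguration;
      import org.apache.hadoop.hdfs.MiniDFSCluster;

      public class AppendAfterDatanodeDeathRepro {
        public static void main(String[] args) throws Exception {
          Configuration conf = new HdfsConfiguration();
          MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(4).build();
          try {
            cluster.waitActive();
            FileSystem fs = cluster.getFileSystem();
            Path p = new Path("/append-recovery-test");

            // Steps 1-2: write with replication 3 so only 3 of the 4 DNs join the pipeline
            FSDataOutputStream out = fs.create(p, true, 4096, (short) 3, fs.getDefaultBlockSize());
            out.write(new byte[1024]);
            out.close();

            // Step 3: stop the last DN of the pipeline (index 2 is an assumption)
            cluster.stopDataNode(2);

            // The append triggers pipeline recovery and addDatanode2ExistingPipeline;
            // before the fix, hflush fails with "Premature EOF: no length prefix available"
            out = fs.append(p);
            out.write(new byte[1024]);
            out.hflush();
            out.close();
          } finally {
            cluster.shutdown();
          }
        }
      }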

      Client Trace

      2012-04-24 22:06:09,947 INFO  hdfs.DFSClient (DFSOutputStream.java:createBlockOutputStream(1063)) - Exception in createBlockOutputStream
      java.io.IOException: Bad connect ack with firstBadLink as *******:50010
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1053)
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:943)
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
      2012-04-24 22:06:09,947 WARN  hdfs.DFSClient (DFSOutputStream.java:setupPipelineForAppendOrRecovery(916)) - Error Recovery for block BP-1023239-10.18.40.233-1335275282109:blk_296651611851855249_1253 in pipeline *****:50010, ******:50010, *****:50010: bad datanode ******:50010
      2012-04-24 22:06:10,072 WARN  hdfs.DFSClient (DFSOutputStream.java:run(549)) - DataStreamer Exception
      java.io.EOFException: Premature EOF: no length prefix available
      	at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:162)
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.transfer(DFSOutputStream.java:866)
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:843)
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:934)
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
      2012-04-24 22:06:10,072 WARN  hdfs.DFSClient (DFSOutputStream.java:hflush(1515)) - Error while syncing
      java.io.EOFException: Premature EOF: no length prefix available
      	at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:162)
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.transfer(DFSOutputStream.java:866)
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:843)
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:934)
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
      java.io.EOFException: Premature EOF: no length prefix available
      	at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:162)
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.transfer(DFSOutputStream.java:866)
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:843)
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:934)
      	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
      

      DataNode Trace

      2012-05-17 15:39:12,261 ERROR datanode.DataNode (DataXceiver.java:run(193)) - host0.foo.com:49744:DataXceiver error processing TRANSFER_BLOCK operation  src: /127.0.0.1:49811 dest: /127.0.0.1:49744
      java.io.IOException: BP-2001850558-xx.xx.xx.xx-1337249347060:blk_-8165642083860293107_1002 is neither a RBW nor a Finalized, r=ReplicaBeingWritten, blk_-8165642083860293107_1003, RBW
        getNumBytes()     = 1024
        getBytesOnDisk()  = 1024
        getVisibleLength()= 1024
        getVolume()       = E:\MyWorkSpace\branch-2\Test\build\test\data\dfs\data\data1\current
        getBlockFile()    = E:\MyWorkSpace\branch-2\Test\build\test\data\dfs\data\data1\current\BP-2001850558-xx.xx.xx.xx-1337249347060\current\rbw\blk_-8165642083860293107
        bytesAcked=1024
        bytesOnDisk=102
      at org.apache.hadoop.hdfs.server.datanode.DataNode.transferReplicaForPipelineRecovery(DataNode.java:2038)
      	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.transferBlock(DataXceiver.java:525)
      	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opTransferBlock(Receiver.java:114)
      	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:78)
      	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:189)
      	at java.lang.Thread.run(Unknown Source)
      

        Activity

        Vinayakumar B added a comment -

        Scenario is as follows:
        ---------------------
        1. The cluster has 4 DNs.
        2. A file is written to 3 DNs (DN1->DN2->DN3) with genstamp 1001.
        3. Now DN3 is stopped.
        4. Now append is called.
        5. For this append the client will try to set up the pipeline DN1->DN2->DN3.
        During this time the following happens on each DN:
        1. The generation stamp is updated in the volumeMap to 1002.
        2. The datanode then tries to connect to the next DN in the pipeline.
        If the next DN in the pipeline is down, an exception is thrown and the client tries to re-form the pipeline.

        Since DN3 is down, the genstamp on DN1 and DN2 has already been updated to 1002, but the client does not know about this.
        6. Now the client tries to add one more datanode (DN4) to the append pipeline and asks DN1 or DN2 to transfer the block to DN4, but it asks for the block with genstamp 1001.
        7. Since DN1 and DN2 no longer have the block with genstamp 1001, the transfer fails and the client write fails as well (a toy illustration follows).
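        To make steps 6-7 concrete, here is a toy illustration (not the actual HDFS code; all names are hypothetical) of a replica lookup that is keyed on both block id and generation stamp:

        import java.util.HashMap;
        import java.util.Map;

        class ReplicaLookupToy {
          // toy volumeMap: block id -> stored generation stamp
          static final Map<Long, Long> volumeMap = new HashMap<>();

          // a lookup only succeeds when the requested GS matches the stored GS exactly
          static boolean lookup(long blockId, long requestedGS) {
            Long storedGS = volumeMap.get(blockId);
            return storedGS != null && storedGS == requestedGS;
          }

          public static void main(String[] args) {
            long blockId = 42L;
            volumeMap.put(blockId, 1002L);               // DN1/DN2 already bumped the GS to 1002
            System.out.println(lookup(blockId, 1001L));  // false: client still asks with GS 1001, transfer rejected
            System.out.println(lookup(blockId, 1002L));  // true: only the updated GS would match
          }
        }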

        Proposed solution
        ------------------
        In DataXceiver#writeBlock(), if we try to create the mirror connection before creating the BlockReceiver instance, this solves the problem.

        Tsz Wo Nicholas Sze added a comment -

        Good catch! I think the bug is in the following:

        //DataNode.transferReplicaForPipelineRecovery(..)
            synchronized(data) {
              if (data.isValidRbw(b)) {
                stage = BlockConstructionStage.TRANSFER_RBW;
              } else if (data.isValidBlock(b)) {
                stage = BlockConstructionStage.TRANSFER_FINALIZED;
              } else {
                final String r = data.getReplicaString(b.getBlockPoolId(), b.getBlockId());
                throw new IOException(b + " is neither a RBW nor a Finalized, r=" + r);
              }
        
              storedGS = data.getStoredBlock(b.getBlockPoolId(),
                  b.getBlockId()).getGenerationStamp();
              if (storedGS < b.getGenerationStamp()) {
                throw new IOException(
                    storedGS + " = storedGS < b.getGenerationStamp(), b=" + b);        
              }
              visible = data.getReplicaVisibleLength(b);
            }
        

        It should first call getStoredBlock(..) and then use the stored block to call isValidRbw(..) and isValidBlock(..). The current code expects the GS to have been updated but does not handle that correctly.
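        A rough sketch of that reordering is below (hypothetical; it assumes ExtendedBlock#setGenerationStamp is available and may differ from the committed patch in its details):

        //DataNode.transferReplicaForPipelineRecovery(..) -- sketched fix
            synchronized(data) {
              // look up the stored replica first, so a GS bumped by an earlier
              // failed recovery attempt no longer breaks the validity checks
              final Block stored = data.getStoredBlock(b.getBlockPoolId(), b.getBlockId());
              if (stored == null) {
                throw new IOException("Replica not found for " + b);
              }
              storedGS = stored.getGenerationStamp();
              if (storedGS < b.getGenerationStamp()) {
                throw new IOException(
                    storedGS + " = storedGS < b.getGenerationStamp(), b=" + b);
              }
              // check validity against the stored GS, not the possibly stale GS sent by the client
              b.setGenerationStamp(storedGS);
              if (data.isValidRbw(b)) {
                stage = BlockConstructionStage.TRANSFER_RBW;
              } else if (data.isValidBlock(b)) {
                stage = BlockConstructionStage.TRANSFER_FINALIZED;
              } else {
                final String r = data.getReplicaString(b.getBlockPoolId(), b.getBlockId());
                throw new IOException(b + " is neither a RBW nor a Finalized, r=" + r);
              }
              visible = data.getReplicaVisibleLength(b);
            }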

        Vinayakumar B added a comment -

        Thanks Nicholas, that works. I will upload a patch for that.

        Uma Maheswara Rao G added a comment -

        Thanks a lot, Vinay, for taking this issue. Please edit the issue title, since this can occur in the recovery flow as well, not only in append.

        Vinayakumar B added a comment -

        Attaching the patch for the same.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12528560/HDFS-3436.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        -1 patch. The patch command could not apply the patch.

        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2506//console

        This message is automatically generated.

        Tsz Wo Nicholas Sze added a comment -

        The patch file is corrupted.

        patching file hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
        patch: **** malformed patch at line 75: }
        
        Vinayakumar B added a comment -

        Hi Nicholas, I am working on it. Thanks.

        Vinayakumar B added a comment -

        Submitting the patch for trunk

        Vinayakumar B added a comment -

        test-patch.sh result for the latest patch.

        +1 overall.  
        
            +1 @author.  The patch does not contain any @author tags.
        
            +1 tests included.  The patch appears to include 1 new or modified test files.
        
            +1 javac.  The applied patch does not increase the total number of javac compiler warnings.
        
            +1 javadoc.  The javadoc tool did not generate any warning messages.
        
            +1 eclipse:eclipse.  The patch built with eclipse:eclipse.
        
            +1 findbugs.  The patch does not introduce any new Findbugs (version ) warnings.
        
            +1 release audit.  The applied patch does not increase the total number of release audit warnings.
        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12528719/HDFS-3436-trunk.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2510//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2510//console

        This message is automatically generated.

        Tsz Wo Nicholas Sze added a comment -

        +1 patch looks good.

        Tsz Wo Nicholas Sze added a comment -

        I have committed this. Thanks, Vinay!

        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk-Commit #2355 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2355/)
        HDFS-3436. In DataNode.transferReplicaForPipelineRecovery(..), it should use the stored generation stamp to check if the block is valid. Contributed by Vinay (Revision 1341961)

        Result = SUCCESS
        szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1341961
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
        Hudson added a comment -

        Integrated in Hadoop-Common-trunk-Commit #2282 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2282/)
        HDFS-3436. In DataNode.transferReplicaForPipelineRecovery(..), it should use the stored generation stamp to check if the block is valid. Contributed by Vinay (Revision 1341961)

        Result = SUCCESS
        szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1341961
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
        Vinayakumar B added a comment -

        Thanks a lot Nicholas.

        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk-Commit #2300 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2300/)
        HDFS-3436. In DataNode.transferReplicaForPipelineRecovery(..), it should use the stored generation stamp to check if the block is valid. Contributed by Vinay (Revision 1341961)

        Result = FAILURE
        szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1341961
        Files :

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
        Kihwal Lee added a comment -

        I saw this happening when the second DN in a pipeline becomes very slow due to hardware issues. It causes all downstream DNs to fail writes and pipeline recoveries, and eventually the slow DN itself fails. Adding a new DN and transferring the block then fails, since recoverRbw() in the previous failed recovery attempt modified the gen stamp. I am glad it's already fixed in branch-2. I am pulling it into branch-0.23.

        HDFS-4257 would have made the writes succeed, but at the same time this bug might have gone undetected. It's fortunate this was fixed before HDFS-4257.

        Hudson added a comment -

        Integrated in Hadoop-Hdfs-0.23-Build #600 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/600/)
        svn merge -c 1341961 Merging from trunk to branch-0.23 to fix HDFS-3436. (Revision 1478889)

        Result = SUCCESS
        kihwal : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1478889
        Files :

        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
        • /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java

          People

          • Assignee:
            Vinayakumar B
          • Reporter:
            Brahma Reddy Battula
          • Votes:
            0
          • Watchers:
            13
