Hadoop HDFS / HDFS-8586

Dead Datanode is allocated for write when client is from deadnode

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      DataNode marked as Dead
      2015-06-11 19:39:00,862 | INFO | org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@28ec166e | BLOCK* removeDeadDatanode: lost heartbeat from XX.XX.39.33:25009 | org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.removeDeadDatanode(DatanodeManager.java:584)
      2015-06-11 19:39:00,863 | INFO | org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@28ec166e | Removing a node: /default/rack3/XX.XX.39.33:25009 | org.apache.hadoop.net.NetworkTopology.remove(NetworkTopology.java:488)

      Deadnode got Allocated
      2015-06-11 19:39:45,148 | WARN | IPC Server handler 26 on 25000 | The cluster does not contain node: /default/rack3/XX.XX.39.33:25009 | org.apache.hadoop.net.NetworkTopology.getDistance(NetworkTopology.java:616)
      2015-06-11 19:39:45,149 | WARN | IPC Server handler 26 on 25000 | The cluster does not contain node: /default/rack3/XX.XX.39.33:25009 | org.apache.hadoop.net.NetworkTopology.getDistance(NetworkTopology.java:616)
      2015-06-11 19:39:45,149 | WARN | IPC Server handler 26 on 25000 | The cluster does not contain node: /default/rack3/XX.XX.39.33:25009 | org.apache.hadoop.net.NetworkTopology.getDistance(NetworkTopology.java:616)
      2015-06-11 19:39:45,149 | WARN | IPC Server handler 26 on 25000 | The cluster does not contain node: /default/rack3/XX.XX.39.33:25009 | org.apache.hadoop.net.NetworkTopology.getDistance(NetworkTopology.java:616)
      2015-06-11 19:39:45,149 | INFO | IPC Server handler 26 on 25000 | BLOCK* allocate blk_1073754030_13252 {UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-e8d29773-dfc2-4224-b1d6-9b0588bca55e:NORMAL:XX.XX.39.33:25009|RBW], ReplicaUC[[DISK]DS-f7d2ab3c-88f7-470c-9097-84387c0bec83:NORMAL:XX.XX.38.32:25009|RBW], ReplicaUC[[DISK]DS-8c2a464a-ac81-4651-890a-dbfd07ddd95f:NORMAL: XX.XX.38.33:25009|RBW]] } for /t1.COPYING | org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3657)
      2015-06-11 19:39:45,191 | INFO | IPC Server handler 35 on 25000 | BLOCK* allocate blk_1073754031_13253{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-ed8ad579-50c0-4e3e-8780-9776531763b6:NORMAL:XX.XX.39.31:25009|RBW], ReplicaUC[[DISK]DS-19ddd6da-4a3e-481a-8445-dde5c90aaff3:NORMAL:XX.XX.37.32:25009|RBW], ReplicaUC[[DISK]DS-4ce4ce39-4973-42ce-8c7d-cb41f899db85: NORMAL:XX.XX.37.33:25009 |RBW]]} for /t1.COPYING | org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3657)

      1. HDFS-8586.patch
        5 kB
        Brahma Reddy Battula

        Activity

        brahmareddy Brahma Reddy Battula added a comment -

        Test code to reproduce this bug:

        @Test
        public void testDeadDatanodeForBlockLocation() throws Exception {
          Configuration conf = new HdfsConfiguration();
          conf.setInt(DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY, 500);
          conf.setLong(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1L);
          MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
          try {
            cluster.waitActive();

            String poolId = cluster.getNamesystem().getBlockPoolId();
            DataNode dn = cluster.getDataNodes().get(0);
            DatanodeRegistration reg =
                DataNodeTestUtils.getDNRegistrationForBP(dn, poolId);

            // Wait for the datanode to be marked live.
            DFSTestUtil.waitForDatanodeState(cluster, reg.getDatanodeUuid(), true, 20000);

            // Shut down the datanode and wait for it to be marked dead.
            dn.shutdown();
            DFSTestUtil.waitForDatanodeState(cluster, reg.getDatanodeUuid(), false, 20000);
            System.out.println("Dn down: " + dn.getDisplayName());

            // Write a small file; the dead datanode must not be chosen as a target.
            Path file = new Path("afile");
            try (FSDataOutputStream outputStream = cluster.getFileSystem().create(file)) {
              outputStream.writeChars("testContent");
            }

            BlockLocation block = cluster.getFileSystem().getFileBlockLocations(file, 0, 10)[0];
            for (String node : block.getNames()) {
              System.out.println(node);
              if (node.equals(dn.getDisplayName())) {
                fail("Not expecting the block in a dead node");
              }
            }
          } finally {
            cluster.shutdown();
          }
        }
        

        Impact I observed:
        The cluster has 9 DataNodes; 5 of them were then stopped (dfs.replication=3). While putting files to HDFS continuously, some operations failed.

        I think we can exclude dead nodes here as well:

        if (isGoodTarget(storage, blockSize, maxNodesPerRack, considerLoad,
            results, avoidStaleNodes, storageType)) {
          results.add(storage);
          // add node and related nodes to excludedNode
          return addToExcludedNodes(storage.getDatanodeDescriptor(), excludedNodes);
        }
        
        brahmareddy Brahma Reddy Battula added a comment -

        Currently only stale nodes are excluded, but we could consider dead nodes here as well.

        brahmareddy Brahma Reddy Battula added a comment -

        A workaround is to enable "dfs.namenode.avoid.write.stale.datanode", so that a node is considered stale (by default, when there has been no heartbeat for 30 seconds) and is not allocated for writes (a minimal sketch of the relevant settings follows below). Does anyone have other thoughts?
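
        For illustration only, a minimal sketch of enabling this workaround programmatically; the DFSConfigKeys constants correspond to dfs.namenode.avoid.write.stale.datanode and dfs.namenode.stale.datanode.interval, while the class and method names here are made up for the example (the same keys can equally be set in hdfs-site.xml):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hdfs.DFSConfigKeys;
        import org.apache.hadoop.hdfs.HdfsConfiguration;

        public class StaleNodeWorkaroundExample {
          // Build a NameNode configuration that treats a datanode as stale after
          // 30 seconds without a heartbeat and skips stale nodes for writes.
          static Configuration staleAwareConf() {
            Configuration conf = new HdfsConfiguration();
            conf.setBoolean(
                DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY, true);
            conf.setLong(
                DFSConfigKeys.DFS_NAMENODE_STALE_DATANODE_INTERVAL_KEY, 30 * 1000L);
            return conf;
          }
        }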

        vinayrpet Vinayakumar B added a comment -

        Thanks Brahma Reddy Battula for reporting this.
        This happens when the NameNode has the node in its dead-node list and a block allocation request comes from that same machine: the dead node is then chosen as the local node, regardless of whether it is still part of the cluster topology. Adding a check in BlockPlacementPolicyDefault.java#chooseLocalStorage(..) should fix this (a sketch of such a check follows below).

        Regarding the test proposed above: it will not fail consistently, since it is a MiniDFSCluster test where all datanodes run on the same machine, so the dead node is not guaranteed to be chosen as the local storage.
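
        For illustration only, a minimal sketch of such a guard near the top of chooseLocalStorage(..); this is not the committed patch, and the exact parameter list and fallback call are assumptions based on the snippet quoted earlier in this issue:

        // Sketch: if the writer's machine has been removed from the cluster
        // topology (e.g. it was declared dead), do not pick it as the local
        // target; fall back to a rack-local choice instead.
        if (localMachine instanceof DatanodeDescriptor
            && !clusterMap.contains(localMachine)) {
          return chooseLocalRack(localMachine, excludedNodes, blockSize,
              maxNodesPerRack, results, avoidStaleNodes, storageTypes);
        }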

        brahmareddy Brahma Reddy Battula added a comment -

        Vinayakumar B, thanks a lot for taking a look at this issue. Added the check in BlockPlacementPolicyDefault.java#chooseLocalStorage(..) and corrected the test case. Kindly review.

        hadoopqa Hadoop QA added a comment -



        -1 overall



        Vote Subsystem Runtime Comment
        0 pre-patch 18m 3s Pre-patch trunk compilation is healthy.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 tests included 0m 0s The patch appears to include 1 new or modified test files.
        +1 javac 7m 39s There were no new javac warning messages.
        +1 javadoc 9m 48s There were no new javadoc warning messages.
        +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
        +1 checkstyle 2m 18s There were no new checkstyle issues.
        -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
        +1 install 1m 31s mvn install still works.
        +1 eclipse:eclipse 0m 33s The patch built with eclipse:eclipse.
        +1 findbugs 3m 17s The patch does not introduce any new Findbugs (version 3.0.0) warnings.
        +1 native 3m 17s Pre-build of native portion
        +1 hdfs tests 158m 22s Tests passed in hadoop-hdfs.
            205m 15s  



        Subsystem Report/Notes
        Patch URL http://issues.apache.org/jira/secure/attachment/12741218/HDFS-8586.patch
        Optional Tests javadoc javac unit findbugs checkstyle
        git revision trunk / 41ae776
        whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/11444/artifact/patchprocess/whitespace.txt
        hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HDFS-Build/11444/artifact/patchprocess/testrun_hadoop-hdfs.txt
        Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/11444/testReport/
        Java 1.7.0_55
        uname Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Console output https://builds.apache.org/job/PreCommit-HDFS-Build/11444/console

        This message was automatically generated.

        brahmareddy Brahma Reddy Battula added a comment -

        Vinayakumar B, kindly review the attached patch. Thanks!

        vinayrpet Vinayakumar B added a comment -

        Thanks a lot Brahma Reddy Battula for the patch.
        +1 LGTM.
        Will commit soon.

        vinayrpet Vinayakumar B added a comment -

        Committed to trunk and branch-2.
        Thanks for the contribution Brahma Reddy Battula.

        brahmareddy Brahma Reddy Battula added a comment -

        Thanks a lot Vinayakumar B for review and commit!!

        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-trunk-Commit #8083 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8083/)
        HDFS-8586. Dead Datanode is allocated for write when client is from deadnode (Contributed by Brahma Reddy Battula) (vinayakumarb: rev 88ceb382ef45bd09cf004cf44aedbabaf3976759)

        • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
        hudson Hudson added a comment -

        SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #243 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/243/)
        HDFS-8586. Dead Datanode is allocated for write when client is from deadnode (Contributed by Brahma Reddy Battula) (vinayakumarb: rev 88ceb382ef45bd09cf004cf44aedbabaf3976759)

        • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
        hudson Hudson added a comment -

        SUCCESS: Integrated in Hadoop-Yarn-trunk #973 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/973/)
        HDFS-8586. Dead Datanode is allocated for write when client is from deadnode (Contributed by Brahma Reddy Battula) (vinayakumarb: rev 88ceb382ef45bd09cf004cf44aedbabaf3976759)

        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
        • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Hdfs-trunk #2171 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2171/)
        HDFS-8586. Dead Datanode is allocated for write when client is from deadnode (Contributed by Brahma Reddy Battula) (vinayakumarb: rev 88ceb382ef45bd09cf004cf44aedbabaf3976759)

        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
        • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #232 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/232/)
        HDFS-8586. Dead Datanode is allocated for write when client is from deadnode (Contributed by Brahma Reddy Battula) (vinayakumarb: rev 88ceb382ef45bd09cf004cf44aedbabaf3976759)

        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
        • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
        hudson Hudson added a comment -

        SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2189 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2189/)
        HDFS-8586. Dead Datanode is allocated for write when client is from deadnode (Contributed by Brahma Reddy Battula) (vinayakumarb: rev 88ceb382ef45bd09cf004cf44aedbabaf3976759)

        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
        • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
        hudson Hudson added a comment -

        SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #241 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/241/)
        HDFS-8586. Dead Datanode is allocated for write when client is from deadnode (Contributed by Brahma Reddy Battula) (vinayakumarb: rev 88ceb382ef45bd09cf004cf44aedbabaf3976759)

        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
        • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

          People

          • Assignee: Brahma Reddy Battula
          • Reporter: Brahma Reddy Battula
          • Votes: 0
          • Watchers: 9
