Details
Type: Bug
Status: Open
Priority: Major
Resolution: Unresolved
Description
Create a file with replication factor 2 and keep it open for writing.
At first, it writes to DN1 and DN2:
2020-06-23 14:22:51,379 | DEBUG | pipeline = [DatanodeInfoWithStorage[DN1:25009,DS-1dcbe5bd-f69a-422c-bea6-a41bda773084,DISK], DatanodeInfoWithStorage[DN2:25009,DS-7e434b35-0b10-44fa-9d3b-c3c938f1724d,DISK]] | DataStreamer.java:1757
After DN2 restarts, the pipeline is recovered and it writes to DN1 and DN3:
2020-06-23 14:24:04,559 | DEBUG | pipeline = [DatanodeInfoWithStorage[DN1:25009,DS-1dcbe5bd-f69a-422c-bea6-a41bda773084,DISK], DatanodeInfoWithStorage[DN3:25009,DS-1810c3d5-b6e8-4403-a0fc-071ea6e5489f,DISK]] | DataStreamer.java:1757
After DN1 restarts, it writes to DN3 and DN4:
2020-06-23 14:25:21,340 | DEBUG | pipeline = [DatanodeInfoWithStorage[DN3:25009,DS-1810c3d5-b6e8-4403-a0fc-071ea6e5489f,DISK], DatanodeInfoWithStorage[DN4:25009,DS-5fbb2232-e7c8-4186-8eb9-87a6aff86cef,DISK]] | DataStreamer.java:1757
Then restart the Active NameNode and try to read the file.
The NameNode returns located blocks pointing to DN1 and DN2 (the original, now stale, pipeline), so the read fails with a could-not-obtain-block exception:
20/06/20 17:57:06 DEBUG hdfs.DFSClient: newInfo = LocatedBlocks{ fileLength=0 underConstruction=true blocks=[LocatedBlock{BP-1590194288-10.162.26.113-1587096223927:blk_1073895975_155796; getBlockSize()=53; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[DN1:25009,DS-1dcbe5bd-f69a-422c-bea6-a41bda773084,DISK], DatanodeInfoWithStorage[DN2:25009,DS-cd06a4f9-c25d-42ab-887b-f129707dba17,DISK]]}] lastLocatedBlock=LocatedBlock{BP-1590194288-10.162.26.113-1587096223927:blk_1073895975_155796; getBlockSize()=53; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[DN1:25009,DS-1dcbe5bd-f69a-422c-bea6-a41bda773084,DISK], DatanodeInfoWithStorage[DN2:25009,DS-cd06a4f9-c25d-42ab-887b-f129707dba17,DISK]]} isLastBlockComplete=false}