Hadoop HDFS / HDFS-16215

File read fails with CannotObtainBlockLengthException after Namenode is restarted


Details

    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 3.2.2, 3.3.1
    • Fix Version/s: None
    • Component/s: datanode
    • Labels: None

    Description

      When a file is being written by a first client (fsck shows it as OPENFORWRITE), an HDFS outage happens and the cluster is brought back up; the first client is disconnected, and when a new client then tries to open the file, the read fails with "Cannot obtain block length for ..." as shown below.

      /tmp/hosts7 134217728 bytes, replicated: replication=3, 1 block(s), OPENFORWRITE:  OK
      0. BP-1958960150-172.25.40.87-1628677864204:blk_1073745252_4430 len=134217728 Live_repl=3  [DatanodeInfoWithStorage[172.25.36.14:9866,DS-6357ab37-84ae-4c7c-8794-fef905bcde05,DISK], DatanodeInfoWithStorage[172.25.33.132:9866,DS-92e75140-d066-4ab5-b250-dbfd329289c5,DISK], DatanodeInfoWithStorage[172.25.40.70:9866,DS-1e280bcd-a2ce-4320-9ebb-33fc903d3a47,DISK]]
      
      Under Construction Block:
      1. BP-1958960150-172.25.40.87-1628677864204:blk_1073745253_4431 len=0 Expected_repl=3  [DatanodeInfoWithStorage[172.25.36.14:9866,DS-6357ab37-84ae-4c7c-8794-fef905bcde05,DISK], DatanodeInfoWithStorage[172.25.33.132:9866,DS-92e75140-d066-4ab5-b250-dbfd329289c5,DISK], DatanodeInfoWithStorage[172.25.40.70:9866,DS-1e280bcd-a2ce-4320-9ebb-33fc903d3a47,DISK]]
      
      [root@c1265-node2 ~]# hdfs dfs -get /tmp/hosts7
      get: Cannot obtain block length for LocatedBlock{BP-1958960150-172.25.40.87-1628677864204:blk_1073745253_4431; getBlockSize()=0; corrupt=false; offset=134217728; locs=[DatanodeInfoWithStorage[172.25.40.70:9866,DS-1e280bcd-a2ce-4320-9ebb-33fc903d3a47,DISK], DatanodeInfoWithStorage[172.25.33.132:9866,DS-92e75140-d066-4ab5-b250-dbfd329289c5,DISK], DatanodeInfoWithStorage[172.25.36.14:9866,DS-6357ab37-84ae-4c7c-8794-fef905bcde05,DISK]]}
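      
      Until the underlying issue is fixed, the usual operational workaround is to force lease recovery on the open file so that the Namenode finalizes the under-construction block before it is read again. Below is a minimal sketch using the public DistributedFileSystem#recoverLease API; the path and the one-second polling interval are illustrative, not prescribed:
      
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hdfs.DistributedFileSystem;
      
      public class ForceLeaseRecovery {
        public static void main(String[] args) throws Exception {
          Path path = new Path(args.length > 0 ? args[0] : "/tmp/hosts7");
          FileSystem fs = FileSystem.get(path.toUri(), new Configuration());
          DistributedFileSystem dfs = (DistributedFileSystem) fs;
          // Ask the Namenode to start lease recovery; a return value of true
          // means the file is already closed and its last block is finalized.
          boolean closed = dfs.recoverLease(path);
          while (!closed) {
            Thread.sleep(1000L); // poll until block recovery completes
            closed = dfs.isFileClosed(path);
          }
          System.out.println(path + " is closed; subsequent reads should succeed");
        }
      }
      
      The same effect is available from the shell via "hdfs debug recoverLease -path /tmp/hosts7 -retries 5".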
      
      *Exception trace from the logs:*
      
      Exception in thread "main" org.apache.hadoop.hdfs.CannotObtainBlockLengthException: Cannot obtain block length for LocatedBlock{BP-1958960150-172.25.40.87-1628677864204:blk_1073742720_1896; getBlockSize()=0; corrupt=false; offset=134217728; locs=[DatanodeInfoWithStorage[172.25.33.140:9866,DS-92e75140-d066-4ab5-b250-dbfd329289c5,DISK], DatanodeInfoWithStorage[172.25.40.87:9866,DS-1e280bcd-a2ce-4320-9ebb-33fc903d3a47,DISK], DatanodeInfoWithStorage[172.25.36.17:9866,DS-6357ab37-84ae-4c7c-8794-fef905bcde05,DISK]]}
      	at org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:363)
      	at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:270)
      	at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:201)
      	at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:185)
      	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1006)
      	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
      	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:312)
      	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
      	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:324)
      	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:949)
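      
      For context on the trace: readBlockLength is the step where the client asks each located datanode for the visible length of the under-construction last block, because the Namenode reports it with length 0. A simplified paraphrase of that loop in the Hadoop 3.x client (retry, timeout, and proxy-cleanup details omitted; not verbatim source):
      
      // Paraphrase of org.apache.hadoop.hdfs.DFSInputStream#readBlockLength;
      // fields such as dfsClient, socketTimeout and connectToDnViaHostname
      // belong to the enclosing class.
      private long readBlockLength(LocatedBlock locatedblock) throws IOException {
        for (DatanodeInfo datanode : locatedblock.getLocations()) {
          try {
            ClientDatanodeProtocol cdp = DFSUtilClient.createClientDatanodeProtocolProxy(
                datanode, dfsClient.getConfiguration(), socketTimeout, connectToDnViaHostname);
            long n = cdp.getReplicaVisibleLength(locatedblock.getBlock());
            if (n >= 0) {
              return n; // this datanode still knows the replica's length
            }
          } catch (IOException ioe) {
            // After the restart the datanode no longer holds the replica in a
            // readable under-construction state, so the query fails; try the next one.
          }
        }
        // No located datanode could report a length for the block, which is the
        // CannotObtainBlockLengthException seen above.
        throw new CannotObtainBlockLengthException(locatedblock);
      }
      
      Because every replica location fails the getReplicaVisibleLength query after the restart, any new reader of the file hits this path until the lease on the file is recovered.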
      


          People

            • Assignee: Unassigned
            • Reporter: Srinivasu Majeti (smajeti)
            • Votes: 0
            • Watchers: 3
