Hadoop HDFS / HDFS-5697

connection leak in DFSInputStream

Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate

    Description

      While getting a BlockReader from DFSInputStream, if the cache misses, DFSInputStream creates a new peer. But if an error occurs while creating the new BlockReader with the given peer and an IOException is thrown, the created peer is never closed, which leaves too many connections in the CLOSE_WAIT state.
      Here's the stack trace:
      java.io.IOException: Got error for OP_READ_BLOCK, self=/10.130.100.32:26657, remote=/10.130.100.32:50010, for file /hbase/STAT_RESULT_SALT/d17e9cf1d1de34910bc6724c7cc21ed8/_0/c75770dbed6444488b609385e8bc9e0d, for pool BP-2041309608-10.130.100.157-1361861188734 block -7893680960325255689_107620083
      at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:429)
      at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:394)
      at org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:137)
      at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1103)
      at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:538)
      at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:750)
      at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:794)
      at java.io.DataInputStream.read(DataInputStream.java:149)
      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1409)
      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1921)
      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1703)
      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:338)
      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.seekTo(HFileReaderV2.java:997)
      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:229)
      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)
      at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:165)

      So there should be a catch clause at the end of the function so that, if an IOException is thrown, the peer is closed.
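
      For illustration, a minimal sketch of that pattern. Peer, BlockReader, and the helper methods here are simplified stand-ins for the real DFSInputStream internals, not the actual Hadoop API:

      import java.io.Closeable;
      import java.io.IOException;

      class ConnectionLeakSketch {
          // Stand-in for the HDFS Peer type: owns a TCP socket.
          interface Peer extends Closeable {}
          // Stand-in for the block reader handed back to the caller.
          interface BlockReader {}

          // Stand-in for the cache-miss path that opens a fresh socket.
          static Peer newTcpPeer() {
              return () -> { /* close the underlying socket */ };
          }

          // Stand-in for BlockReaderFactory.newBlockReader(); here it always
          // fails the same way the stack trace above does.
          static BlockReader newBlockReader(Peer peer) throws IOException {
              throw new IOException("Got error for OP_READ_BLOCK");
          }

          static BlockReader getBlockReader() throws IOException {
              Peer peer = newTcpPeer();
              try {
                  return newBlockReader(peer);
              } catch (IOException e) {
                  // The fix: close the peer before rethrowing. Without this,
                  // the socket is orphaned and lingers in CLOSE_WAIT.
                  try {
                      peer.close();
                  } catch (IOException ignored) {
                      // best-effort cleanup; propagate the original error
                  }
                  throw e;
              }
          }
      }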

      Attachments

        1. HDFS-5697.patch
          1 kB
          Haitao Yao
        2. HDFS-5697.patch
          1 kB
          Haitao Yao

            People

              Assignee: Unassigned
              Reporter: Haitao Yao
              Votes: 0
              Watchers: 5
