Hadoop Common / HADOOP-13264

Hadoop HDFS - DFSOutputStream close method fails to clean up resources when no HDFS datanodes are accessible


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 2.7.2
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

Description

      Using:
      hadoop-hdfs\2.7.2\hadoop-hdfs-2.7.2-sources.jar!\org\apache\hadoop\hdfs\DFSOutputStream.java

      The close method fails when the client cannot connect to any datanodes. When the same DistributedFileSystem is re-used in the same JVM and none of the datanodes can be reached, this causes a memory leak, because the DFSClient#filesBeingWritten map is never cleared afterwards.

      See the test program provided by sebyonthenet in the comments below.
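
      The attached test program is not included in this export. As a stand-in, the following is a minimal sketch, assuming an HDFS 2.7.2 cluster whose NameNode is reachable but whose DataNodes are all down, of how the failing close can be driven repeatedly against one re-used FileSystem. The fs.defaultFS URI, path names and loop count are hypothetical, not taken from the report.

      // Minimal reproduction sketch (not the attached test program), assuming an
      // HDFS 2.7.2 cluster whose NameNode is up but whose DataNodes are all down.
      import java.io.IOException;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class DfsCloseLeakRepro {
          public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              conf.set("fs.defaultFS", "hdfs://namenode:8020"); // assumption: NameNode reachable

              // Re-use a single DistributedFileSystem instance for the whole JVM,
              // as described in the report.
              FileSystem fs = FileSystem.get(conf);

              for (int i = 0; i < 1000; i++) {
                  FSDataOutputStream out = null;
                  try {
                      out = fs.create(new Path("/tmp/close-leak-" + i));
                      out.write(1); // data is buffered; the write pipeline is set up lazily
                  } catch (IOException expected) {
                      // create/write may already fail; the cleanup behaviour is what matters
                  } finally {
                      if (out != null) {
                          try {
                              // With no DataNodes reachable, close() throws. On 2.7.2 the
                              // failed stream is not removed from DFSClient#filesBeingWritten,
                              // so repeating this loop grows that map without bound.
                              out.close();
                          } catch (IOException ignored) {
                          }
                      }
                  }
              }
          }
      }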

Attachments

Issue Links

Activity

People

      Assignee: Unassigned
      Reporter: Seb Mo (sebyonthenet)
      Votes: 0
      Watchers: 6
