Hadoop Common / HADOOP-3592

org.apache.hadoop.fs.FileUtil.copy() will leak input streams if the destination can't be opened


Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.19.0
    • Fix Version/s: 0.19.0
    • Component/s: fs
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      FileUtil.copy() relies on IOUtils.copyBytes() to close the incoming streams, which it normally does.

      But if dstFS.create() throws any kind of IOException, the input stream "in", opened on the line above, is never closed and is therefore leaked.

      InputStream in = srcFS.open(src);
      OutputStream out = dstFS.create(dst, overwrite);
      IOUtils.copyBytes(in, out, conf, true);

      A try/catch wrapper around the open operations could close the streams if an exception is thrown at that point in the copy process.
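A minimal, self-contained sketch of the guarded-open pattern the description proposes. The Hadoop types are replaced with plain java.io streams, and the class and helper names (CopyLeakDemo, openDest, closeQuietly) are hypothetical stand-ins; the committed patch may differ in detail.

```java
import java.io.ByteArrayInputStream;
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyLeakDemo {

    // Stand-in for dstFS.create(dst, overwrite): made to fail so the
    // guard path is exercised.
    static OutputStream openDest() throws IOException {
        throw new IOException("cannot open destination");
    }

    // Copy with the proposed guard: if opening the destination throws,
    // the already-open input stream is closed before the exception
    // propagates, instead of being leaked.
    static void copy(InputStream in) throws IOException {
        OutputStream out = null;
        try {
            out = openDest();
            // IOUtils.copyBytes(in, out, conf, true) would go here.
        } catch (IOException e) {
            closeQuietly(in);   // the close the original code misses
            closeQuietly(out);
            throw e;
        }
    }

    static void closeQuietly(Closeable c) {
        if (c != null) {
            try { c.close(); } catch (IOException ignored) { }
        }
    }

    public static void main(String[] args) {
        final boolean[] closed = { false };
        // Input stream that records whether close() was called.
        InputStream in = new ByteArrayInputStream(new byte[] { 1, 2, 3 }) {
            @Override public void close() { closed[0] = true; }
        };
        try {
            copy(in);
        } catch (IOException expected) {
            // destination open failed, as arranged above
        }
        System.out.println("input closed: " + closed[0]);
    }
}
```

In the real FileUtil.copy(), the catch block could instead use Hadoop's IOUtils.closeStream(), which is already null-safe and exception-swallowing.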

      Attachments

        1. HADOOP-3592-200807022209.patch (2 kB, Bill de hOra)
        2. HADOOP-3592.patch (3 kB, Bill de hOra)
        3. HADOOP-3592.patch (9 kB, Steve Loughran)
        4. HADOOP-3592.patch (2 kB, Raghu Angadi)

People

    Assignee: Bill de hOra (dehora)
    Reporter: Steve Loughran (stevel@apache.org)
    Votes: 0
    Watchers: 0

Dates

    Created:
    Updated:
    Resolved: