Hadoop Common / HADOOP-3592

org.apache.hadoop.fs.FileUtil.copy() will leak input streams if the destination can't be opened

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.19.0
    • Fix Version/s: 0.19.0
    • Component/s: fs
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      FileUtil.copy() relies on IOUtils.copyBytes() to close the incoming streams, which it normally does.

      But if dstFS.create() throws any kind of IOException, the input stream "in", opened on the line above, is never closed and leaks.

      InputStream in = srcFS.open(src);
      OutputStream out = dstFS.create(dst, overwrite);
      IOUtils.copyBytes(in, out, conf, true);

      A try/catch wrapper around the open operations could close the streams if an exception is thrown at that point in the copy process.
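      As a sketch of that suggestion (not the actual patch attached to this issue), a hypothetical copy() could catch the failure from opening the destination and close the already-opened source stream before rethrowing. The stream and helper names below are stand-ins for srcFS.open()/dstFS.create()/IOUtils.copyBytes():

      ```java
      import java.io.ByteArrayInputStream;
      import java.io.ByteArrayOutputStream;
      import java.io.IOException;
      import java.io.InputStream;
      import java.io.OutputStream;

      public class CopyFix {

          // Input stream that records whether close() was called,
          // so the leak (or its absence) is observable.
          static class TrackingInputStream extends ByteArrayInputStream {
              boolean closed = false;
              TrackingInputStream(byte[] buf) { super(buf); }
              @Override public void close() throws IOException {
                  closed = true;
                  super.close();
              }
          }

          // Stand-in for dstFS.create(dst, overwrite): may throw IOException.
          static OutputStream openDest(boolean fail) throws IOException {
              if (fail) throw new IOException("cannot create destination");
              return new ByteArrayOutputStream();
          }

          // Sketch of the suggested fix: if opening the destination throws,
          // close the already-opened source stream before propagating.
          static void copy(InputStream in, boolean destFails) throws IOException {
              OutputStream out;
              try {
                  out = openDest(destFails);
              } catch (IOException e) {
                  in.close();   // without this, "in" would leak
                  throw e;
              }
              // Mirrors IOUtils.copyBytes(in, out, conf, true): copy all
              // bytes, then close both streams whatever happens.
              try {
                  int b;
                  while ((b = in.read()) != -1) out.write(b);
              } finally {
                  in.close();
                  out.close();
              }
          }
      }
      ```

      On a JDK 7+ codebase the same guarantee could come from try-with-resources, but the explicit catch matches the wrapper the description proposes.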

        Attachments

        1. HADOOP-3592.patch
          2 kB
          Raghu Angadi
        2. HADOOP-3592.patch
          9 kB
          Steve Loughran
        3. HADOOP-3592.patch
          3 kB
          Bill de hOra
        4. HADOOP-3592-200807022209.patch
          2 kB
          Bill de hOra

              People

              • Assignee: Bill de hOra (dehora)
              • Reporter: Steve Loughran (stevel@apache.org)
              • Votes: 0
              • Watchers: 0
