Hadoop HDFS / HDFS-14107

FileContext Delete on Exit Improvements


Details

    • Type: Improvement
    • Status: Patch Available
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 3.2.0
    • Fix Version/s: None
    • Component/s: fs
    • Labels: None
    • Flags: Patch

    Description

          synchronized (DELETE_ON_EXIT) {
            Set<Entry<FileContext, Set<Path>>> set = DELETE_ON_EXIT.entrySet();
            for (Entry<FileContext, Set<Path>> entry : set) {
              FileContext fc = entry.getKey();
              Set<Path> paths = entry.getValue();
              for (Path path : paths) {
                try {
                  fc.delete(path, true);
                } catch (IOException e) {
                  LOG.warn("Ignoring failure to deleteOnExit for path " + path);
                }
              }
            }
            DELETE_ON_EXIT.clear();
          }
      1. Include the IOException in the log message so that admins can see why the file was not deleted.
      2. Do not bother clearing out the data structure. This code runs only when the JVM is shutting down; the time is better spent letting other shutdown hooks run than cleaning up a map that is about to disappear.
      3. Use a Guava Multimap for readability.
      4. Paths are currently stored in a TreeSet, which orders them by name. Ordering the paths adds no value here; use a faster HashSet instead.
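      The proposals above can be sketched as follows. This is a minimal, self-contained illustration, not the actual patch: FileContext and Path are stood in by plain Strings so the example compiles without Hadoop or Guava on the classpath, and the class and method names are hypothetical.

      ```java
      import java.io.IOException;
      import java.util.HashMap;
      import java.util.HashSet;
      import java.util.Map;
      import java.util.Set;

      public class DeleteOnExitSketch {
          // Map of "file context" to its registered paths. HashSet values
          // instead of TreeSet: ordering paths by name buys nothing at shutdown.
          private static final Map<String, Set<String>> DELETE_ON_EXIT = new HashMap<>();

          static void register(String fc, String path) {
              synchronized (DELETE_ON_EXIT) {
                  DELETE_ON_EXIT.computeIfAbsent(fc, k -> new HashSet<>()).add(path);
              }
          }

          /** Runs the delete-on-exit cleanup; returns the number of deletions attempted successfully. */
          static int processDeleteOnExit() {
              int deleted = 0;
              synchronized (DELETE_ON_EXIT) {
                  for (Map.Entry<String, Set<String>> entry : DELETE_ON_EXIT.entrySet()) {
                      for (String path : entry.getValue()) {
                          try {
                              delete(entry.getKey(), path);
                              deleted++;
                          } catch (IOException e) {
                              // Proposal 1: include the exception so admins see the cause.
                              System.err.println(
                                  "Ignoring failure to deleteOnExit for path " + path + ": " + e);
                          }
                      }
                  }
                  // Proposal 2: no DELETE_ON_EXIT.clear() -- the JVM is exiting anyway,
                  // so leave the time to other shutdown hooks.
              }
              return deleted;
          }

          // Placeholder for fc.delete(path, true); always succeeds in this sketch.
          private static void delete(String fc, String path) throws IOException {
          }

          public static void main(String[] args) {
              register("fc1", "/tmp/a");
              register("fc1", "/tmp/b");
              System.out.println(processDeleteOnExit()); // prints 2
          }
      }
      ```

      A Guava `Multimap<FileContext, Path>` (proposal 3) would replace the hand-rolled `Map<K, Set<V>>` plus `computeIfAbsent` with a single `put(fc, path)` call; the plain-JDK form is used here only to keep the sketch dependency-free.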

      Attachments

        1. HDFS-14107.1.patch
          3 kB
          David Mollitor
        2. HADOOP-14107.2.patch
          6 kB
          David Mollitor


          People

            Assignee: David Mollitor (belugabehr)
            Reporter: David Mollitor (belugabehr)
            Votes: 0
            Watchers: 2
