Hadoop HDFS / HDFS-11592

Closing a file has a wasteful preconditions check in NameNode


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.9.0, 3.0.0-alpha4, 2.8.2
    • Component/s: namenode
    • Labels: None

    Description

      When a file is closed, the NN checks whether all of its blocks are complete. Instead of a simple "if (!complete) throw new IllegalStateException(expensive-err-string)", it invokes "Preconditions.checkState(complete, expensive-err-string)", which builds the expensive error string even when the check passes. The check runs in a loop over all blocks, so more blocks means more penalty; the expensive string should only be computed when an error actually occurs. A telltale sign is seeing this in a stack trace:

              at java.lang.Class.getEnclosingMethod0(Native Method)
              at java.lang.Class.getEnclosingMethodInfo(Class.java:1072)
              at java.lang.Class.getEnclosingClass(Class.java:1272)
              at java.lang.Class.getSimpleBinaryName(Class.java:1443)
              at java.lang.Class.getSimpleName(Class.java:1309)
              at org.apache.hadoop.hdfs.server.namenode.INodeFile.assertAllBlocksComplete(INodeFile.java:246)
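
      The gist of the change is to defer building the error message until a check actually fails. The sketch below is not the attached patch; BlockCompletenessCheck and its nested Block class are placeholders invented for illustration. It contrasts the eager pattern, where Preconditions.checkState is handed a fully built string on every iteration, with a plain if/throw that only pays for the message on the failure path.

          import com.google.common.base.Preconditions;

          public class BlockCompletenessCheck {

            // Placeholder for per-block state; not the real HDFS BlockInfo class.
            static class Block {
              final long id;
              final boolean complete;
              Block(long id, boolean complete) { this.id = id; this.complete = complete; }
            }

            // Costly pattern: the message string (including the reflective
            // getSimpleName() call seen in the stack trace above) is built for
            // every block, even when the check passes.
            static void assertAllBlocksCompleteEager(Block[] blocks) {
              for (Block b : blocks) {
                Preconditions.checkState(b.complete,
                    "Unexpected incomplete block " + b.id + " in "
                        + BlockCompletenessCheck.class.getSimpleName());
              }
            }

            // Cheaper pattern: the message is only constructed on the failure path.
            static void assertAllBlocksCompleteLazy(Block[] blocks) {
              for (Block b : blocks) {
                if (!b.complete) {
                  throw new IllegalStateException(
                      "Unexpected incomplete block " + b.id + " in "
                          + BlockCompletenessCheck.class.getSimpleName());
                }
              }
            }
          }

      Note that Guava's checkState(boolean, String, Object...) overload defers only the string formatting; any expensive expression passed as an argument is still evaluated on every call, so a plain if/throw (or hoisting the expensive work out of the loop) is the safer fix for a per-block check like this.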
      

      Attachments

        1. HDFS-11592.001.patch
          1 kB
          Eric Badger

        Activity

          People

            Assignee: Eric Badger (ebadger)
            Reporter: Eric Badger (ebadger)
            Votes: 0
            Watchers: 8

            Dates

              Created:
              Updated:
              Resolved: