Hadoop HDFS / HDFS-9023

When NN is not able to identify DN for replication, reason behind it can be logged


Details

    • Reviewed

    Description

      When the NN is not able to identify a DN for replication, the reason should be logged (at least the critical information about why the DNs were not chosen, e.g. the disk is full). At present the user is expected to enable debug logging to find this out.

      For example, the reason for the error below appears to be that all 7 DNs are busy with data writes, but neither the client-side nor the NN-side log message gives any hint of this.

      File /tmp/logs/spark/logs/application_1437051383180_0610/xyz-195_26009.tmp could only be replicated to 0 nodes instead of minReplication (=1).  There are 7 datanode(s) running and no node(s) are excluded in this operation.
      	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1553) 
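
      As an illustration only (the class and method names below are hypothetical, not the actual BlockManager / BlockPlacementPolicy code), here is a minimal, self-contained Java sketch of the kind of per-node reason tracking this issue asks for: each candidate DN that is rejected records why, and the aggregated reasons are appended to the final message instead of being visible only at DEBUG level.

      import java.util.LinkedHashMap;
      import java.util.List;
      import java.util.Map;

      /**
       * Hypothetical sketch (not HDFS code): collect the reason each candidate DN
       * is rejected during target selection, so the final "could only be replicated
       * to 0 nodes" message can explain why no node was chosen.
       */
      public class ChooseTargetSketch {

          /** Simplified stand-in for a DataNode descriptor. */
          static class DataNode {
              final String name;
              final long remainingBytes;
              final int activeWriters;
              DataNode(String name, long remainingBytes, int activeWriters) {
                  this.name = name;
                  this.remainingBytes = remainingBytes;
                  this.activeWriters = activeWriters;
              }
          }

          /** Collects one human-readable rejection reason per node. */
          static class RejectionReasons {
              private final Map<String, String> reasons = new LinkedHashMap<>();
              void add(DataNode dn, String reason) {
                  reasons.put(dn.name, reason);
              }
              /** Summary suitable for appending to the NN-side error message. */
              String summary() {
                  if (reasons.isEmpty()) {
                      return "no candidate datanodes were evaluated";
                  }
                  StringBuilder sb = new StringBuilder();
                  reasons.forEach((node, reason) ->
                      sb.append('[').append(node).append(": ").append(reason).append("] "));
                  return sb.toString().trim();
              }
          }

          /** Toy target chooser: rejects nodes that are full or too busy. */
          static DataNode chooseTarget(List<DataNode> candidates, long blockSize,
                                       int maxWriters, RejectionReasons why) {
              for (DataNode dn : candidates) {
                  if (dn.remainingBytes < blockSize) {
                      why.add(dn, "insufficient remaining space ("
                              + dn.remainingBytes + " bytes left)");
                  } else if (dn.activeWriters >= maxWriters) {
                      why.add(dn, "too many active writers (" + dn.activeWriters + ")");
                  } else {
                      return dn;   // first acceptable node wins in this toy example
                  }
              }
              return null;         // nothing chosen; 'why' explains each rejection
          }

          public static void main(String[] args) {
              List<DataNode> dns = List.of(
                  new DataNode("dn1", 1024, 2),          // almost full
                  new DataNode("dn2", 10L << 30, 12));   // busy with many writers

              RejectionReasons why = new RejectionReasons();
              DataNode chosen = chooseTarget(dns, 128L << 20, 8, why);

              if (chosen == null) {
                  // Instead of only "could only be replicated to 0 nodes",
                  // surface the collected reasons in the same message:
                  System.err.println("File could only be replicated to 0 nodes "
                      + "instead of minReplication (=1). Rejection reasons: "
                      + why.summary());
              }
          }
      }

      Run as-is, this prints the same style of NN-side error as above but with per-node reasons attached, e.g. "Rejection reasons: [dn1: insufficient remaining space (1024 bytes left)] [dn2: too many active writers (12)]". In the real code path the analogous change would presumably live in the block placement policy, but the pattern of accumulating reasons during selection and surfacing them in the final message is the same.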
      

      Attachments

        1. HDFS-9023.branch-2.patch
          7 kB
          Xiao Chen
        2. HDFS-9023.03.patch
          6 kB
          Xiao Chen
        3. HDFS-9023.02.patch
          6 kB
          Xiao Chen
        4. HDFS-9023.01.patch
          6 kB
          Xiao Chen

        Issue Links

        Activity


          People

            Assignee: Xiao Chen (xiaochen)
            Reporter: Surendra Singh Lilhore (surendralilhore)
            Votes: 0
            Watchers: 6

            Dates

              Created:
              Updated:
              Resolved:
