Hadoop HDFS / HDFS-3085

Local datanode may need to be reconsidered for reads when reading a very large file, as that local DN may recover after some time.


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 2.0.0-alpha
    • Fix Version/s: None
    • Component/s: datanode, hdfs-client
    • Labels: None

    Description

      While reading a file, we add a failed DN to the deadNodes list and skip it for subsequent reads.
      If we are reading a very large file (which may take hours) and a read from the local datanode fails, that node is added to the deadNodes list and excluded from all further reads of that file.
      Even if the local node recovers immediately, it will not be used again; the read continues with remote nodes, which hurts read performance.

      It would be good to reconsider the local node after a certain period, based on some factors.
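      The proposal could be sketched as a dead-node list whose entries expire after a cool-down window, so a recovered local DN becomes eligible for reads again. This is only an illustrative sketch; the class, method names, and time-based policy are assumptions for discussion, not the actual DFSInputStream deadNodes implementation.

      ```java
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      // Hypothetical sketch: dead-node tracking with expiring entries.
      public class ExpiringDeadNodes {
          // Maps a DN address to the time (millis) its read last failed.
          private final Map<String, Long> deadNodes = new ConcurrentHashMap<>();
          private final long expiryMillis;

          public ExpiringDeadNodes(long expiryMillis) {
              this.expiryMillis = expiryMillis;
          }

          /** Record a failed read from the given DataNode. */
          public void markDead(String dnAddr, long nowMillis) {
              deadNodes.put(dnAddr, nowMillis);
          }

          /**
           * A node is skipped only while its entry is fresh; once the
           * expiry window passes it is removed and reconsidered for reads.
           */
          public boolean isDead(String dnAddr, long nowMillis) {
              Long failedAt = deadNodes.get(dnAddr);
              if (failedAt == null) {
                  return false;
              }
              if (nowMillis - failedAt >= expiryMillis) {
                  deadNodes.remove(dnAddr); // expired: give the node another chance
                  return false;
              }
              return true;
          }
      }
      ```

      With a policy like this, a long-running read would retry the local node once per expiry window instead of excluding it for the lifetime of the stream; the window (and whether only local nodes get this treatment) would be the "factors" to tune.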



          People

            Assignee: Unassigned
            Reporter: Uma Maheswara Rao G (umamaheswararao)
            Votes: 0
            Watchers: 6

            Dates

              Created:
              Updated:
              Resolved: