Hadoop HDFS / HDFS-3085

Local datanode may need to be reconsidered for reads when reading a very big file, as that local DN may recover after some time.

    Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.24.0
    • Fix Version/s: None
    • Component/s: datanode, hdfs-client
    • Labels: None

      Description

      While reading a file, the client adds a failing DN to the deadNodes list and skips it for further reads.
      If we are reading a very large file (which may take hours) and a read from the local datanode fails, that node is added to the deadNodes list and excluded from all further reads of that file.
      Even if the local node recovers immediately, it will not be used for further reads; reading continues with remote nodes only, which hurts read performance.

      It would be good if we reconsidered the local node after a certain period, based on some factors, as sketched below.
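
      One possible approach, as a rough sketch only (this is not the existing DFSInputStream code; the class name and the use of a plain string datanode ID are made up for illustration): record when each node was marked dead and ignore entries older than a configurable expiry window, so a recovered local node becomes eligible for reads again.

      {code:java}
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      // Hypothetical helper: a dead-node list whose entries expire, so a node
      // (e.g. the local DN) is reconsidered for reads after a configurable window.
      public class ExpiringDeadNodes {
        private final long expiryMs;                       // how long a node stays "dead"
        private final Map<String, Long> deadSince = new ConcurrentHashMap<>();

        public ExpiringDeadNodes(long expiryMs) {
          this.expiryMs = expiryMs;
        }

        /** Record a read failure against the given datanode. */
        public void markDead(String datanodeId) {
          deadSince.put(datanodeId, System.currentTimeMillis());
        }

        /** A node is skipped only while its dead entry is still fresh. */
        public boolean shouldSkip(String datanodeId) {
          Long since = deadSince.get(datanodeId);
          if (since == null) {
            return false;                                  // never failed
          }
          if (System.currentTimeMillis() - since > expiryMs) {
            deadSince.remove(datanodeId);                  // entry expired: reconsider this node
            return false;
          }
          return true;                                     // still considered dead
        }
      }
      {code}

      With an expiry of a few minutes, a long-running read would retry the local datanode periodically instead of excluding it for the remainder of the file.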

        Activity

        There are no comments yet on this issue.

          People

          • Assignee:
            Unassigned
          • Reporter:
            Uma Maheswara Rao G
          • Votes:
            0
          • Watchers:
            5
