Hadoop HDFS
HDFS-3085

Local datanode may need to be reconsidered for reads when reading a very big file, as that local DN may recover after some time.

    Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.0.0-alpha
    • Fix Version/s: None
    • Component/s: datanode, hdfs-client
    • Labels:
      None
    • Target Version/s:

      Description

      While reading a file, when a read from a datanode fails, the client adds that DN to the deadNodes list and skips it for subsequent reads.
      If we are reading a very large file (which may take hours) and a read from the local datanode fails, that node is added to the deadNodes list and excluded from all further reads of that file.
      Even if the local node recovers immediately, it will not be used again for that read; the read continues with remote nodes, which hurts read performance.

      It would be good to reconsider the local node after a certain period, based on some factors.
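      The time-based reconsideration proposed above could be sketched as an expiring dead-node list. This is a minimal illustrative sketch, not the actual DFSInputStream code: the class and method names (ExpiringDeadNodes, markDead, isDead) are hypothetical, and the real client keeps a plain deadNodes map with no expiry.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of an expiring dead-node list; names are
// illustrative and do not match the real HDFS client classes.
class ExpiringDeadNodes {
    private final long expiryMs;
    private final Map<String, Long> deadSince = new ConcurrentHashMap<>();

    ExpiringDeadNodes(long expiryMs) {
        this.expiryMs = expiryMs;
    }

    // Record the time a read from this datanode failed.
    void markDead(String datanode) {
        deadSince.put(datanode, System.currentTimeMillis());
    }

    // A node is skipped only while its entry is fresh; once expiryMs
    // has elapsed it becomes eligible again, so a recovered local DN
    // gets retried instead of being excluded for the whole read.
    boolean isDead(String datanode) {
        Long failedAt = deadSince.get(datanode);
        if (failedAt == null) {
            return false;
        }
        if (System.currentTimeMillis() - failedAt >= expiryMs) {
            deadSince.remove(datanode); // retry on the next block read
            return false;
        }
        return true;
    }
}
```

      With such a structure, the expiry period (and whether to apply it only to the local node) would be the "factors" to tune; a too-short expiry would cause repeated failed retries against a node that is still down.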

        Activity

        Uma Maheswara Rao G created issue
        Uma Maheswara Rao G made changes:
          Summary: "Local data node may need to reconsider for read, while reading a very big file as it may get recover in some time." → "Local data node may need to reconsider for read, when reading a very big file as that local DN may get recover in some time."
        Allen Wittenauer made changes:
          Affects Version/s: 0.24.0 [ 12317653 ] → 2.0.0-alpha [ 12320353 ]

          People

          • Assignee: Unassigned
          • Reporter: Uma Maheswara Rao G
          • Votes: 0
          • Watchers: 5

            Dates

            • Created:
            • Updated: