
HDFS-6524: Choosing datanode retry count considering block replica number


    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Not A Problem
    • Affects Version/s: 3.0.0-alpha1
    • Fix Version/s: None
    • Component/s: hdfs-client
    • Labels:

      Description

      Currently chooseDataNode() retries according to dfsClientConf.maxBlockAcquireFailures, which defaults to 3 (DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3). It would be better to also take the block replication factor into account, for example on a cluster configured with only two block replicas, or when using a Reed-Solomon erasure-coding layout with a single replica. Bounding the retries by the replica number helps reduce long-tail read latency.
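
      A minimal, self-contained sketch of the idea (not the attached patch; the helper name and the min-based policy are assumptions for illustration): cap the datanode acquire retries at the block's replica count, so a one- or two-replica block fails fast instead of exhausting the default of three retries.

      public final class RetryBudgetSketch {

          // Mirrors DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT in the HDFS client.
          private static final int DEFAULT_MAX_BLOCK_ACQUIRE_FAILURES = 3;

          /**
           * Hypothetical helper: pick the retry budget for reading one block.
           * Never retry more times than the block has replicas, while keeping the
           * configured ceiling (dfs.client.max.block.acquire.failures) as an upper bound.
           */
          static int effectiveMaxBlockAcquireFailures(int configuredMaxFailures, int replicaCount) {
              if (replicaCount <= 0) {
                  // Unknown replica count: fall back to the configured behaviour.
                  return configuredMaxFailures;
              }
              return Math.min(configuredMaxFailures, replicaCount);
          }

          public static void main(String[] args) {
              // 3-replica block: unchanged, up to 3 acquire failures are tolerated.
              System.out.println(effectiveMaxBlockAcquireFailures(DEFAULT_MAX_BLOCK_ACQUIRE_FAILURES, 3)); // 3
              // 1-replica block (e.g. Reed-Solomon coded data): give up after 1 failure.
              System.out.println(effectiveMaxBlockAcquireFailures(DEFAULT_MAX_BLOCK_ACQUIRE_FAILURES, 1)); // 1
          }
      }

      In the real client both values would presumably come from DfsClientConf and the located block, but the trade-off is the same: fewer wasted retries on thinly replicated blocks shortens the read tail.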

        Attachments

        1. HDFS-6524.txt
          2 kB
          Liang Xie
        2. HDFS-6524.001.patch
          2 kB
          Lisheng Sun
        3. HDFS-6524.002.patch
          2 kB
          Lisheng Sun
        4. HDFS-6524.003.patch
          3 kB
          Lisheng Sun
        5. HDFS-6524.004.patch
          4 kB
          Lisheng Sun
        6. HDFS-6524.005.patch
          3 kB
          Lisheng Sun
        7. HDFS-6524.005(2).patch
          3 kB
          Lisheng Sun
        8. HDFS-6524.006.patch
          4 kB
          Lisheng Sun
        9. HDFS-6524.007.patch
          4 kB
          Lisheng Sun

          Activity

            People

            • Assignee: Lisheng Sun (leosun08)
            • Reporter: Liang Xie (xieliang007)
            • Votes: 0
            • Watchers: 7

              Dates

              • Created:
              • Updated:
              • Resolved: