Hadoop HDFS / HDFS-13573

Javadoc for BlockPlacementPolicyDefault is inaccurate


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Trivial
    • Resolution: Fixed
    • Affects Version/s: 3.1.0
    • Fix Version/s: 3.2.0
    • Component/s: None
    • Labels: None
    • Target Version/s:
    • Hadoop Flags: Reviewed

      Description

      Current rule of default block placement policy:

      The replica placement strategy is that if the writer is on a datanode,
      the 1st replica is placed on the local machine,
      otherwise a random datanode. The 2nd replica is placed on a datanode
      that is on a different rack. The 3rd replica is placed on a datanode
      which is on a different node of the rack as the second replica.

      The clause "if the writer is on a datanode, the 1st replica is placed on the local machine" is not always accurate: the HDFS client can override it. The client can pass CreateFlag#NO_LOCAL_WRITE to request that no block replica be placed on the local datanode. Subsequent replicas still follow the default block placement policy.
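      The first-replica decision described above can be modeled with a short, self-contained sketch. `PlacementSketch`, its nested `CreateFlag` enum, and `chooseFirstReplica` are hypothetical names introduced here for illustration; the real logic lives in `BlockPlacementPolicyDefault`, and the real flag is `org.apache.hadoop.fs.CreateFlag#NO_LOCAL_WRITE`, passed to a `FileSystem#create` overload that accepts an `EnumSet<CreateFlag>`.

```java
import java.util.EnumSet;

// Simplified model of the default first-replica placement rule.
// All names are hypothetical; only the decision logic mirrors the
// behavior described in the issue.
public class PlacementSketch {
    // Stand-in for org.apache.hadoop.fs.CreateFlag.
    enum CreateFlag { CREATE, OVERWRITE, NO_LOCAL_WRITE }

    /**
     * Chooses where the 1st replica goes: local only when the writer
     * is itself a datanode AND the client did not opt out with
     * NO_LOCAL_WRITE; otherwise a random datanode.
     */
    static String chooseFirstReplica(boolean writerIsDatanode,
                                     EnumSet<CreateFlag> flags) {
        if (writerIsDatanode && !flags.contains(CreateFlag.NO_LOCAL_WRITE)) {
            return "local-datanode";
        }
        return "random-datanode";
    }

    public static void main(String[] args) {
        // Writer on a datanode, no opt-out: replica stays local.
        System.out.println(chooseFirstReplica(true,
                EnumSet.of(CreateFlag.CREATE)));                      // local-datanode
        // Writer on a datanode, but client requested NO_LOCAL_WRITE.
        System.out.println(chooseFirstReplica(true,
                EnumSet.of(CreateFlag.CREATE, CreateFlag.NO_LOCAL_WRITE))); // random-datanode
        // Writer is not a datanode: always a random datanode.
        System.out.println(chooseFirstReplica(false,
                EnumSet.of(CreateFlag.CREATE)));                      // random-datanode
    }
}
```

      Note that the flag only affects the first replica; the 2nd and 3rd replicas are still chosen by the default rack-aware rule quoted above.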

        Attachments

        1. HDFS-13573.02.patch (2 kB, Zsolt Venczel)
        2. HDFS-13573.01.patch (2 kB, Zsolt Venczel)


              People

              • Assignee: zvenczel Zsolt Venczel
              • Reporter: linyiqun Yiqun Lin
              • Votes: 0
              • Watchers: 5

                Dates

                • Created:
                • Updated:
                • Resolved: