Hadoop HDFS / HDFS-14786

A new block placement policy tolerating availability zone failure


    Details

    • Type: New Feature
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: block placement
    • Labels: None
    • Target Version/s:

      Description

      NetworkTopology assumes a 3-layer "/datacenter/rack/host" topology. The default block placement policies are rack-aware for better fault tolerance. Newer block placement policies such as BlockPlacementPolicyRackFaultTolerant try their best to spread replicas across as many racks as possible, which further tolerates multiple rack failures. HADOOP-8470 introduced NetworkTopologyWithNodeGroup to add another layer under the rack, i.e. a 4-layer "/datacenter/rack/nodegroup/host" topology. With that, replicas within a rack can be placed in different node groups for better isolation.
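
      For reference, the following is a minimal, self-contained sketch of how nodes land on this 3-layer topology using org.apache.hadoop.net.NetworkTopology; the datacenter, rack, and host names are made-up examples, not taken from any real cluster:

          import org.apache.hadoop.net.NetworkTopology;
          import org.apache.hadoop.net.NodeBase;

          // Sketch of the 3-layer "/datacenter/rack/host" layout that
          // NetworkTopology assumes. All names below are illustrative only.
          public class ThreeLayerTopologyExample {
            public static void main(String[] args) {
              NetworkTopology topology = new NetworkTopology();
              // Each leaf (host) is registered with a two-level network location.
              topology.add(new NodeBase("host-1", "/dc1/rack-a"));
              topology.add(new NodeBase("host-2", "/dc1/rack-b"));
              topology.add(new NodeBase("host-3", "/dc2/rack-c"));
              System.out.println("racks:  " + topology.getNumOfRacks());  // 3
              System.out.println("leaves: " + topology.getNumOfLeaves()); // 3
            }
          }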

      Existing block placement policies tolerate a single rack failure, since at least two racks are chosen in those cases. However, all replicas could still end up in the same datacenter, even though there are multiple datacenters in the same cluster topology. In other words, failures of layers above the rack are not well tolerated.

      Meanwhile, more deployments in the public cloud are leveraging multiple availability zones (AZs) for high availability, since inter-AZ latency is affordable in many cases. Within a single AZ, some cloud providers like AWS support partition placement groups, which are essentially different racks. A simple mapping of this to an HDFS network topology is the 3-layer "/availabilityzone/rack/host".
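
      One way to feed such AZ-aware locations to HDFS is a custom DNSToSwitchMapping. The sketch below is only an illustration: it assumes the AZ and partition names are resolved elsewhere (cloud metadata, a static mapping file, etc.), and the hard-coded location is a placeholder, not a real mapping:

          import java.util.ArrayList;
          import java.util.List;
          import org.apache.hadoop.net.DNSToSwitchMapping;

          // Resolves every DataNode hostname to an "/availabilityzone/rack"
          // location, so hosts end up on the 3-layer
          // "/availabilityzone/rack/host" topology described above.
          public class AzAwareMapping implements DNSToSwitchMapping {
            @Override
            public List<String> resolve(List<String> names) {
              List<String> locations = new ArrayList<>(names.size());
              for (String host : names) {
                // A real implementation would look up the host's AZ and
                // partition placement group; this fixed value is a placeholder.
                locations.add("/us-east-1a/partition-0");
              }
              return locations;
            }

            @Override
            public void reloadCachedMappings() { /* nothing cached here */ }

            @Override
            public void reloadCachedMappings(List<String> names) { /* no-op */ }
          }

      Such a class would be plugged in via net.topology.node.switch.mapping.impl, the same way any other switch mapping is configured.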

      To achieve high availability that tolerates zone failure, this JIRA proposes a new block placement policy which tries its best to place replicas across as many AZs and racks as possible, distributed as evenly as possible.

      For example, with 3 replicas, racks are chosen as follows (a rough sketch of this selection logic appears after the list):

      • 1 AZ: fall back to BlockPlacementPolicyRackFaultTolerant to place replicas across as many racks as possible
      • 2 AZs: randomly choose one rack in one AZ and two racks in the other AZ
      • 3 AZs: randomly choose one rack in each AZ
      • 4 AZs: randomly choose three AZs, then one rack in each chosen AZ
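
      A rough, standalone sketch of that selection logic follows. It is not the actual BlockPlacementPolicy API; the class and parameter names are hypothetical. The idea: spread the replicas over min(replicas, #AZ) zones as evenly as possible, then pick distinct racks at random inside each zone.

          import java.util.ArrayList;
          import java.util.Collections;
          import java.util.List;
          import java.util.Map;
          import java.util.Random;

          public class AzRackChooser {
            private final Random random = new Random();

            // azToRacks: availability zone name -> racks in that zone
            // (each zone is assumed to have at least one rack).
            public List<String> chooseRacks(Map<String, List<String>> azToRacks,
                                            int replicas) {
              List<String> zones = new ArrayList<>(azToRacks.keySet());
              Collections.shuffle(zones, random);
              // e.g. 3 replicas across 4 AZs -> use only 3 AZs
              int usedZones = Math.min(replicas, zones.size());
              List<String> chosen = new ArrayList<>();
              for (int i = 0; i < replicas; i++) {
                String zone = zones.get(i % usedZones);  // round-robin keeps counts even
                List<String> unused = new ArrayList<>(azToRacks.get(zone));
                unused.removeAll(chosen);                // prefer racks not chosen yet
                List<String> candidates = unused.isEmpty() ? azToRacks.get(zone) : unused;
                chosen.add(candidates.get(random.nextInt(candidates.size())));
              }
              return chosen;
            }
          }

      With 2 AZs and 3 replicas this yields two racks in one zone and one in the other; with 3 or more AZs it yields one rack in each of three zones, matching the cases above.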

      After racks are picked, hosts are chosen randomly within those racks, honoring local storage, favorite nodes, excluded nodes, storage types, etc. Data may become imbalanced if the topology is very uneven across AZs. This does not seem to be a problem, as infrastructure provisioning in the public cloud is more flexible than in 1P (on-premises) environments.
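
      As an illustration of that host-level filtering, a hypothetical helper (not the actual chooseTarget code; all names are made up) might look like:

          import java.util.ArrayList;
          import java.util.Collections;
          import java.util.List;
          import java.util.Map;
          import java.util.Optional;
          import java.util.Random;
          import java.util.Set;

          public class InRackHostChooser {
            private final Random random = new Random();

            // Picks a random host in the rack that is not excluded and offers
            // the required storage type; empty means no valid candidate.
            public Optional<String> chooseHost(List<String> hostsInRack,
                                               Set<String> excludedNodes,
                                               Map<String, Set<String>> storageTypesByHost,
                                               String requiredStorageType) {
              List<String> candidates = new ArrayList<>();
              for (String host : hostsInRack) {
                boolean excluded = excludedNodes.contains(host);
                boolean hasStorage = storageTypesByHost
                    .getOrDefault(host, Collections.emptySet())
                    .contains(requiredStorageType);
                if (!excluded && hasStorage) {
                  candidates.add(host);
                }
              }
              if (candidates.isEmpty()) {
                return Optional.empty(); // the caller would retry with another rack
              }
              return Optional.of(candidates.get(random.nextInt(candidates.size())));
            }
          }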


            People

            • Assignee: Unassigned
            • Reporter: Mingliang Liu (liuml07)
            • Votes: 0
            • Watchers: 14
