- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0
- Component/s: None
- Labels: None
- Hadoop Flags: Reviewed
- Release Note:
During pipeline recovery, if not enough DataNodes can be found and dfs.client.block.write.replace-datanode-on-failure.best-effort is enabled, the pipeline is allowed to continue, even if only a single DataNode remains. Similarly, when the write pipeline is created initially, if for some reason not enough DataNodes can be found, a similar configuration could allow writing with a single DataNode. More study will be done.
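
As a rough illustration only (not the patch for this issue), the existing best-effort behavior described above can be turned on from the client side through the standard Hadoop Configuration API. The property name is taken from the note; the file path and payload are made up for the example:

    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BestEffortWriteExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Let the write pipeline continue during recovery even if a
        // replacement DataNode cannot be found; the write may end up
        // going to a single DataNode.
        conf.setBoolean(
            "dfs.client.block.write.replace-datanode-on-failure.best-effort",
            true);

        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out =
                 fs.create(new Path("/tmp/best-effort-demo"))) {  // hypothetical path
          out.write("example payload".getBytes(StandardCharsets.UTF_8));
        }
      }
    }

Note that this only affects pipeline recovery today; the analogous setting for the initial pipeline setup is what this issue proposes, and its name is not fixed here.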