Bug reported by Arpit Gupta:
If dfs.nameservices is set to a logical name (e.g. arpit), WebHDFS does not work; you have to provide the exact hostname of the active NameNode. On an HA cluster, a DFS client should not need to know which NameNode is currently active.
To fix this, we try to:
1) let WebHdfsFileSystem support a logical NN service name
2) add failover-and-retry functionality to WebHdfsFileSystem for NN HA
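For reference, a typical HA client configuration under which a URI like webhdfs://arpit/ would be expected to resolve might look like the following sketch (the nameservice name arpit matches the report; the NameNode ids nn1/nn2 and hostnames are illustrative):

```xml
<configuration>
  <!-- Logical nameservice the client refers to instead of a hostname -->
  <property>
    <name>dfs.nameservices</name>
    <value>arpit</value>
  </property>
  <!-- NameNode ids within the nameservice (illustrative) -->
  <property>
    <name>dfs.ha.namenodes.arpit</name>
    <value>nn1,nn2</value>
  </property>
  <!-- HTTP addresses WebHDFS would need to resolve the logical name -->
  <property>
    <name>dfs.namenode.http-address.arpit.nn1</name>
    <value>host1.example.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.arpit.nn2</name>
    <value>host2.example.com:50070</value>
  </property>
</configuration>
```

The RPC-based DFS client already resolves such a logical name via its failover proxy provider; the bug is that WebHdfsFileSystem does not perform the equivalent resolution.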
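A minimal, Hadoop-free sketch of what the failover-and-retry behavior could look like, assuming the client knows the candidate NameNode addresses for the logical service. The class and method names here are hypothetical illustrations, not the actual WebHdfsFileSystem implementation:

```java
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch: try an operation against each candidate NameNode,
// falling through to the next one when the current target is standby or
// unreachable, and retrying the whole list a bounded number of times.
public class FailoverRetry {

    public static <T> T callWithFailover(List<String> namenodes,
                                         Function<String, T> op,
                                         int maxRetries) {
        RuntimeException last = null;
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            for (String nn : namenodes) {
                try {
                    // Only the active NameNode accepts the call.
                    return op.apply(nn);
                } catch (RuntimeException e) {
                    // Standby or unreachable: remember the failure,
                    // then fail over to the next candidate.
                    last = e;
                }
            }
        }
        throw last; // all candidates failed on every attempt
    }

    public static void main(String[] args) {
        // Simulate an HA pair: nn1 is standby (rejects the call), nn2 is active.
        String result = callWithFailover(
            List.of("nn1", "nn2"),
            nn -> {
                if (nn.equals("nn1")) throw new IllegalStateException("standby");
                return "ok from " + nn;
            },
            2);
        System.out.println(result); // prints "ok from nn2"
    }
}
```

In the real fix the per-NameNode operation would be the WebHDFS HTTP request, and the retry policy would follow the DFS client's existing failover configuration rather than a fixed loop.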
Related issue: webhdfs failover append to stand-by namenode fails (Open, Unassigned)