Description
Bug reported by arpitgupta:
If dfs.nameservices is set to arpit, then
hdfs dfs -ls webhdfs://arpit/tmp
does not work; the exact active NameNode hostname must be provided instead. On an HA cluster, a WebHDFS client, like the DFS client, should not need to know which NameNode is currently active.
To fix this, we propose to:
1) let WebHdfsFileSystem support logical NameNode service names, and
2) add failover-and-retry functionality to WebHdfsFileSystem for NameNode HA.
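The two steps above can be sketched as follows. This is a hypothetical, simplified illustration (not the actual WebHdfsFileSystem patch): a plain Map stands in for Hadoop's Configuration, and the example assumes the standard HA keys dfs.ha.namenodes.&lt;nameservice&gt; and dfs.namenode.http-address.&lt;nameservice&gt;.&lt;nnId&gt; are used to resolve the logical name to candidate NameNode HTTP addresses, with a simple fail-over loop across them.

```java
import java.util.*;
import java.util.function.Function;

/**
 * Hypothetical sketch: resolve a logical nameservice (e.g. "arpit") to the
 * HTTP addresses of its NameNodes, and fail over between them on error.
 */
public class LogicalNameResolver {
    private final Map<String, String> conf; // stand-in for Hadoop Configuration

    public LogicalNameResolver(Map<String, String> conf) {
        this.conf = conf;
    }

    /** Returns the NameNode HTTP addresses behind the logical name, in try order. */
    public List<String> resolve(String nameservice) {
        String nnIds = conf.get("dfs.ha.namenodes." + nameservice);
        if (nnIds == null) {
            throw new IllegalArgumentException("Not a logical nameservice: " + nameservice);
        }
        List<String> addrs = new ArrayList<>();
        for (String id : nnIds.split(",")) {
            String addr = conf.get(
                "dfs.namenode.http-address." + nameservice + "." + id.trim());
            if (addr != null) {
                addrs.add(addr);
            }
        }
        return addrs;
    }

    /**
     * Tries the operation against each candidate address in turn. A real client
     * would also apply a retry policy with backoff and inspect the failure to
     * distinguish a standby NameNode from a network error.
     */
    public String failoverCall(String nameservice, Function<String, String> op) {
        RuntimeException last = null;
        for (String addr : resolve(nameservice)) {
            try {
                return op.apply(addr);
            } catch (RuntimeException e) {
                last = e; // standby or unreachable: fail over to the next NN
            }
        }
        throw last != null ? last : new IllegalStateException("no namenodes configured");
    }
}
```

With this scheme, webhdfs://arpit/tmp would be handled by resolving "arpit" to its configured NameNodes and retrying the request against the next one when the first turns out to be standby or unreachable.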
Attachments
Issue Links
- depends upon
  - HDFS-5219 Add configuration keys for retry policy in WebHDFSFileSystem (Closed)
- duplicates
  - HDFS-4299 WebHDFS Should Support HA Configuration (Resolved)
- is duplicated by
  - HDFS-5181 Fail-over support for HA cluster in WebHDFS (Resolved)
  - HDFS-5176 WebHDFS should support logical service names in URIs (Resolved)
- is related to
  - HDFS-5123 Hftp should support namenode logical service names in URI (Resolved)
1. webhdfs failover append to stand-by namenode fails (Open, Unassigned)