Details
Type: Improvement
Status: Resolved
Priority: Minor
Resolution: Fixed
Fix Version/s: 0.17.1
Description
The 'legacy' pyarrow.hdfs.connect was able to read the namenode info from the Hadoop configuration files.
The new pyarrow.fs.HadoopFileSystem requires the host to be specified explicitly.
Inferring this info from the environment makes it easier to deploy pipelines.
More importantly, with HA namenodes it is almost impossible to know for sure what to specify. During a rolling restart the active namenode changes, and in an HA setup there is no guarantee which node will be active.
I tried connecting to the standby namenode. The connection gets established, but when writing a file an error is raised because writing to a standby namenode is not allowed.
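A minimal sketch of the desired behaviour, assuming the Hadoop client configuration (core-site.xml / hdfs-site.xml) is discoverable by libhdfs via the usual environment variables (HADOOP_HOME, CLASSPATH); passing "default" as the host is the libhdfs convention for resolving fs.defaultFS from that configuration:

    import pyarrow.fs

    # Legacy API: the namenode was resolved from the Hadoop config files
    # when no explicit host was given:
    #   import pyarrow
    #   hdfs = pyarrow.hdfs.connect()

    # New filesystem API: the host argument is required. Passing "default"
    # asks libhdfs to read fs.defaultFS from core-site.xml; with an HA
    # nameservice configured there, the client follows whichever namenode
    # is currently active instead of a hard-coded hostname.
    hdfs = pyarrow.fs.HadoopFileSystem(host="default")

    # Writes then go through the active namenode, even during a
    # rolling restart.
    with hdfs.open_output_stream("/tmp/example.txt") as f:
        f.write(b"written via the active namenode")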
Issue Links
- supersedes ARROW-448 [Python] Load HdfsClient default options from core-site.xml or hdfs-site.xml, if available (Closed)