The 'legacy' pyarrow.hdfs.connect was able to infer the namenode information from the Hadoop configuration files.
The new pyarrow.fs.HadoopFileSystem requires the host to be specified explicitly.
Inferring this information from the environment makes pipelines easier to deploy.
More importantly, with HA namenodes it is almost impossible to know for sure which host to specify. During a rolling restart the active namenode changes, and in an HA setup there is no guarantee which namenode will be active at any given moment.
I tried connecting to the standby namenode directly. The connection gets established, but when writing a file an error is raised stating that writes through a standby namenode are not allowed.
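As a workaround, the value the legacy API picked up can be read from the Hadoop configuration yourself: fs.defaultFS in core-site.xml (under HADOOP_CONF_DIR) holds the default filesystem URI, which in an HA setup is the nameservice rather than a single namenode. The sketch below assumes a standard deployment layout; the helper name infer_default_fs is hypothetical.

```python
import os
import xml.etree.ElementTree as ET

def infer_default_fs(conf_dir=None):
    """Read fs.defaultFS from core-site.xml, mimicking the lookup the
    legacy pyarrow.hdfs.connect performed via the Hadoop configuration."""
    # HADOOP_CONF_DIR is the conventional env var; /etc/hadoop/conf is a
    # common default location (an assumption, it varies per distribution).
    conf_dir = conf_dir or os.environ.get("HADOOP_CONF_DIR", "/etc/hadoop/conf")
    core_site = os.path.join(conf_dir, "core-site.xml")
    root = ET.parse(core_site).getroot()
    for prop in root.iter("property"):
        if prop.findtext("name") == "fs.defaultFS":
            # e.g. "hdfs://nameservice1" for an HA nameservice
            return prop.findtext("value")
    return None
```

Note that pyarrow.fs.HadoopFileSystem also accepts host="default", which delegates this lookup to libhdfs and should resolve the configured defaultFS, HA nameservices included, provided HADOOP_CONF_DIR and CLASSPATH are set in the environment.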