Hadoop HDFS / HDFS-5122

Support failover and retry in WebHdfsFileSystem for NN HA


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.1.0-beta
    • Fix Version/s: 2.3.0
    • Component/s: ha, webhdfs
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      Bug reported by arpitgupta:

      If dfs.nameservices is set to arpit,

      hdfs dfs -ls webhdfs://arpit/tmp
      

      does not work. You have to provide the exact hostname of the active namenode. On an HA cluster, when using the DFS client, one should not need to provide the active NN hostname.
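
      For illustration, the client-side HA configuration for the arpit nameservice might look like the following minimal sketch (the namenode IDs nn1/nn2 and the example.com hostnames are hypothetical); with logical-name support, the webhdfs:// URI can then name the nameservice instead of a specific host:

      import java.net.URI;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileStatus;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class WebHdfsHaExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          // Logical nameservice and its namenodes (IDs and hostnames are hypothetical).
          conf.set("dfs.nameservices", "arpit");
          conf.set("dfs.ha.namenodes.arpit", "nn1,nn2");
          conf.set("dfs.namenode.http-address.arpit.nn1", "nn1.example.com:50070");
          conf.set("dfs.namenode.http-address.arpit.nn2", "nn2.example.com:50070");

          // The URI uses the logical name, not the active namenode's hostname.
          FileSystem fs = FileSystem.get(URI.create("webhdfs://arpit/"), conf);
          for (FileStatus status : fs.listStatus(new Path("/tmp"))) {
            System.out.println(status.getPath());
          }
        }
      }

      This mirrors the usual hdfs-site.xml HA properties; the command from the report, hdfs dfs -ls webhdfs://arpit/tmp, would then resolve through the same configuration.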

      To fix this, we try to:
      1) let WebHdfsFileSystem support logical NN service names, and
      2) add failover-and-retry functionality to WebHdfsFileSystem for NN HA (a rough sketch of the retry loop follows below).
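
      The sketch below illustrates the failover-and-retry idea only (it is not the actual patch; the WebHdfsOp callback and the backoff numbers are assumptions): resolve the logical name to its candidate namenode HTTP addresses, and when a request fails with an IOException, fail over to the next address and retry with a bounded backoff.

      import java.io.IOException;
      import java.util.List;

      public class FailoverRetrySketch {
        /**
         * Illustrative only: run one WebHDFS HTTP operation, failing over
         * across the candidate namenode addresses and retrying on failure.
         */
        static <T> T runWithFailover(List<String> nnHttpAddresses,
                                     WebHdfsOp<T> op,
                                     int maxRetries) throws IOException {
          IOException lastFailure = null;
          for (int attempt = 0; attempt <= maxRetries; attempt++) {
            // Rotate through the candidate namenodes on each attempt.
            String addr = nnHttpAddresses.get(attempt % nnHttpAddresses.size());
            try {
              return op.run(addr);
            } catch (IOException e) {
              // Connection refused, timeout, or a standby namenode: remember
              // the failure, back off, and fail over to the next address.
              lastFailure = e;
              if (attempt == maxRetries) {
                break; // retries exhausted
              }
              try {
                Thread.sleep(Math.min(500L * (1L << attempt), 15000L));
              } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw new IOException("Interrupted while retrying", ie);
              }
            }
          }
          throw lastFailure;
        }

        /** Hypothetical callback representing a single WebHDFS HTTP request. */
        interface WebHdfsOp<T> {
          T run(String namenodeHttpAddress) throws IOException;
        }
      }

      The intent is to give webhdfs:// clients the same failover behavior that hdfs:// clients already get on an HA cluster.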

      Attachments

        1. HDFS-5122.patch (19 kB, Haohui Mai)
        2. HDFS-5122.001.patch (19 kB, Haohui Mai)
        3. HDFS-5122.002.patch (19 kB, Haohui Mai)
        4. HDFS-5122.003.patch (37 kB, Haohui Mai)
        5. HDFS-5122.004.patch (24 kB, Haohui Mai)


            People

              Assignee: Haohui Mai (wheat9)
              Reporter: Arpit Gupta (arpitgupta)
              Votes: 0
              Watchers: 17

              Dates

                Created:
                Updated:
                Resolved: