HBase
HBASE-5738

Using HBase with HA HDFS requires bogus hardcoded port value

    Details

• Type: Bug
    • Status: Resolved
• Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 0.92.1, 0.94.0
    • Fix Version/s: None
    • Component/s: master
• Labels: None

      Description

When configuring HBase with HDFS HA, we currently have to hard-code port 8020 (regardless of which port HDFS is actually using for the namenode RPC address) in the following property in hbase-site.xml:

        <property>
          <name>hbase.rootdir</name>
          <value>hdfs://ha-nn-uri:8020/hbase</value>
        </property>
      

      Otherwise the master and regionservers will not start.

The value in the above property should really just be "hdfs://ha-nn-uri/hbase" (substituting your HA nameservice URI for "ha-nn-uri" and the name of the HBase root directory in HDFS for "hbase", as appropriate).
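
That is, the desired form, which this bug currently prevents from working, is:

        <property>
          <name>hbase.rootdir</name>
          <value>hdfs://ha-nn-uri/hbase</value>
        </property>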

Attachments

• HBASE-5738.patch (0.7 kB, Shaneal Manek)

Issue Links

• relates to HBASE-5697

          Activity

          Shaneal Manek created issue -
          Shaneal Manek added a comment -

          The problem is that the HDFS client changed the fs.default.name property to fs.defaultFS in HA.

          Most of the references in HBase were appropriately updated - this one seems to have been forgotten.
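
A rough sketch of the approach (an assumption based on the comments below, which say the patch sets both fs.default.name and fs.defaultFS; this is not the attached patch itself):

    // Hypothetical helper, names illustrative: point both the legacy and the
    // current default-filesystem keys at the HBase root so old (0.20) and
    // newer (0.21+/HA) clients resolve the same filesystem.
    static void setDefaultFs(org.apache.hadoop.conf.Configuration conf, String rootdir) {
      conf.set("fs.default.name", rootdir); // pre-0.21 key
      conf.set("fs.defaultFS", rootdir);    // 0.21+ key, used for HA nameservices
    }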

Shaneal Manek made changes -
Attachment: HBASE-5738.patch
Shaneal Manek made changes -
Status: Open → Patch Available
Assignee: Shaneal Manek
          Todd Lipcon added a comment -

> The problem is that the HDFS client changed the fs.default.name property to fs.defaultFS in HA.

          That's not quite right – the config key change was from 0.20 to 0.21, but the old config key should still work. So I'm confused why this fixes the problem...

Also, this might break 0.20 (if it doesn't, then I'm suspicious of why this code is there at all!)
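
For background, the old key keeps working because hadoop-common's Configuration maps deprecated keys onto their replacements. A minimal illustration (assuming 0.23+/2.x deprecation semantics; not code from this patch):

    // Setting the deprecated key is forwarded to its replacement through
    // Configuration's built-in deprecation table, so both keys resolve alike.
    Configuration conf = new Configuration();
    conf.set("fs.default.name", "hdfs://ha-nn-uri/");
    conf.get("fs.defaultFS");    // "hdfs://ha-nn-uri/"
    conf.get("fs.default.name"); // "hdfs://ha-nn-uri/"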

Jonathan Hsieh made changes -
Link: This issue relates to HBASE-5697
          Jonathan Hsieh added a comment -

Added a link to the Hadoop 0.20/0.23 deprecated properties list.

          Shaneal Manek added a comment -

          Ah, thanks for the clarification Todd!

          I wouldn't have expected it to break anything in 0.20 (since this patch sets both fs.default.name and fs.defaultFS) - but I'll test it before we continue any further.

          I'll post an update after I've had a chance to see how it works on 0.20 (I'm using my dev cluster for security testing right now).

          Shaneal Manek added a comment -

          Thanks Jon - I didn't know about that ticket.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12521649/HBASE-5738.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/1426//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/1426//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/1426//console

          This message is automatically generated.

          Eli Collins added a comment -

          Shaneal,

          Does using defaultFS fix the issues?

          Looking at the exception..

          Caused by: org.apache.hadoop.fs.InvalidPathException: Invalid path
          name Wrong FS: hdfs://ha-nn-uri/hbase/-ROOT-/70236052/.logs/hlog.1327624121445,
          expected: hdfs://ha-nn-uri:8020
                 at org.apache.hadoop.fs.AbstractFileSystem.checkPath(AbstractFileSystem.java:361)
                 at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:462)
                 at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:657)
                 at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:654)
                 at org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2319)
                 at org.apache.hadoop.fs.FileContext.create(FileContext.java:654)
                 at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:501)
                 at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:468)
          

          AFS#checkPath is failing because the authority of the FC URI (ha-nn-uri:8020) doesn't match the authority of the given URI (ha-nn-uri).

          Looking at the FC URI, AFS#getUri...

              int port = uri.getPort();
              port = (port == -1 ? defaultPort : port);
              if (port == -1) { // no port supplied and default port is not specified
                return new URI(supportedScheme, authority, "/", null);
              }
          

          So if we don't specify a port when creating the FC (which we don't, we're using hdfs://ha-nn-uri/hbase) we'll fail to match the authority because we didn't set a port and we're not using the default port for HDFS (8020).

Seems like a HADOOP bug to me; specifically, the path given to AFS#checkPath should be one of the configured NN addresses, which will have a port (i.e. the address returned by the proxy provider), instead of the logical NN URI (or AFS#checkPath should be more lenient when the given path doesn't specify a port).
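
In java.net.URI terms, the mismatch looks like this (illustrative sketch, not the Hadoop code itself):

    // AFS#getUri has filled in the default HDFS port on the FileContext URI,
    // while the user-supplied path carries no port, so the authorities differ
    // and checkPath throws InvalidPathException ("Wrong FS").
    java.net.URI fcUri = java.net.URI.create("hdfs://ha-nn-uri:8020/");
    java.net.URI path  = java.net.URI.create("hdfs://ha-nn-uri/hbase");
    fcUri.getAuthority(); // "ha-nn-uri:8020"
    path.getAuthority();  // "ha-nn-uri"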

          Eli Collins added a comment -

          I believe this is a dupe of HADOOP-8310.

          stack added a comment -

          Shaneal, shall we close this (as per Eli's suggestion above)?

Shaneal Manek made changes -
Status: Patch Available → Resolved
Resolution: Duplicate

People

• Assignee: Shaneal Manek
• Reporter: Shaneal Manek
• Votes: 0
• Watchers: 4
