Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Minor
    • Resolution: Later
    • Affects Version/s: None
    • Fix Version/s: 0.92.0
    • Component/s: None
    • Labels:
      None

      Description

      The Hadoop NameNode is a single point of failure: if the master instance fails, HDFS goes down, and HBase goes down with it. For that reason we do not try to deploy HBase in a multi-master configuration. Instead we colocate the HDFS NameNode and the HBase HMaster on the same instance and accept its failure as a known risk. As these EC2 scripts are starter scripts which can (and should) be customized, this is acceptable, but we can do better. We should deploy a fully fault-tolerant Hadoop+HBase cluster as a worked example of how to do it.

        Issue Links

          Activity

          lars_francke Lars Francke added a comment -

          This issue was closed as part of a bulk closing operation on 2015-11-20. All issues that have been resolved and where all fixVersions have been released have been closed (following discussions on the mailing list).

          apurtell Andrew Purtell added a comment -

          Closed per HBASE-2543.

          stack stack added a comment -

          Moved from 0.21 to 0.22 just after the merge of the old 0.20 branch into TRUNK.

          apurtell Andrew Purtell added a comment -

          An alternative approach is to run two clusters in two availability zones and use HBASE-1295 to set up bidirectional replication. The ZooKeeper ensemble could perhaps be shared between the clusters and could span two or three availability zones for durable operation.
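
          For illustration only, the peer id, ZooKeeper hostnames, table, and column family below are placeholders rather than values from this issue; once hbase.replication is enabled in hbase-site.xml on both clusters, the HBASE-1295 wiring would look roughly like this from the HBase shell:

            # on cluster A, point a replication peer at cluster B's ZooKeeper ensemble
            hbase> add_peer '1', 'zk-b1,zk-b2,zk-b3:2181:/hbase'
            # mark the column families that should ship their edits to the peer
            hbase> disable 'usertable'
            hbase> alter 'usertable', {NAME => 'info', REPLICATION_SCOPE => 1}
            hbase> enable 'usertable'
            # repeat on cluster B, pointing at cluster A's ensemble, to make it bidirectional

          Placing each ZooKeeper host in a different availability zone would let the shared ensemble survive the loss of a single zone.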

          apurtell Andrew Purtell added a comment -

          Link to HDFS-976

          apurtell Andrew Purtell added a comment -

          Link to HBASE-2108.

          eli Eli Collins added a comment -

          Failing over to another node where a healthy replica of the NN store exists and starting an NN instance will cause the NN to collect block information from every "new" and "unknown" DataNode for the first time.

          Check out HDFS-839 (NN forwards block reports to the BNN). Enabling high availability via fast automatic failover to the backup NameNode is something HDFS developers are working on. You might also find Dhruba's recent post on HA of interest.

          apurtell Andrew Purtell added a comment -

          @Berk: Thank you for all of the very helpful comments.

          bdd Berk D. Demir added a comment -

          Until HDFS comes up with a solution to eliminate the NN SPoF, old-fashioned HA measures are required to keep the NameNode available.

          So far, the best and seemingly most reliable bet on Linux is to have a network-replicated block device, a heartbeat/messaging connection between the HA nodes, and cluster resource manager software that tracks infrastructural resource dependencies and moves them between machines in the required order.

          All in all, HBase's tolerance window for NN unavailability mostly depends on the particular load at the time of failover and on the RegionServers' need to create new files.

          Failing over to another node where a healthy replica of the NN store exists and starting an NN instance will cause the NN to collect block information from every "new" and "unknown" DataNode for the first time. Additionally, the default extension of safe mode after the replication threshold is reached is 30 seconds (property: dfs.namenode.safemode.extension). This prolonged unavailability window can/will have bad effects on RegionServers (jdcryans will comment with his observations).
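
          For reference, that extension is configured in hdfs-site.xml; the shortened value below is only an illustrative placeholder, not a recommendation from this issue:

            <property>
              <name>dfs.namenode.safemode.extension</name>
              <!-- milliseconds the NN stays in safe mode after the block
                   replication threshold is reached; the default is 30000 (30 s) -->
              <value>5000</value>
            </property>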

          We implemented a NameNode HA cluster with open source tools like OpenAIS, Pacemaker, Heartbeat and DRBD.

          • NameNode disk storage is replicated between two machines (adding a 3rd machine is possible with newer DRBD).
          • OpenAIS provides the intra-cluster messaging and heartbeat availability layer.
          • Pacemaker is used to manage the cluster resources (DRBD disks, filesystem mount, NN service IP, NN daemon).
          • An OCF script starts, stops, validates and monitors (via periodic calls) the subsystem (NN, JT, SNN); a rough configuration sketch follows after this list.
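
          As a sketch of how such a stack hangs together in Pacemaker's crm shell, the resource names, device path, IP address, and the ocf:custom:namenode agent below are placeholders standing in for the custom OCF script, not values from this issue:

            # DRBD-backed NameNode metadata volume, managed as a master/slave resource
            primitive drbd_nn ocf:linbit:drbd params drbd_resource="nn" op monitor interval="15s"
            ms ms_drbd_nn drbd_nn meta master-max="1" clone-max="2" notify="true"
            # filesystem mount, service IP, and the NameNode daemon itself
            primitive fs_nn ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/hadoop/name" fstype="ext3"
            primitive ip_nn ocf:heartbeat:IPaddr2 params ip="10.0.0.10" cidr_netmask="24"
            primitive namenode ocf:custom:namenode op monitor interval="30s"
            # keep everything on the DRBD master and start the pieces in order
            group g_nn fs_nn ip_nn namenode
            colocation nn_on_drbd inf: g_nn ms_drbd_nn:Master
            order nn_after_drbd inf: ms_drbd_nn:promote g_nn:start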

          At the end of the day, this is applicable not only to the NameNode but also to the JobTracker and SecondaryNameNode.

          For a starting point, ClusterLabs (the creators of Pacemaker) offer an e-book on creating clusters with DRBD, Pacemaker, and OpenAIS, called Clusters From Scratch (PDF link).


            People

            • Assignee:
              Unassigned
              Reporter:
              apurtell Andrew Purtell
            • Votes:
              0
              Watchers:
              5

              Dates

              • Created:
                Updated:
                Resolved:

                Development