Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: HA Branch (HDFS-1623)
    • Component/s: ha
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      In an HA cluster, when there are two NNs, the invariant that only one NN is active at a time has to be preserved in order to prevent "split brain syndrome." Thus, when a standby NN transitions to the "active" state during a failover, it needs to somehow fence the formerly active NN to ensure that it can no longer perform edits. This JIRA is to discuss and implement NN fencing.

      Attachments

      1. hdfs-2179.txt
        41 kB
        Todd Lipcon
      2. hdfs-2179.txt
        46 kB
        Todd Lipcon
      3. hdfs-2179.txt
        46 kB
        Todd Lipcon
      4. hadoop-7961-delta.txt
        10 kB
        Eli Collins
        Activity

        Todd Lipcon added a comment -

        Fencing overview

        There are several different methods to fence a NN, at varying levels of nastiness:

        1) Cooperative active->standby transition or shutdown
        In the case of a manual failover, the old primary can gracefully either transition to a standby mode, or gracefully shut down. In this case, since we assume the software to be cooperative, no real "fencing" is necessary – the new NN just needs to unambiguously confirm that the old NN has dropped out of active mode.

        This method succeeds only if the old NN remains in full operation.

        2) Process killing or death verification (eg via ssh or a second daemon)
        In the case that the old primary has either hung (eg deadlock) or crashed (eg JVM segfault), but the host is OK, the new primary may contact that host and send SIGKILL to the NameNode JVM. This may be done either via ssh or via contacting some process which is still running on the node. It is also sufficient to verify that the NN process is no longer running in the case that its JVM crashed.

        This method succeeds only if the host of the old NN remains in full operation, despite the NN itself being deadlocked or crashed.

        3) Storage fencing
        Depending on the type of storage in which the old NN stores its edits directories, the new NN may explicitly fence the storage. This is typically accomplished using a vendor-specific extension. For example, NetApp filers support the command "exportfs -b enable save <nnhost.com> /vol/vol0" which can be remotely issued in order to disallow any further access to a particular mount by a particular host.

        In the case of edits stored on BookKeeper in the future, we may be able to implement some kind of lease revocation or fencing within that storage system.

        4) Network port fencing
        Many switches support remote management. One way to prevent a NameNode from responding to any further requests is to forcibly disable its network port. An alternative similar mechanism is to use something like a LOM card to remotely disable the NIC.

        5) Power port fencing (aka STONITH)
        Many power distribution units (PDUs) support remote management. The last-ditch effort to fence a node is to literally "pull the power".

        Proposal

        Since methods 3-5 above are usually vendor-specific implementations, it does not make sense to try to implement a catch-all fencing mechanism within Hadoop. Instead, operators are likely to want to use commonly available shell scripts that work against their preferred hardware. Given this, I would propose that Hadoop's fencing behavior be:

        • Configure a list of "fence methods", each with an associated priority.
        • Each fence method returns an exit code indicating whether it has successfully fenced the target node.
        • If any method succeeds, no further method is attempted.
        • If a method fails, continue down the list to try the next method.
        • If all fence methods fail, then both nodes remain in "standby" state, and an administrator must manually force the transition after verifying that the other node is no longer active.

        The first fence method will always be the "cooperative" method. We can also ship with Hadoop an implementation of method #2 (shoot-the-other-process-in-the-head via ssh). Methods 3-5 would probably be fulfilled by custom site-specific shell scripts, example snippets on a wiki, or existing tools like the fence_* programs that are available from Red Hat.
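
        To make the try-in-order behavior concrete, here is a minimal sketch of the proposed driver loop. The FenceMethod, NodeFencer, and BadFencingConfigurationException names match classes that show up in the patch later in this thread, but the signatures and structure below are illustrative assumptions, not the committed code.

{code}
import java.util.List;

interface FenceMethod {
  /** @return true if the target NN was successfully fenced. */
  boolean tryFence(String targetHost) throws BadFencingConfigurationException;
}

class BadFencingConfigurationException extends Exception {
  BadFencingConfigurationException(String msg) { super(msg); }
}

class NodeFencer {
  private final List<FenceMethod> methods; // configured in priority order

  NodeFencer(List<FenceMethod> methods) { this.methods = methods; }

  /** Try each configured method in order; stop at the first success. */
  boolean fence(String targetHost) {
    for (FenceMethod method : methods) {
      try {
        if (method.tryFence(targetHost)) {
          return true;   // success: no further method is attempted
        }
        // method reported failure: continue down the list
      } catch (Throwable t) {
        // a broken method should not abort the sequence; try the next one
      }
    }
    return false;        // all methods failed: an administrator must step in
  }
}
{code}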

        Open questions

        • do we need to have any kind of framework for unfencing built into Hadoop? Or is it up to an administrator to "unfence"?
        • is it actually a good idea to include "Cooperative shutdown" in this same framework? or should we only call fence when we know it's uncooperative?
        Todd Lipcon added a comment -

        and one more open question I forgot: do we care about read fencing? ie, is it OK if the old NN can for some number of seconds service reads which are no longer up-to-date? If so, storage fencing is necessary but not sufficient.

        Todd Lipcon added a comment -

        I found this reference useful to see what kind of fencing methods are implemented by Red Hat Cluster Suite: https://access.redhat.com/kb/docs/DOC-30004

        Kihwal Lee added a comment -

        I think it is safe to serve reads as long as the new node is not serving writes. So there can be a period of service overlap if we can make sure the old node stops serving reads before the new node starts serving writes. I am assuming both are serving the same content, but if the fs state has diverged between the two (e.g. the in-memory state of the old node is not in sync with the persistent one), even serving reads may not be safe. Although it is safe in terms of data integrity at the file system level in this case, it may cause clients to make wrong decisions and lose data. Probably we should not trust the old node at all, since it can have unexpected failure modes. Then serving reads is not safe.

        Eli Collins added a comment -

        Agree with your proposal, though we shouldn't need to fence the old NN in the cooperative case (because the old primary has confirmed that it's gone into standby, closed its storage dirs, stopped service threads, etc). Since we have to make the uncooperative case work anyway, and exercising it frequently/by default will help find the relevant bugs (eg places where we're not syncing the log but should be), we should start with it.

        Suresh Srinivas added a comment -

        Case 1), where the active and standby are in communication and co-operating, does not require fencing at all. Fencing is required only when the active and standby cannot communicate. So we should drop it from the cases to consider.

        When using solutions such as LinuxHA, a local process (LRM) kills the process to be fenced. This does not require ssh to the node. HDFS-2185 should consider this requirement. I might start with LinuxHA to play around with this, in the first phase, since I think getting a rock solid and correct fail-over controller is non-trivial.

        dhruba borthakur added a comment -

        Awesome, +1 to the proposal listed above.

        Todd Lipcon added a comment -

        Here's a preliminary version of this. I've included the basic framework code, as well as two fencing implementations:
        1) shell-command based fencing
        2) ssh-based fencing that uses jsch to ssh into the target node and fuser to kill whatever process is holding onto the target port

        This isn't at all integrated into the NN as of yet, since it's not clear what the hook points will be. But if this looks like the right path, I'd like to commit it to the HA branch, and we can adapt it to its integration points (eg failover controller) later.
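
        As a rough illustration of implementation (1), the sketch below runs a site-specific command and treats exit code 0 as "target successfully fenced", following the exit-code convention from the earlier proposal. The real ShellCommandFencer in the patch also pumps the child's stdout/stderr to the logs (via the StreamPumper class); this hypothetical sketch just inherits the parent's streams.

{code}
import java.io.IOException;

/** Hypothetical shell-based fencer sketch: run a configured command
 *  and treat exit code 0 as "target successfully fenced". */
class ShellFenceSketch {
  private final String command;

  ShellFenceSketch(String command) { this.command = command; }

  boolean tryFence() {
    try {
      Process p = new ProcessBuilder("bash", "-c", command)
          .inheritIO()             // let the fence script's output reach the console
          .start();
      return p.waitFor() == 0;     // exit code 0 => fenced
    } catch (IOException e) {
      return false;                // the command could not even be started
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      return false;
    }
  }
}
{code}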

        Aaron T. Myers added a comment -

        Patch looks pretty good, Todd. A few comments:

        1. Please add some comments to the FenceMethod interface.
        2. I think FenceMethod should be public. Entirely possible (if not likely) end users will want to implement their own FenceMethods, and they shouldn't need to put them in o.a.h.hdfs.server.namenode.ha.
        3. Please add some class comments to NodeFencer.
        4. Seems to me like NodeFencer.fence should be catching Exception thrown by the individual methods. No reason not to try the other ones if some exception other than BadFencingConfigurationException is thrown.
        5. In SshFenceByTcpPort.getNNPort, won't this be getting the port of the NN from where the SSH is occurring, not necessarily of the NN which is being SSHed into? This sort of points to what may be a larger problem, which is that I believe it's presently impossible to configure the addresses of multiple NNs in a single configuration.
        Todd Lipcon added a comment -

        Thanks for the review. Here's a new revision:

        • Added javadoc to FenceMethod, NodeFencer, etc
        • Made FenceMethod a public interface, added audience/stability annotations
        • Added a catch clause for all Throwables around each fence method
        • Made SshFenceByTcpPort take a second parameter in order to configure the port of the target process. eg "sshfence(nn2.foo.com, 8020)" will make it ssh into that host and kill whatever process is listening on port 8020.

        I imagine we'll need to revisit some of this when we're farther along in other areas – in particular so we can have the same configuration on the two peers, but have them properly STONITH each other rather than themselves. But I think it's best to address that a little down the road.
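
        The sshfence(nn2.foo.com, 8020) syntax above implies that each configured fence method is written as a name plus an optional parenthesized argument list. A hypothetical parse of that format looks like the following; the pattern and class here are illustrative, not the patch's actual parsing code.

{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Illustrative parse of a fence-method entry of the form
 *  "name" or "name(arg1, arg2)". */
class FenceMethodSpec {
  // method name, optionally followed by a parenthesized argument list
  private static final Pattern ENTRY =
      Pattern.compile("\\s*(\\w+)\\s*(?:\\((.*)\\))?\\s*");

  final String name;
  final String[] args;

  FenceMethodSpec(String name, String[] args) {
    this.name = name;
    this.args = args;
  }

  static FenceMethodSpec parse(String entry) {
    Matcher m = ENTRY.matcher(entry);
    if (!m.matches()) {
      throw new IllegalArgumentException("Bad fence method entry: " + entry);
    }
    String argList = m.group(2);
    return new FenceMethodSpec(
        m.group(1),
        argList == null ? new String[0] : argList.split("\\s*,\\s*"));
  }
}
{code}

        Under this sketch, parsing "sshfence(nn2.foo.com, 8020)" would yield the name "sshfence" and the arguments "nn2.foo.com" and "8020".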

        Aaron T. Myers added a comment -

        Latest patch looks great. One tiny comment:

        In the loop in NodeFencer.fence, why do you continue in the event of BadFencingConfigurationException, but not in the case of an unknown Throwable? I can imagine a justification for continuing in both or neither cases, but not in only one.

        +1 once this is addressed.

        Todd Lipcon added a comment -

        The only difference the lack of a continue statement makes should be in the logging output. But I see your point - I'll add a continue there to be consistent, and then commit this to the HA branch.
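
        Concretely, the loop body after that change would look something like the sketch below (reusing the types from the earlier sketch; this assumes both catch clauses log and then continue, and is not the committed code).

{code}
/** Sketch of NodeFencer.fence after the fix: both catch clauses
 *  continue to the next configured method, so a misconfigured method
 *  and an unexpectedly failing one are handled consistently. */
boolean fence(String target) {
  for (FenceMethod method : methods) {
    try {
      if (method.tryFence(target)) {
        return true;   // fenced: stop trying further methods
      }
    } catch (BadFencingConfigurationException e) {
      continue;        // log the bad configuration, then try the next method
    } catch (Throwable t) {
      continue;        // log the unexpected error, then try the next method
    }
    // method ran cleanly but reported failure: fall through to the next one
  }
  return false;        // nothing worked: manual intervention is required
}
{code}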

        Todd Lipcon added a comment -

        Updated patch with continue added. I'll commit this to the HA branch momentarily. Thanks for the review.

        If other committers have followup, happy to address later on branch.

        Suresh Srinivas added a comment -

        Todd, should we run test-patch before committing changes? My preference is to do that.

        Suresh Srinivas added a comment -

        I have not reviewed the patch yet. Just looking at it from a high level, why should this be in the namenode package? Is this not generic enough that it should be in common?

        Todd Lipcon added a comment -

        Yes, sorry for not running test-patch before. I just ran it and it pointed out that I was missing the license on one of the new test files as well. Aside from that, it passes.

        As for why it goes in the NN package instead of common, I think it's better to start with building things specific to our current use case. Then, if we have need of this code from another spot (eg MapReduce HA) we can consider moving it to common. But let's not overly generalize until we have to – eg from what I've heard about MR HA, it stores all of its critical state in ZooKeeper so this sort of fencing is not necessary.

        Suresh Srinivas added a comment -

        My feeling is that this is not NN specific. Whether MR uses it or not, doing it in common gets the abstractions right.

        Eli Collins added a comment -

        Agree. Spoke to Todd, am going to move this to common for HADOOP-7938 (there's minimal HDFS dependency).

        Eli Collins added a comment -

        Patch attached that shows the delta required after svn move.

        Todd Lipcon added a comment -

        The delta patch looks good to me.

        Eli Collins added a comment -

        Thanks Todd, I've committed this.

        Hudson added a comment -

        Integrated in Hadoop-Hdfs-HAbranch-build #40 (See https://builds.apache.org/job/Hadoop-Hdfs-HAbranch-build/40/)
        HADOOP-7961. Move HA fencing to common. Contributed by Eli Collins

        eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228510
        Files :

        • /hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/pom.xml
        • /hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/BadFencingConfigurationException.java
        • /hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FenceMethod.java
        • /hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/NodeFencer.java
        • /hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ShellCommandFencer.java
        • /hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/SshFenceByTcpPort.java
        • /hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/StreamPumper.java
        • /hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestNodeFencer.java
        • /hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestShellCommandFencer.java
        • /hadoop/common/branches/HDFS-1623/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestSshFenceByTcpPort.java
        • /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/pom.xml
        • /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/BadFencingConfigurationException.java
        • /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/FenceMethod.java
        • /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/NodeFencer.java
        • /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ShellCommandFencer.java
        • /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/SshFenceByTcpPort.java
        • /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StreamPumper.java
        • /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestNodeFencer.java
        • /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestShellCommandFencer.java
        • /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestSshFenceByTcpPort.java

          People

          • Assignee:
            Todd Lipcon
          • Reporter:
            Todd Lipcon
          • Votes:
            0
          • Watchers:
            17

            Dates

            • Created:
              Updated:
              Resolved:

              Development