Hadoop Common / HADOOP-8163

Improve ActiveStandbyElector to provide hooks for fencing old active

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.23.3, 0.24.0
    • Fix Version/s: 2.0.0-alpha
    • Component/s: ha
    • Labels: None

      Description

      When a new node becomes active in an HA setup, it may sometimes have to take fencing actions against the node that was formerly active. This JIRA extends the ActiveStandbyElector to add an extra non-ephemeral znode to the ZK directory, which acts as a second copy of the active node's information. Then, if the active loses its ZK session, the next active to be elected can easily locate the unfenced node and take the appropriate actions.

      1. hadoop-8163.txt
        33 kB
        Todd Lipcon
      2. hadoop-8163.txt
        49 kB
        Todd Lipcon
      3. hadoop-8163.txt
        51 kB
        Todd Lipcon
      4. hadoop-8163.txt
        51 kB
        Todd Lipcon
      5. hadoop-8163.txt
        52 kB
        Todd Lipcon

        Activity

        Todd Lipcon added a comment -

        The design here is pretty simple:

        In ZK:

        • add an additional znode (the "info" znode) next to the "lock" znode, which is a PERSISTENT node with the same data.

        Upon successfully acquiring the lock znode:

        • check if there exists an "info" znode
          • if so, the previous active did not exit cleanly. Call an application-provided fencing hook, providing the data from the "info" znode
          • If the fencing hook succeeds, delete the "info" znode
        • create an "info" znode with one's own app data
        • proceed to call the becomeActive API on the app

        Upon crashing:

        • the ephemeral node disappears
        • by the order of events above, if the application has become active, then it will have created an "info" znode so whoever recovers knows to fence it

        Upon graceful exit:

        • first transition out of "active" mode (e.g. shutdown the NN)
        • then delete the "info" node
        • then close the session (deleting the ephemeral node)
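        The flow above can be sketched as a toy simulation. This is illustrative Python, not the actual patch: a plain dict stands in for ZooKeeper, and all function names are hypothetical.

```python
# Toy model of the election/fencing protocol described above.
# A dict stands in for ZooKeeper: "lock" plays the ephemeral lock
# znode, "info" the PERSISTENT info znode. Names are illustrative.

def become_active(zk, my_data, fence_old_active):
    """Acquire the lock, fence any unclean previous active, publish info."""
    if "lock" in zk:
        return False                  # someone else holds the lock
    zk["lock"] = my_data              # ephemeral lock znode
    if "info" in zk:                  # previous active did not exit cleanly
        fence_old_active(zk["info"])  # application-provided fencing hook
        del zk["info"]                # fencing succeeded: remove old record
    zk["info"] = my_data              # persistent record of this active
    return True

def crash(zk):
    """Session expiry: only the ephemeral lock znode disappears."""
    zk.pop("lock", None)              # "info" survives, so the next active
                                      # knows that fencing is needed

def graceful_exit(zk):
    """Clean shutdown: delete "info" first, then close the session."""
    zk.pop("info", None)
    zk.pop("lock", None)
```

        The ordering is the key property: "info" is created only while the lock is held and outlives a crash, so whoever acquires the lock next can tell whether the previous active needs fencing.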
        Todd Lipcon added a comment -

        Attached patch implements the above.
        I've been using this in the development of the ZK-based failover controller.

        I also took the liberty of cleaning up the test case which was unnecessarily complicated, and didn't actually bubble up test failures from the inner threads.

        Hari Mankude added a comment -

        Todd,

        I am not sure this is simple. Going by your description, each NN will create two types of znodes (the lock znode and the info znode) on ZooKeeper separately. Creation of two different znodes by the same NN cannot be done atomically, which opens up different error scenarios.

        Todd Lipcon added a comment -

        Hi Hari,

        The creation is not atomic, but the "info" znode is created only after the "lock" znode. So only one NN can try to create an "info" node at a time (since only one can hold the lock at a time).

        The only other possibility is that the NN creates the "lock" znode, the session expires, and it then creates the "info" znode. But this isn't possible: if the session expired, the creation of the "info" node would fail with an exception.

        Do you have a specific error scenario in mind that I'm not covering?
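        The argument above can be illustrated with a toy client that rejects writes once its session has expired. This is hypothetical Python, not the real ZooKeeper client API; it only models the property that an expired session can no longer issue creates.

```python
class SessionExpired(Exception):
    """Raised when a write is attempted on an expired session."""

class FakeZK:
    """Toy stand-in for a ZK client session: once the session expires,
    its ephemeral node vanishes and all further writes are rejected."""
    def __init__(self):
        self.nodes = {}
        self.session_ok = True

    def expire_session(self):
        self.session_ok = False
        self.nodes.pop("lock", None)    # the ephemeral lock znode vanishes

    def create(self, path, data):
        if not self.session_ok:
            raise SessionExpired(path)  # expired session: create fails
        self.nodes[path] = data
```

        So the "lock created, session expires, info created" interleaving cannot leave a dangling "info" znode: the create fails with an exception instead.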

        Hari Mankude added a comment -

        Hi Todd,

        The question I had was how is the info znode creation prevented when the client does not have the ephemeral lock znode? Is this ensured in the zk client or at the zookeeper?

        Todd Lipcon added a comment -

        The question I had was how is the info znode creation prevented when the client does not have the ephemeral lock znode? Is this ensured in the zk client or at the zookeeper?

        This is ensured by ZooKeeper. The only reason the ephemeral node would disappear is if the session had expired. This means the ZK leader has marked the session as expired – and thus you can no longer issue commands under that same session.

        To be sure, I just double checked with Pat Hunt from the ZK team. Apparently there was a rare race condition bug ZOOKEEPER-1208 fixed in 3.3.4/3.4.0 about this exact case: https://issues.apache.org/jira/browse/ZOOKEEPER-1208?focusedCommentId=13149787&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13149787
        ... but since Hadoop will probably need the krb5 auth from ZK 3.4, it seems a reasonable requirement to need at least that version.

        Hari Mankude added a comment -

        Thanks Todd for answers.

        I had a suggestion. One of the issues with automatic failover is how a NN becomes active when the other node is not available, or when both NNs are restarting. If there is a persistent info znode, then it provides a very powerful invariant.

        1. If the info znode is present, then only the owner of the info znode can be made active by preference. Alternatively, the other NN can be made active, provided it can capture all the state from the previously active NN. If the other NN cannot get all the edit logs from the previous active, it remains in standby state. An admin-initiated action can force the takeover, with data loss.

        2. If info znode is absent, then it is fair game and both nodes can be made active.

        For this, a couple of high-level changes would be necessary in the patch.
        1. The info znode is not cleaned up by the active NN during clean shutdown. (Sure, this results in unnecessary fencing.)
        2. After an ephemeral lock takeover, and before the previous info znode can be deleted, the become_active() on the new node performs state equalization by getting all the edit logs.
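        These two changes can be sketched by modifying the toy flow accordingly. This is illustrative Python, not the patch: a dict stands in for ZK, and equalize_state is a hypothetical placeholder for pulling the latest edit logs from the previous active.

```python
def graceful_exit(zk):
    """Change 1: do NOT delete the info znode on clean shutdown,
    so the next active always runs the fencing/equalization path."""
    zk.pop("lock", None)         # only the ephemeral lock goes away

def become_active(zk, my_data, equalize_state):
    """Change 2: equalize state after taking the ephemeral lock and
    before deleting the previous owner's info znode."""
    if "lock" in zk:
        return False
    zk["lock"] = my_data
    if "info" in zk:
        if not equalize_state(zk["info"]):
            del zk["lock"]       # cannot capture all state: stay standby
            return False
        del zk["info"]
    zk["info"] = my_data
    return True
```

        A standby that cannot catch up releases the lock and stays in standby, matching the invariant described above.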

        Todd Lipcon added a comment -

        Hi Hari. I like your ideas about using this info znode for failover/restart preferences. But I don't think it's a requirement for a first draft, and it's not clear what you mean by 'state equalization' in your second point. We don't currently use this terminology.

        Are you OK with the current design for a first draft? We can add improvements later – I'm using a protobuf for the info in ZK so we can evolve the information contained within without breaking compatibility.

        Hari Mankude added a comment -

        Design looks good. We can decide later on whether info znode cleanup on a clean shutdown is required. By state equalization, I meant that in a situation where there is no shared storage (something like a journal daemon), there might be situations where the standby is not able to take over, since it does not have access to the entire set of latest edit logs. If this is a switchover situation, then these edit logs can be pulled over from the active. If the active NN is dead, then the standby cannot become active without loss of data. Using a quorum on journals (with stopping of the NN service on loss of quorum) is another approach.

        The info znode is very useful in solving some basic HA issues for automatic failover.

        The patch looks good.

        Aaron T. Myers added a comment -

        Patch is looking pretty good, Todd. Here are some initial comments. I've so far only reviewed the non-test code.

        1. I find it odd that we use CamelCase for the lock file name, but dash-separated-words for the most recent file name.
        2. Not sure why we run the words "LOCKFILENAME" and "MOSTRECENT_FILE" together for constants. Underscores are cheap.
        3. Seems odd to have "Preconditions.checkState(zkClient != null);" inside the try/catch for KeeperException.
        4. Recommend adding something like "this method will create the znode and all parents if it does not exist" to the method comment of "ensureBaseZNode".
        5. Seems like we should make operationSuccess, operationNodeExists, operationNodeDoesNotExist, and operationRetry static functions. I also question the usefulness of the names of these methods. Feel free to punt this to another JIRA.
        6. Seems like ensureBaseZNode can potentially return without error, but not have actually created the desired znode, if we retry 3 times but continue to get the retry error code returned from ZK.
        7. You left in a "TODO: should handle retries" - do you intend to address this in a separate JIRA?
        8. Now that we have a new method which creates a different ZNode, seems like we should rename the createNode() method to make it clear that it only creates the lock file ZNode. Likewise monitorNode() should probably either be parameterized or renamed to be more specific.
        9. Seems a little odd to throw an AssertionError in the case of unexpected app data in the ZNode we're trying to delete, though I don't have a good suggestion for a better exception. Feel free to ignore this comment.
        10. I think it's reasonable to change the "Checking for any old active..." message to be moved from debug level to info, particularly since the "Old node exists..." message is info level.
        11. You left in a "// TODO: think carefully about this error handling." Will this be addressed in a subsequent JIRA?
        Aaron T. Myers added a comment -

        Finished reviewing the test code. Just a few more little nits:

        1. Recommend s/verifyCallsExist/verifyExistCalls/g
        2. Why not make instance variables in TestActiveStandbyElectorRealZK be private?
        Todd Lipcon added a comment -

        Hi Aaron. The new patch addresses your feedback above. I also added retry functionality to a few other places where it was missed, and added a unit test for the bug you caught above with ensureBaseZNode.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12519297/hadoop-8163.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 6 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.ipc.TestRPCCallBenchmark

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/740//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/740//console

        This message is automatically generated.

        Aaron T. Myers added a comment -

        Thanks for addressing my comments, Todd. I think this patch looks a lot better. IllegalStateException is a good choice instead of AssertionError.

        A few nits follow. +1 once these are addressed:

        1. Am I missing something, or are ensureBaseZNode and baseNodeExists only called by the tests? If so, we should probably relocate them, or at least mark them @VisibleForTesting if they can't be moved for some reason.
        2. Maybe include the path of the ZNode in the log message "Writing record of this node as the most recent active..."
        3. We're now using CamelCase for all ZNode names, but two comments still reference a dash-separated name.
        4. I still think we should rename operationSuccess, etc. Feel free to punt to another JIRA.
        Bikas Saha added a comment -

        Sorry for jumping on this late.
        I will try to look at the patch and get back within the day.
        I noticed a comment about cleaning up the test case because it was complex. It was complex because it was trying to run the code through a variety of corner failure cases. That is important, because this code needs to be robust against these hard-to-repro cases. I agree that there are cases it does not catch, and we can add more tests to cover such holes.

        Todd Lipcon added a comment -

        Hi Bikas. To be clear, I did not remove any of your test cases. I just cleaned the test up so it is implemented much more simply. It looked like there was some confusion about the semantics of inner classes – e.g. using static variables where unnecessary (IIRC you are new to Java, so perfectly understandable!). All of the same corner cases you tested are still tested, just with fewer lines of code and fitting our normal coding conventions.

        Todd Lipcon added a comment -

        Am I missing something, or are ensureBaseZNode and baseNodeExists only called by the tests? If so, we should probably relocate them, or at least mark them @VisibleForTesting if they can't be moved for some reason.

        These are used by my forthcoming patch for the ZK-based automatic failover controller. The ZKFC has a "-formatZK" flag which calls through to ensureBaseZNode. Once this gets committed, I'll move forward with uploading the patch there.

        I fixed the other three of ATM's comments. I'll wait until tomorrow to commit this, in case Bikas has any additional feedback.

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12519525/hadoop-8163.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 6 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in .

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/751//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/751//console

        This message is automatically generated.

        Bikas Saha added a comment -

        Thanks for clarifying! I was afraid I would have to look out for code changes that might have been covered by the "cleaned up" tests. The new test code looks cleaner. Thanks! I still have to get my head around the changes to the real ZK test.

        • consider renaming zkMostRecentFilePath to something like zkMostRecentActiveFilePath. zkMostRecentFilePath is open to being misunderstood. Same for MOST_RECENT_FILENAME.
        • actually MostRecent seems to be a misnomer to me. I think it actually is LockOwnerInfo/LeaderInfo. zkLockOwnerInfoPath/tryDeleteLeaderInfo etc.
        • why is ensureBaseNode() needed? In it we are creating a new set of znodes with the given zkAcls, which may or may not be the correct thing. E.g., if the admin simply forgot to create the appropriate znode path before starting the service, it might be OK to fail, instead of trying to create the path ourselves with permissions that may or may not be appropriate for the entire path. I would be wary of doing this. What is the use case?
        • consider renaming baseNodeExists() to parentNodeExists() or renaming the parentZnodeName parameter in the constructor to baseNode for consistency. Perhaps this could be called in the constructor to check that the znode exists and be done with config issues. No need for ensureBaseNode() above.
        • this must be my newbie Java skills, but I find something like prefixPath.append("/").append(pathParts[index]) or znodeWorkingDir.subString(0, znodeWorkingDir.nextIndexOf('/')) more readable than prefixPath = Joiner.on("/").join(Arrays.asList(pathParts).subList(0, i)). It might also be more efficient, but that's not relevant for this situation.
        • public synchronized void quitElection(boolean needFence) – don't we want to delete the permanent znode for standbys too? Why check whether the state is active? It calls a tryDelete* method anyway, which should be harmless.
        • tryDeleteMostRecentNode() – from my understanding of a "tryFunction", this function should not really be asserting that some state holds. If it should assert, then we should remove "try" from the name.
        • in zkDoWithRetries there is a NUM_RETRIES field that could be used instead of the literal 3.
        • why are we exposing "public synchronized ZooKeeper getZKClient()"? What operations does the HA service need to perform that need direct access to the zkClient? I am stressing this because the Elector is trying to provide an interface for leader election and the management protocol. Hence, any necessary activity should be available through the interface. The client of the Elector should not manipulate/read/interact directly with ZK. If it needs ZK for some other activity, it could create another ZK client. Making this a public method adds it to the public API of the elector, and I am wary of doing that.
        • the following code seems to have issues:

          LOG.info("Old node exists: " + StringUtils.byteToHexString(data));
          appClient.fenceOldActive(data);
          try {
            deleteWithRetries(zkMostRecentFilePath, stat.getVersion());
          } catch (KeeperException e) {
            LOG.error("Hit error " + e + " trying to delete info znode "
                + "after fencing!", e);
            throw e;
          }

        Say appClient.fenceOldActive(data) is a long-running operation (and it is likely to be). While that is happening, the state of the world changes and this elector is no longer the lock owner. When appClient.fenceOldActive(data) completes, the code will go ahead and delete the lockOwnerZnode at zkMostRecentFilePath. This node could be from the new leader, who had successfully fenced and become active. The version number parameter might accidentally save us, but it would likely be 0 all the time.

        • what happens if the leader lost the lock, tried to delete its znode, failed to do so, exited anyway, then became the next owner and found the existing mostRecent znode? I think it will try to fence itself. We could add service-side logic that checks against this, but IMO, since the fencing strategy via permanent znodes is an artifact of ActiveStandbyElector, the elector itself should guard against this.
        • It would be great to see the functional test with the real ZK enhanced to cover some of the permanent znode use cases. I remember fixing some issues in my own code that were exposed by the reality of the real ZK.
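        The version-guard hazard described above can be modeled with a toy versioned store. In ZooKeeper, a znode's version restarts at 0 when the node is deleted and recreated, which is why the delete(path, version) check "would likely be 0 all the time" and may not protect a new leader's node. The class below is a hypothetical sketch, not the real ZK API.

```python
class BadVersion(Exception):
    """Raised when a conditional delete sees an unexpected version."""

class VersionedStore:
    """Toy model of ZooKeeper's per-znode version: a node's version
    starts at 0 on creation, so a deleted-and-recreated node is
    version 0 again and a stale conditional delete can still match."""
    def __init__(self):
        self.data = {}                 # path -> (value, version)

    def create(self, path, value):
        self.data[path] = (value, 0)   # fresh node: version resets to 0

    def stat(self, path):
        return self.data[path][1]      # current version number

    def delete(self, path, expected_version):
        if self.data[path][1] != expected_version:
            raise BadVersion(path)     # the guard that "might save us"
        del self.data[path]
```

        The guard only helps when the recreated node has accumulated modifications; a freshly created replacement is back at version 0 and a stale delete still succeeds.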
        Todd Lipcon added a comment -

        consider renaming zkMostRecentFilePath to something like zkMostRecentActiveFilePath. zkMostRecentFilePath is open to being misunderstood. Same for MOST_RECENT_FILENAME.
        Done

        actually MostRecent seems to be a misnomer to me. I think it actually is LockOwnerInfo/LeaderInfo. zkLockOwnerInfoPath/tryDeleteLeaderInfo etc.

        It's not always the lock owner, though. Basically, we go through the following states:

        Time step | Lock node | MostRecentActive | Description
        --------- | --------- | ---------------- | ----------------------------
        1         | -         | -                | Startup
        2         | Node A    | -                | Node A acquires active lock
        3         | Node A    | Node A           | ...and writes its own info
        4         | -         | Node A           | A loses its ZK lease
        5         | Node B    | Node A           | Node B acquires active lock
        6         | Node B    | -                | Node B fences node A
        7         | Node B    | Node B           | Node B writes its info

        So, in steps 3 and 7, calling it "LeaderInfo" or "LockOwnerInfo" makes sense. But in steps 4 and 5, it's the "PreviousLeaderInfo".

        Perhaps just renaming to "LeaderBreadcrumb" or something makes more sense, since it's basically a bread crumb left around by the previous leader so that future leaders know its info.
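        The lifecycle in the table above can be modeled with a toy in-memory sketch (all names here are invented for illustration; this is not the actual ActiveStandbyElector code):

```java
import java.util.HashMap;
import java.util.Map;

// Toy in-memory model of the lock (ephemeral) + breadcrumb (persistent) protocol.
public class BreadcrumbDemo {
    static Map<String, String> zk = new HashMap<>();

    static void reset() { zk.clear(); }

    // A new active acquires the lock, fences whoever left a breadcrumb,
    // then writes its own breadcrumb. Returns the node that was fenced, or null.
    static String becomeActive(String self) {
        zk.put("lock", self);                  // acquire the active lock
        String old = zk.get("breadcrumb");     // previous active's info, if any
        String fenced = null;
        if (old != null && !old.equals(self)) {
            fenced = old;                      // application fencing hook runs here
            zk.remove("breadcrumb");           // delete only after fencing succeeds
        }
        zk.put("breadcrumb", self);            // leave our own breadcrumb
        return fenced;
    }

    // The ephemeral lock disappears on a crash; the persistent breadcrumb stays.
    static void crash(String self) {
        if (self.equals(zk.get("lock"))) {
            zk.remove("lock");
        }
    }

    public static void main(String[] args) {
        becomeActive("A");                       // steps 2-3: A locks, writes its info
        crash("A");                              // step 4: A loses its ZK lease
        System.out.println(becomeActive("B"));   // steps 5-7: prints A (B fenced A)
    }
}
```

        The point of the model is the ordering: the breadcrumb is deleted only after the fencing hook succeeds, and a node writes its own breadcrumb only after that, so a crash at any step leaves enough state behind for the next active.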

        why is ensureBaseNode() needed? In it we are creating a new set of znodes with the given zkAcls which may or may not be the correct thing. eg. if the admin simply forgot to create the appropriate znode path before starting the service it might be ok to fail. Instead of trying to create the path ourselves with permissions that may or may not be appropriate for the entire path. I would be wary of doing this. What is the use case?

        The use case is a "ZKFailoverController -formatZK" command line tool that I'm building into the ZKFC code. The thinking is that most administrators won't want to go into the ZK CLI to manually create the parent znode while installing HDFS. Instead, they'd rather just issue this simple command. In the case that they want to have varying permissions across the path, or some more complicated ACL, then they'll have to use the ZK CLI, but for the common case I think this will make deployment much simpler.

        consider renaming baseNodeExists() to parentNodeExists() or renaming the parentZnodeName parameter in the constructor to baseNode for consistency. Perhaps this could be called in the constructor to check that the znode exists and be done with config issues. No need for ensureBaseNode() above.

        Renamed to parentZNodeExists and ensureParentZNode

        this must be my newbie java skills but I find something like - prefixPath.append("/").append(pathParts[index]) or znodeWorkingDir.subString(0, znodeWorkingDir.nextIndexOf('/')) - more readable than prefixPath = Joiner.on("/").join(Arrays.asList(pathParts).subList(0, i)). It might also be more efficient but thats not relevant for this situation.

        Agreed, fixed.

        public synchronized void quitElection(boolean needFence) - Don't we want to delete the permanent znode for standbys too? Why check if state is active? It calls a tryDelete* method anyway, which should be harmless.

        If the node is standby, then the permanent znode refers to the current lockholder. So deleting it would incorrectly signify that whoever is active doesn't need to be fenced if it crashes.

        tryDeleteMostRecentNode() - From my understanding of "tryFunction" - this function should not really be asserting that some state holds. If it should assert, then we should remove "try" from the name.

        The difference here is this: the assert() guards against programmer error. It is a mistake to call this function when you aren't active (see above comment). But if there is a ZK error (like the session got lost) it's OK to fail to delete it, since it just means that the node will get fenced.
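        That split between programmer error (assert and fail) and tolerable environmental failure (return and move on) can be sketched like this (a simulated store and hypothetical names; not the real tryDeleteMostRecentNode()):

```java
public class TryDeleteDemo {
    enum State { ACTIVE, STANDBY }

    static boolean storeUp = true;          // simulated ZK availability
    static boolean breadcrumbExists = true;

    // "try" semantics: calling this while not ACTIVE is a programmer error
    // (hence the exception), but a transient store failure is tolerated,
    // since a leftover breadcrumb just means the node gets fenced later.
    static boolean tryDeleteBreadcrumb(State state) {
        if (state != State.ACTIVE) {
            throw new IllegalStateException("only the active may delete its breadcrumb");
        }
        if (!storeUp) {
            return false;                   // tolerated: leave breadcrumb behind
        }
        breadcrumbExists = false;
        return true;
    }

    public static void main(String[] args) {
        storeUp = false;
        System.out.println(tryDeleteBreadcrumb(State.ACTIVE)); // false: tolerated failure
        storeUp = true;
        System.out.println(tryDeleteBreadcrumb(State.ACTIVE)); // true: deleted
    }
}
```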

        in zkDoWithRetries there is a NUM_RETRIES field that could be used instead of 3.

        Fixed

        why are we exposing "public synchronized ZooKeeper getZKClient()"?

        Removed

        the following code seems to have issues... <snip>... While that is happening, the state of the world changes and this elector is not longer the lock owner. When appClient.fenceOldActive(data) will complete then the code will go ahead and delete the lockOwnerZnode at zkMostRecentFilePath. This node could be from the new leader who had successfully fenced and become active. The version number parameter might accidentally save us but would likely be 0 all the time.

        This scenario is impossible for the following reason: If the state of the world changed and this node was no longer active, the only possible reason for that is that the node lost its ZK session lease. If that's the case, then it won't be able to issue any further commands from that client (see my conversation with Hari above)

        what happens if the leader lost the lock, tried to delete its znode, failed to do so, exited anyways, then became the next owner and found the existing mostrecent znode. I think it will try to fence itself

        I'm currently handling this situation at the ZKFailoverController level. Are you suggesting we determine "self" by comparing the actual bytewise data of the znode, and skip the fence call if it's the same as this elector instance's appData? Seems reasonable, just want to clarify.

        It would be great to see the functional test with RealZK enhanced to cover some of the permanent znode use cases. I remember fixing some issues in my own code that were exposed to the reality of the real ZK

        Added a small change to the test to make sure the fencing code got exercised. I also plan to add some more integration-level tests using real ZK and the ZKFailoverController with that patch.

        Todd Lipcon added a comment -

        New patch has the fixes described above. I renamed the "info" node to the "breadcrumb" node.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12519558/hadoop-8163.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 6 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests:
        org.apache.hadoop.fs.viewfs.TestViewFsTrash

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/753//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/753//console

        This message is automatically generated.

        Bikas Saha added a comment -

        This scenario is impossible for the following reason: If the state of the world changed and this node was no longer active, the only possible reason for that is that the node lost its ZK session lease. If that's the case, then it won't be able to issue any further commands from that client (see my conversation with Hari above)

        May not be impossible. I will try to think of a better scenario, but off the top of my head I came up with this. Let's say A was active and B and C were standbys. (HDFS may have only A and B, but this is a generic lib.) B gets the lock but has a delay in fencing A. Downtime is getting high, so a paranoid admin deletes the lock, hoping a new master might solve this. C gets the lock and fences A. B had not written a breadcrumb, so C thinks it has fenced all masters and becomes master. Now dofence on B completes (fail/success does not matter). It deletes C's breadcrumb and writes its own new breadcrumb. We have inconsistent znode state where the lock znode and the breadcrumb znode are from different sources. Then B asks its service to become active. We have 2 masters. Now B processes the ZK event that tells it that it has lost the lock. B asks its service to become standby and deletes its breadcrumb. Now there is no breadcrumb. Now C goes into a bad state: C loses the master lock but does not die. And A takes the lock. There is no breadcrumb, so A becomes master without fencing bad C. A and C are concurrent masters.

        I'm currently handling this situation at the ZKFailoverController level. Are you suggesting we determine "self" by comparing the actual bytewise data of the znode, and skip the fence call if it's the same as this elector instance's appData? Seems reasonable, just want to clarify.

        Yes. I am suggesting to do this within the Elector and not at the ZKFailoverController level. The "self" compare approach would be reasonable as long as we can assure ourselves that appData will not be same across different candidates.
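        Such a "self" compare on the raw appData bytes could look roughly like this (a hypothetical helper, not the actual elector code; as noted above, it is only safe if appData is guaranteed to differ across candidates):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SelfFenceCheck {
    // Skip the fencing hook when the breadcrumb data is byte-identical to this
    // elector's own appData, i.e. the "old active" was ourselves.
    static boolean needsFencing(byte[] breadcrumbData, byte[] ownAppData) {
        return breadcrumbData != null && !Arrays.equals(breadcrumbData, ownAppData);
    }

    public static void main(String[] args) {
        byte[] me = "host1:8020".getBytes(StandardCharsets.UTF_8);
        byte[] other = "host2:8020".getBytes(StandardCharsets.UTF_8);
        System.out.println(needsFencing(me, me));     // false: our own breadcrumb
        System.out.println(needsFencing(other, me));  // true: a different former active
        System.out.println(needsFencing(null, me));   // false: no breadcrumb at all
    }
}
```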

        Todd Lipcon added a comment -

        So paranoid admin deletes the lock hoping a new master might solve this

        If the admin is mucking about in ZK, then all bets are off. The proper thing for the admin to do is to kill B's failover controller, not to go delete a znode.

        Yes. I am suggesting to do this within the Elector and not at the ZKFailoverController level. The "self" compare approach would be reasonable as long as we can assure ourselves that appData will not be same across different candidates

        K, that's the approach in the latest patch I uploaded.

        Bikas Saha added a comment -

        How would the admin know that B is the errant controller?

        Aside from the above.
        I have not seen a detailed design for the failover controller. So I cannot make a precise comment but let me post a general idea.

        I have a feeling that putting the fencing concept into the elector is diluting the distinction between the elector and the failover controller. In my mind, the elector is a distributed leader election library that signals candidates about being made leader or standby. In an ideal world, where the HA service behaves perfectly and does not execute any instruction unless it is the leader, we only need the elector. But the world is not ideal and we can have errant leaders who need to be fenced, etc. Here is where the failover controller comes in. It manages the HA service by using the elector to do distributed leader selection and have those notifications passed on to the HAService. In addition, it guards service sanity by making sure that the signal is passed only when it is safe to do so.
        How about this slightly different alternative flow? The elector gets the leader lock. For all intents and purposes it is the new leader. It passes the signal to the failover controller with the breadcrumb of the last leader:
        appClient->becomeActive(breadcrumb);
        The failover controller now has to ensure that all previous masters are fenced before making its service the master; the breadcrumb is an optimization that lets it know when such an operation may not be necessary. If it is necessary, then it performs fencing. If fencing is successful, it calls
        elector->becameActive() or elector->transitionedToActive(), at which point the elector can overwrite the breadcrumb with its own info. I haven't thought through whether this should be called before or after a successful call to HAService->transitionToActive(), but my gut feeling is for the former.
        This keeps the notion of fencing inside the controller instead of being in both the elector and the controller.

        Secondly, we are performing blocking calls on the ZKClient callback, which happens on the ZK threads. It is advisable not to block ZK client threads for long. The create and delete methods might be OK, but I would try to move the fencing and transitioning-to-active operations away from the ZK thread, i.e. when the FailoverController is notified about becoming master, it returns from the call and then processes fencing/transitioning on some other thread/threadpool. The above flow allows for this.

        Thirdly, how about using the setData(breadcrumb, appData, version)?
        This replaces a 2 step operation (delete+create) with a 1 step operation (set) which is always a desirable thing in distributed "transactions". The version number also gives an idea of the number of switches. The version also prevents a setData() from succeeding if someone else has already set it before you (may not be important here but is a good sanity check).
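        The one-step overwrite behaves like a compare-and-set on the node's version. A minimal in-memory model of the idea (this is not ZooKeeper's actual client; the real setData(path, data, version) call provides the same version guard):

```java
public class VersionedSetDemo {
    static byte[] data = "old-active".getBytes();
    static int version = 0;

    // Succeeds only when the caller's expected version matches the node's
    // current version. Returns the new version on success, or -1 if someone
    // else updated the node first (the stale writer loses the race).
    static int setData(byte[] newData, int expectedVersion) {
        if (expectedVersion != version) {
            return -1;                     // stale writer rejected
        }
        data = newData;
        return ++version;
    }

    public static void main(String[] args) {
        System.out.println(setData("new-active".getBytes(), 0)); // overwrite in one step
        System.out.println(setData("stale".getBytes(), 0));      // -1: stale version
    }
}
```

        A side benefit, as noted above, is that the monotonically growing version count also records how many leadership switches have happened.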

        Let me know your thoughts on the above.

        Todd Lipcon added a comment -

        Hi Bikas. I think your ideas have some merit, especially with regard to a fully general election framework. But since we only have one user of this framework at this point (HDFS) and we currently only support a single standby node, I would prefer to punt these changes to another JIRA as additional improvements. This will let us move forward with the high priority task of auto failover for HA NNs, rather than getting distracted making this extremely general.

        Secondly, we are performing blocking calls on the ZKClient callback that happens on the ZK threads. It is advisable to not block ZK client threads for long

        This is only the case if you have other operations that are waiting on timely delivery of callbacks. In the case of the election framework, all of our notifications from ZK have to be received in-order and processed sequentially, or else we have a huge explosion of possible interactions to worry about. Doing blocking calls in the callbacks will not result in lost ZK leases, etc. To quote from the ZK programmer's guide:

        "All IO happens on the IO thread (using Java NIO). All event callbacks happen on the event thread. Session maintenance such as reconnecting to ZooKeeper servers and maintaining heartbeat is done on the IO thread. Responses for synchronous methods are also processed in the IO thread. All responses to asynchronous methods and watch events are processed on the event thread... Callbacks do not block the processing of the IO thread or the processing of the synchronous calls"

        Thirdly, how about using the setData(breadcrumb, appData, version)?

        Let me see about making this change. Like you said, it's a good safety check.

        Bikas Saha added a comment -

        But since we only have one user of this framework at this point (HDFS) and we currently only support a single standby node, I would prefer to punt these changes to another JIRA as additional improvements.

        I would disagree here. The suggestion does not have much to do with HDFS or single standby or generality of the framework. It is about keeping fencing inside FailoverController instead of being shared with the elector. Clear separation of responsibilities.
        I agree that the NN work is more important, and without knowing more about the FailoverController/automatic NN HA I cannot say how much work it would take to change the control flow as described above. My guess is that it would not be big, but I might be wrong. In my experience, APIs once made are hard to change. It would be hard for someone to change the control flow later once important services like NN HA depend on the current flow. So punting it to the future could mean quite a distant future indeed.

        Doing blocking calls in the callbacks will not result in lost ZK leases, etc. To quote from the ZK programmer's guide:

        I agree. The IO updates will be processed, but callback notification to the client might be impeded if the client is already blocking on a previous callback. I was more concerned about the latter. That is why I was suggesting not doing fencing on the client callback, though I agree that in the current patch these calls have to be made synchronously for correctness.
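        The concern here — that a long-blocking fencing call on the ZK event thread delays delivery of every callback queued behind it, even though the session itself stays alive on the IO thread — can be illustrated with a single-threaded executor standing in for the event thread. This is a sketch; `EventThreadDemo` and the task names are illustrative, not ZooKeeper's API:

        ```java
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        /**
         * Illustrative stand-in for the single ZK event thread: callbacks run
         * strictly in submission order, so a blocking callback delays every
         * callback queued behind it.
         */
        class EventThreadDemo {
          static List<String> run() {
            List<String> log = new ArrayList<>();
            ExecutorService eventThread = Executors.newSingleThreadExecutor();
            // First callback blocks, e.g. a synchronous fencing action...
            eventThread.submit(() -> {
              try { Thread.sleep(200); } catch (InterruptedException ignored) { }
              log.add("fencing-done");
            });
            // ...so this later watch notification is not delivered until it finishes.
            eventThread.submit(() -> log.add("watch-event"));
            eventThread.shutdown();
            try {
              eventThread.awaitTermination(5, TimeUnit.SECONDS);
            } catch (InterruptedException ignored) { }
            return log;
          }
        }
        ```

        Both tasks run on one thread in submission order — which is what makes sequential, blocking processing safe for the elector's correctness, and at the same time why one might prefer to keep slow fencing work off that thread.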

        Todd Lipcon added a comment -

        In my experience, APIs once made are hard to change. It would be hard for someone to change the control flow later once important services like NN HA depend on the current flow. So punting it to the future could mean quite a distant future indeed.

        Given this is an internal API, there shouldn't be any resistance to changing it in the future. It's marked Private/Evolving, meaning that there aren't guarantees of compatibility to external consumers, and that even for internal consumers it's likely to change as use cases evolve. I'll file a follow-up JIRA to consider your recommended API changes, OK?

        Todd Lipcon added a comment -

        New patch uses setData rather than delete/create for updating the breadcrumb node after fencing.
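        The benefit of a versioned setData over delete-then-create is that it acts as a compare-and-swap: a writer holding a stale version cannot silently clobber the breadcrumb. A minimal in-memory sketch of that semantics — `FakeZnode` is an illustrative stand-in, not ZooKeeper's client API (real ZooKeeper throws `KeeperException.BadVersionException` on a version mismatch rather than returning false):

        ```java
        /** Illustrative in-memory stand-in for a persistent znode's data and version. */
        class FakeZnode {
          private byte[] data;
          private int version = 0;

          FakeZnode(byte[] initial) { this.data = initial; }

          /**
           * Mirrors setData(path, data, version): the write applies only if
           * expectedVersion matches the znode's current version, and each
           * successful write bumps the version.
           */
          synchronized boolean setData(byte[] newData, int expectedVersion) {
            if (expectedVersion != version) {
              return false; // real ZooKeeper: BadVersionException
            }
            data = newData;
            version++;
            return true;
          }

          synchronized int getVersion() { return version; }
          synchronized byte[] getData() { return data; }
        }
        ```

        A new active that read the breadcrumb at version N can overwrite it with its own info only while the version is still N; any concurrent change makes the stale write fail instead of silently winning, which is the safety check discussed above.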

        Bikas Saha added a comment -

        Looks good!
        +1
        Thanks!

        Aaron T. Myers added a comment -

        I just reviewed the diff from the latest patch to the one I last reviewed, and the changes look good.

        +1 pending Jenkins.

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12519696/hadoop-8163.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 6 new or modified tests.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in .

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/759//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/759//console

        This message is automatically generated.

        Todd Lipcon added a comment -

        Committed to 0.23 and trunk. Thanks for the reviews, Bikas and Aaron. I filed HADOOP-8205 to further discuss Bikas's suggestions.

        Hudson added a comment -

        Integrated in Hadoop-Common-trunk-Commit #1923 (See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1923/)
        HADOOP-8163. Improve ActiveStandbyElector to provide hooks for fencing old active. Contributed by Todd Lipcon. (Revision 1304675)

        Result = SUCCESS
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304675
        Files :

        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-0.23-Commit #712 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/712/)
        HADOOP-8163. Improve ActiveStandbyElector to provide hooks for fencing old active. Contributed by Todd Lipcon. (Revision 1304676)

        Result = SUCCESS
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304676
        Files :

        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk-Commit #1997 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1997/)
        HADOOP-8163. Improve ActiveStandbyElector to provide hooks for fencing old active. Contributed by Todd Lipcon. (Revision 1304675)

        Result = SUCCESS
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304675
        Files :

        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java
        Hudson added a comment -

        Integrated in Hadoop-Common-0.23-Commit #722 (See https://builds.apache.org/job/Hadoop-Common-0.23-Commit/722/)
        HADOOP-8163. Improve ActiveStandbyElector to provide hooks for fencing old active. Contributed by Todd Lipcon. (Revision 1304676)

        Result = SUCCESS
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304676
        Files :

        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk-Commit #1932 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1932/)
        HADOOP-8163. Improve ActiveStandbyElector to provide hooks for fencing old active. Contributed by Todd Lipcon. (Revision 1304675)

        Result = ABORTED
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304675
        Files :

        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-0.23-Commit #730 (See https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/730/)
        HADOOP-8163. Improve ActiveStandbyElector to provide hooks for fencing old active. Contributed by Todd Lipcon. (Revision 1304676)

        Result = ABORTED
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304676
        Files :

        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk #994 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/994/)
        HADOOP-8163. Improve ActiveStandbyElector to provide hooks for fencing old active. Contributed by Todd Lipcon. (Revision 1304675)

        Result = FAILURE
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304675
        Files :

        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-0.23-Build #207 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/207/)
        HADOOP-8163. Improve ActiveStandbyElector to provide hooks for fencing old active. Contributed by Todd Lipcon. (Revision 1304676)

        Result = UNSTABLE
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304676
        Files :

        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-0.23-Build #235 (See https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/235/)
        HADOOP-8163. Improve ActiveStandbyElector to provide hooks for fencing old active. Contributed by Todd Lipcon. (Revision 1304676)

        Result = FAILURE
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304676
        Files :

        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
        • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk #1029 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1029/)
        HADOOP-8163. Improve ActiveStandbyElector to provide hooks for fencing old active. Contributed by Todd Lipcon. (Revision 1304675)

        Result = SUCCESS
        todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304675
        Files :

        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java

          People

          • Assignee: Todd Lipcon
          • Reporter: Todd Lipcon
          • Votes: 0
          • Watchers: 8