Hadoop Common / HADOOP-8217

Edge case split-brain race in ZK-based auto-failover

    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.24.0
    • Fix Version/s: None
    • Component/s: auto-failover, ha
    • Labels: None

      Description

      As discussed in HADOOP-8206, the current design for automatic failover has the following race:

      • ZKFC1 gets active lock
      • ZKFC1 is about to send transitionToActive() and machine freezes (eg GC pause + swapping)
      • ZKFC1 loses its ZK lock, ZKFC2 gets ZK lock
      • ZKFC2 calls transitionToStandby on NN1, and transitions NN2 to active
      • ZKFC1 wakes up from pause, calls transitionToActive(), now we have a bad situation

      This is rare, since it requires ZKFC1 to freeze longer than its ZK session timeout, but worth fixing, since the results can be disastrous.

        Issue Links

          Activity

          Todd Lipcon created issue -
          Todd Lipcon added a comment -

          My thinking for the solution is the following:

          • add a parameter to transitionToStandby/transitionToActive which is a long logicalTime
          • when the ZKFC acquires the lock znode, it makes a note of the zxid (ZK transaction ID)
          • when it then asks the old active to go to standby, or asks its own node to go active, it includes the zxid
          • the NN itself maintains a record of the highest zxid it has heard. If it receives a state transition request with an older zxid, it ignores it.

          This would solve the race as described, since when ZKFC2 calls NN1.transitionToStandby(), it hands NN1 a higher zxid than ZKFC1 saw. So when ZKFC1 then asks it to go active, the request is denied.

          There is still potentially some race involving the NNs restarting quickly and "forgetting" the highest zxid. I'm not sure whether the right solution there is to record the info persistently, or to attach a UUID to each NN startup, and use that to make sure we don't target a newer instance of a NN with an RPC that was meant for an earlier one.
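          To make the NN-side check concrete, here is a minimal sketch of the idea, assuming a hypothetical guard class rather than the actual HAServiceProtocol changes (names are illustrative only):

          import java.io.IOException;

          // Illustrative sketch only: the NN remembers the highest zxid it has
          // accepted and rejects any transition request carrying an older one.
          public class TransitionGuard {
            // Kept in memory in this sketch, so it is lost on restart -- the
            // "forgetting" problem discussed above.
            private long highestSeenZxid = Long.MIN_VALUE;

            public synchronized void checkAndRecord(long requestZxid) throws IOException {
              if (requestZxid < highestSeenZxid) {
                throw new IOException("Rejecting state transition: request zxid "
                    + requestZxid + " is older than highest seen zxid " + highestSeenZxid);
              }
              highestSeenZxid = requestZxid;
            }
          }

          With such a guard, NN1 would record zxid=2 when ZKFC2 asks it to go to standby, and would then reject ZKFC1's stale transitionToActive(1).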

          Other creative solutions appreciated

          Todd Lipcon made changes -
          Link: This issue relates to HADOOP-8206
          Todd Lipcon made changes -
          Link: This issue relates to HDFS-3042
          Suresh Srinivas added a comment -

          Todd, I think this has been brought up in other comments as well. I am very uncomfortable with just creating jiras for each scenario as we think of it, instead of capturing them in a design document. I cannot keep up with jiras that are created and committed in short periods of time. I do not want to comment on jiras asking for time and impede progress. A design document helps here. Perhaps we should set up a meeting to go over the design and discuss issues in more detail.

          Todd Lipcon added a comment -

          Suresh: we've already had a meeting ostensibly for this purpose, I think. There is also a design document posted to HDFS-2185. The document doesn't include every possible scenario, because I don't have infinite foresight. I don't think having meetings or more reviews of the design doc will help that.

          For example, with the original manual-failover project, we had several design meetings as well as a design document posted on HDFS-1623. Looking back at that project, the design document captured the overall idea (like the HDFS-2185 one does here) but did not foresee some of the trickiest issues we dealt with during implementation (for example, how to deal with invalidations with regard to datanode fencing, how to handle safe mode, how to deal with delegation tokens, etc).

          In that project, as we came upon each new scenario to deal with, we opened a JIRA and had a discussion on the design solution for that particular scenario. I don't see why we can't do the same here. Nor do I see why we are likely to be able to foresee all the corner cases a priori here better than we were able to with HDFS-1623.

          So, I am not going to pause work to wait for meetings or more design discussion. If you see problems with the design, please comment on the design doc on HDFS-2185, or on the individual JIRAs which seem to have problems. I'm happy to address them, even after commit (eg I'm currently addressing Bikas's review comments on HADOOP-8212)

          Since there seems to be concern that we are moving too fast, I will create an auto-failover branch later tonight to continue working on implementing this design. I'll also create a new auto-failover component on JIRA so it's easier to follow them. If you have concerns about the implementation or the design when it comes time to merge it, please do vote against the merge, voicing whatever objections you might have. And please do comment along the way if you see issues.

          Thanks.

          Todd Lipcon made changes -
          Target Version/s: 0.24.0, 0.23.3 → Auto Failover (HDFS-3042)
          Component/s: auto-failover
          Suresh Srinivas added a comment -

          we've already had a meeting ostensibly for this purpose, I think.

          The way I understood the meeting we had was more about the next steps and not design details.

          with the original manual-failover project...

          HDFS-1623 was not a manual-failover project. It did talk about automatic failover; it is just that we decided to merge the branch after manual failover. Anyway, that is orthogonal.

          While HDFS-1623 did give high-level direction, some of the design could have been hashed out in more detail. It would have helped people follow what is happening, instead of having to piece together the design through numerous jiras. Anyway, that is my opinion. I also heard concerns from folks following that branch that the development looked chaotic...

          So, I am not going to pause work to wait for meetings or more design discussion.

          Well, it is up to you. A complex design such as the FailoverController could benefit more from a meeting of folks than from hashing it out in comments over jiras. At least, some of our own internal discussions on this (for example the ZK library we did and other design we are doing) greatly benefited from real-time discussions.

          Since there seems to be concern that we are moving too fast, I will create an auto-failover branch later tonight to continue working on implementing this design.

          Thanks for doing that.

          HDFS-2185...

          Will review the design and post the comments.

          Todd Lipcon added a comment -

          Here's a test case which produces the issue as described. This builds on top of the test infrastructure introduced in HADOOP-8228. I also introduced a simple fault injector class to make it possible to deterministically introduce this issue (this is similar to the fault injection technique we use in HDFS for checkpointing).

          This patch also copies GenericTestUtils.DelayAnswer from HDFS into Common. We can later do a followup patch on the HDFS side to remove the copy in that project.
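          For readers unfamiliar with the pattern, a minimal sketch of this kind of fault-injection hook follows; the class and method names are hypothetical and are not the ones in the attached patch:

          import java.util.concurrent.CountDownLatch;

          // Hypothetical sketch of the fault-injection pattern: production code calls a
          // no-op hook at the critical point, and a test subclass overrides it to block
          // until the test releases it, simulating a long GC pause at exactly that point.
          class FailoverFaultInjector {
            static FailoverFaultInjector instance = new FailoverFaultInjector();

            // Called by the failover controller just before it sends transitionToActive().
            void beforeTransitionToActive() {
              // no-op in production
            }
          }

          class PausingFaultInjector extends FailoverFaultInjector {
            final CountDownLatch proceed = new CountDownLatch(1);

            @Override
            void beforeTransitionToActive() {
              try {
                proceed.await(); // the test calls proceed.countDown() once the other ZKFC has won
              } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
              }
            }
          }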

          Todd Lipcon made changes -
          Attachment: hadoop-8217-testcase.txt
          Eli Collins added a comment -

          I think the sequence number approach makes sense. This is effectively two transactions (one to make NN1 active, one to make NN2 active), each with its own zxid, and each time we execute a new transaction we need to fence previous ones.

          Hari Mankude added a comment -

          Todd,

          I don't think zxid will fix the problem. Caveat is that I don't know the exact design that is being implemented here.

          Consider the scenario

          1. ZKFC1 goes to gc sleep and loses the active lock
          2. NN1 also goes to gc sleep. (NN1 was already active)
          3. ZKFC2 tries to do transitionToStandby() on NN1. RPC times out.
          4. Don't know what happens now in your design
          5. Assuming ZKFC2 continues to make NN2 active.
          6. NN1 wakes up, assumes that it is active.
          7. Both NN1 and NN2 are active.

          Without some sort of persistent fencing across all shared resources, it will not work.

          Bikas Saha added a comment -

          ZKFC1 is about to send transitionToActive() and machine freezes (eg GC pause + swapping)

          This would solve the race as described, since when ZKFC2 calls NN1.transitionToStandby(), it hands NN1 a higher zxid than ZKFC1 saw. So when ZKFC1 then asks it to go active, the request is denied.

          Given the above, how will NN1 receive the zxid from ZKFC2? If it does not, then the solution is invalid. Hari's scenario exemplifies this.

          Todd Lipcon added a comment -

          3. ZKFC2 tries to do transitionToStandby() on NN1. RPC times out.

          4. Don't know what happens now in your design

          As has been the case in all of the HA work up to and including this point, it initiates the fence method. The fence method has to do persistent fencing of the shared resource (eg. disable access to the SAN or STONITH the node). Please refer to the code, in which I think this is fairly clear.

          The solution here is to improve the ability to do failover when "graceful fencing" suffices. In many failover cases it's preferable to not have to invoke STONITH or storage fencing, since those mechanisms will often require administrative intervention to un-fence.

          Given, the above, how will NN1 receive the zxid from ZKFC2? If it does not then the solution is invalid. Hari's scenario exemplifies this.

          All transitionToActive/transitionToStandby calls would include the zxid. So, the sequence becomes:

          1. ZKFC1 gets active lock (zxid=1)
          2. ZKFC1 is about to send transitionToActive(1) and machine freezes (eg GC pause + swapping)
          3. ZKFC1 loses its ZK lock, ZKFC2 gets ZK lock (zxid=2)
          4. ZKFC2 calls NN1.transitionToStandby(2) and NN2.transitionToActive(2).
          5. ZKFC1 wakes up from pause, calls NN1.transitionToActive(1). NN1 rejects the request because it previously accepted zxid=2 in step 4 above.

          or the failure case:
          4(failure case): if NN1.transitionToStandby() times out or fails, the non-graceful fencing is initiated (same as in existing HA code for the last several months)
          5(failure case with storage fencing): ZKFC1 wakes up from pause, and calls NN1.transitionToActive(1). NN1 tries to access the shared edits storage and fails, because it has been fenced. So, there is no split-brain.
          5(failure case with STONITH): ZKFC1 never wakes up from pause, because its power plug has been pulled. So, there is no split-brain.
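
          A rough sketch of the controller-side sequence above, using hypothetical interfaces rather than the real ZKFailoverController code:

          import java.io.IOException;

          // Rough sketch (hypothetical interfaces): the zxid observed when the lock znode
          // was acquired is attached to every transition request, and a failed graceful
          // transition falls back to the configured fencing methods.
          class FailoverSequenceSketch {
            interface HaNode {
              void transitionToStandby(long zxid) throws IOException;
              void transitionToActive(long zxid) throws IOException;
            }

            interface Fencer {
              void fence(HaNode node) throws IOException; // eg storage fencing or STONITH
            }

            void becomeActive(HaNode oldActive, HaNode localNode,
                              long lockZxid, Fencer fencer) throws IOException {
              try {
                // Step 4: ask the old active to step down, tagged with our zxid.
                oldActive.transitionToStandby(lockZxid);
              } catch (IOException e) {
                // Step 4 (failure case): graceful transition failed or timed out, so fall
                // back to non-graceful fencing before proceeding.
                fencer.fence(oldActive);
              }
              // Only now promote the local node, again tagged with our zxid.
              localNode.transitionToActive(lockZxid);
            }
          }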

          Bikas Saha added a comment -

          4(failure case): if NN1.transitionToStandby() times out or fails, the non-graceful fencing is initiated (same as in existing HA code for the last several months)

          Can you please point me to the existing HA code from the last several months? I thought we have manual HA, in which the admin does fencing.

          Todd Lipcon added a comment -

          Can you please point me to the existing HA code from the last several months? I thought we have manual HA, in which the admin does fencing.

          See HDFS-2179 (committed last August), which added the fencing code, and HADOOP-7938, which added the fencing behavior to the manual failover controller (committed in January).

          The HA guide (hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/HDFSHighAvailability.apt.vm) also details the configuration and operation of the fencing:

          • <<failover>> - initiate a failover between two NameNodes

          This subcommand causes a failover from the first provided NameNode to the
          second. If the first NameNode is in the Standby state, this command simply
          transitions the second to the Active state without error. If the first NameNode
          is in the Active state, an attempt will be made to gracefully transition it to
          the Standby state. If this fails, the fencing methods (as configured by
          <<dfs.ha.fencing.methods>>) will be attempted in order until one
          succeeds. Only after this process will the second NameNode be transitioned to
          the Active state. If no fencing method succeeds, the second NameNode will not
          be transitioned to the Active state, and an error will be returned.

          Bikas Saha added a comment -

          Ah. The confusion was caused by

          4(failure case): if NN1.transitionToStandby() times out or fails, the non-graceful fencing is initiated (same as in existing HA code for the last several months)

          It seemed like non-graceful fencing existed in HA code for several months. You were referring to fencing methods.

          I think the piece that was missing from the solution was

          4(failure case): if NN1.transitionToStandby() times out or fails, the non-graceful fencing is initiated

          I think this is what confused me (and perhaps Hari too) into thinking that NN1 would behave badly. On HDFS-2185 I have commented on the ZKFC state diagram missing the arcs for transitionToActive/Standby() failing. It looks like ZKFC does take specific action there. It's just missing from the transition diagram posted on that jira.

          In this case, the problem is happening because FC2 is calling NN1.transitionToStandby() and then FC1 is calling NN1.transitionToActive().
          I would like to question the value of FC2 calling NN1.transitionToStandby() in general. FC1 on NN1 is supposed to call NN1.transitionToStandby() because that is FC1's responsibility upon losing the leader lock.
          Secondly, based on the recent work done to add breadcrumbs to the ActiveStandbyElector, FC2 is going to fence NN1 if NN1 has not gracefully given up the lock, which is clearly the case here. So the problem is already solved unless I am mistaken.

          Todd Lipcon added a comment -

          I would like to question the value of FC2 calling NN1.transitionToStandby() in general. FC1 on NN1 is supposed to call NN1.transitionToStandby() because that is FC1's responsibility upon losing the leader lock.

          This doesn't work, since FC1 can take arbitrarily long to notice that it has lost its lock.

          Secondly, based on the recent work done to add breadcrumbs to the ActiveStandbyElector, FC2 is going to fence NN1 if NN1 has not gracefully given up the lock, which is clearly the case here. So the problem is already solved unless I am mistaken.

          But the first stage of "fencing" is to gracefully ask the NN to go to standby. This is exactly the problem here. If, instead, we required that we always use an aggressive fencing mechanism (STONITH/NAS fencing), you're right that there would not be a problem. But we can avoid that in many cases – for example, imagine that the active node loses its connection to the ZK quorum, but still has a connection to the other NN (eg by a crossover cable). In this case it will leave its breadcrumb znode there, but the new active can easily transition it to standby.

          Here's another way of looking at this JIRA:

          • the "aggressive" fencing mechanisms have the property of being "persistent". i.e after fencing, the node cannot become active, even if asked to.
          • the "graceful" fencing mechanism (transitionToStandby() RPC) does not currently have the property of being "persistent". If another older node asks it to become active after it's been "gracefully fenced", it will do so incorrectly.
          • This JIRA makes "graceful fencing" persistent, so it can be used correctly.

          Regarding the ActiveStandbyElector callback for becomeStandby, I actually think it's redundant. There are two cases in which it could be called:

          • If already standby, it's a no-op
          • If active, then this indicates that the elector lost its znode. Since it lost its znode (rather than quitting the election gracefully), it will leave its breadcrumb behind. Thus, the other node will fence it. So, calling transitionToStandby is redundant with fencing which the other node will have to perform anyway.

            People

            • Assignee: Todd Lipcon
            • Reporter: Todd Lipcon
            • Votes: 0
            • Watchers: 9

              Dates

              • Created:
              • Updated:
