CASSANDRA-3620

Proposal for distributed deletes - fully automatic "Reaper Model" rather than GCSeconds and manual repairs

    Details

      Description

      Proposal for an improved system for handling distributed deletes, which removes the requirement to regularly run repair processes to maintain performance and data integrity.

      The Problem

      There are various issues with repair:

      • Repair is expensive to run
      • Repair jobs are often made more expensive than they should be by other issues (nodes dropping requests, hinted handoff not working, downtime etc)
      • Repair processes can often fail and need restarting, for example in cloud environments where network issues make a node disappear from the ring for a brief moment
      • When you fail to run repair within GCSeconds, either by error or because of issues with Cassandra, data written to a node that did not see a later delete can reappear (and a node might miss a delete for several reasons including being down or simply dropping requests during load shedding)
      • If you cannot run repair and have to increase GCSeconds to prevent deleted data reappearing, in some cases the growing tombstone overhead can significantly degrade performance

      Because of the foregoing, in high throughput environments it can be very difficult to make repair a cron job. It can be preferable to keep a terminal open and run repair jobs one by one, making sure they succeed and keeping an eye on overall load to reduce system impact. This isn't desirable, and problems are exacerbated when there are lots of column families in a database or it is necessary to run a column family with a low GCSeconds to reduce tombstone load (because there are many writes/deletes to that column family). The database owner must run repair within the GCSeconds window, or increase GCSeconds, to avoid potentially losing delete operations.

      It would be much better if there was no ongoing requirement to run repair to ensure deletes aren't lost, and no GCSeconds window. Ideally repair would be an optional maintenance utility used in special cases, or to ensure ONE reads get consistent data.

      "Reaper Model" Proposal

      1. Tombstones do not expire, and there is no GCSeconds
      2. Tombstones have associated ACK lists, which record the replicas that have acknowledged them
      3. Tombstones are deleted (or marked for compaction) when they have been acknowledged by all replicas
      4. When a tombstone is deleted, it is added to a "relic" index. The relic index makes it possible for a reaper to acknowledge a tombstone after it is deleted
      5. The ACK lists and relic index are held in memory for speed
      6. Background "reaper" threads constantly stream ACK requests to other nodes, and stream ACK responses back to requests they have received (throttling their usage of CPU and bandwidth so as not to affect performance)
      7. If a reaper receives a request to ACK a tombstone that does not exist, it creates the tombstone and adds an ACK for the requestor, and replies with an ACK. This is the worst that can happen, and does not cause data corruption.
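      A minimal sketch of the per-node bookkeeping implied by points 2-7 above is given below. All class and member names are illustrative only and do not correspond to existing Cassandra code; it shows the shape of the ACK lists and relic index, not an implementation.

{code:java}
import java.net.InetAddress;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical per-node bookkeeping for the reaper model (illustrative only).
public class TombstoneReaperState
{
    // Identifies a tombstone, e.g. by row key, column name and deletion timestamp.
    public static final class TombstoneId { /* key, name, timestamp, equals/hashCode */ }

    // (2) Each live tombstone carries the set of replicas that have acknowledged it.
    private final Map<TombstoneId, Set<InetAddress>> ackLists = new ConcurrentHashMap<>();

    // (4)/(5) Relic index: tombstones already deleted locally, kept in memory so a
    // late ACK request can still be answered without recreating the tombstone.
    private final Set<TombstoneId> relics = ConcurrentHashMap.newKeySet();

    // Called when an ACK response arrives from a replica.
    public void onAck(TombstoneId id, InetAddress replica, Set<InetAddress> allReplicas)
    {
        Set<InetAddress> acks = ackLists.computeIfAbsent(id, k -> ConcurrentHashMap.newKeySet());
        acks.add(replica);
        // (3) Once every replica has acknowledged, the tombstone can be dropped at
        // the next compaction and is moved to the relic index.
        if (acks.containsAll(allReplicas))
        {
            ackLists.remove(id);
            relics.add(id);
            // ... mark the tombstone purgeable for compaction here ...
        }
    }

    // (7) Handling an incoming ACK request: if we know the tombstone (or hold a relic
    // for it) we simply ACK; otherwise we recreate the tombstone and then ACK. The
    // worst case is a recreated tombstone, never resurrected data.
    public boolean onAckRequest(TombstoneId id)
    {
        if (relics.contains(id) || ackLists.containsKey(id))
            return true;                                     // reply with an ACK
        ackLists.put(id, ConcurrentHashMap.newKeySet());     // recreate, then ACK
        return true;
    }
}
{code}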

      ADDENDUM

      The proposal to hold the ACK and relic lists in memory was added after the first posting. Please see comments for full reasons. Furthermore, a proposal for enhancements to repair was posted to comments, which would cause tombstones to be scavenged when repair completes (the author had assumed this was the case anyway, but it seems at time of writing they are only scavenged during compaction on GCSeconds timeout). The proposals are not exclusive and this proposal is extended to include the possible enhancements to repair described.

      NOTES

      • If a node goes down for a prolonged period, the worst that can happen is that some tombstones are recreated across the cluster when it restarts, which does not corrupt data (and this will only occur with a very small number of tombstones)
      • The system is simple to implement and predictable
      • With the reaper model, repair would become an optional process for optimizing the database to increase the consistency seen by ConsistencyLevel.ONE reads, and for fixing up nodes, for example after an sstable was lost

      Planned Benefits

      • Reaper threads can utilize "spare" cycles to constantly scavenge tombstones in the background thereby greatly reducing tombstone load, improving query performance, reducing the system resources needed by processes such as compaction, and making performance generally more predictable
      • The reaper model means that GCSeconds is no longer necessary, which removes the threat of data corruption if repair can't be run successfully within that period (for example if repair can't be run because of a new adopter's lack of Cassandra expertise, a cron script failing, or Cassandra bugs or other technical issues)
      • Reaper threads are fully automatic, work in the background and perform fine-grained operations where interruption has little effect. This is much better for database administrators than having to manually run and manage repair, whether for the purposes of preventing data corruption or for optimizing performance, which in addition to wasting operator time also often creates load spikes and has to be restarted after failure.


          Activity

          Brandon Williams added a comment -

          No, he meant gc_grace. The problem with max_hint_window_in_ms is it can be changed at any time, and I do know of people changing it during some planned operations (both increasing and decreasing it) and then returning to their normal size when done.

          André Cruz added a comment -

          Don't you mean "max_hint_window_in_ms" instead of "gc_grace"?

          Anyway, what I was trying to suggest was that instead of making a tombstone safe to delete immediately when all replicas have ACKed, make it safe to delete on NOW() + max_hint_window_in_ms.

          Jonathan Ellis added a comment -

          By definition, gc_grace is the time window within which you are confident you will get your dead nodes up and running again. (If you have a node down for longer than gc_grace, you should wipe it and re-bootstrap instead of just bringing it back up to prevent this scenario.)

          André Cruz added a comment -

          So what stops a hint from arriving after a tombstone has been removed because gc_grace has passed? Is it implied that max_hint_window_in_ms << gc_grace?

          Jonathan Ellis added a comment -

          Ah, right. Damn it again.

          Jeremiah Jordan added a comment -

          They all ack and get the delete. They did not all get the write. Since the delete was acked everywhere it can be cleaned up. Later the hint finally comes in and resurrects the data. Hint replay is not immediate, so it can happen after the delete.

          André Cruz added a comment -

          But a tombstone should only be safe to delete before gc_grace if ALL replicas have ACKed the delete. A hint would not be enough.

          Aleksey Yeschenko added a comment -

          This made sense to me at the time but six months later it's not obvious. Shouldn't hint creation (only done if a write is unsuccessful) and "early delete gcgs short-circuit" (only done if all writes are successful) be mutually exclusive?

          I think the argument was about different writes:
          1. Cell 'foo' makes it to replicas A and B, and becomes a hint for C (C was unavailable).
          2. C becomes available
          3. We delete 'foo' successfully from all the replicas, and clean up the tombstones immediately
          4. C gets its original 'foo' hint - and the delete is 'undone'

          Jonathan Ellis added a comment -

          a delivered hint can undo a deletion - if we gc it away too fast

          This made sense to me at the time but six months later it's not obvious. Shouldn't hint creation (only done if a write is unsuccessful) and "early delete gcgs short-circuit" (only done if all writes are successful) be mutually exclusive?

          Aleksey Yeschenko added a comment -

          Unfortunately none of the suggested options can be implemented because of hinted handoff and logged batches.

          Jonathan Ellis added a comment -

          Right.

          Aleksey Yeschenko added a comment -

          I think we could do something with the default TTL feature we have for 2.0 though – if we make that the maximum TTL, then we don't need to worry about resurrections like this.

          I'm not exactly following. If we make that the maximum TTL then all we can do (safely) is to reduce effective gcgs to that TTL, assuming it's lower than the configured gcgs, that is - to min(maxTTL, gcgs). But that's not really related to 3620.. assuming you meant what I think you meant.

          Jonathan Ellis added a comment -

          Well, hell. Even if we moved hints back to replicas (so we could tell the replicas, "drop any hints older than the deletion time for this column") it doesn't really work. You could have the batchlog generate new hints from a dropped delivery attempt, for instance.

          I think we could do something with the default TTL feature we have for 2.0 though – if we make that the maximum TTL, then we don't need to worry about resurrections like this.

          Aleksey Yeschenko added a comment -

          (referring to "extend the coordinator's ack-wait callback" idea here, not the "tracking full-repair" one, which is blocked by CASSANDRA-2405 atm and may or may not be possible)

          Aleksey Yeschenko added a comment -

          I've been trying to find a workable solution, but now I'm almost certain that it can't be done safely.. unless you disable HH that is. Otherwise a delivered hint can undo a deletion - if we gc it away too fast.

          So this would only work safely if commitlog mode is sync AND hh is disabled - which doesn't seem like a common configuration.

          Or am I missing something obvious?

          Brandon Williams added a comment -

          Not that it would be hard to gossip the commit log mode btw

          I'd be fine with gossiping that as a safety check in addition to saying "use batch everywhere" since that would be a difficult thing to troubleshoot if they weren't.

          Jonathan Ellis added a comment -

          we could say "if you use batch, use it on all nodes"

          Also fine with this.

          Sylvain Lebresne added a comment -

          Right. I'm willing to live with that.

          I'm not. This means that when a node fails, you have a very big chance of having data resurrect if there were deletes in the last 10 seconds before the crash (or whatever you've set for periodic). Pretty sure an "optimization" that breaks correctness is not what people want.

          we could just check for BCL and only enable this if they're in batch mode.

          I'd be fine with that, though I note that to do that properly a node would have to know the commit log mode of other nodes. Of course we could say "if you use batch, use it on all nodes", but I'm always a bit reluctant in assuming people will do what we consider "the right thing" without any validation. Not that it would be hard to gossip the commit log mode btw, just pointing it out.

          But just saying it may be worth spending a bit more time thinking about this issue before rushing into a solution that might not be useful to everyone today.

          Jonathan Ellis added a comment -

          Without batch commit log, we cannot guarantee that an acknowledged write won't be lost by a node.

          Right. I'm willing to live with that.

          If you're not, we could just check for BCL and only enable this if they're in batch mode. That's Good Enough for me. And in a couple years everyone will be on SSD and we can make BCL the default.

          Sylvain Lebresne added a comment -

          extend the coordinator's ack-wait callback (which we currently use to write hints if a replica times out) to write a "delete successful" message

          There is one problem I'm afraid. Without batch commit log, we cannot guarantee that an acknowledged write won't be lost by a node.

          Don't get me wrong, it's sad, because otherwise it's a fairly simple solution to implement. Typically, the "delete successful" message could just be rewriting the same tombstone(s) we just wrote but with a localDeletionTime set to 0 (or Integer.MIN_VALUE) to make them readily gcable (which may be what you had in mind).
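          For illustration, here is a rough sketch of why a localDeletionTime of 0 (or Integer.MIN_VALUE) makes the tombstone readily gcable, assuming the usual purge rule that a tombstone can be dropped once its local deletion time is older than now minus gc_grace. The names below are illustrative, not actual Cassandra internals.

{code:java}
// Illustrative only - not actual Cassandra code.
final class TombstonePurgeCheck
{
    // A tombstone is purgeable at compaction if its local deletion time falls
    // before gcBefore = (now - gc_grace_seconds).
    static boolean isPurgeable(int localDeletionTime, int nowInSec, int gcGraceSeconds)
    {
        int gcBefore = nowInSec - gcGraceSeconds;
        return localDeletionTime < gcBefore;
    }

    public static void main(String[] args)
    {
        int now = (int) (System.currentTimeMillis() / 1000);
        int gcGrace = 864000; // ten days

        // A freshly written tombstone must wait out gc_grace...
        System.out.println(isPurgeable(now, now, gcGrace));   // false

        // ...but one rewritten with localDeletionTime = 0 (the proposed
        // "delete successful" overwrite) is older than any gcBefore, so the
        // next compaction can drop it without waiting.
        System.out.println(isPurgeable(0, now, gcGrace));     // true
    }
}
{code}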

          Jonathan Ellis added a comment -

          The problem with the repair idea is that while it does establish an upper bound for how many tombstones can accrue, it's a pretty high upper bound.

          Another idea: extend the coordinator's ack-wait callback (which we currently use to write hints if a replica times out) to write a "delete successful" message if all replicas do ack in time. Similar to Dominic's original idea, but by optimizing for the common case (success) we only increase the impact of deletes by a constant factor.

          Not immediately clear to me how to extend this to HH and AES, but even as a partial solution (leaving gcgs around for AES) I think this would be a big improvement in practice.
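          For illustration, the callback extension could take roughly the following shape. All class and method names here are hypothetical (this is not the actual coordinator/response-handler code), and it assumes the "delete successful" follow-up is itself just another write.

{code:java}
import java.net.InetAddress;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the coordinator-side ack-wait callback - illustrative only.
final class DeleteAckCallback
{
    static final class Mutation { }                 // stand-in for the delete we sent

    private final Set<InetAddress> pending = ConcurrentHashMap.newKeySet();
    private final Mutation tombstoneMutation;

    DeleteAckCallback(Set<InetAddress> replicas, Mutation tombstoneMutation)
    {
        this.pending.addAll(replicas);
        this.tombstoneMutation = tombstoneMutation;
    }

    // Invoked per replica ACK, where the existing write callback counts responses today.
    void onAck(InetAddress replica)
    {
        pending.remove(replica);
        if (pending.isEmpty())
        {
            // All replicas applied the delete within the window: follow up with a
            // "delete successful" message (e.g. the tombstone rewritten as readily
            // gcable), so replicas need not hold it for the full gc_grace.
            sendDeleteSuccessful(tombstoneMutation);
        }
    }

    // Invoked when the ack-wait times out: behave exactly as today (write a hint)
    // and leave the tombstone with its normal gc_grace lifetime.
    void onTimeout(InetAddress replica)
    {
        writeHint(replica, tombstoneMutation);
    }

    private void sendDeleteSuccessful(Mutation m) { /* broadcast to replicas - elided */ }
    private void writeHint(InetAddress replica, Mutation m) { /* as today - elided */ }
}
{code}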

          Peter Schuller added a comment -

          I certainly completely agree with the goal of eliminating gc_grace, and I have no overall opinion as of yet, but I do want to point out one thing: if you're running nodes with periodic (as opposed to batch) commit log mode, a node couldn't "trust" an ACK from another node unless they were special-cased to wait for commit log sync (or have their own separate commit log).

          Dominic Williams added a comment - edited

          Ok I got it and +1 on that idea. I had actually assumed tombstones were compacted away after repair anyway. So as I understand it, GCSeconds would be removed, and tombstones would be marked for deletion once a repair operation was successfully run.

          That would be a cool first step and improve the current situation.

          But I think a reaper system is still needed: although this feature would take some of the current pressure off, there would still be the issue of tombstone build-up between repairs (which means performance degrades between invocations), the load spikes from repair itself, and the manual nature of the process.

          I guess I'm on the sharp end of this - we have several column families where columns represent game objects or messages owned by users, and there is a high delete and insert load. Various operations need to perform slices of user rows and these can get much slower as tombstones build up, so GCSeconds has been brought right down, but this leads to the constant pain of "omg how long left before need to run repair or increase GCSeconds" etc. Improving repair as described would remove the Sword of Damocles threat of data corruption, but we'd still need to make sure it was run regularly, performance would degrade between invocations and repair would create load spikes. The reaping model can take away those problems.

          Jonathan Ellis added a comment -

          If we finish a repair at time Y, then any tombstone written at X < Y can be discarded.
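          In code form, the rule is just a timestamp comparison; the only new requirement is tracking when the last full repair completed (which is what CASSANDRA-2405 would provide). The names below are illustrative.

{code:java}
// Illustrative only: purge rule based on the last completed full repair.
final class RepairBasedPurge
{
    // A tombstone written at time X can be discarded once a full repair has
    // completed at time Y with X < Y, because repair guarantees every replica
    // has seen the delete by then.
    static boolean isPurgeable(long tombstoneWrittenAt, long lastFullRepairCompletedAt)
    {
        return tombstoneWrittenAt < lastFullRepairCompletedAt;
    }
}
{code}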

          Dominic Williams added a comment -

          Sounds cool but I don't understand how it works. You can't remove a tombstone until you know all replicas have it, right? I.e. you need to be sure all replicas have a tombstone irrespective of whether repair has previously completed?

          Jonathan Ellis added a comment -

          Sylvain had an interesting alternative:

          If we just track when a full repair completes (CASSANDRA-2405), we can purge tombstones "early" after that (or maybe get rid of gc_grace entirely) without having to track acks per-delete.

          Dominic Williams added a comment - edited

          OK.. the complete solution: The whole tombstone reaping process could be performed in memory because it fails safe.

          PROPOSAL ADJUSTMENTS

          • The tombstone acknowledgements and also the relic list are held in memory
          • A node's reaper thread only requests tombstone acknowledgements when it can see all replicas in the ring
          • The reaper works within a configurable memory limit, and if there's a problem getting a tombstone acknowledgement, for example because a replica goes offline or a Cassandra exception occurs, it simply kicks that entry out of memory (a rough sketch of this fail-safe eviction follows this list)
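          A rough sketch of that fail-safe, memory-bounded tracking is below. The bound and names are illustrative only; the point is that dropping an entry is always safe, because the tombstone simply stays on disk and can be reaped later (or fall back to repair).

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: the reaper's in-memory ACK tracking bounded by a memory limit.
// Evicting an entry never loses a delete - the tombstone just waits for a later pass.
final class BoundedAckTracker<K, V>
{
    private final int maxEntries;
    private final Map<K, V> tracked;

    BoundedAckTracker(int maxEntries)
    {
        this.maxEntries = maxEntries;
        // Evict the least-recently-touched entry once the configured limit is hit.
        this.tracked = new LinkedHashMap<K, V>(16, 0.75f, true)
        {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest)
            {
                return size() > BoundedAckTracker.this.maxEntries;
            }
        };
    }

    void track(K tombstoneId, V ackState)   { tracked.put(tombstoneId, ackState); }

    // Called on any failure (replica offline, exception): just forget the entry.
    void evict(K tombstoneId)               { tracked.remove(tombstoneId); }
}
{code}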

          NOTES

          • The reaping process now has no disk/storage overhead
          • The memory and CPU savings achieved by not having to include tombstones in query processing, compaction etc will greatly exceed the reaper's overhead
          • The bandwidth savings achieved by nodes not having to send each other tombstones to calculate query results will greatly exceed the reaper's overhead (requesting/sending ACKs)

          SPECIAL CASES

          If there is a large replication factor such as RF=9, savings should predominate. For example, as regards overall bandwidth consumed, the requirement to request/send ACKs is probably more than offset by the need to share tombstones amongst nodes (5 nodes if RF=9) to process QUORUM reads. Furthermore, the reaper bandwidth overhead shouldn't impede query processing, whereas sharing tombstones as part of query processing always does.

          Also this needn't be an either/or. Administrators could simply turn off reaping and fall back to using the repair process (the Sword of Damocles cough) if necessary.

          Doing the calcs, I'd say that for the majority of users tombstone reaping will:

          • Dramatically improve query performance
          • Greatly reduce administration overhead and complexity (+remove big cause of consistency issues)
          • Reduce memory and processor pressure by preventing tombstone buildup thus indirectly reducing other issues
          • Avoid the load spikes and associated problems caused by running repair.
          Dominic Williams added a comment -

          Make it optional per column family? Repair would still need to exist anyway so could fall back to that for cases like this.

          Jonathan Ellis added a comment -

          At a high level, I think it's worth trying. One big drawback is making deletes O(N**2) expensive: N acks must be written to each of the N replicas. That's 81 writes for a single delete in a cluster with 9 total replicas across 3 DCs, which is not a hypothetical situation.


            People

            • Assignee: Aleksey Yeschenko
            • Reporter: Dominic Williams
            • Votes: 5
            • Watchers: 14


            Time Tracking

            • Original Estimate: 504h
            • Remaining Estimate: 504h
            • Time Spent: Not Specified
