Cassandra / CASSANDRA-4740

Phantom TCP connections, failing hinted handoff

      Description

      IP addresses in report anonymized:

      Had a server running Cassandra (1.1.1.10) reboot ungracefully. The reboot and startup were successful and uneventful, and Cassandra went back into service OK.

      From that point onwards, however, several (but not all) machines in the cluster started having difficulty with hinted handoff to that machine, despite nodetool ring showing it as Up across the board.

      Here's an example of an attempt, every 10 minutes, by a node (1.1.1.11) to replay hints to the node that was rebooted:

      INFO [HintedHandoff:1] 2012-10-01 11:07:23,293 HintedHandOffManager.java (line 294) Started hinted handoff for token: 122879743610338889583996386017027409691 with IP: /1.1.1.10
      INFO [HintedHandoff:1] 2012-10-01 11:07:33,295 HintedHandOffManager.java (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
      INFO [HintedHandoff:1] 2012-10-01 11:07:33,295 HintedHandOffManager.java (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
      
      INFO [HintedHandoff:1] 2012-10-01 11:17:23,312 HintedHandOffManager.java (line 294) Started hinted handoff for token: 122879743610338889583996386017027409691 with IP: /1.1.1.10
      INFO [HintedHandoff:1] 2012-10-01 11:17:33,319 HintedHandOffManager.java (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
      INFO [HintedHandoff:1] 2012-10-01 11:17:33,319 HintedHandOffManager.java (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
      
      INFO [HintedHandoff:1] 2012-10-01 11:27:23,335 HintedHandOffManager.java (line 294) Started hinted handoff for token: 122879743610338889583996386017027409691 with IP: /1.1.1.10
      INFO [HintedHandoff:1] 2012-10-01 11:27:33,337 HintedHandOffManager.java (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
      INFO [HintedHandoff:1] 2012-10-01 11:27:33,337 HintedHandOffManager.java (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
      
      INFO [HintedHandoff:1] 2012-10-01 11:37:23,357 HintedHandOffManager.java (line 294) Started hinted handoff for token: 122879743610338889583996386017027409691 with IP: /1.1.1.10
      INFO [HintedHandoff:1] 2012-10-01 11:37:33,358 HintedHandOffManager.java (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
      INFO [HintedHandoff:1] 2012-10-01 11:37:33,359 HintedHandOffManager.java (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
      
      INFO [HintedHandoff:1] 2012-10-01 11:47:23,412 HintedHandOffManager.java (line 294) Started hinted handoff for token: 122879743610338889583996386017027409691 with IP: /1.1.1.10
      INFO [HintedHandoff:1] 2012-10-01 11:47:33,414 HintedHandOffManager.java (line 372) Timed out replaying hints to /1.1.1.10; aborting further deliveries
      INFO [HintedHandoff:1] 2012-10-01 11:47:33,414 HintedHandOffManager.java (line 390) Finished hinted handoff of 0 rows to endpoint /1.1.1.10
      

      I started poking around, and discovered that several nodes held ESTABLISHED TCP connections that didn't have a live endpoint on the rebooted node. My guess is they were live prior to the reboot, and after the reboot the nodes still see them as live and unsuccessfully try to use them.

      Example, on the node that was rebooted:

      .10 ~ # netstat -tn | grep 1.1.1.11
      tcp        0      0 1.1.1.10:7000        1.1.1.11:40960        ESTABLISHED
      tcp        0      0 1.1.1.10:34370       1.1.1.11:7000         ESTABLISHED
      tcp        0      0 1.1.1.10:45518       1.1.1.11:7000         ESTABLISHED
      

      While on the node that's failing to hint to it:

      .11 ~ # netstat -tn | grep 1.1.1.10
      tcp        0      0 1.1.1.11:7000         1.1.1.10:34370       ESTABLISHED
      tcp        0      0 1.1.1.11:7000         1.1.1.10:45518       ESTABLISHED
      tcp        0      0 1.1.1.11:7000         1.1.1.10:53316       ESTABLISHED
      tcp        0      0 1.1.1.11:7000         1.1.1.10:43239       ESTABLISHED
      tcp        0      0 1.1.1.11:40960        1.1.1.10:7000        ESTABLISHED
      

      Notice the phantom connections from 1.1.1.10:53316 and 1.1.1.10:43239, which do not appear on the remote 1.1.1.10 at all.
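
      For reference, one way to spot these half-open sockets mechanically is to dump the connection 4-tuples on both ends, normalise them so a matching socket produces the same line on either side, and diff the two lists. A rough sketch only (assumes GNU netstat/awk; the file names are arbitrary):

      # Run on 1.1.1.10 with PEER=1.1.1.11, and on 1.1.1.11 with PEER=1.1.1.10
      PEER=1.1.1.11
      netstat -tn | grep ESTABLISHED | grep "$PEER" | \
        awk '{ split($4,l,":"); split($5,r,":");
               if (l[1] < r[1]) print l[1] ":" l[2], r[1] ":" r[2];
               else             print r[1] ":" r[2], l[1] ":" l[2] }' | sort > /tmp/conns.$(hostname -s)
      # Copy both files to one machine; any line unique to one side is a phantom:
      diff /tmp/conns.node10 /tmp/conns.node11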

      On .11 I tried disabling and re-enabling gossip, but that did not reset the :7000 connections nor clean up the two phantom connections. For good measure I also tried disabling and re-enabling thrift (a long shot), and that didn't help either.
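
      (For reference, the gossip/thrift toggling above was done with the standard nodetool subcommands, roughly:)

      $ nodetool -h 1.1.1.11 disablegossip
      $ nodetool -h 1.1.1.11 enablegossip
      $ nodetool -h 1.1.1.11 disablethrift
      $ nodetool -h 1.1.1.11 enablethrift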

      The only thing that helped was to actually stop and start cassandra, in a rolling fashion, on each node that was having trouble hinting to the machine that was rebooted. The phantom connections naturally disappeared, write volume on 1.1.1.10 rose for a while, and all the hints were sent successfully.
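
      The rolling restart itself was nothing exotic; roughly the following per affected node (a sketch only: the init-script name depends on the packaging, <node> is a placeholder, and the drain step is an optional nicety):

      $ nodetool -h <node> drain          # optional but graceful: flush memtables, stop accepting traffic
      $ sudo /etc/init.d/cassandra restart
      $ nodetool -h <node> ring           # wait for Up/Normal before moving to the next node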

      I'm unsure whether the phantom TCP connections are a cause of, or just loosely correlated with, the hinted handoff failures every 10 minutes.

        Activity

        Jonathan Ellis added a comment -

        Sounds like it's not a problem with an appropriate stack size.

        Jackson Chung added a comment -

        We ran into the issue without having the stack-size problem (our JVM is < 1.6.0_32, and the stack-size problem affects >= _32).

        Edward Capriolo added a comment -

        I have set Xss to its default, which is 256k. The cost of trying to micro-optimize this setting is outweighed by the trouble it causes, IMHO. You save a couple of KB per socket but give yourself many hard-to-debug problems.

        Brandon Williams added a comment -

        "It's also clearly beyond the scope of any java code, and would be an issue between the JVM itself and the OS."

        I agree with this, and since we've already bumped the default stack to 180k, if it indeed solves the issue for you as well, I'm inclined to resolve this as "not a problem".

        Mina Naguib added a comment -

        FWIW

        I've had this issue pop up several times since I initially reported it.

        When I enable TRACE logging when that happens, unfortunately, at our volume the log entries are truly an avalanche and I couldn't fish much useful info out of them.

        The last time it happened, most of our nodes in the cluster suffered and required restarting. Before restarting I took the opportunity to bump up the stack size to 180k.

        This was a few days ago and so far I haven't seen the issue re-occur (but then again, I haven't had a node go down ungracefully since).

        I'll wait for the next couple of ungraceful shutdowns, then will report back here whether I've seen HH failures or not.

        Having said that, at the end of the day, having orphan TCP connections is still not good, though perhaps the relationship between them and the failing HH isn't causative. It's also clearly beyond the scope of any java code, and would be an issue between the JVM itself and the OS.

        Rick Branson added a comment -

        These problems have completely disappeared with the 180k+ stack size.

        Jackson Chung added a comment -

        We too see a similar thing.

        On box 192.168.13.56, looking for all ESTABLISHED connections from other hosts to this box's :7000:

        $ netstat -ant | grep "192.168.13.56:7000.*EST" | cut -d ':' -f 1-2 | sort | uniq -c
        1 tcp 0 0 192.168.13.56:7000 192.168.12.13
        2 tcp 0 0 192.168.13.56:7000 192.168.14.145
        217 tcp 0 0 192.168.13.56:7000 192.168.44.237
        202 tcp 0 0 192.168.13.56:7000 192.168.45.67
        198 tcp 0 0 192.168.13.56:7000 192.168.46.141
        11 tcp 0 0 192.168.13.56:7000 192.168.76.156
        10 tcp 0 0 192.168.13.56:7000 192.168.77.72
        11 tcp 0 0 192.168.13.56:7000 192.168.78.153

        On 192.168.44.237, it shows just 1 ESTABLISHED connection to 192.168.13.56:7000:

        $ sudo netstat -antp | grep "192.168.44.237.*192.168.13.56:7000"
        tcp 0 0 192.168.44.237:35252 192.168.13.56:7000 ESTABLISHED 14398/java

        We too have an HH problem similar to the above (though I don't see in the logs on the above two nodes that the timeouts happen between these two nodes). We also have nodes flapping. It also turned out the firewall rule wasn't open on some nodes to allow communication to all nodes on port 7000.
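
        (A quick way to confirm that the storage port is actually reachable in both directions between two nodes, assuming netcat is installed; 7000 is the default storage_port:)

        $ nc -vz -w 2 192.168.13.56 7000    # and repeat in the other direction from 192.168.13.56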

        Restarting the node fixed the issue.

        Version info:

        $ uname -a
        Linux kca06apigee 3.2.21-1.32.6.amzn1.x86_64 #1 SMP Sat Jun 23 02:32:15 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        $ /usr/java/latest/bin/java -version
        java version "1.6.0_31"
        Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
        Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

        How does netstat on one box show 200+ ESTABLISHED connections to the other box while the other box shows only one?

        Brandon Williams added a comment -

        FWIW, 180k should be enough for your case, as we changed it in CASSANDRA-4631.

        Rick Branson added a comment -

        The cassandra-env.sh on these nodes was from the 1.1.5 packaging, so it was using 160k. I've bumped them all to 256k for good measure. I looked back through the logs and they were peppered with a dozen or so of those exceptions. I'm hoping this was either the cause of, or a contributing factor to, this issue. Will report back if we stop seeing the issues.

        Brandon Williams added a comment -

        Hrm, well, the solution is the same either way.

        Rick Branson added a comment -

        The largest writes we do are maybe 1K. This would have to be hints or repair.

        Brandon Williams added a comment -

        That indicates you're sending large messages and might need to increase the Xss.
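
        (For reference, the stack size is the -Xss value appended to JVM_OPTS in conf/cassandra-env.sh; a minimal example follows. The shipped default varies between 1.1.x releases: 160k in older packagings, 180k after CASSANDRA-4631.)

        # conf/cassandra-env.sh
        JVM_OPTS="$JVM_OPTS -Xss180k"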

        Rick Branson added a comment -

        1.1.6.

        Got one of these when I restarted the partitioned node (from a working node):

        ERROR [WRITE-/10.1.1.1] 2012-10-30 20:03:49,348 AbstractCassandraDaemon.java (line 135) Exception in thread Thread[WRITE-/10.1.1.1,5,main]
        java.lang.StackOverflowError
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
        at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
        at java.io.DataOutputStream.flush(DataOutputStream.java:106)
        at org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:156)
        at org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:126)

        Brandon Williams added a comment -

        Rick Branson: good to know your problem wasn't inherently related to gossip. The repeating log for HH sounds like CASSANDRA-3955; what version are you on?

        Rick Branson added a comment -

        INFO [HintedHandoff:1] 2012-10-30 18:31:30,672 HintedHandOffManager.java (line 296) Started hinted handoff for token: 56713727820156410577229101238628035242 with IP: /10.1.1.2
        INFO [HintedHandoff:1] 2012-10-30 18:31:30,673 HintedHandOffManager.java (line 392) Finished hinted handoff of 0 rows to endpoint /10.1.1.2

        Seeing these every 10 minutes on the dot as well.

        Rick Branson added a comment -

        Also see large numbers of pending commands in netstats.

        Pool Name                    Active   Pending      Completed
        Commands                        n/a      1176          51745
        Responses                       n/a         0         741136
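
        (The table above is the thread-pool section printed by nodetool netstats; <host> is a placeholder:)

        $ nodetool -h <host> netstats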

        Rick Branson added a comment -

        Seeing this on Oracle 1.6.0_35-b10 / 3.2.0 as well. This was the "gossip" issue I ran into the other day, Brandon.

        Mina Naguib added a comment -

        Unfortunately logging level TRACE is too verbose. On this node it produces 1.7MB of logs per second. I switched it back to INFO.

        I discovered, however, that you don't need to restart Cassandra for log-level changes in log4j-server.properties to take effect. Next time I see the problem I'll switch to TRACE and post back anything interesting I find.
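
        (For reference, since log4j-server.properties is re-read on the fly, a narrower alternative to raising log4j.rootLogger is a per-class logger; the class name below is taken from the stack traces in this ticket, but treat the line as an example:)

        # log4j-server.properties
        log4j.logger.org.apache.cassandra.net.OutboundTcpConnection=TRACE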

        Mina Naguib added a comment -

        No clear picture yet; however, I had the issue pop up again.

        All nodes run the same Java version (1.6.0_35-b10); however, the phantom connections and HH timeouts only appear on the node that's running kernel 3.4.9. Nodes running earlier kernels (2.6.39, 3.0, 3.1) haven't exhibited this.

        Perhaps kernels from 3.2 upwards (3.2 observed by John and Brandon, 3.4 by myself) have a bad interaction with the JVM.

        I'm restarting that node with log4j.rootLogger set to TRACE to see if there's more info next time.

        John Watson added a comment -

        Not exactly the same way, but very similar. I noticed that any nodes having HH timeouts only had 3 connections open on port 7000, whereas stable nodes had 4. I didn't notice any phantom connections, but I didn't really look very hard.

        Mina Naguib added a comment -

        @Jonathan: Unfortunately I don't have a multi-node test cluster to see if I can reproduce this.

        @John & @Brandon: Were you able to reproduce this? Does it manifest itself the exact same way (10-minute hinted handoff failures)? Do you find the same telltale TCP sockets without matching ends?

        Brandon Williams added a comment -

        To summarize: u34/2.6.32 did not exhibit the issue, u34/3.2.0 did, so it looks like we have some kind of JVM/kernel interaction going on.

        John Watson added a comment -

        Before/after switching the Sun Java nodes to OpenJDK.

        The nodes that were already at the bottom were accidentally running OpenJDK.

        John Watson added a comment (edited) -

        This started after upgrading from the following (to the versions listed in my comment above):

        Cassandra 1.1.5
        Ubuntu 10.04.4
        2.6.32-41-server x86_64
        Java 6u34

        Brandon Williams added a comment -

        Just to expand a bit here: John switched to u34 first and still had the issue, so it really is due to switching to OpenJDK and not just an artifact of restarting the JVM. FWIW, I don't see this on u26 (Oracle), either.

        John Watson added a comment (edited) -

        After removing sun-java6-jre from all nodes (leaving OpenJDK 6b24), both the HH and the write latency/timeout issues have been resolved.

        Running nodetool repair -pr and nodetool cleanup on all nodes, then going to increase load on the cluster to see how it holds up.
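
        (For reference, that maintenance pass is roughly the following, run per node and one node at a time; -pr limits repair to the node's primary range so each range isn't repaired once per replica. <node> is a placeholder:)

        $ nodetool -h <node> repair -pr
        $ nodetool -h <node> cleanup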

        John Watson added a comment (edited) -

        Actually, we're having a very similar issue ourselves in our 12-node cluster:

        Cassandra 1.1.5
        Ubuntu 12.04.1
        3.2.0-31-generic x86_64
        Java 6u35

        The only way to solve it is to restart both nodes (source/destination) of the HH, which then sometimes causes the same issue with another pair.
        Also, these nodes are experiencing high write latencies/timeouts.

        Machines running OpenJDK 6b24 aren't experiencing this issue. (We just happened not to switch to Sun Java when configuring those nodes.)

        Jonathan Ellis added a comment -

        Is it reproducible? If so, it would be interesting to see what other nodes log in OutboundTcpConnection at TRACE.

        Mina Naguib added a comment -

        Nope. .10 logged absolutely nothing out of the ordinary. Just regular memtable flushes, compactions, keycache saving, occasional slow GC.

        Jonathan Ellis added a comment -

        Did .10 ever log any errors?


          People

          • Assignee: Unassigned
          • Reporter: Mina Naguib
          • Votes: 1
          • Watchers: 6
