Details
- Type: Bug
- Status: Resolved
- Priority: Low
- Resolution: Fixed
- Fix Version/s: 2.2.10, 3.0.13, 3.11.0, 4.0-alpha1, 4.0
- Component/s: None
- Severity: Low
Description
Bootstrapping or replacing a node in the cluster requires gathering and checking some host IDs or tokens by doing a gossip "shadow round" once before joining the cluster. This is done by sending a gossip SYN to all seeds until we receive a response with the cluster state, from which we can move on in the bootstrap process. Receiving a response marks the shadow round as done and calls Gossiper.resetEndpointStateMap to clean up the received state again.
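To make the sequencing concrete, here is a minimal, self-contained sketch of that flow. It is not Cassandra's actual code: ShadowRound, sendSyn, onReply and the latch are simplified stand-ins for illustration only.

{code:java}
import java.net.InetAddress;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class ShadowRound {
    private final Map<InetAddress, Object> endpointStateMap = new ConcurrentHashMap<>();
    private final CountDownLatch firstReply = new CountDownLatch(1);

    // Send a gossip SYN to every seed, then block until the first reply.
    public void run(List<InetAddress> seeds) throws InterruptedException {
        for (InetAddress seed : seeds)
            sendSyn(seed); // fire-and-forget; replies arrive asynchronously

        if (!firstReply.await(30, TimeUnit.SECONDS))
            throw new RuntimeException("unable to gossip with any seeds");

        // First reply received: the shadow round is considered done and the
        // gathered state is cleaned up (analogous to resetEndpointStateMap).
        endpointStateMap.clear();
        // Race window: replies from the remaining seeds may still be in
        // flight and will be processed after this point.
    }

    // Called by the messaging layer for every reply, including late ones.
    void onReply(InetAddress from, Object clusterState) {
        endpointStateMap.put(from, clusterState);
        firstReply.countDown(); // only the first reply unblocks run()
    }

    private void sendSyn(InetAddress seed) { /* network send elided */ }
}
{code}

Note that run() returns as soon as the first reply arrives, while onReply() keeps firing for late replies; that gap is exactly the window described below.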
The issue here is that at this point there may still be other in-flight requests, and it's very likely that shadow round responses from other seeds will be received afterwards, while the current state of the bootstrap process does not expect this to happen (e.g. the gossiper may or may not be enabled by then).
One side effect is that a MigrationTask is spawned for each shadow round reply after the first. Whether a task executes depends on whether Gossiper.resetEndpointStateMap has been called by execution time, which affects the outcome of FailureDetector.instance.isAlive(endpoint) at the start of the task. You'll see error log messages such as the following when this happens:
INFO [SharedPool-Worker-1] 2016-09-08 08:36:39,255 Gossiper.java:993 - InetAddress /xx.xx.xx.xx is now UP
ERROR [MigrationStage:1] 2016-09-08 08:36:39,255 FailureDetector.java:223 - unknown endpoint /xx.xx.xx.xx
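For illustration, a hedged sketch of that guard: apart from the quoted log text, all names here (MigrationTaskSketch, the isAlive predicate) are illustrative stand-ins, not the actual MigrationTask code.

{code:java}
import java.net.InetAddress;
import java.util.function.Predicate;

class MigrationTaskSketch implements Runnable {
    private final InetAddress endpoint;
    // Stand-in for FailureDetector.instance::isAlive.
    private final Predicate<InetAddress> isAlive;

    MigrationTaskSketch(InetAddress endpoint, Predicate<InetAddress> isAlive) {
        this.endpoint = endpoint;
        this.isAlive = isAlive;
    }

    @Override
    public void run() {
        // If resetEndpointStateMap already ran, the endpoint is unknown to
        // the failure detector by now, so the task aborts with an error,
        // matching the ERROR line quoted above.
        if (!isAlive.test(endpoint)) {
            System.err.println("unknown endpoint " + endpoint);
            return;
        }
        // ... otherwise pull the schema from the endpoint (elided) ...
    }
}
{code}

Whether the check passes thus depends purely on whether the cleanup raced ahead of the task's execution.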
Although it isn't pretty, I currently don't see any serious harm from this, but it would be good to get a second opinion (feel free to close as "won't fix").
Attachments
Issue Links
- breaks
  - CASSANDRA-11689 dtest failures in internode_ssl_test tests (Resolved)
- is related to
  - CASSANDRA-9032 Reduce logging level for MigrationTask abort due to down node from ERROR to INFO (Resolved)
- links to