Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 7.4, 8.0
- Fix Version/s: None
- Component/s: None
Description
The following issue occasionally appears when running TestLargeCluster.testNodeLost.
The test kills a large number of nodes, waiting a certain time between kills. Depending on the kill sequence and the length of waitFor, it may happen that when ExecutePlanAction processes a MOVEREPLICA operation, the target node has itself just been killed. This results in an exception and a FAILED status for the action.
However, this failure is not reported back to the trigger as an unprocessed event, because it happens asynchronously in the action executor (in ScheduledTriggers) - so the trigger happily resets its internal state and no longer tracks the lost node. As a result, the replicas remain lost, and even if there is a Policy violation the event will not be generated again, so the number of replicas never returns to the original count.
Also, ScheduledTriggers:311 and 323 only log the exception but do not fire listeners with a FAILED status, which is a bug.
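A minimal sketch of the fix this implies, in plain Java rather than the actual Solr classes (ActionExecutorSketch, TriggerListener, and execute are hypothetical names, not Solr's API): when an asynchronously executed action throws, the executor should notify its listeners with a FAILED status instead of only logging, so the trigger can keep tracking the lost node.

```java
import java.util.List;

// Hypothetical illustration of the failure-notification pattern described
// above; not the real ScheduledTriggers implementation.
public class ActionExecutorSketch {
    enum Status { SUCCEEDED, FAILED }

    interface TriggerListener {
        void onEvent(String actionName, Status status);
    }

    static Status execute(Runnable action, String name, List<TriggerListener> listeners) {
        Status status;
        try {
            action.run();
            status = Status.SUCCEEDED;
        } catch (RuntimeException e) {
            // The reported bug: only logging here. Recording FAILED and firing
            // listeners below lets the trigger learn the event went unprocessed.
            status = Status.FAILED;
        }
        // Fire listeners for both outcomes, including FAILED.
        for (TriggerListener l : listeners) {
            l.onEvent(name, status);
        }
        return status;
    }
}
```

With this shape, a MOVEREPLICA whose target node was just killed would surface as a FAILED listener event rather than being silently swallowed by the executor.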
Attachments
Issue Links
- relates to: SOLR-12479 TriggerAction failures may cause inconsistent trigger behavior (Resolved)