Solr / SOLR-10889

Stale ZooKeeper information is used during failover check

Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 7.0
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

    Description

      OverseerAutoReplicaFailoverThread iterates over every replica to check whether it needs to be recreated on a new node. In each such round it reads the cluster state only once, at the beginning. Especially in large clusters, the cluster state may change while the replicas are being iterated, so false decisions may be made: restarting a healthy core, or not handling a bad node.

      The code fragment in question:

              for (Slice slice : slices) {
                if (slice.getState() == Slice.State.ACTIVE) {
                  final Collection<DownReplica> downReplicas = new ArrayList<DownReplica>();
                  int goodReplicas = findDownReplicasInSlice(clusterState, docCollection, slice, downReplicas);
      

      The solution seems rather straightforward: read the state every time:

                  int goodReplicas = findDownReplicasInSlice(zkStateReader.getClusterState(), docCollection, slice, downReplicas);
      
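      To make the failure mode concrete, here is a minimal, self-contained model of the race (illustrative names only; `StaleStateDemo`, `wouldRestart`, and the map-based "cluster state" are stand-ins for the Solr types, not the actual API):

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Minimal model of the bug (illustrative names, not the real Solr API):
// the failover loop reads the cluster state once, while the cached state
// a ZkStateReader-like holder contains can change underneath it.
public class StaleStateDemo {
    // "Cluster state": replica name -> healthy?
    static final AtomicReference<Map<String, Boolean>> STATE =
            new AtomicReference<>(Map.of("replica1", false));

    // A restart is warranted only if the replica is down in the given view.
    static boolean wouldRestart(Map<String, Boolean> view, String replica) {
        return !view.get(replica);
    }

    public static void main(String[] args) {
        Map<String, Boolean> stale = STATE.get(); // read once, before the loop

        // Meanwhile the replica recovers and the cached state is replaced.
        STATE.set(Map.of("replica1", true));

        // Stale view: would restart a core that is healthy by now.
        System.out.println(wouldRestart(stale, "replica1"));       // prints "true"
        // Fresh read inside the loop: correctly leaves it alone.
        System.out.println(wouldRestart(STATE.get(), "replica1")); // prints "false"
    }
}
```

      A fresh read inside the loop sees the recovery; the snapshot taken before the loop does not.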

      The only counterargument that comes to mind is that the cluster state would be read too frequently. We could enhance this naive solution so that re-reading happens only when a bad node is found, but I am not sure such a read optimization is necessary.
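      For what it is worth, the "re-read only when a bad node is found" variant could be sketched roughly like this (a self-contained sketch with hypothetical names; `countBadReplicas` and the `Supplier`-based reader are illustrative, not from the patch):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of the read optimization discussed above (illustrative names, not
// the Solr API): keep the initial snapshot for the common all-healthy case
// and re-read the state only when a suspected-down replica is found.
public class LazyRefreshDemo {
    static int refreshCount = 0;

    static Map<String, Boolean> readState(Supplier<Map<String, Boolean>> reader) {
        refreshCount++;
        return reader.get();
    }

    static int countBadReplicas(List<String> replicas,
                                Supplier<Map<String, Boolean>> reader) {
        Map<String, Boolean> state = readState(reader); // one initial read
        int bad = 0;
        for (String r : replicas) {
            if (!state.get(r)) {
                // Suspected-down replica: refresh before deciding, so a
                // replica that recovered in the meantime is not restarted.
                state = readState(reader);
                if (!state.get(r)) bad++;
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        Supplier<Map<String, Boolean>> reader =
                () -> Map.of("r1", true, "r2", true, "r3", true);
        int bad = countBadReplicas(List.of("r1", "r2", "r3"), reader);
        // Healthy cluster: only the single initial read happens.
        System.out.println(bad + " " + refreshCount); // prints "0 1"
    }
}
```

      In the common healthy case this keeps a single read per round, while a suspected-down replica always gets judged against a fresh view.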

      I have written some unit tests around this class, even mocking out the time factor; they run in a second. I am interested in feedback on this approach and will upload a patch shortly.

      Attachments

        1. SOLR-10889.patch
          25 kB
          Mihaly Toth

        Activity

          mihaly.toth Mihaly Toth added a comment -

          Here is the unit test and the implementation (first one is bigger)

          • Time is "mocked" out: an interface is introduced for getting nanoseconds, and the test overrides it.
          • Each doWork loop is invoked separately from the test; the forever loop is not used.
          • Hamcrest matchers are used for collection asserts.
          • updateExecutor executes the code in the calling thread's context, so there is no problem waiting for a background thread to complete.
          • Core create requests are not actually executed; they are just collected into a list and verified from the test.

          Comments are welcome.
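          The time-mocking idea from the first bullet can be sketched like this (hypothetical names such as `NanoTimeSource` and `waitElapsed`; the actual patch may differ):

```java
// Sketch of mocking out the time source: the worker depends on a narrow
// interface instead of calling System.nanoTime() directly, so a test can
// advance time instantly without sleeping. Names here are illustrative.
public class TimeSourceDemo {
    interface NanoTimeSource {
        long nanoTime();
    }

    // Test double: a manually advanced clock (production would wrap
    // System.nanoTime() behind the same interface).
    static class FakeTime implements NanoTimeSource {
        long now = 0;
        public long nanoTime() { return now; }
        void advanceMillis(long ms) { now += ms * 1_000_000L; }
    }

    // Worker logic: has a failover wait period elapsed?
    static boolean waitElapsed(NanoTimeSource time, long startNanos, long waitMillis) {
        return time.nanoTime() - startNanos >= waitMillis * 1_000_000L;
    }

    public static void main(String[] args) {
        FakeTime time = new FakeTime();
        long start = time.nanoTime();
        System.out.println(waitElapsed(time, start, 30_000)); // prints "false"
        time.advanceMillis(30_000); // jump 30 s instantly, no sleeping
        System.out.println(waitElapsed(time, start, 30_000)); // prints "true"
    }
}
```

          This is what lets the whole test suite run in about a second: wait periods are crossed by advancing the fake clock, not by sleeping.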

          mihaly.toth Mihaly Toth added a comment -

          The only counterargument that comes to mind is that the cluster state would be read too frequently. We could enhance this naive solution so that re-reading happens only when a bad node is found, but I am not sure such a read optimization is necessary.

          Actually, looking into ZkStateReader, there is no network activity involved when reading the cluster state. So there is not much of a counterargument against using the latest cluster state instead of a stale one.

          mihaly.toth Mihaly Toth added a comment -

          As I see it, in Solr 7 this will not be an issue because of SOLR-10397. In that case the affects version should be changed to 6.0, and I guess those versions will not be fixed.

          markrmiller@gmail.com Mark Miller added a comment -

          SOLR-10397 has not landed yet - we should probably get the current implementation back into shape, if only to make sure the current testing is in good shape.


          People

            markrmiller@gmail.com Mark Miller
            mihaly.toth Mihaly Toth
            Votes: 0
            Watchers: 2
