[SOLR-9504] A replica with an empty index becomes the leader even when other more qualified replicas are in line


Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 7.0
    • Fix Version/s: 6.3, 7.0
    • Component/s: SolrCloud

    Description

      I haven't tried branch_6x or any release yet. But this is trivially reproducible on master with the following steps:

      1. Start two Solr nodes.
      2. Create a collection with 1 shard and 1 replica so that one of the two nodes remains empty.
      3. Index some documents.
      4. Shut down the leader node.
      5. Use the ADDREPLICA API to create a replica of the collection on the still-running node. For some reason this API hangs until you restart the other node (possibly a bug in itself); do not wait for the call to complete.
      6. Restart the former leader node.

      You'll find that the replica with 0 docs has become the leader. The former leader then recovers from this new leader without replicating any index files; it still has the old index, which contains the documents.
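      A rough SolrJ sketch of steps 2, 3 and 5 is below. The ZK address, config name and field values are illustrative, and steps 1, 4 and 6 still have to be done by starting and stopping the Solr processes themselves:

      import org.apache.solr.client.solrj.impl.CloudSolrClient;
      import org.apache.solr.client.solrj.request.CollectionAdminRequest;
      import org.apache.solr.common.SolrInputDocument;

      public class Solr9504Repro {
        public static void main(String[] args) throws Exception {
          // Assumes two Solr nodes are already running against the embedded ZooKeeper at localhost:9983.
          try (CloudSolrClient client = new CloudSolrClient.Builder()
              .withZkHost("localhost:9983").build()) {

            // Step 2: 1 shard x 1 replica, so only one of the two nodes hosts an index.
            CollectionAdminRequest.createCollection("gettingstarted", "gettingstarted", 1, 1)
                .process(client);

            // Step 3: index a few documents and commit.
            for (int i = 0; i < 10; i++) {
              SolrInputDocument doc = new SolrInputDocument();
              doc.addField("id", "doc-" + i);
              client.add("gettingstarted", doc);
            }
            client.commit("gettingstarted");

            // Step 4: shut down the leader node externally (e.g. bin/solr stop on that node).

            // Step 5: add a replica on the surviving node; per the report this call may hang
            // until the other node is restarted, so don't block on it in a manual repro.
            CollectionAdminRequest.addReplicaToShard("gettingstarted", "shard1")
                .process(client);

            // Step 6: restart the former leader node externally.
          }
        }
      }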

      This is from the logs of the 0-doc replica:

      713102 INFO  (zkCallback-4-thread-5-processing-n:127.0.1.1:7574_solr) [   ] o.a.s.c.c.ZkStateReader Updating data for [gettingstarted] from [9] to [10]
      714377 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
      714377 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
      714377 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 x:gettingstarted_shard1_replica2] o.a.s.c.SyncStrategy Sync replicas to http://127.0.1.1:7574/solr/gettingstarted_shard1_replica2/
      714380 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 x:gettingstarted_shard1_replica2] o.a.s.u.PeerSync PeerSync: core=gettingstarted_shard1_replica2 url=http://127.0.1.1:7574/solr START replicas=[http://127.0.1.1:8983/solr/gettingstarted_shard1_replica1/] nUpdates=100
      714381 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 x:gettingstarted_shard1_replica2] o.a.s.u.PeerSync PeerSync: core=gettingstarted_shard1_replica2 url=http://127.0.1.1:7574/solr DONE.  We have no versions.  sync failed.
      714382 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 x:gettingstarted_shard1_replica2] o.a.s.c.SyncStrategy Leader's attempt to sync with shard failed, moving to the next candidate
      714382 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContext We failed sync, but we have no versions - we can't sync in that case - we were active before, so become leader anyway
      714387 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node /collections/gettingstarted/leaders/shard1/leader after winning as /collections/gettingstarted/leader_elect/shard1/election/96579592334475268-core_node2-n_0000000001
      714398 INFO  (qtp110456297-15) [c:gettingstarted s:shard1 r:core_node2 x:gettingstarted_shard1_replica2] o.a.s.c.ShardLeaderElectionContext I am the new leader: http://127.0.1.1:7574/solr/gettingstarted_shard1_replica2/ shard1
      

      It tries to sync but has no versions, and because the leader election logic considers it to have been active before (even though it is a new core starting up for the first time), it becomes the leader and publishes itself as active.
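      The branch being taken corresponds roughly to the following decision (a simplified paraphrase for illustration, not the actual ShardLeaderElectionContext source):

      // Simplified paraphrase of the decision that produces the log lines above;
      // names are illustrative, not the actual Solr source.
      public class LeaderElectionSketch {

        static boolean shouldBecomeLeader(boolean peerSyncSucceeded,
                                          boolean hasVersionsInUpdateLog,
                                          boolean wasPublishedActiveBefore) {
          if (peerSyncSucceeded) {
            return true; // normal path: we proved we are at least as current as our peers
          }
          if (!hasVersionsInUpdateLog && wasPublishedActiveBefore) {
            // "We failed sync, but we have no versions - we can't sync in that case
            //  - we were active before, so become leader anyway"
            // A brand-new, empty replica lands here: it has nothing to compare against,
            // yet it has just been published as active, so it wins the election even
            // though a peer with the full index is available.
            return true;
          }
          return false;
        }

        public static void main(String[] args) {
          // The scenario from this issue: sync failed, empty update log, new core marked active.
          System.out.println(shouldBecomeLeader(false, false, true)); // true -> becomes leader
        }
      }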

      Attachments

        1. SOLR-9504.patch (19 kB, Shalin Shekhar Mangar)


          People

            Assignee: Shalin Shekhar Mangar (shalin)
            Reporter: Shalin Shekhar Mangar (shalin)
            Votes: 0
            Watchers: 6
