Solr / SOLR-12305

When a replica is applying updates, some kinds of updates can skip buffering for faster recovery



    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 7.5, 8.0
    • Component/s: None
    • Labels: None


      The current recovery process has two main problems (pointed out by [~shalinmangar]) which may cause it to never finish.

      1. The replay of updates is too slow, because we do it in a single-threaded fashion. If updates are appended faster than they are replayed, the replay process will never finish.
      2. The buffering tlog is unbounded: we keep adding entries to it while waiting for them to be replayed. If we had a way to bound the number of updates in the buffering tlog, the replay process would eventually finish even when replay is slow.
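The interaction between the two problems can be sketched numerically. The following is a hypothetical illustration (not Solr code, and the rates are made-up assumptions): when updates arrive faster than a single-threaded replay can drain them, an unbounded buffer only grows, so recovery never completes.

```java
// Hypothetical model of the recovery backlog (not Solr's actual code).
public class ReplayBacklog {
    // Returns the buffered-update count after `seconds` of recovery,
    // given an (assumed constant) append rate and replay rate.
    static long backlogAfter(long appendRatePerSec, long replayRatePerSec, long seconds) {
        long backlog = 0;
        for (long t = 0; t < seconds; t++) {
            backlog += appendRatePerSec;                    // new updates buffered
            backlog -= Math.min(backlog, replayRatePerSec); // replay drains what it can
        }
        return backlog;
    }

    public static void main(String[] args) {
        // Appends outpace replay: the backlog grows without bound, so
        // recovery never finishes. After 60 s: 60 * (1000 - 600) = 24000.
        System.out.println(backlogAfter(1000, 600, 60)); // 24000
        // Replay outpaces appends: the backlog drains and recovery finishes.
        System.out.println(backlogAfter(500, 600, 60));  // 0
    }
}
```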

      I have come up with a solution to the second problem, which is described at this link:


      In short, the document presents a modification of the current recovery process (section 3: algorithm) and also proves the correctness of the modification (sections 1 and 2). This approach has several advantages:

      • It makes the buffering tlog bounded.
      • It automatically throttles updates from the leader. Consider this case:
        • We have a shard with a leader and one replica, and the leader sends the replica an update.
        • If the replica is healthy, the leader must wait for the replica to finish processing that update before returning to the user. Call the total time for an update T0.
        • If the replica is recovering, in the current code it only appends that update to its buffering tlog (which is much faster than indexing), so the total time for an update is T1 < T0. Therefore the rate of incoming updates is higher in this case.
        • With the design above, T1 will be approximately equal to T0.


        1. SOLR-12305.patch (2 kB, Cao Manh Dat)



            • Assignee: Cao Manh Dat (caomanhdat)
            • Votes: 0
            • Watchers: 5


              • Created: