Solr / SOLR-5860

Logging around core wait for state during startup / recovery is confusing



    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 4.8, 6.0
    • Component/s: SolrCloud
    • Labels: None


      I'm seeing some log messages like this:

      I was asked to wait on state recovering for HOST:8984_solr but I still do not see the requested state. I see state: recovering live:true

      This is very confusing because, from the log, it looks like the core is waiting to see the very state it is already in. After digging through the code, it turns out it is actually waiting for a leader to become active so that it has a leader to recover from.

      I'd like to improve the logging around this critical wait loop to give better context to what is happening.

      Also, I would like to change the following so that we force state updates every 15 seconds for the entire wait period.

      - if (retry == 15 || retry == 60) {
      + if (retry % 15 == 0) {

      As-is, it waits up to 120 seconds but only forces the state to update twice: once after 15 seconds and again after 60. It would be better to force updates periodically for the full wait period.
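To make the proposed change concrete, here is a minimal sketch of such a wait loop. The names (`waitForLeader`, `leaderIsActive`, `forceClusterStateUpdate`) and the structure are illustrative only, not the actual ZkController code; the point is that `retry % 15 == 0` fires at 15, 30, 45, ... 120 seconds, whereas `retry == 15 || retry == 60` fires only twice.

```java
public class WaitLoopSketch {

    // Polls once per "second" for up to waitSeconds, forcing a cluster-state
    // refresh every 15 ticks; returns how many forced refreshes happened.
    // sleepMillis is parameterized so the sketch runs quickly in a demo
    // (the real loop would sleep 1000 ms per retry).
    static int waitForLeader(int waitSeconds, long sleepMillis) throws InterruptedException {
        int forced = 0;
        for (int retry = 1; retry <= waitSeconds; retry++) {
            if (leaderIsActive()) {
                break; // found an active leader to recover from
            }
            if (retry % 15 == 0) { // proposed: every 15s, not just at 15 and 60
                forced++;
                forceClusterStateUpdate();
            }
            Thread.sleep(sleepMillis);
        }
        return forced;
    }

    static boolean leaderIsActive() { return false; } // stub: leader never appears
    static void forceClusterStateUpdate() { }         // stub

    public static void main(String[] args) throws InterruptedException {
        // Full 120s wait with no leader: 8 forced updates (at 15, 30, ..., 120)
        // versus only 2 under the old condition.
        System.out.println("forced updates: " + waitForLeader(120, 0));
    }
}
```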

      Lastly, I think it would be good to use the leaderConflictResolveWait setting (from ZkController) here as well, since 120 seconds may not be enough for a leader to become active in a busy cluster, especially after losing the node the Overseer is running on. Maybe leaderConflictResolveWait + 5 seconds?
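A hedged sketch of that last suggestion, deriving the wait from the configured value rather than a hard-coded 120 seconds. The method name and the 180000 ms input are illustrative assumptions, not claims about ZkController's actual API or default:

```java
public class RecoveryWaitSketch {

    // leaderConflictResolveWait is a millisecond value; convert to seconds
    // and add the proposed 5 seconds of slack.
    static int recoveryWaitSeconds(int leaderConflictResolveWaitMs) {
        return leaderConflictResolveWaitMs / 1000 + 5;
    }

    public static void main(String[] args) {
        // e.g. a 180000 ms setting would yield a 185-second wait
        System.out.println(recoveryWaitSeconds(180000)); // prints 185
    }
}
```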


        Attachments:
        1. SOLR-5860.patch (4 kB, Timothy Potter)



            Assignee: Shalin Shekhar Mangar (shalin)
            Reporter: Timothy Potter (tim.potter)