CouchDB
COUCHDB-641

Should replication of recently purged documents keep trying? (0.9 release)

    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.9
    • Fix Version/s: None
    • Component/s: Replication
    • Labels: None
    • Environment:

      couchdb 0.9.0.r766883 CentOS x86_64

    • Skill Level:
      Regular Contributors Level (Easy to Medium)

      Description

      We had a large doc with hundreds of thousands of revisions which was having trouble replicating. (Let's ignore why, which is probably down to our networking.) We use pull replication on this 0.9 installation.

      We wanted to remove that particular doc (as we could do) from the databases so that the replicator would not have to keep trying to replicate it.
      First we deleted it. Of course, this only meant that the doc's replication record gained yet another entry.
      We then compacted the database, hoping to reduce the number of revisions, but of course this wouldn't work either.
      We then purged all open revisions of the doc from the source database, but the target still tried to replicate this doc. And tried. And tried, eventually causing the server to crash.
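For reference, purging in this era of CouchDB goes through the `_purge` endpoint, which takes a JSON object mapping each doc ID to the list of revisions to remove. A minimal sketch of building such a request (the database name, doc ID, and revisions below are illustrative placeholders, not the exact values from this report):

```python
import json

def build_purge_request(db, doc_id, revs):
    """Build the HTTP method, path, and JSON body for a CouchDB _purge call.

    The _purge endpoint expects a JSON object mapping each document ID
    to the list of revisions to remove entirely from the database.
    """
    path = f"/{db}/_purge"
    body = json.dumps({doc_id: revs})
    return ("POST", path, body)

# Hypothetical values for illustration only.
method, path, body = build_purge_request(
    "madcache", "MAD__mutex", ["15799-4207095478"])
```

The request itself would then be issued against the source node before the next replication pass.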

      [Mon, 08 Feb 2010 09:56:10 GMT] [error] [<0.3542.0>] couch_rep HTTP get request failed after 10 retries:
      http://kv101.back.live.telhc.local:5986/madcache/MAD__mutex
        ?revs=true&latest=true&open_revs=["15799-4207095478",....,"7286-464196713"]
      
      [Mon, 08 Feb 2010 09:56:11 GMT] [error] [<0.3542.0>] replicator terminating with reason {http_request_failed,
                                             [104,116,116,112,58,47,47,107,118,49,
                                              48,49,46,98,97,99,107,46,108,105,118, etc etc etc

      In the above there were 900+ open revisions.

      The question is this: should the replicator still try to replicate docs which have been purged from the source?

      • It is possible that this bug is invalid on 0.9+/0.10.x/0.11 - we have not been able to re-create the scenario.
      • It is also possible that the COUCHDB-416 fix has also fixed this - we haven't upgraded enough environments yet to verify, even if we could re-create the scenario.
      • It's OK that the replicator tried to replicate all those revs, since they did indeed once exist - the question is only whether it should recognise that it can no longer access them and therefore stop requesting them.
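One way to express the behaviour being asked for: distinguish transient failures (worth retrying) from a response indicating the revisions no longer exist on the source, and give up in the latter case. A sketch of such a decision function - the status-code handling and the retry cap are assumptions for illustration, not CouchDB's actual replicator logic:

```python
def should_retry(status_code, attempt, max_retries=10):
    """Decide whether the replicator should retry fetching a doc's revisions.

    Assumed policy: a 404 means the doc's revisions were purged from the
    source, so stop immediately; other errors are treated as transient and
    retried up to max_retries times (mirroring the "failed after 10
    retries" message in the log above).
    """
    if status_code == 404:
        return False  # revisions purged from source: stop asking for them
    if attempt >= max_retries:
        return False  # transient-error budget exhausted
    return True
```

A real fix would also need to handle a 200 response whose body simply lacks the requested revisions, which this sketch does not model.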

      Our workaround was to delete the target database entirely, restart the CouchDB instance, re-create a new database of the same name, and re-replicate. Such a process will not always be available as an option on live production environments.
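The workaround steps map onto three plain HTTP calls against the CouchDB API, with the server restart happening out of band between the delete and the create. A sketch that lists the calls in order (the database name and source URL are placeholders):

```python
import json

def workaround_calls(db, source_url):
    """Return the ordered (method, path, body) HTTP calls for the workaround:
    drop the stuck target database, re-create it empty under the same name,
    then kick off a fresh pull replication from the source.
    The CouchDB restart between the DELETE and the PUT is not an HTTP call.
    """
    return [
        ("DELETE", f"/{db}", None),        # remove the target database
        ("PUT", f"/{db}", None),           # re-create it, empty, same name
        ("POST", "/_replicate",            # pull-replicate from the source
         json.dumps({"source": source_url, "target": db})),
    ]
```

Each tuple would be issued against the target node in sequence, pausing for the restart after the DELETE.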

        People

          • Assignee: Unassigned
          • Reporter: Enda Farrell
          • Votes: 0
          • Watchers: 0