CouchDB
COUCHDB-1257

Reduction detection in share/server/views.js runReduce is too primitive for some use cases; overall reduce architecture


    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 1.1
    • Fix Version/s: None
    • Component/s: JavaScript View Server
    • Labels:
    • Environment:

      Architecture independent

    • Skill Level:
      New Contributors Level (Easy)


      There are two parts to this report. The first is a question I have about the protocol CouchDB uses to talk to the view server (server/main.js); the second, as discussed in [1], concerns the growth detection [2] done in share/server/views.js.

      First issue:
      While troubleshooting and tracking down this issue, I saw that before every reduce function runs, a separate "reset" command is sent on its own line. I do not know CouchDB's internals, but since the server Loop function resets the state with "reset" each time reduce or rereduce is called, it looks like a minor refactor could reset the state inside the Views dispatcher itself. That would save a likely blocking I/O round trip per call.
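To make the round-trip argument concrete, here is a hypothetical sketch (none of these names are the actual main.js code) comparing a protocol that requires an explicit ["reset"] message before each rereduce batch with one where the dispatcher resets its own state:

```javascript
// Hypothetical sketch, not actual CouchDB code: count the line-protocol
// messages needed with and without an explicit ["reset"] per batch.
function serverLoop(messages, resetInsideDispatcher) {
  let state = null;
  let roundTrips = 0;
  for (const [cmd, payload] of messages) {
    roundTrips++;
    if (cmd === "reset") { state = {}; continue; }
    if (cmd === "rereduce") {
      if (resetInsideDispatcher) state = {}; // reset folded into the dispatcher
      state.result = payload.reduce((a, b) => a + b, 0);
    }
  }
  return roundTrips;
}

// Explicit resets: two messages per rereduce batch.
const explicit = serverLoop(
  [["reset"], ["rereduce", [1, 2]], ["reset"], ["rereduce", [3, 4]]], false);
// Reset handled inside the dispatcher: half the traffic.
const folded = serverLoop(
  [["rereduce", [1, 2]], ["rereduce", [3, 4]]], true);
```

Each message in the real protocol is a blocking read/write on the view server's stdio, which is why folding the reset into the dispatcher could matter at scale.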

      Reduction detection:
      The portion of this issue that has actually affected us is the detection itself. I think it should be changed to require a sampling of data. The easiest fix is to raise the very arbitrary threshold in [2] to a larger length, multiplying it by 10 or so, but any fixed upper bound will eventually be reached by some data set. A better algorithm could take into account the total number of reduce calls so far, as well as the result size of the very first reduce call, to determine whether a large aggregation of some sort is actually being built up.
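The sampling idea above could be sketched roughly as follows. All names here are illustrative, not CouchDB's actual API: the detector records the first call's output size as a baseline and only flags a reduce after enough calls have been observed to distinguish one-time structure initialization from unbounded aggregation.

```javascript
// Hypothetical growth detector: flag only when output size keeps growing
// relative to the first call's baseline, after a minimum sample of calls.
function makeGrowthDetector(sampleSize, growthFactor) {
  let baseline = null;
  let calls = 0;
  return function check(outputLength) {
    calls++;
    if (baseline === null) { baseline = outputLength; return false; }
    if (calls < sampleSize) return false; // not enough data to judge yet
    return outputLength > baseline * growthFactor;
  };
}

const check = makeGrowthDetector(5, 10);
// A reduce whose output stays near its initial size is never flagged...
let flaggedConstant = false;
for (let i = 0; i < 20; i++) flaggedConstant = flaggedConstant || check(300);
// ...while one whose output grows on every call eventually is.
const check2 = makeGrowthDetector(5, 10);
let flaggedGrowing = false;
for (let i = 1; i <= 20; i++) flaggedGrowing = flaggedGrowing || check2(100 * i);
```

Unlike the current fixed 200-byte cutoff, this never penalizes a reduce that legitimately produces a large but constant-size structure.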

      Suggested change:
      Change the signature of the rereduce call into main.js to include the total count of rereduce calls so far, or provide some handle by which this logic could be reliably tracked inside main.js. As it stands, your only hope is to use the reduce function itself as a lookup key, which is not safe in all edge cases. With this information you can record the result size of the initial rereduce call and the total number of calls, and verify that, aside from initial aggregation or structure initialization, the result size remains constant.
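A minimal sketch of the proposed signature change, with hypothetical names (the real runReduce signature differs): the call count is passed alongside the values, so the first call's result size can serve as the baseline instead of a fixed constant.

```javascript
// Hypothetical: rereduce dispatch that receives the running call count and
// compares each result against the size recorded on the first call.
function runRereduce(fun, values, callCount, state) {
  const result = fun(null, values, true);
  const len = JSON.stringify(result).length;
  if (callCount === 1) {
    state.initialLength = len; // remember the first call's result size
  } else if (len > state.initialLength * 2) {
    // Illustrative threshold: result has grown well past the baseline.
    throw new Error("reduce_overflow after call " + callCount);
  }
  return result;
}

const sum = (keys, values, rereduce) => values.reduce((a, b) => a + b, 0);
const state = {};
runRereduce(sum, [1, 2, 3], 1, state);            // establishes the baseline
const out = runRereduce(sum, [10, 20], 2, state); // constant-size result: ok
```

The key point is that the call count must come from main.js itself; keying state off the reduce function source, as is possible today, breaks when two views share identical reduce code.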

      There are several other ways this could be fixed, but I think the approach above is worth considering. I could volunteer my time to write a patch if a consensus can be reached on the behavior, assuming the change is not rejected altogether.


      [2] if (State.query_config && State.query_config.reduce_limit &&
              reduce_length > 200 && ((reduce_length * 2) > State.line_length)) {
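The check in [2] fires whenever a single result exceeds 200 bytes and more than half the input line length, regardless of whether the output has stopped growing. A minimal reproduction, with State mocked up (this is not the real server state object):

```javascript
// Mock of the reduce_limit check from share/server/views.js, with State
// stubbed out so the condition can be exercised in isolation.
function reduceLimitTripped(reduceLength, lineLength) {
  const State = { query_config: { reduce_limit: true }, line_length: lineLength };
  return !!(State.query_config && State.query_config.reduce_limit &&
    reduceLength > 200 && ((reduceLength * 2) > State.line_length));
}

// A 250-byte result from a 400-byte input line trips the limit, even if the
// structure is of constant size from then on.
const tripped = reduceLimitTripped(250, 400);
// The same 250-byte result from a large input batch passes.
const ok = reduceLimitTripped(250, 10000);
```

This is why the threshold feels arbitrary: whether a reduce is flagged depends as much on batch sizing as on the reduce function's actual behavior.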


        Chris Stockton created issue


          • Assignee:
            Chris Stockton
          • Votes:
            0
          • Watchers:
            1
