  Solr / SOLR-659

Explicitly set start and rows per shard for more efficient bulk queries across distributed Solr

Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version: 1.3
    • Fix Version: 1.4
    • Component: search
    • Labels: None

    Description

      The default behavior for start and rows on distributed Solr (SOLR-303) is to set start to 0 on every shard and set rows to start+rows on every shard. This ensures all results are returned for any arbitrary start and rows setting, but during "bulk queries" (where start is incrementally increased and rows is kept constant) the client needs finer control of the per-shard start and rows parameters, because retrieving many thousands of documents becomes intractable as start grows.

      Attaching a patch that adds shards.start and shards.rows parameters. If they are used, the logic that sets rows to start+rows per shard is overridden and each shard gets exactly the start and rows given in shards.start and shards.rows. The client will receive up to shards.rows * nShards results and should set rows accordingly. This makes bulk queries across distributed Solr possible.
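      A minimal sketch (plain Java, no SolrJ dependency; the helper name is hypothetical) of how the i-th block of a bulk query maps onto these parameters: the global start stays 0, the global rows must cover the worst case of shards.rows documents coming back from every shard, and shards.start/shards.rows give each shard its exact window.

      ```java
      import java.util.LinkedHashMap;
      import java.util.Map;

      public class ShardWindow {
          // Build the request parameters for the i-th block of blockSize
          // documents across nShards shards, as described in the patch.
          public static Map<String, String> blockParams(int block, int blockSize, int nShards) {
              Map<String, String> p = new LinkedHashMap<>();
              p.put("start", "0");                                      // global start stays 0
              p.put("rows", String.valueOf(blockSize * nShards));       // worst case: shards.rows docs per shard
              p.put("shards.start", String.valueOf(block * blockSize)); // per-shard offset
              p.put("shards.rows", String.valueOf(blockSize));          // per-shard window size
              return p;
          }
      }
      ```

      For example, with 4 shards and a block size of 30, the third block (block = 2) asks each shard for its rows 60 through 89, and up to 120 documents can come back in one response.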

      Attachments

        1. shards.start_rows.patch
          3 kB
          Brian Whitman
        2. SOLR-659.patch
          3 kB
          Brian Whitman

        Activity

          bwhitman Brian Whitman added a comment -

          Attaching patch.

          bwhitman Brian Whitman added a comment -

          An example of a bulk query using this patch. Without this patch such bulk queries will eventually time out or cause exceptions in the server as too much data is passed back and forth.

          public SolrDocumentList blockQuery(SolrQuery q, int blockSize, int maxResults) {
            SolrDocumentList allResults = new SolrDocumentList();
            if (blockSize > maxResults) { blockSize = maxResults; }
            for (int i = 0; i < maxResults; i += blockSize) {
              // Set rows to the most results that could ever come back: blockSize * the number of shards
              q.setRows(blockSize * getNumberOfHosts());
              // Don't set a start on the main query
              q.setStart(0);
              // But do set start and rows on the individual shards.
              q.set("shards.start", String.valueOf(i));
              q.set("shards.rows", String.valueOf(blockSize));
              // Perform the query.
              QueryResponse sub = query(q);
              // For each returned document (up to blockSize * getNumberOfHosts() of them), append it to the main result
              for (SolrDocument s : sub.getResults()) {
                allResults.add(s);
                // Stop once we've reached our requested limit
                if (allResults.size() >= maxResults) { break; }
              }
              if (allResults.size() >= maxResults) { break; }
            }
            return allResults;
          }
          
          klaasm Mike Klaas added a comment -

          IMO it is too late in the release process for new features.


          otis Otis Gospodnetic added a comment -

          This looks simple enough. I haven't tried it. Brian, do you have a unit test you could attach?

          Or would it make more sense to have a custom QueryComponent for something like this? (I don't know yet)
          bwhitman Brian Whitman added a comment -

          New patch syncs w/ trunk


          shalin Shalin Shekhar Mangar added a comment -

          If I understand this correctly, it makes bulk queries cheaper at the expense of less precise scoring. But if I'm paging through some results and you modify shards.start and shards.rows, then I'll get inconsistent results. Is that correct?

          "The client will receive up to shards.rows * nShards results and should set rows accordingly. This makes bulk queries across distributed solr possible."

          I do not understand that. Why will the client get more than rows? Or by client, did you mean the Solr server to which the initial request is sent?
          yseeley@gmail.com Yonik Seeley added a comment -

          I agree this makes sense to enable efficient bulk operations, and also fits in with a past idea I had about mapping shards.param=foo to param=foo during a sub-request.

          I'll give it a couple of days and commit if there are no objections.

          yseeley@gmail.com Yonik Seeley added a comment -

          Thanks Brian, I just committed this.

          johnsoncr johnson.hong added a comment -

          This is really helpful for bulk queries, but how do you handle pagination of the query results?
          E.g. on the first query I set shards.start to 0 and shards.rows to 30; it may return 50 documents, of which I show 30 and discard the other 20. How do I then get the next 30 documents?

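          One possible answer (an illustrative sketch, not from the thread; fetchBlock is a hypothetical callback that issues the query with shards.start = i * blockSize and shards.rows = blockSize): instead of discarding the overflow, buffer it client-side and only advance shards.start to the next block when the buffer runs dry.

          ```java
          import java.util.ArrayList;
          import java.util.List;
          import java.util.function.IntFunction;

          // Client-side pagination over shards.start/shards.rows blocks.
          // Each fetched block may return up to blockSize * nShards documents;
          // anything beyond the requested page is kept for later pages.
          public class BlockPager {
              private final List<String> buffer = new ArrayList<>();
              private final IntFunction<List<String>> fetchBlock;
              private int nextBlock = 0;

              public BlockPager(IntFunction<List<String>> fetchBlock) {
                  this.fetchBlock = fetchBlock;
              }

              // Returns the next pageSize documents, fetching further blocks
              // only when the buffered overflow is exhausted.
              public List<String> nextPage(int pageSize) {
                  while (buffer.size() < pageSize) {
                      List<String> block = fetchBlock.apply(nextBlock++);
                      if (block.isEmpty()) break; // no more results
                      buffer.addAll(block);
                  }
                  int n = Math.min(pageSize, buffer.size());
                  List<String> page = new ArrayList<>(buffer.subList(0, n));
                  buffer.subList(0, n).clear();
                  return page;
              }
          }
          ```

          Note this trades consistency for efficiency the same way the patch does: if the index changes between block fetches, pages may overlap or skip documents.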

          gsingers Grant Ingersoll added a comment -

          Bulk close for Solr 1.4

          People

            Assignee: Yonik Seeley (yseeley@gmail.com)
            Reporter: Brian Whitman (bwhitman)
            Votes: 1
            Watchers: 1

            Dates

              Created:
              Updated:
              Resolved: