Lucene - Core
LUCENE-6458

MultiTermQuery's FILTER rewrite method should support skipping whenever possible

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 5.2, 6.0
    • Component/s: None
    • Labels:
      None
    • Lucene Fields:
      New

      Description

      Today MultiTermQuery's FILTER rewrite always builds a bit set from all matching terms. This means that we need to consume the entire postings lists of all matching terms. Instead we should try to execute like regular disjunctions when there are few terms.
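
      To make the problem concrete, the current FILTER rewrite behaves roughly like the sketch below: it fully iterates every matching term's postings to populate a bit set, so the resulting iterator can never skip. This is a simplified illustration, not the actual Lucene code; the class and helper names are hypothetical.

      import java.io.IOException;
      import java.util.List;

      import org.apache.lucene.search.DocIdSetIterator;
      import org.apache.lucene.util.FixedBitSet;

      class BitSetRewriteSketch {
        // Hypothetical helper: consumes the entire postings list of each matching term.
        FixedBitSet collect(List<DocIdSetIterator> postingsPerMatchingTerm, int maxDoc) throws IOException {
          FixedBitSet bits = new FixedBitSet(maxDoc);
          for (DocIdSetIterator postings : postingsPerMatchingTerm) {
            for (int doc = postings.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = postings.nextDoc()) {
              bits.set(doc); // every doc of every matching term is visited
            }
          }
          return bits; // downstream consumers iterate the bit set and cannot benefit from skipping
        }
      }

      A disjunction of TermQuery clauses, by contrast, exposes advance() and lets the rest of the query drive skipping.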

      1. LUCENE-6458.patch
        7 kB
        Adrien Grand
      2. LUCENE-6458.patch
        7 kB
        Adrien Grand
      3. LUCENE-6458-2.patch
        4 kB
        Adrien Grand
      4. wikimedium.10M.nostopwords.tasks
        4 kB
        Adrien Grand

        Activity

        Adrien Grand added a comment - edited

        Here is a patch; it is quite similar to the old "auto" rewrite, except that it rewrites per segment and only consumes the filtered terms enum once. Queries are executed as regular disjunctions when there are 50 matching terms or fewer.
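
        Roughly, the per-segment decision looks like the following sketch (hypothetical class and helper names, pre-builder Lucene 5.x BooleanQuery API; not the actual patch): the filtered TermsEnum is walked once, and the query falls back to the bit-set path only when the threshold is exceeded.

        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        import org.apache.lucene.index.Term;
        import org.apache.lucene.index.TermsEnum;
        import org.apache.lucene.search.BooleanClause.Occur;
        import org.apache.lucene.search.BooleanQuery;
        import org.apache.lucene.search.ConstantScoreQuery;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.TermQuery;
        import org.apache.lucene.util.BytesRef;

        class PerSegmentRewriteSketch {
          static final int THRESHOLD = 50; // threshold used by this first patch

          Query rewritePerSegment(String field, TermsEnum filteredTerms) throws IOException {
            List<Term> collected = new ArrayList<>();
            for (BytesRef term = filteredTerms.next(); term != null; term = filteredTerms.next()) {
              if (collected.size() == THRESHOLD) {
                // Too many matching terms: keep the existing bit-set execution,
                // continuing from the same enum so it is only consumed once.
                return buildBitSetQuery(field, collected, filteredTerms);
              }
              collected.add(new Term(field, BytesRef.deepCopyOf(term)));
            }
            // 50 matching terms or fewer: execute this segment as a constant-score disjunction.
            BooleanQuery bq = new BooleanQuery(); // pre-builder API, as in Lucene 5.x
            for (Term t : collected) {
              bq.add(new TermQuery(t), Occur.SHOULD);
            }
            return new ConstantScoreQuery(bq);
          }

          // Hypothetical placeholder for the existing bit-set based execution path.
          Query buildBitSetQuery(String field, List<Term> alreadyCollected, TermsEnum remaining) {
            throw new UnsupportedOperationException("placeholder for the bit-set path");
          }
        }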

                            TaskQPS baseline      StdDev   QPS patch      StdDev                Pct diff
                        Wildcard       76.93      (1.7%)       66.55      (4.1%)  -13.5% ( -18% -   -7%)
                         Prefix3       99.69      (1.8%)       88.80      (2.3%)  -10.9% ( -14% -   -6%)
                       OrHighMed       76.77      (3.7%)       76.26      (3.7%)   -0.7% (  -7% -    6%)
                   OrHighNotHigh       37.88      (1.7%)       37.73      (2.2%)   -0.4% (  -4% -    3%)
                         MedTerm      306.74      (1.4%)      305.54      (1.5%)   -0.4% (  -3% -    2%)
                      OrHighHigh       36.17      (4.5%)       36.05      (4.0%)   -0.3% (  -8% -    8%)
                        HighTerm      120.67      (1.2%)      120.37      (1.7%)   -0.2% (  -3% -    2%)
                 MedSloppyPhrase       36.30      (2.9%)       36.25      (2.8%)   -0.1% (  -5% -    5%)
                          IntNRQ        8.64      (2.9%)        8.63      (2.6%)   -0.1% (  -5% -    5%)
                     LowSpanNear       70.11      (1.8%)       70.13      (2.2%)    0.0% (  -3% -    4%)
                    HighSpanNear       17.55      (2.8%)       17.56      (3.0%)    0.1% (  -5% -    6%)
                    OrHighNotMed       81.45      (1.8%)       81.51      (2.2%)    0.1% (  -3% -    4%)
                       LowPhrase       14.47      (2.7%)       14.50      (3.0%)    0.2% (  -5% -    6%)
                     MedSpanNear      120.55      (2.2%)      120.86      (2.0%)    0.3% (  -3% -    4%)
                     AndHighHigh       58.08      (2.5%)       58.24      (2.6%)    0.3% (  -4% -    5%)
                 LowSloppyPhrase       62.42      (4.3%)       62.60      (4.4%)    0.3% (  -8% -    9%)
                    OrHighNotLow       76.06      (1.9%)       76.36      (2.4%)    0.4% (  -3% -    4%)
                         Respell       72.86      (3.9%)       73.17      (2.9%)    0.4% (  -6% -    7%)
                   OrNotHighHigh       50.07      (1.5%)       50.30      (1.2%)    0.5% (  -2% -    3%)
                HighSloppyPhrase       24.92      (6.4%)       25.05      (6.5%)    0.5% ( -11% -   14%)
                       OrHighLow       68.75      (4.6%)       69.17      (4.1%)    0.6% (  -7% -    9%)
                      HighPhrase       20.89      (2.5%)       21.04      (1.8%)    0.7% (  -3% -    5%)
                    OrNotHighMed      179.02      (1.9%)      180.37      (1.4%)    0.8% (  -2% -    4%)
                        PKLookup      263.21      (2.8%)      265.42      (3.0%)    0.8% (  -4% -    6%)
                       MedPhrase       34.60      (3.6%)       34.94      (3.4%)    1.0% (  -5% -    8%)
                         LowTerm      780.71      (3.2%)      790.04      (4.2%)    1.2% (  -5% -    8%)
                    OrNotHighLow     1459.46      (3.5%)     1480.76      (5.0%)    1.5% (  -6% -   10%)
                      AndHighMed      255.15      (2.6%)      258.93      (2.4%)    1.5% (  -3% -    6%)
                          Fuzzy1       77.69      (8.9%)       79.12      (7.7%)    1.8% ( -13% -   20%)
                      AndHighLow      961.32      (3.9%)      980.23      (3.5%)    2.0% (  -5% -    9%)
                          Fuzzy2       24.48      (7.9%)       25.19      (7.4%)    2.9% ( -11% -   19%)
        

        I was hoping it would kick in for numeric range queries but unfortunately they often need to match hundreds of terms. I'm wondering if it would be different for auto-prefix.

        Prefix3 and Wildcard are a bit slower because they now actually get executed as regular disjunctions. I think the slowdown is fair given that this approach also requires less memory and provides true skipping support (which the benchmark doesn't exercise).

        David Smiley added a comment -

        Adrien, how did you arrive at BOOLEAN_REWRITE_THRESHOLD=50? This reminds me of when I was working on the Solr "Terms" QParser, which supports 3-4 different options, including BooleanQuery & TermsQuery. I wanted to have it automatically use a BooleanQuery at a low term threshold, but I wasn't sure what to use so I didn't bother, and I didn't have time to do benchmarks then. In hindsight, any hunch value (64?) would have been better than always choosing TermsQuery no matter what. I have a feeling that the appropriate threshold is a function of the number of indexed terms, instead of just a constant.

        Adrien Grand added a comment -

        I did some benchmarking and higher values (several hundred) could make luceneutil run slower, so I ended up with the same value that we are currently using for FuzzyQuery's max expansions.

        This reminds me of when I was working on the Solr "Terms" QParser that supports 3-4 different options, to include BooleanQuery & TermsQuery

        Maybe we could change TermsQuery.rewrite to rewrite to a boolean query (wrapped in a CSQ) when there are few terms? This would avoid having to worry about this in every query parser.
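
        Something like the following sketch, assuming Lucene 5.x query classes; the class shape and the threshold are illustrative placeholders, not TermsQuery's actual rewrite:

        import java.io.IOException;
        import java.util.List;

        import org.apache.lucene.index.IndexReader;
        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.BooleanClause.Occur;
        import org.apache.lucene.search.BooleanQuery;
        import org.apache.lucene.search.ConstantScoreQuery;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.TermQuery;

        // Hypothetical TermsQuery-like query that decides in rewrite() whether to
        // become a constant-score disjunction for small term sets.
        class SketchTermsQuery extends Query {
          static final int MAX_BOOLEAN_REWRITE = 16; // illustrative threshold
          private final List<Term> terms;

          SketchTermsQuery(List<Term> terms) { this.terms = terms; }

          @Override
          public Query rewrite(IndexReader reader) throws IOException {
            if (terms.size() <= MAX_BOOLEAN_REWRITE) {
              BooleanQuery bq = new BooleanQuery(); // pre-builder Lucene 5.x API
              for (Term t : terms) {
                bq.add(new TermQuery(t), Occur.SHOULD);
              }
              return new ConstantScoreQuery(bq); // the CSQ keeps constant scores, like TermsQuery
            }
            return this; // large term sets keep the doc-id-set based execution
          }

          @Override
          public String toString(String field) {
            return "SketchTermsQuery(" + terms.size() + " terms)";
          }
        }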

        I have a feeling that the appropriate threshold is a function of the number of indexed terms, instead of just a constant.

        Hmm, what makes you think so? In my opinion, the issue with rewriting to a BooleanQuery is that its scorer needs to rebalance the priority queue whenever it advances, which is O(log(#clauses)). So it gets slower as you add optional clauses, while the way TermsQuery works doesn't care much about the number of matching terms. I don't think the total number of indexed terms is relevant?
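
        To illustrate the cost (a toy example using java.util.PriorityQueue, not Lucene's DisjunctionScorer): each step of a disjunction pops the clause positioned on the smallest doc and pushes it back after advancing it, so every match pays a heap rebalancing that grows logarithmically with the number of clauses.

        import java.util.PriorityQueue;

        public class ToyDisjunction {
          static class Clause {
            final int[] docs;
            int pos = 0;
            Clause(int... docs) { this.docs = docs; }
            int doc() { return pos < docs.length ? docs[pos] : Integer.MAX_VALUE; }
          }

          public static void main(String[] args) {
            PriorityQueue<Clause> pq = new PriorityQueue<>((a, b) -> Integer.compare(a.doc(), b.doc()));
            pq.add(new Clause(1, 5, 9));
            pq.add(new Clause(2, 5, 7));
            pq.add(new Clause(3, 9));

            int lastDoc = -1;
            while (pq.peek().doc() != Integer.MAX_VALUE) {
              Clause top = pq.poll();    // O(log #clauses) to remove the smallest clause
              int doc = top.doc();
              if (doc != lastDoc) {      // emit each matching doc only once
                System.out.println("match doc " + doc);
                lastDoc = doc;
              }
              top.pos++;                 // advance this clause to its next doc
              pq.add(top);               // O(log #clauses) to restore heap order
            }
          }
        }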

        David Smiley added a comment -

        Maybe we could change TermsQuery.rewrite to rewrite to a boolean query (wrapped in a CSQ) when there are few terms? This would avoid having to worry about this in every query parser.

        You read my mind; I had that thought right after I posted.

        In my opinion, the issue with rewriting to a BooleanQuery is that its scorer needs to rebalance the priority queue whenever it advances, which is O(log(#clauses)). So it gets slower as you add new optional clauses while the way TermsQuery works doesn't care much about the number of matching terms. I don't think the total number of index terms is relevant?

        Ah; I haven't looked in a while, so I didn't know about the priority queue over there in BooleanQuery (DisjunctionScorer, actually). Never mind.

        Adrien Grand added a comment -

        I did some more benchmarking of the change with filters (see attached tasks file) and various thresholds (and a fixed seed):

        Threshold 16
                            TaskQPS baseline      StdDev   QPS patch      StdDev                Pct diff
                             MTQ       24.33      (7.5%)       20.67      (7.3%)  -15.1% ( -27% -    0%)
                          IntNRQ       20.38      (7.3%)       17.85     (11.9%)  -12.4% ( -29% -    7%)
                       IntNRQ_50        8.94     (10.1%)        8.67      (8.6%)   -3.0% ( -19% -   17%)
                          MTQ_50        9.05      (7.9%)        8.93      (5.3%)   -1.3% ( -13% -   12%)
                       IntNRQ_10       13.72     (12.7%)       13.60     (11.9%)   -0.9% ( -22% -   27%)
                        IntNRQ_1       17.53     (17.1%)       17.53     (16.3%)    0.0% ( -28% -   40%)
                          MTQ_10       13.70     (11.2%)       13.89      (8.7%)    1.4% ( -16% -   23%)
                           MTQ_1       19.11     (15.8%)       21.43     (18.0%)   12.1% ( -18% -   54%)
        
        Threshold 64
                            TaskQPS baseline      StdDev   QPS patch      StdDev                Pct diff
                          IntNRQ       20.53      (6.9%)       16.42      (5.3%)  -20.0% ( -30% -   -8%)
                             MTQ       24.31      (7.3%)       20.34      (6.4%)  -16.3% ( -27% -   -2%)
                       IntNRQ_50        8.87      (9.2%)        8.31      (6.5%)   -6.3% ( -20% -   10%)
                       IntNRQ_10       13.55     (12.7%)       12.80     (10.2%)   -5.6% ( -25% -   19%)
                        IntNRQ_1       17.27     (16.3%)       16.38     (13.1%)   -5.2% ( -29% -   28%)
                          MTQ_50        9.00      (7.6%)        9.02      (4.5%)    0.3% ( -10% -   13%)
                          MTQ_10       13.65     (11.1%)       14.73      (8.2%)    7.9% ( -10% -   30%)
                           MTQ_1       18.95     (15.1%)       25.32     (17.2%)   33.6% (   1% -   77%)
        
        Threshold 256
                            TaskQPS baseline      StdDev   QPS patch      StdDev                Pct diff
                          IntNRQ       20.43      (9.4%)       12.69      (1.7%)  -37.9% ( -44% -  -29%)
                             MTQ       24.13      (9.3%)       19.32      (5.3%)  -19.9% ( -31% -   -5%)
                        IntNRQ_1       17.21     (19.5%)       13.90      (7.7%)  -19.2% ( -38% -    9%)
                       IntNRQ_10       13.49     (12.7%)       10.95      (5.7%)  -18.8% ( -33% -    0%)
                       IntNRQ_50        8.85     (10.5%)        7.40      (3.8%)  -16.4% ( -27% -   -2%)
                          MTQ_50        8.94      (8.3%)        8.82      (4.4%)   -1.3% ( -12% -   12%)
                          MTQ_10       13.53     (12.6%)       14.64      (5.9%)    8.2% (  -9% -   30%)
                           MTQ_1       18.88     (15.6%)       26.52     (14.2%)   40.5% (   9% -   83%)
        
        Threshold 1024
                            TaskQPS baseline      StdDev   QPS patch      StdDev                Pct diff
                          IntNRQ       20.40      (7.7%)        6.54      (1.5%)  -67.9% ( -71% -  -63%)
                        IntNRQ_1       17.57     (17.2%)        8.27      (2.9%)  -52.9% ( -62% -  -39%)
                       IntNRQ_10       13.66     (13.0%)        6.72      (2.4%)  -50.8% ( -58% -  -40%)
                       IntNRQ_50        8.96     (10.4%)        5.01      (1.5%)  -44.1% ( -50% -  -35%)
                             MTQ       24.41      (8.2%)       18.07      (4.4%)  -26.0% ( -35% -  -14%)
                          MTQ_50        9.05      (8.1%)        8.65      (3.5%)   -4.5% ( -14% -    7%)
                          MTQ_10       13.60     (11.5%)       14.41      (3.9%)    6.0% (  -8% -   24%)
                           MTQ_1       19.11     (15.6%)       27.32     (12.9%)   43.0% (  12% -   84%)
        

        Rewriting to a BooleanQuery never helps when there is no filter, but something that the benchmark doesn't capture is that at least BooleanQuery does not allocate O(maxDoc) memory, which can matter for large datasets.

        When there are filters, it's more complicated: it depends on the density of the filter, on the number of terms, and also apparently on how the frequencies of the different terms compare (this is my current theory for why WildcardQuery performs better than NRQ).

        Net/net, I think this validates that 64 would be a good threshold to rewrite, with minimal slowdown when filters are dense and interesting speedups when filters are sparse?

        Adrien Grand added a comment -

        If there are no objections, I plan on committing the patch with a threshold of 16 to be on the safe side, and we can revisit later if we want to increase it.

        David Smiley added a comment -

        +1. And patch looks good.

        ASF subversion and git services added a comment -

        Commit 1680468 from Adrien Grand in branch 'dev/trunk'
        [ https://svn.apache.org/r1680468 ]

        LUCENE-6458: Multi-term queries matching few terms per segment now execute like a disjunction.

        ASF subversion and git services added a comment -

        Commit 1680474 from Adrien Grand in branch 'dev/branches/branch_5x'
        [ https://svn.apache.org/r1680474 ]

        LUCENE-6458: Multi-term queries matching few terms per segment now execute like a disjunction.

        Adrien Grand added a comment -

        Committed, thanks David for the feedback/reviews!

        Adrien Grand added a comment -

        I just noticed that while my changes helped rewrite to a BooleanQuery and use BS2 (BooleanScorer2) whenever possible, they never propagate BS1 (BooleanScorer) as a bulk scorer.

        Adrien Grand added a comment -

        Here is a simple patch that also propagates the bulk scorer.

        David Smiley added a comment -

        +1 Patch looks good.

        ASF subversion and git services added a comment -

        Commit 1680893 from Adrien Grand in branch 'dev/trunk'
        [ https://svn.apache.org/r1680893 ]

        LUCENE-6458: Propagate the bulk scorer too.

        ASF subversion and git services added a comment -

        Commit 1680894 from Adrien Grand in branch 'dev/branches/branch_5x'
        [ https://svn.apache.org/r1680894 ]

        LUCENE-6458: Propagate the bulk scorer too.

        Anshum Gupta added a comment -

        Bulk close for 5.2.0.


          People

          • Assignee:
            Adrien Grand
          • Reporter:
            Adrien Grand
          • Votes:
            0
          • Watchers:
            4
