Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.9.4, 3.0.3, 3.1, 4.0-ALPHA
    • Fix Version/s: 3.1, 4.0-ALPHA
    • Component/s: None
    • Labels:
      None
    • Lucene Fields:
      New

      Description

      I've faced this problem recently. I'll attach a program to reproduce the problem soon. The program outputs the following:

      ** score = 0.10003257
      ** explain
      0.050016284 = (MATCH) product of:
        0.15004885 = (MATCH) sum of:
          0.15004885 = weight(f1:"note book" in 0), product of:
            0.3911943 = queryWeight(f1:"note book"), product of:
              0.61370564 = idf(f1: note=1 book=1)
              0.6374299 = queryNorm
            0.38356602 = fieldWeight(f1:"note book" in 0), product of:
              1.0 = tf(phraseFreq=1.0)
              0.61370564 = idf(f1: note=1 book=1)
              0.625 = fieldNorm(field=f1, doc=0)
        0.33333334 = coord(1/3)
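As a sanity check, the arithmetic inside the explanation tree is internally consistent; the discrepancy is between the reported score (0.10003257) and the explanation's total (0.050016284), which happens to be exactly half of it. A quick plain-Java check with the leaf values copied from the tree above:

```java
public class ExplainArithmetic {
    public static void main(String[] args) {
        // Leaf values copied verbatim from the explanation tree above.
        float idf       = 0.61370564f; // idf(f1: note=1 book=1)
        float queryNorm = 0.6374299f;
        float tf        = 1.0f;        // tf(phraseFreq=1.0)
        float fieldNorm = 0.625f;      // fieldNorm(field=f1, doc=0)
        float coord     = 1.0f / 3.0f; // coord(1/3)

        float queryWeight = idf * queryNorm;           // queryWeight(f1:"note book")
        float fieldWeight = tf * idf * fieldNorm;      // fieldWeight(f1:"note book" in 0)
        float weight      = queryWeight * fieldWeight; // weight(f1:"note book" in 0)
        float total       = weight * coord;            // top-level product

        System.out.println("explain total  = " + total);
        System.out.println("reported score = " + 0.10003257f);
    }
}
```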
      
      1. LUCENE-2936.patch
        6 kB
        Hoss Man
      2. LUCENE-2936.test.patch
        4 kB
        Hoss Man
      3. LUCENE-2936.patch
        1 kB
        Robert Muir
      4. LUCENE-2936.patch
        0.5 kB
        Robert Muir
      5. TestScore.java
        2 kB
        Koji Sekiguchi

        Activity

        Grant Ingersoll added a comment -

        Bulk close for 3.1

        Hoss Man added a comment -

        Hoss's assumption is correct because the problem was found at a customer site when they used a BooleanQuery containing PhraseQueries (term queries on 1-gram fields)

        sure .. but to be clear: the crux of the issue was really a bug in PhraseQuery – it wasn't correctly identifying whether there was a match/non-match (independently of whether the score was zero/non-zero).

        the reason my initial attempt at exposing the bug (by setting an index boost of 0 on some of the fields) didn't cause any of the existing TestSimpleExplanations.testP* methods to fail was because of the bugs I mentioned in CheckHits (it was naively looking at the score to verify whether an explanation was for a match/non-match), so the bug didn't show up in the testP* methods until I fixed that. It did show up in the "testMultiFieldBQofPQ*" methods I added, because BooleanQuery's explain logic was actually modifying the score of its explanation based on the match value of the explanations for the sub-queries (because of coord).

        Hoss Man added a comment -

        fixed in trunk and 3x...

        Committed revision 1074357.
        Committed revision 1074363.

        Koji Sekiguchi added a comment -

        Patch looks great.

        So then i expanded the patch to also include BooleanQueries containing phrase queries on multiple fields, and then i was finally able to make TestSimpleExplanations fail in ways similar to what Koji is seeing.

        Hoss's assumption is correct because the problem was found at a customer site when they used a BooleanQuery containing PhraseQueries (term queries on 1-gram fields).

        Robert Muir added a comment -

        would appreciate a review so we can get this into 3.1

        +1 to the patch.

        Hoss Man added a comment -

        This supersedes my "test" patch by actually including the fix as well.

        while investigating this, I also discovered some heinous bugs in CheckHits where match and non-match were naively being asserted using the explanation value instead of actually checking "isMatch" (which explained why some of my earlier attempts at testing for this bug in PhraseQuery's explain didn't surface the problem)

        There may still be other situations where a 0 fieldNorm screws up a score explanation, but this should fix the ones we know about.

        would appreciate a review so we can get this into 3.1

        Robert Muir added a comment -
        I don't understand 95% of what yonik & robert have been saying in this issue – but fortunately I don't think that matters – it sounds like they both agree that what they are talking about is unrelated to this bug.

        Sorry for the confusion, my issues were actually a separate (corner-case) bug, which yonik fixed already in LUCENE-2937. That bug would cause you to have a field boost of 0 in some very very rare/likely-to-have-never-happened-with-our-default-sim cases when you shouldn't.

        So for this issue we can address the explains for when you explicitly set a boost of zero.

        Hoss Man added a comment -

        I don't understand 95% of what yonik & robert have been saying in this issue – but fortunately I don't think that matters – it sounds like they both agree that what they are talking about is unrelated to this bug.

        having said that: once I looked at Koji's example program, I tried making a simple one-line change to TestExplanations to set a field boost of "0" on some docs, expecting that to trigger failures in TestSimpleExplanations – but it did not (it did, however, trigger lots of failures in TestComplexExplanations).

        So I expanded TestSimpleExplanations to include more cases where BooleanQueries included clauses in multiple fields – one of which had gotten an index-time field boost of zero for some documents – and I still couldn't make TestSimpleExplanations fail.

        So then I expanded the patch to also include BooleanQueries containing phrase queries on multiple fields, and then I was finally able to make TestSimpleExplanations fail in ways similar to what Koji is seeing.

        Patch with changes to tests to reproduce the bug is attached

        Hoss Man added a comment -

        Isn't that a bug?

        We fixed our search code to not discard negative scores, so explain should also handle that?

        I haven't read the whole issue, but as I recall this is just a default assumption in the legacy base class; most use cases should be using the "Complex" subclass, where the match property can be set explicitly.

        Robert Muir added a comment -

        Thanks, so the bug is really my imagination (as SmallFloat is designed to generally handle this; it's just a corner case I produced in a test).

        So, if we can fix SmallFloat my gross hack is not necessary, as we would then only produce byte-0 norms in the case of a zero boost... we can discuss if anything needs to be done there (in my opinion, it's not serious; I am only concerned about non-zero floats quantizing to a zero byte).

        and then separately we've got to figure out the explains: in my opinion, whether or not a document was matched by a query is unrelated to the score...

        Yonik Seeley added a comment -

        OK, although underflow is generally handled, Robert found at least one case where it doesn't work.

        System.out.println(SmallFloat.floatToByte315(5.8123817E-10f));

        That prints 0 for some reason. I'll see if I can figure out why.
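For reference, a self-contained sketch of how this can happen (a hypothetical reconstruction, not the actual Lucene source — the real method is SmallFloat.floatToByte315): with 3 mantissa bits and a zero exponent of 15, if the underflow check uses a strict '<' against the zero point, a positive float whose truncated bits land exactly on that boundary falls through and encodes as byte 0 instead of the smallest non-zero byte:

```java
public class SmallFloatSketch {
    // Zero point for 3 mantissa bits, zero exponent 15 (the "315" layout).
    static final int FZERO = (63 - 15) << 3; // 384

    // Hypothetical pre-fix version: strict '<' misses the boundary case.
    static byte floatToByte315Buggy(float f) {
        int bits = Float.floatToRawIntBits(f);
        int smallfloat = bits >> (24 - 3); // keep sign, exponent, top 3 mantissa bits
        if (smallfloat < FZERO) {
            return (bits <= 0) ? (byte) 0   // negative numbers and zero map to byte 0
                               : (byte) 1;  // underflow maps to smallest non-zero byte
        }
        if (smallfloat >= FZERO + 0x100) {
            return -1;                      // overflow clamps to the largest byte
        }
        return (byte) (smallfloat - FZERO); // boundary value encodes as byte 0 here!
    }

    // With '<=', the boundary value counts as underflow and maps to byte 1.
    static byte floatToByte315Fixed(float f) {
        int bits = Float.floatToRawIntBits(f);
        int smallfloat = bits >> (24 - 3);
        if (smallfloat <= FZERO) {
            return (bits <= 0) ? (byte) 0 : (byte) 1;
        }
        if (smallfloat >= FZERO + 0x100) {
            return -1;
        }
        return (byte) (smallfloat - FZERO);
    }

    public static void main(String[] args) {
        float f = 5.8123817E-10f; // the value from the comment above
        System.out.println("buggy: " + floatToByte315Buggy(f)); // 0 — positive float lost
        System.out.println("fixed: " + floatToByte315Fixed(f)); // 1 — smallest non-zero byte
    }
}
```

5.8123817E-10f sits just above 2^-31, so its top bits truncate to exactly the zero point (384) and the strict comparison sends it down the wrong branch.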

        Yonik Seeley added a comment -

        I'm not sure why this is a quantization issue... SmallFloat handles underflow by mapping to the smallest number greater than 0, so the only way to get a 0 field norm is an explicit boost of 0.

        Anyway, if we want to discard the explicit representation of 0, some of the code that handles these edge cases should also be modified:

            if (smallfloat < fzero) {
              return (bits<=0) ?
                (byte)0   // negative numbers and zero both map to 0 byte
               :(byte)1;  // underflow is mapped to smallest non-zero number.
        
        Robert Muir added a comment -

        Isn't that a bug?
        We fixed our search code to not discard negative scores, so explain should also handle that?

        I agree this is a bad assumption really (although subclasses can override). I am just concerned about what explains will 'break' if we fix this.

        But still I think the float quantization issue (really the root cause of this problem) should be fixed... ideally we fix both!

        Uwe Schindler added a comment -

        This line of code was also one of the first things I was thinking about.

        I also think this is a bug in explain!

        Yonik Seeley added a comment -
          public boolean isMatch() {
            return (0.0f < getValue());
          }
        

        Isn't that a bug?
        We fixed our search code to not discard negative scores, so explain should also handle that?

        Robert Muir added a comment -

        Here's a new patch...

        apparently there was code in SmallFloat to specifically do this. This simply removes the bug.

        I would like to commit soon.

        Robert Muir added a comment -

        Hi Uwe: you are right! obviously this one needs a comment, but this is the idea:

        norm[0]=4.656613E-10
        norm[1]=5.820766E-10
        norm[2]=6.9849193E-10
        norm[3]=8.1490725E-10
        
        Uwe Schindler added a comment -

        Robert: In your patch: Why use exactly that float (looks arbitrary) and not e.g. Float.MIN_VALUE (of course not NEGATIVE_INFINITY!)?

        Doesn't matter, just want to understand.

        Robert Muir added a comment -

        Sorry, here is the correct patch (I had issue dyslexia)

        Robert Muir added a comment -

        here's a patch along the lines I suggested.

        Robert Muir added a comment -

        Koji: the issue is the document boost of zero.

        because of this, the explanation does not indicate a match by default (see Explanation.java):

          /**
           * Indicates whether or not this Explanation models a good match.
           *
           * <p>
           * By default, an Explanation represents a "match" if the value is positive.
           * </p>
           * @see #getValue
           */
          public boolean isMatch() {
            return (0.0f < getValue());
          }
        

        Separately, we should decide what to do about norm values of zero. In my opinion, norm values of zero should not necessarily decode to a floating point value of zero (we might want to adjust our norm decoder by default to not do this).

        Otherwise, in addition to your problem, the search degrades into a pure boolean ranking model (as TF and IDF are completely zeroed out).

        This is really unlikely with the default relevance ranking (unless you use a boost of zero or similar), but is possible e.g. if you use a different SmallFloat quantization. I raised this on LUCENE-1360: if you were to use this "short field" quantization on a large document, what should we do?

        So in my opinion, we should consider adjusting the NORM_TABLE in Similarity so that if the norm happens to be zero, it does not decode to a float of zero. This will have no impact on performance, as it's a statically calculated table.
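A minimal sketch of the interaction described above (hypothetical stand-in classes, not the real Lucene source): the phrase genuinely occurred in the document (tf > 0), but a field norm that decodes to 0 zeroes the whole score, and the value-based default isMatch() then reports a non-match:

```java
public class ZeroNormMatch {
    // Simplified stand-in for Lucene's Explanation, using the default
    // heuristic quoted above: positive value == match.
    static class Explanation {
        final float value;
        Explanation(float value) { this.value = value; }
        float getValue() { return value; }
        boolean isMatch() { return 0.0f < getValue(); }
    }

    public static void main(String[] args) {
        float tf = 1.0f;          // the phrase DID occur in the document
        float idf = 0.61370564f;  // idf value borrowed from the description
        float fieldNorm = 0.0f;   // boost of 0 -> norm byte 0 -> decodes to 0.0f

        Explanation e = new Explanation(tf * idf * fieldNorm);

        System.out.println("value   = " + e.getValue()); // 0.0
        System.out.println("isMatch = " + e.isMatch());  // false, despite a real match
    }
}
```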


          People

          • Assignee:
            Hoss Man
            Reporter:
            Koji Sekiguchi
          • Votes:
            0
            Watchers:
            0
