Lucene - Core
LUCENE-1279

RangeQuery and RangeFilter should use collation to check for range inclusion

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.3.1
    • Fix Version/s: 2.4
    • Component/s: core/search
    • Labels:
      None

      Description

      See this java-user discussion of problems caused by Unicode code-point comparison, instead of collation, in RangeQuery.

      RangeQuery could take in a Locale via a setter, which could be used with a java.text.Collator and/or CollationKey's, to handle ranges for languages which have alphabet orderings different from those in Unicode.
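      To illustrate the problem described above, the following minimal sketch (plain java.text, not Lucene code; the lower/upper/term values are hypothetical) shows how a code-point comparison and a locale-aware Collator can disagree about range inclusion:

```java
import java.text.Collator;
import java.util.Locale;

public class CollationVsCodePoint {
    public static void main(String[] args) {
        // By Unicode code points, every uppercase Latin letter sorts
        // before every lowercase one: 'B' (U+0042) < 'a' (U+0061).
        System.out.println("code point: " + ("a".compareTo("B") > 0)); // true: "a" > "B"

        // A locale-aware Collator treats case as a minor difference,
        // so "a" sorts before "B", matching dictionary order.
        Collator collator = Collator.getInstance(Locale.US);
        System.out.println("collated:   " + (collator.compare("a", "B") < 0)); // true: "a" < "B"

        // A range check done with the Collator instead of String.compareTo
        // (hypothetical bounds and term, inclusive on both ends):
        String lower = "apple", upper = "Cherry", term = "banana";
        boolean inRange = collator.compare(lower, term) <= 0
                       && collator.compare(term, upper) <= 0;
        System.out.println("in range:   " + inRange); // true under US collation
    }
}
```

      Note that a code-point check would exclude "banana" from [apple TO Cherry], since 'b' (U+0062) sorts after 'C' (U+0043).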

      1. LUCENE-1279.patch
        93 kB
        Steve Rowe
      2. LUCENE-1279.patch
        93 kB
        Steve Rowe
      3. LUCENE-1279.patch
        59 kB
        Steve Rowe
      4. LUCENE-1279.patch
        13 kB
        Steve Rowe

        Issue Links

          Activity

          Steve Rowe added a comment -

          Hoss wrote:

          4) when i first saw the thread that spawned this issue, my first reaction was to wonder if it would make sense to start allowing a Collator to be specified when indexing, and to use the raw bytes from the CollationKey as the indexed value - I haven't thought it through very hard, but i wonder if that would be feasible (it seems like it would certainly be faster at query time, since it would allow more traditional term skipping).

          See LUCENE-1435, which is an implementation of this idea.

          Grant Ingersoll added a comment -

          Committed revision 696056.

          Grant Ingersoll added a comment -

          I think the problem is that every single index term has to be converted to a CollationKey for every single (range) search.

          Yes, agreed. The question mainly is would that be faster than the String comparisons. Basically, is a construction plus a bitwise compare faster than a string compare?

          Languages, in some cases using the same character repertoire, define different orderings. Also, I believe some orderings are context dependent - you can't always compare character by character. So adding this stuff to Lucene would be to duplicate a lot of the stuff that's already done in the Collator.

          Makes sense, was just wondering if there were some shortcuts to be had since we have a very particular case and I was thinking maybe it would allow us to narrow down the range to search.

          For instance, hypothetically speaking, say your field had a full range of words starting with A up to Z, but you knew the ordering problem only occurred between L and P, and that your lower and upper terms were K and Q; then you could feel confident skipping to K and stopping at Q w/o any ramifications. I realize this is repeating what is in the Collator, but it would be nice if the collator exposed the info. However, perhaps, if using a RuleBasedCollator, the getRules() method could be used to optimize. Again, just thinking out loud, I haven't explored it.

          I agree, this should still go forward, even as is.

          Steve Rowe added a comment -

          from the Collator javadocs:

          When sorting a list of Strings however, it is generally necessary to compare each String multiple times. In this case, CollationKeys provide better performance. The CollationKey class converts a String to a series of bits that can be compared bitwise against other CollationKeys. A CollationKey is created by a Collator object for a given String.

          I don't think we need to implement this now, but I wonder if there is a performance difference if we created the CollationKey for comparison. The big question is whether the construction of that for each term outweighs the savings by repeated comparisons to lower and upper.

          I think the problem is that every single index term has to be converted to a CollationKey for every single (range) search. In an earlier comment on this issue, Hoss said:

          4) when i first saw the thread that spawned this issue, my first reaction was to wonder if it would make sense to start allowing a Collator to be specified when indexing, and to use the raw bytes from the CollationKey as the indexed value - I haven't thought it through very hard, but i wonder if that would be feasible (it seems like it would certainly be faster at query time, since it would allow more traditional term skipping).

          I'm working on a utility class to store arbitrary binary in sortable, indexable Strings, so that CollationKeys can be stored in the index. IMHO, though, this issue should still go forward.

          One more question, and it probably shows my lack of knowledge here, but would it be possible to enumerate the various codepoints where there are problems and just handle these separately, somehow? Basically, how pervasive is the problem? Would we perhaps be better off having a check to see if one of these bad codepoints falls in the range of lower/upper and then handle it separately?

          Languages, in some cases using the same character repertoire, define different orderings. Also, I believe some orderings are context dependent - you can't always compare character by character. So adding this stuff to Lucene would be to duplicate a lot of the stuff that's already done in the Collator.

          Steve Rowe added a comment -

          Seems like the new tests in TestRangeFilter still use Hits.

          Thanks, I missed those - this new patch removes Hits usages; also, switched a few deprecated Field.Index.UN_TOKENIZED usages to NOT_ANALYZED.

          All tests pass.

          Grant Ingersoll added a comment -

          Mostly this consisted of switching away from deprecated Hits in tests.

          Seems like the new tests in TestRangeFilter still use Hits.

          Also, from the Collator javadocs:

          When sorting a list of Strings however, it is generally necessary to compare each String multiple times. In this case, CollationKeys provide better performance. The CollationKey class converts a String to a series of bits that can be compared bitwise against other CollationKeys. A CollationKey is created by a Collator object for a given String.

          I don't think we need to implement this now, but I wonder if there is a performance difference if we created the CollationKey for comparison. The big question is whether the construction of that for each term outweighs the savings by repeated comparisons to lower and upper.
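          The javadoc advice quoted above can be sketched as follows (plain java.text, not Lucene code; the bound and term strings are hypothetical). The idea is that building a CollationKey once per string replaces repeated full collation passes with cheap bitwise comparisons:

```java
import java.text.CollationKey;
import java.text.Collator;
import java.util.Locale;

public class CollationKeyDemo {
    public static void main(String[] args) {
        Collator collator = Collator.getInstance(Locale.US);

        // One-time cost per endpoint: build and cache the keys.
        CollationKey lower = collator.getCollationKey("apple");
        CollationKey upper = collator.getCollationKey("Cherry");

        // Per index term: one key construction, then two cheap
        // comparisons against the cached endpoint keys.
        for (String term : new String[] {"APRICOT", "banana", "date"}) {
            CollationKey key = collator.getCollationKey(term);
            boolean inRange = lower.compareTo(key) <= 0 && key.compareTo(upper) <= 0;
            System.out.println(term + " in range: " + inRange); // true, true, false
        }

        // The key's byte form is what could be stored in the index
        // (the LUCENE-1435 idea) so that plain term ordering matches
        // collation order.
        System.out.println("key bytes: " + lower.toByteArray().length);
    }
}
```

          Whether the per-term key construction pays for itself depends on how many comparisons are done per term, which is the open question in the comments above.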

          One more question, and it probably shows my lack of knowledge here, but would it be possible to enumerate the various codepoints where there are problems and just handle these separately, somehow? Basically, how pervasive is the problem? Would we perhaps be better off having a check to see if one of these bad codepoints falls in the range of lower/upper and then handle it separately? Or, perhaps, some reasoning would allow us to better narrow in on the lowerTerm/upper instead of having to check the whole field. Just thinking out loud...

          Otherwise, looks pretty good.

          Steve Rowe added a comment -

          Updated to current trunk revision (694771). Mostly this consisted of switching away from deprecated Hits in tests.

          Also, I used JavaCC 4.1 to regenerate QueryParser.java et al., and it looks like all of the files in the o.a.l.queryParser package have been changed - apparently the last time they were generated, JavaCC 4.0 was used.

          All tests pass for me (except TestIndexReaderReopen.testThreadSafety(), which I just posted to java-dev about, and which should be completely unrelated to this issue).

          Grant Ingersoll added a comment -

          Steve, can you update this for trunk (assuming SVN is working)?

          Michael McCandless added a comment -

          Grant, what's the game plan on this one?

          Steve Rowe added a comment -

          Rewritten patch, with the collating range functionality now added to existing classes RangeQuery and RangeFilter.

          All tests pass.

          Steve Rowe added a comment - edited

          Hmmm... excellent point. you convinced me.

          Okay. At your (previous) suggestion, I have redone the patch (will attach shortly), moving the collating stuff into RangeQuery and RangeFilter, with enabling bits in QueryParser and ConstantScoreRangeQuery. I put WARNING text in the javadoc for each method that invokes the expensive index Term iteration, so hopefully that will give pause to those who might otherwise unwittingly slow things down.

          BTW: if hooks for CollatingRangeQuery are added to QueryParser, it shouldn't use this class just because a Locale is specified - that would cause some unexpected results for people who have been specifying a Locale for date reasons. a new "setter" would need to indicate when to pay attention to Collation.

          I added a new setter to QueryParser for this purpose: setRangeCollator(Collator).

          Hoss Man added a comment -

          I thought I did that

          my bad. i misread.

          since the performance impact could be large compared to a standard RangeQuery, I thought it made more sense to put it where it couldn't be used accidentally

          Hmmm... excellent point. you convinced me.

          BTW: if hooks for CollatingRangeQuery are added to QueryParser, it shouldn't use this class just because a Locale is specified – that would cause some unexpected results for people who have been specifying a Locale for date reasons. a new "setter" would need to indicate when to pay attention to Collation.

          Steve Rowe added a comment -

          1) you should be able to at least start the enumerator by skipping to a term consisting of the lowerTermField and the termText of "" ... even if the Collation of the term text is random, you still know which field you want.

          I thought I did that - from the patch:

              TermEnum enumerator = reader.terms(new Term(getField(), ""));
              ...
            public String getField() {
              return (lowerTerm != null ? lowerTerm.field() : upperTerm.field());
            }
          

          2) why can a collator only be specified by a Locale, why not just let people specify the Collator they want directly?

          In the java-user thread that spawned this issue, I mentioned that this would be necessary for custom Collators. I used Locale because it's simpler to specify, but you're right, directly specifying a Collator makes more sense.
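          As a sketch of why accepting a Collator directly matters (plain java.text, not Lucene code; the rule string is a toy example): a custom RuleBasedCollator can define an ordering that no built-in Locale provides, which a Locale-only setter could never express.

```java
import java.text.Collator;
import java.text.ParseException;
import java.text.RuleBasedCollator;

public class CustomCollatorDemo {
    public static void main(String[] args) throws ParseException {
        // Deliberately reversed ordering: c sorts first, a sorts last.
        Collator custom = new RuleBasedCollator("< c < b < a");

        System.out.println(custom.compare("c", "a") < 0); // true: c before a
        System.out.println(custom.compare("a", "b") > 0); // true: a after b

        // A range query handed this Collator would treat [c TO a] as a
        // valid, non-empty range even though code-point order disagrees.
    }
}
```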

          3) instead of adding a new public CollatingRangeQuery, would it make more sense to add an optional Collator to RangeQuery (and RangeFilter) which triggers a different code path when non null? (from a performance standpoint it would basically be one conditional check at the beginning of the rewrite method.)

          This was my original thought, but since the performance impact could be large compared to a standard RangeQuery, I thought it made more sense to put it where it couldn't be used accidentally. I can redo it to integrate with the existing classes, though.

          4) when i first saw the thread that spawned this issue, my first reaction was to wonder if it would make sense to start allowing a Collator to be specified when indexing, and to use the raw bytes from the CollationKey as the indexed value - I haven't thought it through very hard, but i wonder if that would be feasible (it seems like it would certainly be faster at query time, since it would allow more traditional term skipping).

          I thought of something similar, but wow, this would be large. It would require that the exact Collator used to generate the index terms also be used to generate CollationKeys for RangeQuery's/Filter's – the Collator's rules would have to be stored in the index. Also, how would binary CollationKey (de-)serialization fit into the String (de-)serialization currently in place for index terms?

          My guess is that the functionality provided here is most useful for fields with a small number of terms – especially in the case of RangeQuery's, where the BooleanQuery clause limit is not guarded against. Given this IMHO most likely scenario, the performance optimization you're talking about (and the attendant code complexification) probably isn't warranted.

          Hide
          Hoss Man added a comment -

          a few random thoughts:

          1) you should be able to at least start the enumerator by skipping to a term consisting of the lowerTermField and the termText of "" ... even if the Collation of the term text is random, you still know which field you want.

          2) why can a collator only be specified by a Locale, why not just let people specify the Collator they want directly?

          3) instead of adding a new public CollatingRangeQuery, would it make more sense to add an optional Collator to RangeQuery (and RangeFilter) which triggers a different code path when non null? (from a performance standpoint it would basically be one conditional check at the beginning of the rewrite method.)

          4) when i first saw the thread that spawned this issue, my first reaction was to wonder if it would make sense to start allowing a Collator to be specified when indexing, and to use the raw bytes from the CollationKey as the indexed value – I haven't thought it through very hard, but i wonder if that would be feasible (it seems like it would certainly be faster at query time, since it would allow more traditional term skipping).

          Steve Rowe added a comment -

          Attaching a patch containing class CollatingRangeQuery, which extends RangeQuery, overriding the rewrite() method. A test class is also supplied. This is targeted at contrib/.

          Because index term ordering (Unicode code point order) is not necessarily the same as the collation order for the given Locale, all index terms in the Field of the lower and upper range terms have to be examined, so CollatingRangeQuerys will be significantly slower than plain RangeQuerys.

          One of the tests uses some of the Farsi information Esra supplied in the original post. Note that neither Java 1.4.2 nor 1.5.0 contains collation information for Farsi. Instead, the test uses the Arabic Locale, which appears to contain the proper letter ordering for the non-Arabic Farsi letters.

          Steve Rowe added a comment -

          (Wild guess): iterate over all terms instead of iterating over terms between the lower and upper term. Hmm, this is going to be slow.

          The implementation could default to the current behavior if no/null Locale is supplied.

          Yonik Seeley added a comment -

          How do you suggest actually retrieving all of the documents between two endpoints based on non-index ordering?

          Steve Rowe added a comment -

          RangeFilter should also take in a Locale, to perform the same sort of comparisons.

          QueryParser already takes in a Locale, though it was originally intended to be used for date comparisons. It could forward this Locale, through ConstantScoreRangeQuery, to RangeFilter.


            People

            • Assignee:
              Grant Ingersoll
              Reporter:
              Steve Rowe
            • Votes:
              0
            • Watchers:
              0

              Dates

              • Created:
                Updated:
                Resolved:

                Development