Lucene - Core
LUCENE-3366

StandardFilter only works with ClassicTokenizer and only when version < 3.1

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Not a Problem
    • Affects Version/s: 3.3
    • Fix Version/s: None
    • Component/s: modules/analysis
    • Labels:
      None
    • Lucene Fields:
      New

      Description

      The StandardFilter used to remove periods from acronyms and trailing apostrophe-S ('s) suffixes where they occurred, and it worked in conjunction with the StandardTokenizer. Presently, it only does this with ClassicTokenizer, and only when the Lucene match version is before 3.1. Here is an excerpt from the code:

        public final boolean incrementToken() throws IOException {
          if (matchVersion.onOrAfter(Version.LUCENE_31))
            return input.incrementToken(); // TODO: add some niceties for the new grammar
          else
            return incrementTokenClassic();
        }
      

      It seems to me that in the great refactor of the standard tokenizer, LUCENE-2167, something was forgotten here. I think that if someone uses the ClassicTokenizer then no matter what the version is, this filter should do what it used to do. And the TODO suggests someone forgot to make this filter do something useful for the StandardTokenizer. Or perhaps that idea should be discarded and this class should be named ClassicTokenFilter.

      In any event, the javadocs for this class appear out of date as there is no mention of ClassicTokenizer, and the wiki is out of date too.
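      To make the legacy behavior concrete, here is an illustrative sketch (not Lucene's actual implementation, which operates on a TokenStream and checks the token's type attribute) of the two normalizations described above: stripping a trailing "'s" and removing the dots from acronym tokens.

        ```java
        // Illustrative sketch of pre-3.1 StandardFilter / ClassicFilter
        // normalization. Lucene's real filter only strips dots from tokens
        // the tokenizer tagged as <ACRONYM>; this sketch works on bare strings.
        public class ClassicNormalizeSketch {
            // Drop a trailing apostrophe-S, e.g. "Lucene's" -> "Lucene".
            static String stripPossessive(String token) {
                if (token.endsWith("'s")) {
                    return token.substring(0, token.length() - 2);
                }
                return token;
            }

            // Remove the dots from an acronym, e.g. "I.B.M." -> "IBM".
            static String stripAcronymDots(String token) {
                return token.replace(".", "");
            }

            public static void main(String[] args) {
                System.out.println(stripPossessive("Lucene's")); // Lucene
                System.out.println(stripAcronymDots("I.B.M."));  // IBM
            }
        }
        ```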

        Activity

        Robert Muir added a comment -

        Hi David, I think you want to use ClassicFilter.

        David Smiley added a comment -

        Doh! Yes, I didn't notice it, Rob. But still... the purpose of StandardFilter in its current state seems to be solely to satisfy backwards compatibility for code that uses a pre-3.1 match version; nothing more. Shouldn't it be marked @Deprecated to warn people? Or the "TODO" should be done so the filter does something. However, the current StandardTokenizer doesn't really have token types equivalent to ClassicTokenizer's, so StandardFilter can't actually do anything useful with them. So then there is no TODO to do.

        Robert Muir added a comment -

        The purpose of the filter is "Normalizes tokens extracted with StandardTokenizer".

        Currently this is a no-op, but we can always improve it in the spirit of the whole standard this thing implements.

        The TODO currently refers to this statement:
        "For Thai, Lao, Khmer, Myanmar, and other scripts that do not typically use spaces between words, a good implementation should not depend on the default word boundary specification. It should use a more sophisticated mechanism ... Ideographic scripts such as Japanese and Chinese are even more complex"

        There is no problem having a TODO in this filter; we don't need to do a rush job for any reason...

        Some of the preparation for this (e.g. improving the default behavior for CJK) was already done in LUCENE-2911. We now tag all these special types,
        so in the meantime if someone wants to do their own downstream processing they can do this themselves.
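        The "downstream processing" idea can be sketched as follows. This is a hypothetical, dependency-free illustration, not Lucene's TokenStream API: since LUCENE-2911, StandardTokenizer tags tokens with types such as "<IDEOGRAPHIC>", so a custom filter can route tokens by consulting that type.

        ```java
        import java.util.List;
        import java.util.Map;

        // Hypothetical sketch: route tokens by their tokenizer-assigned type.
        // Real code would read the type from Lucene's TypeAttribute on a
        // TokenStream; here each token is just a (term, type) pair.
        public class TypeRoutingSketch {
            static String process(String term, String type) {
                if ("<IDEOGRAPHIC>".equals(type)) {
                    // A real filter might re-segment the ideographic run here;
                    // this sketch just marks it for illustration.
                    return "[cjk:" + term + "]";
                }
                return term;
            }

            public static void main(String[] args) {
                List<Map.Entry<String, String>> tokens = List.of(
                    Map.entry("lucene", "<ALPHANUM>"),
                    Map.entry("日本", "<IDEOGRAPHIC>"));
                for (var t : tokens) {
                    System.out.println(process(t.getKey(), t.getValue()));
                }
            }
        }
        ```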

        David Smiley added a comment -

        Ok. (I've been in no hurry to rush anything.)

        I updated the http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters page to fix references to StandardFilter that should have been to ClassicFilter, and I removed some uses of StandardFilter altogether because it doesn't do anything. I'm disinclined to mention this filter in the upcoming revision of my book, but I'll be sure to mention the Classic* variants.

        Feel free to close this issue if you feel it is appropriate. I created it as an "improvement" because StandardFilter seems unfinished, and you've acknowledged it is. So perhaps it should stay open until it actually does something some day.

        Robert Muir added a comment -

        Well, it's not "unfinished"; the right decision might be to ultimately remove it.

        And we could deprecate it in 4.9 and remove it in 5.0 if that is the case; no one's indexes will be broken, as it wouldn't have done anything.

        But I don't like what happens with Thai etc. right now if someone uses StandardAnalyzer.

        Show
        Robert Muir added a comment - well its not "unfinished", the right decision might be to ultimately remove it. and we could deprecate it in 4.9 and remove it in 5.0 if this is the case, no one's indexes will be broken as it wouldnt have done anything. but I don't like what happens with thai etc right now if someone uses StandardAnalyzer.
        Hide
        Robert Muir added a comment -

        Use ClassicFilter if you want this behavior.


          People

          • Assignee:
            Unassigned
          • Reporter:
            David Smiley
          • Votes:
            0
          • Watchers:
            0
