LUCENE-1118

core analyzers should not produce tokens > N (100?) characters in length

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None
    • Lucene Fields: New

      Description

      Discussion that led to this:

      http://www.gossamer-threads.com/lists/lucene/java-dev/56103

      I believe nearly any time a token > 100 characters in length is
      produced, it's a bug in the analysis that the user is not aware of.

      These long tokens cause all sorts of problems downstream, so it's
      best to catch them early, at the source.

      We can accomplish this by tacking a LengthFilter onto the chains
      for StandardAnalyzer, SimpleAnalyzer, WhitespaceAnalyzer, etc.

      Should we do this in 2.3? I realize this is technically a break in
      backwards compatibility; however, I think it must be incredibly rare
      that this change would actually break anything real in an application.
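
      As a rough illustration of the proposed approach, here is a minimal
      sketch of tacking a LengthFilter onto a simple chain, written against
      the Lucene 2.x-era analysis API; the class name, the choice of
      WhitespaceTokenizer, and the 100-char cutoff are illustrative only,
      not part of this issue's patch:

      import java.io.Reader;

      import org.apache.lucene.analysis.Analyzer;
      import org.apache.lucene.analysis.LengthFilter;
      import org.apache.lucene.analysis.TokenStream;
      import org.apache.lucene.analysis.WhitespaceTokenizer;

      // Analyzer that drops tokens longer than maxTokenLength by appending
      // a LengthFilter to the tokenizer.
      public class LengthCappedWhitespaceAnalyzer extends Analyzer {

        private final int maxTokenLength;

        public LengthCappedWhitespaceAnalyzer(int maxTokenLength) {
          this.maxTokenLength = maxTokenLength;
        }

        public TokenStream tokenStream(String fieldName, Reader reader) {
          // LengthFilter(in, min, max) keeps only tokens whose length falls
          // in [min, max]; everything longer is silently dropped.
          return new LengthFilter(new WhitespaceTokenizer(reader), 1, maxTokenLength);
        }
      }

      For example, new LengthCappedWhitespaceAnalyzer(100) could be handed
      to an IndexWriter to keep any token longer than 100 chars out of the
      index.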

      Attachments

      1. LUCENE-1118.patch (9 kB) Michael McCandless

        Activity

        mikemccand Michael McCandless added a comment -

        I fixed only StandardAnalyzer to skip terms longer than 255 chars by
        default (it turns out SimpleAnalyzer, WhitespaceAnalyzer, and
        StopAnalyzer already prune tokens at 255 chars).

        You can change the max allowed token length by calling
        StandardAnalyzer.setMaxTokenLength.
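
        As a quick illustration (a sketch only; the no-arg StandardAnalyzer
        constructor matches the 2.x-era API, and the limit of 100 is just an
        example value):

        import org.apache.lucene.analysis.standard.StandardAnalyzer;

        public class MaxTokenLengthExample {
          public static void main(String[] args) {
            // StandardAnalyzer now skips terms longer than its limit,
            // which defaults to 255 chars.
            StandardAnalyzer analyzer = new StandardAnalyzer();
            analyzer.setMaxTokenLength(100);
            // hand the analyzer to an IndexWriter as usual
          }
        }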

        I didn't use LengthFilter because 1) I wanted to avoid copying the
        massive term only to then filter it out (which is faster), and 2) I
        wanted to increment the position increment of the next valid token
        after a series of too-long tokens.

        All tests pass. I plan to commit in a day or two.
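
        To make the position-increment point concrete, here is a minimal
        sketch, written against the 2.x-era Token-returning TokenFilter API
        and not the committed patch (which does this inside StandardAnalyzer
        itself), of a filter that drops over-long tokens while carrying
        their position increments over to the next kept token:

        import java.io.IOException;

        import org.apache.lucene.analysis.Token;
        import org.apache.lucene.analysis.TokenFilter;
        import org.apache.lucene.analysis.TokenStream;

        public class MaxLengthFilter extends TokenFilter {

          private final int maxLength;

          public MaxLengthFilter(TokenStream in, int maxLength) {
            super(in);
            this.maxLength = maxLength;
          }

          public Token next() throws IOException {
            int skippedPositions = 0;
            for (Token t = input.next(); t != null; t = input.next()) {
              if (t.termText().length() <= maxLength) {
                // fold the positions of any skipped over-long tokens into
                // this token so phrase/span queries still see the gap
                t.setPositionIncrement(t.getPositionIncrement() + skippedPositions);
                return t;
              }
              skippedPositions += t.getPositionIncrement();
            }
            return null; // end of stream
          }
        }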


          People

          • Assignee:
            mikemccand Michael McCandless
          • Reporter:
            mikemccand Michael McCandless
          • Votes:
            0
          • Watchers:
            0
