Lucene - Core
LUCENE-2947: NGramTokenizer shouldn't trim whitespace


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 3.0.3
    • Fix Version/s: None
    • Component/s: modules/analysis
    • Labels: None
    • Lucene Fields: New

    Description

      Before I tokenize my strings, I am padding them with whitespace:

      String foobar = " " + foo + " " + bar + " ";

      When constructing term vectors from n-grams, this strategy has a couple of benefits. First, it places special emphasis on the start and end of each word. Second, it improves the similarity between phrases with swapped words: " foo bar " matches " bar foo " more closely than "foo bar" matches "bar foo".
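
      To make the second benefit concrete, here is a small self-contained sketch (plain Java, no Lucene classes; the grams() helper is hypothetical, just for illustration) that counts the character trigrams shared by each pair:

      import java.util.HashSet;
      import java.util.Set;

      public class PaddingDemo {
          // Hypothetical helper: collect all character n-grams of length n.
          static Set<String> grams(String s, int n) {
              Set<String> out = new HashSet<String>();
              for (int i = 0; i + n <= s.length(); i++) {
                  out.add(s.substring(i, i + n));
              }
              return out;
          }

          public static void main(String[] args) {
              Set<String> padded = grams(" foo bar ", 3);
              padded.retainAll(grams(" bar foo ", 3));   // grams shared by the padded pair

              Set<String> plain = grams("foo bar", 3);
              plain.retainAll(grams("bar foo", 3));      // grams shared by the unpadded pair

              System.out.println("padded overlap:   " + padded.size()); // 6 (" fo", "foo", "oo ", " ba", "bar", "ar ")
              System.out.println("unpadded overlap: " + plain.size());  // 2 ("foo", "bar")
          }
      }

      The padded pair shares six trigrams while the unpadded pair shares only two, which is exactly the swapped-word effect described above.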

      The problem is that Lucene's NGramTokenizer trims whitespace. This forces me to do some preprocessing on my strings before I can tokenize them:

      // replace(char, char) avoids regex, where '$' is special; '$' is an arbitrary char not in my data
      foobar = foobar.replace(' ', '$');

      This behavior is undocumented, so users won't realize their strings are being trim()'ed unless they read the source or examine the tokens manually.
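
      Examining the tokens is easy enough; here is a minimal sketch against the Lucene 3.x analysis API (assuming the 3.x TermAttribute, which later versions replace with CharTermAttribute) that dumps the bigrams of a padded string:

      import java.io.StringReader;
      import org.apache.lucene.analysis.ngram.NGramTokenizer;
      import org.apache.lucene.analysis.tokenattributes.TermAttribute;

      public class DumpGrams {
          public static void main(String[] args) throws Exception {
              // Request bigrams of " ab "; because the tokenizer trim()s its
              // input, no gram containing the padding spaces is ever emitted.
              NGramTokenizer tok = new NGramTokenizer(new StringReader(" ab "), 2, 2);
              TermAttribute term = tok.addAttribute(TermAttribute.class);
              while (tok.incrementToken()) {
                  System.out.println("[" + term.term() + "]");
              }
              tok.close();
          }
      }

      With the trimming in place this prints only [ab]; if whitespace were respected it would print [ a], [ab], and [b ].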

      I am proposing that NGramTokenizer be changed to respect whitespace. Is there a compelling reason against this?

      Attachments

        1. LUCENE-2947.patch (16 kB), David Byrne
        2. NGramTokenizerTest.java (0.7 kB), David Byrne


      People

        Assignee: Unassigned
        Reporter: David Byrne (dbyrne)
        Votes: 0
        Watchers: 2

      Dates

        Created:
        Updated:
        Resolved: