Details
Type: Bug
Status: Resolved
Priority: Minor
Resolution: Fixed
Fix Version: 3.0.3
Component: None
Labels: None
Lucene Fields: New
Description
Before I tokenize my strings, I am padding them with whitespace:
String foobar = " " + foo + " " + bar + " ";
When constructing term vectors from n-grams, this strategy has a couple of benefits. First, it places special emphasis on the start and end of a word. Second, it improves the similarity between phrases with swapped words: " foo bar " matches " bar foo " more closely than "foo bar" matches "bar foo".
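To make the second benefit concrete, here is a minimal, self-contained sketch (plain Java, no Lucene dependency; the class and method names are just illustrative) that collects character trigrams and compares the Jaccard overlap of the padded and unpadded forms:

import java.util.HashSet;
import java.util.Set;

public class PaddedNGramDemo {

    // Collect the distinct character n-grams of the given size from a string.
    static Set<String> ngrams(String s, int n) {
        Set<String> grams = new HashSet<String>();
        for (int i = 0; i + n <= s.length(); i++) {
            grams.add(s.substring(i, i + n));
        }
        return grams;
    }

    // Jaccard overlap between two n-gram sets: |intersection| / |union|.
    static double jaccard(Set<String> a, Set<String> b) {
        Set<String> intersection = new HashSet<String>(a);
        intersection.retainAll(b);
        Set<String> union = new HashSet<String>(a);
        union.addAll(b);
        return union.isEmpty() ? 0.0 : (double) intersection.size() / union.size();
    }

    public static void main(String[] args) {
        // Padded: the boundary trigrams (" fo", "ar ", ...) survive the word swap.
        System.out.println(jaccard(ngrams(" foo bar ", 3), ngrams(" bar foo ", 3))); // 0.75
        // Unpadded: only "foo" and "bar" are shared after the swap.
        System.out.println(jaccard(ngrams("foo bar", 3), ngrams("bar foo", 3)));     // 0.25
    }
}

With trigrams, the padded pair shares six of eight distinct grams, while the unpadded pair shares only two of eight.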
The problem is that Lucene's NGramTokenizer trims whitespace. This forces me to do some preprocessing on my strings before I can tokenize them:
foobar = foobar.replace(" ", "$"); // "$" is an arbitrary char not in my data
This behavior is undocumented, so users won't realize their strings are being trim()'ed unless they look through the source or examine the tokens manually.
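For reference, this is roughly how the emitted tokens can be inspected; a minimal sketch assuming the Lucene 3.x contrib NGramTokenizer and the old TermAttribute API:

import java.io.StringReader;

import org.apache.lucene.analysis.ngram.NGramTokenizer;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

public class InspectNGrams {
    public static void main(String[] args) throws Exception {
        // Emit trigrams for a padded input string.
        NGramTokenizer tokenizer = new NGramTokenizer(new StringReader(" foo bar "), 3, 3);
        TermAttribute term = tokenizer.addAttribute(TermAttribute.class);
        while (tokenizer.incrementToken()) {
            System.out.println(term.term());
        }
        tokenizer.close();
    }
}

With the behavior described above, the surrounding padding is trimmed before the grams are generated, so none of the printed trigrams contain it.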
I am proposing that NGramTokenizer be changed to respect whitespace. Is there a compelling reason against this?