Description
TokenizerChain is overwriting, not chaining, token filters in normalize().
This doesn't currently break search because normalize() is not being used at the Solr level (AFAICT); instead, TextField has its own analyzeMultiTerm() that duplicates the logic of the newer normalize().
Code as is:

TokenStream result = in;
for (TokenFilterFactory filter : filters) {
  if (filter instanceof MultiTermAwareComponent) {
    filter = (TokenFilterFactory) ((MultiTermAwareComponent) filter).getMultiTermComponent();
    result = filter.create(in);
  }
}
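For illustration only, here is a hypothetical snippet (the class name, field name, and filter choices are mine, not from this report, and the imports assume the Lucene/Solr 7.x package layout) showing how the overwrite would surface if normalize() were exercised: with two multi-term-aware filters in the chain, only the last one takes effect.

import java.util.HashMap;
import org.apache.lucene.analysis.core.KeywordTokenizerFactory;
import org.apache.lucene.analysis.core.LowerCaseFilterFactory;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilterFactory;
import org.apache.lucene.analysis.util.TokenFilterFactory;
import org.apache.lucene.util.BytesRef;
import org.apache.solr.analysis.TokenizerChain;

public class NormalizeOverwriteDemo {
  public static void main(String[] args) {
    // Chain with two multi-term-aware filters: lower-casing, then ASCII folding.
    TokenizerChain chain = new TokenizerChain(
        new KeywordTokenizerFactory(new HashMap<>()),
        new TokenFilterFactory[] {
            new LowerCaseFilterFactory(new HashMap<>()),
            new ASCIIFoldingFilterFactory(new HashMap<>())
        });
    BytesRef normalized = chain.normalize("f", "Résumé");
    // Expected: "resume". With the overwriting loop above, the lower-case
    // wrapper is discarded and only ASCII folding runs, giving "Resume".
    System.out.println(normalized.utf8ToString());
  }
}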
The fix is simple:
- result = filter.create(in);
+ result = filter.create(result);
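With that change applied, each multi-term-aware filter wraps the previous result rather than the raw input, so the whole chain is applied. A sketch of the patched loop (the trailing return is added here for context, not part of the quoted excerpt):

TokenStream result = in;
for (TokenFilterFactory filter : filters) {
  if (filter instanceof MultiTermAwareComponent) {
    filter = (TokenFilterFactory) ((MultiTermAwareComponent) filter).getMultiTermComponent();
    // Wrap the previous result so every multi-term-aware filter is applied.
    result = filter.create(result);
  }
}
return result;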
Issue Links
- relates to SOLR-12034 Replace TokenizerChain in Solr with Lucene's CustomAnalyzer (Resolved)