Details
- Type: New Feature
- Status: Open
- Priority: Minor
- Resolution: Unresolved
- Fix Version/s: 9.0, 8.2
- Component/s: None
- Labels: None
- Lucene Fields: New
Description
The ICU Transliteration API is currently exposed through Lucene only post-tokenizer, via ICUTransformFilter. Some tokenizers (particularly dictionary-based ones) may assume pre-normalized input; e.g., for Chinese characters, there may be an assumption of traditional-only or simplified-only input, at the level of either the entire input or each dictionary-defined token.
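For reference, a minimal sketch of the current post-tokenizer arrangement, assuming Lucene's analysis-icu module is on the classpath ("Traditional-Simplified" is a standard ICU system transform ID):

```java
import com.ibm.icu.text.Transliterator;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.icu.ICUTransformFilter;
import org.apache.lucene.analysis.icu.segmentation.ICUTokenizer;

public class PostTokenizerTransformExample {
  public static Analyzer transformAnalyzer() {
    return new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new ICUTokenizer();
        // Transliteration is applied per-token, *after* segmentation, so the
        // dictionary-based tokenizer has already seen the raw, unnormalized text.
        TokenStream result = new ICUTransformFilter(source,
            Transliterator.getInstance("Traditional-Simplified"));
        return new TokenStreamComponents(source, result);
      }
    };
  }
}
```

Because ICUTransformFilter wraps the TokenStream, the transliteration only ever sees text that ICUTokenizer has already segmented against the unnormalized input.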
The potential usefulness of a CharFilter that exposes the ICU Transliteration API was suggested in a thread on the Solr mailing list, and my hope is that this issue can facilitate more detailed discussion of the proposed addition.
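As a rough sketch of how the proposed addition might be wired in (ICUTransformCharFilter is hypothetical here; its name and constructor are assumptions for illustration, not an existing API):

```java
import java.io.Reader;
import com.ibm.icu.text.Transliterator;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.icu.segmentation.ICUTokenizer;

public class PreTokenizerTransformSketch {
  public static Analyzer sketchAnalyzer() {
    return new Analyzer() {
      @Override
      protected Reader initReader(String fieldName, Reader reader) {
        // Hypothetical class: no such CharFilter exists yet. The idea is to
        // apply the ICU transform to the character stream before tokenization.
        return new ICUTransformCharFilter(reader,
            Transliterator.getInstance("Traditional-Simplified"));
      }
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        // The dictionary-based tokenizer would then see pre-normalized
        // (here, simplified-only) input.
        Tokenizer source = new ICUTokenizer();
        return new TokenStreamComponents(source);
      }
    };
  }
}
```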
A concrete example: the following variants of the same word (S = simplified character, T = traditional character) are currently tokenized differently by ICUTokenizer:
- 红楼梦 (SSS)
- 紅樓夢 (TTT)
- 紅楼夢 (TST)
The first two variants (simplified-only and traditional-only, respectively) are included in the CJ dictionary that backs ICUTokenizer, but the last (a mixture of traditional and simplified characters) is not, and so is not recognized as a token. Even if this omission from the dictionary is intentional, and the resulting behavior is desirable for some use cases, there are surely other use cases that would benefit from a more permissive dictionary-based tokenization strategy, such as could be supported by pre-tokenizer transliteration.
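To make this concrete: applying ICU's Traditional-Simplified transform ahead of tokenization would collapse all three variants above to the simplified-only form that the dictionary does contain (a quick sketch using ICU4J's Transliterator directly):

```java
import com.ibm.icu.text.Transliterator;

public class MixedScriptDemo {
  public static void main(String[] args) {
    Transliterator t2s = Transliterator.getInstance("Traditional-Simplified");
    // The mixed variant (TST) normalizes to the simplified-only form (SSS),
    // which *is* present in the dictionary backing ICUTokenizer.
    System.out.println(t2s.transliterate("紅楼夢")); // prints 红楼梦
    System.out.println(t2s.transliterate("紅樓夢")); // prints 红楼梦
  }
}
```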