Lucene - Core / LUCENE-9581

Clarify discardCompoundToken behavior in the JapaneseTokenizer

Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 9.0, 8.8
    • Component/s: None
    • Labels: None
    • Lucene Fields: New

    Description

      At first sight, the discardCompoundToken option added in LUCENE-9123 seems redundant with the NORMAL mode of the Japanese tokenizer. When set to true, the current behavior is to disable the decomposition of compounds, which is exactly what the NORMAL mode does.

      So I wonder whether the right semantics for the option would be to keep only the decomposition of the compound, or whether the option is really needed at all. If the goal is to make the output compatible with a graph token filter, the existing workaround of setting the mode to NORMAL should be enough.

      That is also consistent with the mode that should be used to preserve positions in the index, since we don't handle position length on the indexing side.

      Am I missing something regarding the new option? Is there a compelling case where it differs from the NORMAL mode?
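      To make the trade-off discussed above concrete, here is a minimal sketch (plain Python, not actual Lucene code) that models the token streams the JapaneseTokenizer can produce for the classic compound example 関西国際空港 ("Kansai International Airport"). Each token is modeled as (surface, positionIncrement, positionLength); the exact emission order and the discardCompoundToken semantics shown for SEARCH mode are illustrative assumptions based on this issue's discussion, not a definitive trace of the tokenizer.

```python
# Model of a token stream entry: (surface form, posInc, posLen).

def positions(stream):
    """Count the distinct positions occupied by a token stream."""
    pos = -1
    for _surface, pos_inc, _pos_len in stream:
        pos += pos_inc
    return pos + 1

# NORMAL mode: no decomposition, the compound is a single token.
normal = [("関西国際空港", 1, 1)]

# SEARCH mode: the decomposed parts plus the original compound,
# which overlaps them with posLen=3 -- this is what makes the
# output a token graph rather than a flat stream.
search = [
    ("関西", 1, 1),
    ("関西国際空港", 0, 3),  # compound spans all three positions
    ("国際", 1, 1),
    ("空港", 1, 1),
]

# SEARCH mode with the compound token discarded (the semantics
# this issue argues discardCompoundToken=true should have):
# only the decomposition remains, so the stream is flat again.
search_discard = [("関西", 1, 1), ("国際", 1, 1), ("空港", 1, 1)]

print(positions(normal))          # 1
print(positions(search))          # 3
print(positions(search_discard))  # 3
```

      The sketch shows why NORMAL already works as a "no graph" workaround: it occupies a single position with no overlapping token, whereas SEARCH mode only becomes flat once the overlapping compound (posLen > 1) is removed.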

      Attachments

        1. LUCENE-9581.patch
          2 kB
          Kazuaki Hiraga
        2. LUCENE-9581.patch
          14 kB
          Jim Ferenczi
        3. LUCENE-9581.patch
          8 kB
          Jim Ferenczi


          People

            Assignee: Unassigned
            Reporter: jimczi (Jim Ferenczi)
            Votes: 0
            Watchers: 6
