Lucene - Core / LUCENE-324

org.apache.lucene.analysis.cn.ChineseTokenizer missing offset decrement


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Trivial
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.9
    • Component/s: modules/analysis
    • Labels:
      None
    • Environment:

      Operating System: All
      Platform: All

    • Bugzilla Id:
      32687

      Description

      Apparently, in ChineseTokenizer, offset should be decremented along with
      bufferIndex when the character type is OTHER_LETTER. This directly affects
      the startOffset and endOffset values of the emitted tokens.

      This is critical for Highlighter to work correctly, because Highlighter
      marks matching text based on these offset values.
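      The report carries no patch here, so the following is a minimal,
      self-contained sketch (hypothetical, not the actual Lucene source) of the
      push-back logic it describes: when an OTHER_LETTER character ends a run of
      letters/digits, the tokenizer rewinds bufferIndex by one, and offset must
      be rewound with it or every later token's offsets drift.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of ChineseTokenizer-style push-back, NOT the real code.
// A CJK character (type OTHER_LETTER) that terminates a letter/digit run is
// pushed back via bufferIndex--; the matching offset-- is the decrement this
// issue reports as missing.
public class OffsetDemo {
    static List<String> tokenize(String input) {
        List<String> tokens = new ArrayList<>();
        int bufferIndex = 0;   // position of the next char to read
        int offset = 0;        // running offset used for startOffset/endOffset
        int start = 0;
        StringBuilder token = new StringBuilder();
        while (bufferIndex < input.length()) {
            char c = input.charAt(bufferIndex++);
            offset++;
            if (Character.getType(c) == Character.OTHER_LETTER) {
                if (token.length() > 0) {
                    // Push the CJK character back so it forms its own token.
                    bufferIndex--;
                    offset--;  // the missing decrement reported in this issue
                    tokens.add(token + " [" + start + "," + offset + ")");
                    token.setLength(0);
                } else {
                    tokens.add(c + " [" + (offset - 1) + "," + offset + ")");
                }
            } else if (Character.isLetterOrDigit(c)) {
                if (token.length() == 0) start = offset - 1;
                token.append(c);
            }
        }
        if (token.length() > 0) {
            tokens.add(token + " [" + start + "," + offset + ")");
        }
        return tokens;
    }

    public static void main(String[] args) {
        // Offsets line up with positions in the input string, which is what
        // Highlighter needs to mark matching text correctly.
        tokenize("abc中def").forEach(System.out::println);
        // prints:
        // abc [0,3)
        // 中 [3,4)
        // def [4,7)
    }
}
```

      In this sketch, dropping the offset-- line shifts every token after the
      first push-back by one (e.g. "def" would report [5,8) instead of [4,7)),
      which is exactly the kind of drift that misplaces highlighted text.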



            People

            • Assignee:
              Unassigned
            • Reporter:
              Ray Tsang (saturnism@gmail.com)
            • Votes:
              0
            • Watchers:
              0
