Lucene - Core / LUCENE-3920

ngram tokenizer/filters create nonsense offsets if followed by a word combiner


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Fix Version/s: 3.6, 4.0-ALPHA

    Description

      It seems like it may be applying the offsets from the wrong token:

      after shingling, the resulting token has a startOffset that is after its endOffset.
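      To illustrate the invariant the report describes, here is a minimal sketch (not Lucene's actual ShingleFilter or attribute API, just a hypothetical standalone model) of how a word combiner should derive offsets: startOffset comes from the first sub-token and endOffset from the last, so a combined token can never have startOffset > endOffset.

      ```java
      import java.util.List;

      // Hypothetical token holder; Lucene itself models offsets with OffsetAttribute.
      public class ShingleOffsets {
          public static final class Token {
              public final String text;
              public final int startOffset, endOffset;
              public Token(String text, int startOffset, int endOffset) {
                  this.text = text;
                  this.startOffset = startOffset;
                  this.endOffset = endOffset;
              }
          }

          // Combine adjacent tokens into one shingle token.
          public static Token shingle(List<Token> parts, String sep) {
              StringBuilder sb = new StringBuilder();
              for (int i = 0; i < parts.size(); i++) {
                  if (i > 0) sb.append(sep);
                  sb.append(parts.get(i).text);
              }
              // Correct offsets span from the first part's start to the last part's end,
              // so startOffset <= endOffset always holds. The bug was that the combiner
              // picked offsets from the wrong token, breaking this invariant.
              return new Token(sb.toString(),
                               parts.get(0).startOffset,
                               parts.get(parts.size() - 1).endOffset);
          }
      }
      ```

      With tokens "please" (offsets 0-6) and "divide" (offsets 7-13), the combined shingle should carry offsets 0-13.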

      Attachments

        1. LUCENE-3920_test.patch (1 kB, Robert Muir)


            People

              Assignee: Adrien Grand (jpountz)
              Reporter: Robert Muir (rcmuir)
              Votes: 0
              Watchers: 2
