Lucene - Core
LUCENE-444: StandardTokenizer loses Korean characters

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.9
    • Component/s: modules/analysis
    • Labels: None

      Description

      While using StandardAnalyzer, esp. StandardTokenizer, with a Korean text stream, StandardTokenizer ignores the Korean characters. This is because the CJK token definition in the StandardTokenizer.jj JavaCC file does not cover the range of Korean syllables described in the Unicode character map.
      This patch adds one line for 0xAC00~0xD7AF, the Korean syllables range, to the StandardTokenizer.jj code.
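      For illustration, the change amounts to one extra entry in the grammar's CJK character class. The sketch below uses JavaCC character-class syntax; the neighboring ranges are illustrative placeholders, not a verbatim copy of the grammar:

          <#CJK:                        // CJK character class (illustrative excerpt)
              [
               "\u3040"-"\u318f",       // existing ranges (placeholders)
               "\u4e00"-"\u9fff",
               "\uac00"-"\ud7af"        // added: Hangul Syllables, 0xAC00~0xD7AF
              ]
          >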

        Activity

        Cheolgoo Kang added a comment -

        This patch adds one line for 0xAC00~0xD7AF, the Korean syllables range, to the StandardTokenizer.jj code.

        Otis Gospodnetic added a comment -

        Committed. Thanks Cheolgoo.

        Erik Hatcher added a comment -

        I'm closing this issue... but some unit tests would be nice to go along with this too, eventually

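        Following up on the unit-test suggestion, here is a minimal sketch of what such a test might look like against the Lucene 1.9-era analysis API. The class name, field name, and sample text are illustrative assumptions, not part of the committed patch:

            import java.io.StringReader;
            import junit.framework.TestCase;
            import org.apache.lucene.analysis.Token;
            import org.apache.lucene.analysis.TokenStream;
            import org.apache.lucene.analysis.standard.StandardAnalyzer;

            public class TestKoreanTokenization extends TestCase {

                // Before the fix, Hangul input produced no tokens at all; after it,
                // StandardTokenizer should emit tokens covering the Korean text.
                public void testKoreanCharactersAreKept() throws Exception {
                    StandardAnalyzer analyzer = new StandardAnalyzer();
                    // "\uC548\uB155\uD558\uC138\uC694" is Korean for "hello".
                    TokenStream stream = analyzer.tokenStream(
                        "f", new StringReader("\uC548\uB155\uD558\uC138\uC694"));

                    StringBuffer seen = new StringBuffer();
                    for (Token t = stream.next(); t != null; t = stream.next()) {
                        seen.append(t.termText());
                    }
                    // However the run is segmented into tokens, none of the
                    // Hangul syllables should be lost.
                    assertEquals("\uC548\uB155\uD558\uC138\uC694", seen.toString());
                }
            }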

          People

          • Assignee: Unassigned
          • Reporter: Cheolgoo Kang
          • Votes: 0
          • Watchers: 0
