Lucene - Core: LUCENE-2522

Add a simple Japanese tokenizer, based on TinySegmenter

    Details

    • Type: New Feature
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: 4.9, 5.0
    • Component/s: modules/analysis
    • Labels:
      None
    • Lucene Fields:
      New, Patch Available

      Description

      TinySegmenter (http://www.chasen.org/~taku/software/TinySegmenter/) is a tiny Japanese segmenter.

      It was ported to Java/Lucene by Kohei TAKETA <k-tak@void.in>, and is under friendly license terms (BSD; some files explicitly disclaim copyright to the source code, giving a blessing instead).

      Koji knows the author and has already contacted him about incorporating it into Lucene:

      I've contacted Takeda-san, the creator of the Java version of
      TinySegmenter. He said he is happy for his program to be part of Lucene.
      He is a co-author of my book about Solr published in Japan, BTW. ;-)
      
      1. LUCENE-2522.patch
        125 kB
        Robert Muir
      2. LUCENE-2522.patch
        94 kB
        Robert Muir
      3. LUCENE-2522.patch
        56 kB
        Robert Muir

        Activity

        Robert Muir added a comment -

        Here is a quickly done patch, just to get started (not really for committing):

        • converted their tests to BaseTokenStreamTestCase tests
        • changed it to use CharTermAttribute instead of TermAttribute
        • added clearAttributes()
        • made the class final
        • added a Solr factory

        The code is nice; it is set up to work on Unicode codepoints. But I think we can improve
        it by using CharArrayMap for speed and by using Lucene's codepoint I/O utilities in CharUtils.
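The speed argument for CharArrayMap is that lookups run against a region of a char[] buffer directly, with no String allocation on the hot path. A minimal, self-contained sketch of that idea (Lucene's real CharArrayMap hashes the region instead of scanning; this simplified class and its names are illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch, NOT Lucene's actual CharArrayMap: lookups compare a
// region of a char[] against stored keys, so no String is built per token.
public class CharRegionMap {
    private final List<char[]> keys = new ArrayList<>();
    private final List<Integer> values = new ArrayList<>();

    public void put(String key, int value) {
        keys.add(key.toCharArray());
        values.add(value);
    }

    /** Look up buf[off, off+len) without building a String. */
    public Integer get(char[] buf, int off, int len) {
        for (int i = 0; i < keys.size(); i++) {
            char[] k = keys.get(i);
            if (k.length == len && regionEquals(k, buf, off, len)) {
                return values.get(i);
            }
        }
        return null;
    }

    private static boolean regionEquals(char[] k, char[] buf, int off, int len) {
        for (int j = 0; j < len; j++) {
            if (k[j] != buf[off + j]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        CharRegionMap m = new CharRegionMap();
        m.put("foo", 1);
        char[] buf = "xfooy".toCharArray();
        System.out.println(m.get(buf, 1, 3)); // prints 1
    }
}
```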

        Robert Muir added a comment -

        I refactored TinySegmenterConstants to use ints and switch statements instead of all the hashmaps.

        This creates a larger .java file, but a smaller .class, and scoring no longer has to create 24 strings per character.
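The refactoring described can be sketched as follows. The category names and score values here are invented for illustration; they are not the actual TinySegmenter feature weights:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical before/after sketch of the hashmap-to-switch refactoring:
// the "before" table keys scores by Strings built per character; the
// "after" form uses int category codes and a switch, allocating nothing.
public class ScoreTableSketch {
    // Before: each lookup builds a String key ("HH", "HO", ...) per character.
    static final Map<String, Integer> BIGRAM_SCORES = new HashMap<>();
    static {
        BIGRAM_SCORES.put("HH", -1023); // illustrative values only
        BIGRAM_SCORES.put("HO", 1823);
    }

    // After: categories are int constants; no allocation on the scoring path.
    static final int CAT_H = 0, CAT_O = 1;

    static int score(int prevCat, int curCat) {
        switch (prevCat * 8 + curCat) {
            case CAT_H * 8 + CAT_H: return -1023;
            case CAT_H * 8 + CAT_O: return 1823;
            default:                return 0;
        }
    }

    public static void main(String[] args) {
        System.out.println(score(CAT_H, CAT_O)); // prints 1823, same as the map lookup
    }
}
```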

        Robert Muir added a comment -

        Attached is an updated patch; it's still a work in progress (needs more tests, benchmarking, and some other little fixes).

        There's a general pattern for these segmenters (this one, smartchinese, sen) that's a little tricky: they really want to look at whole sentences to determine how to segment.

        So I added a base class to make writing these segmenters easier, and also hopefully to improve segmentation accuracy (I would like to switch smartchinese over to it). This class makes it easy to segment sentences with a sentence BreakIterator... in my opinion it doesn't matter how theoretically good the word tokenization is for these things if the sentence tokenizer is really bad (I found this issue with both sen and smartchinese).

        Hope to get it committable soon.
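The sentence-first approach described above can be sketched with the JDK's java.text.BreakIterator. The class and method names below are illustrative; the patch's actual base class API may differ:

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Illustrative sketch: split text into sentences first, then hand each
// sentence to a word segmenter, as the base class described above does.
public class SentenceFirstSketch {
    static List<String> sentences(String text, Locale locale) {
        BreakIterator it = BreakIterator.getSentenceInstance(locale);
        it.setText(text);
        List<String> out = new ArrayList<>();
        int start = it.first();
        for (int end = it.next(); end != BreakIterator.DONE; start = end, end = it.next()) {
            out.add(text.substring(start, end));
        }
        return out;
    }

    public static void main(String[] args) {
        // Each sentence would be passed to the word segmenter separately.
        for (String s : sentences("This is one sentence. This is another.", Locale.ENGLISH)) {
            System.out.println("[" + s + "]");
        }
    }
}
```

BreakIterator is locale-aware, so the same sentence-splitting step works for Japanese text without extra rules.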

        Steve Rowe added a comment -

        Bulk move 4.4 issues to 4.5 and 5.0

        Uwe Schindler added a comment -

        Move issue to Lucene 4.9.


          People

          • Assignee: Unassigned
          • Reporter: Robert Muir
          • Votes: 0
          • Watchers: 2
