Lucene - Core / LUCENE-6913

Standard/Classic/UAX tokenizers could be more RAM efficient

Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved

    Description

      These tokenizers map code points to character classes with the following data structure (allocated once in the class initializer, <clinit>):

        private static char [] zzUnpackCMap(String packed) {
          char [] map = new char[0x110000];  // one char per Unicode code point (0x110000 = 1,114,112 entries)
          ...

      This requires about 2 MB of RAM for each tokenizer class (0x110000 code points × 2 bytes per char ≈ 2.1 MB): 6 MB in trunk if all 3 classes are loaded, and 10 MB in branch_5x, since there are 2 additional backwards-compatibility classes.

      On the other hand, none of our tokenizers actually uses more than a small number of character classes, so char is overkill: this map can safely be a byte [] and we can save half the memory. Perhaps it could make these tokenizers faster too.
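
      For illustration only (this is not the attached file), here is a minimal sketch of how the same unpacking could target a byte [] instead of a char [], assuming the packed string uses the usual JFlex run-length encoding (count/value character pairs) and that no tokenizer defines more than 127 character classes:

        // Hypothetical sketch: identical run-length decoding, but into a byte[] map.
        class ByteCMapSketch {
          private static byte[] zzUnpackCMap(String packed) {
            byte[] map = new byte[0x110000];           // one byte per code point: ~1 MB instead of ~2 MB
            int i = 0;                                 // index into the packed string
            int j = 0;                                 // index into the unpacked map
            while (i < packed.length()) {
              int count = packed.charAt(i++);          // run length
              byte value = (byte) packed.charAt(i++);  // character class id for this run
              do { map[j++] = value; } while (--count > 0);
            }
            return map;
          }
        }

      Lookups would then read the class as map[codePoint] (masked with 0xff if more than 127 classes were ever needed); only the array type and the element cast differ from the generated char [] version.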

      Attachments

        1. LUCENE-6913.not.a.patch (3 kB, Robert Muir)

        Activity

          People

            Assignee: Unassigned
            Reporter: Robert Muir (rcmuir)
            Votes: 0
            Watchers: 2

            Dates

              Created:
              Updated: