Details

    • Type: Task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 4.0-BETA, 6.0
    • Component/s: modules/analysis
    • Labels: None
    • Lucene Fields: New

Description

Spinoff from SOLR-3684.

Most Lucene tokenizers have some buffer size; e.g., in CharTokenizer/ICUTokenizer it's char[4096].

But the JFlex tokenizers use char[16384] by default, which seems like overkill. I'm not sure we really see any performance benefit from such a huge default buffer size.

There is a JFlex parameter to set this: I think we should consider reducing it.
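
The parameter in question is presumably JFlex's %buffer directive, which controls the size of the char[] the generated scanner allocates. As a back-of-the-envelope comparison of the two defaults (a minimal sketch, assuming 2 bytes per char and ignoring object headers and the tokenizers' other state):

    // Hypothetical footprint estimate for a single tokenizer instance.
    public class BufferFootprint {
        public static void main(String[] args) {
            int jflexDefault = 16384;   // JFlex-generated tokenizers
            int charTokenizer = 4096;   // CharTokenizer/ICUTokenizer
            System.out.println("jflex default: " + (jflexDefault * 2 / 1024) + " KB");  // 32 KB
            System.out.println("char[4096]:    " + (charTokenizer * 2 / 1024) + " KB"); //  8 KB
        }
    }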

In a configuration like Solr, tokenizers are reused per-thread-per-field, so these buffers can easily stack up in RAM.
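
To make that concrete, here is a rough worked example with made-up numbers (the thread and field counts below are assumptions, not taken from any real configuration):

    // One cached tokenizer per thread per field, each holding a 32 KB scan buffer.
    public class StackedBuffers {
        public static void main(String[] args) {
            int bufferBytes = 16384 * 2;  // char[16384] at 2 bytes per char
            int threads = 32;             // hypothetical indexing threads
            int fields = 50;              // hypothetical analyzed text fields
            long total = (long) bufferBytes * threads * fields;
            System.out.println(total / (1024 * 1024) + " MB of idle scan buffers"); // 50 MB
        }
    }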

Additionally, CharFilters are not reused, so the configuration in e.g. HTMLStripCharFilter might not be great, since it's per-document garbage.
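
For context, the reuse difference maps onto the Analyzer API roughly like this (a minimal sketch assuming the Lucene 4.x signatures, not a proposed fix): the TokenStreamComponents built in createComponents() are cached and reused, while initReader() runs on every tokenStream() call, so a CharFilter allocated there becomes per-document garbage along with whatever buffers it holds.

    import java.io.Reader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.charfilter.HTMLStripCharFilter;
    import org.apache.lucene.analysis.standard.StandardTokenizer;
    import org.apache.lucene.util.Version;

    public class HtmlAwareAnalyzer extends Analyzer {
        @Override
        protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
            // Reused across documents (per thread, and per field in a Solr-style setup).
            return new TokenStreamComponents(new StandardTokenizer(Version.LUCENE_40, reader));
        }

        @Override
        protected Reader initReader(String fieldName, Reader reader) {
            // Called for every document: a fresh CharFilter (and its buffers) each time.
            return new HTMLStripCharFilter(reader);
        }
    }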

People

    • Assignee: Unassigned
    • Reporter: rcmuir (Robert Muir)
    • Votes: 0
    • Watchers: 1
