Details

    • Type: Task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 4.0-BETA, 5.0
    • Component/s: modules/analysis
    • Labels: None
    • Lucene Fields: New

      Description

      Spinoff from SOLR-3684.

      Most Lucene tokenizers have some buffer size, e.g. in CharTokenizer/ICUTokenizer it's char[4096].

      But the JFlex tokenizers use char[16384] by default, which seems like overkill. I'm not sure we really see any performance benefit from having such a huge buffer size as the default.

      There is a JFlex option to set this; I think we should consider reducing it.
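
      For context, here is a minimal sketch of the buffer handling a JFlex-generated
      scanner ends up with (the ZZ_BUFFERSIZE/zzBuffer names follow JFlex's generated-code
      conventions and the %buffer directive is recalled from the JFlex manual, so treat
      the details as assumptions rather than actual Lucene source):

      public final class GeneratedScannerSketch {
        /** Default lookahead buffer size in the generated scanner; this is the 16K value in question. */
        private static final int ZZ_BUFFERSIZE = 16384;

        /** Allocated once per scanner instance and held for the scanner's lifetime. */
        private char[] zzBuffer = new char[ZZ_BUFFERSIZE];

        // In the .jflex grammar, a directive along the lines of "%buffer 4096" (assumption:
        // check the JFlex manual for the exact spelling) would shrink the generated constant.
      }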

      In a configuration like Solr, tokenizers are reused per-thread-per-field, so these buffers can easily stack up in RAM.
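
      To make the footprint concrete, a back-of-envelope sketch (the thread and field
      counts are invented for illustration; only the 16384 vs. 4096 char buffer sizes come
      from this issue, and a Java char is 2 bytes):

      public final class BufferFootprint {
        public static void main(String[] args) {
          int threads = 32;               // assumption: indexing threads in a Solr-like setup
          int fields  = 50;               // assumption: analyzed fields in the schema
          long jflexBytes   = 16384L * 2; // current JFlex default buffer, in bytes
          long smallerBytes = 4096L * 2;  // CharTokenizer/ICUTokenizer-sized buffer, in bytes

          // With per-thread-per-field reuse, one buffer is held per (thread, field) pair.
          System.out.printf("16K char buffers: %d KB held%n", threads * fields * jflexBytes / 1024);   // 51200 KB
          System.out.printf(" 4K char buffers: %d KB held%n", threads * fields * smallerBytes / 1024); // 12800 KB
        }
      }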

      Additionally, CharFilters are not reused, so the buffer configuration in e.g. HTMLStripCharFilter might not be great either, since it becomes per-document garbage.
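
      To illustrate that asymmetry, a hypothetical analyzer written against the Lucene 4.x
      Analyzer API (createComponents vs. initReader); the class is made up for this sketch
      and is not part of any patch here:

      import java.io.Reader;
      import org.apache.lucene.analysis.Analyzer;
      import org.apache.lucene.analysis.Tokenizer;
      import org.apache.lucene.analysis.charfilter.HTMLStripCharFilter;
      import org.apache.lucene.analysis.core.WhitespaceTokenizer;
      import org.apache.lucene.util.Version;

      public final class HtmlAwareAnalyzer extends Analyzer {
        @Override
        protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
          // Built once per thread/field and then reused, so this Tokenizer's buffer is long-lived.
          Tokenizer source = new WhitespaceTokenizer(Version.LUCENE_40, reader);
          return new TokenStreamComponents(source);
        }

        @Override
        protected Reader initReader(String fieldName, Reader reader) {
          // Called for every document: a fresh HTMLStripCharFilter (and its internal buffer)
          // is allocated each time, which is why an oversized buffer there is per-document garbage.
          return new HTMLStripCharFilter(reader);
        }
      }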

        Activity

        No work has yet been logged on this issue.

          People

          • Assignee: Unassigned
          • Reporter: Robert Muir
          • Votes: 0
          • Watchers: 2
