Lucene - Core / LUCENE-7465

Add a PatternTokenizer that uses Lucene's RegExp implementation


    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 6.5, 7.0
    • Component/s: None
    • Labels: None
    • Lucene Fields: New


      I think there are some nice benefits to a version of PatternTokenizer that uses Lucene's RegExp implementation instead of the JDK's:

      • Lucene's RegExp is compiled to a DFA up front, so if a "too hard" RegExp is attempted, the user discovers it at construction time instead of later on, when an unlucky document arrives
      • It processes the incoming characters as a stream, pulling only 128 characters at a time, unlike the existing PatternTokenizer, which reads the entire input up front (this has caused heap problems in the past)
      • It should be fast, since DFA matching runs in linear time with no backtracking.
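The streaming point above can be sketched in a few lines. This is a toy illustration, not Lucene's actual implementation: `stream_tokenize` and `is_token_char` are names invented here, and a simple character-class predicate stands in for the compiled DFA. The key property it shares with the description is that the input is consumed through a fixed 128-character buffer, so the whole document is never held in memory at once.

```python
import io

BUFFER_SIZE = 128  # mirrors the 128-char pull described above (illustrative choice)

def stream_tokenize(reader, is_token_char):
    """Yield maximal runs of token characters, reading BUFFER_SIZE chars at a time.

    A toy sketch of the streaming idea only; the full input is never
    materialized, and tokens may span buffer boundaries.
    """
    current = []
    while True:
        buf = reader.read(BUFFER_SIZE)
        if not buf:
            break
        for ch in buf:
            if is_token_char(ch):
                current.append(ch)
            elif current:
                yield "".join(current)
                current = []
    if current:  # flush a token that ends at end-of-input
        yield "".join(current)

tokens = list(stream_tokenize(io.StringIO("foo bar,baz"), str.isalpha))
```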

      I named it SimplePatternTokenizer; it still needs a factory and improved tests, but I think it's otherwise close.

      It currently does not take a group parameter, because Lucene's RegExps do not yet implement sub-group capture. I think we could add that at some point, but it's a bit tricky.

      It also has no group=-1 support (splitting on pattern matches, like String.split); if we add that, we should probably name it differently (SimplePatternSplitTokenizer?).
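The two semantics being contrasted can be shown with a small sketch. Python's backtracking `re` module stands in for Lucene's RegExp here, and the text and patterns are my own invented examples: under match semantics each regexp match is a token, while under group=-1 / String.split-style semantics each match is a separator and the text between matches becomes the tokens.

```python
import re

text = "a, b;; c"

# Match semantics (the SimplePatternTokenizer style described above):
# each match of the token pattern IS a token.
match_tokens = re.findall("[a-z]+", text)

# Split semantics (group=-1, like String.split):
# matches of the separator pattern delimit the tokens.
split_tokens = [t for t in re.split("[,;\\s]+", text) if t]
```

Both produce the same tokens for this input, but the patterns play opposite roles, which is why a separately named tokenizer for the split case seems reasonable.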


        1. LUCENE-7465.patch
          55 kB
          Michael McCandless
        2. LUCENE-7465.patch
          33 kB
          Michael McCandless



            Assignee: mikemccand Michael McCandless
            Reporter: mikemccand Michael McCandless
            Votes: 0
            Watchers: 7