Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.0
    • Component/s: modules/analysis
    • Labels: None
    • Lucene Fields: New, Patch Available

    Description

      If I use LUCENE_VERSION >= 2.9 with the smart Chinese analyzer, it will crash IndexWriter with any reasonable amount of Chinese text.

      It's especially annoying because it happens in the 2.9.1 RC as well.

      This is because the position increments for tokens after stopwords are bogus.

      Here's an example (from the test case) where the position increment should be 2, but is instead 91975314!

        public void testChineseStopWords2() throws Exception {
          Analyzer ca = new SmartChineseAnalyzer(Version.LUCENE_CURRENT); /* will load stopwords */
          String sentence = "Title:San"; // : is a stopword
          String result[] = { "titl", "san"};
          int startOffsets[] = { 0, 6 };
          int endOffsets[] = { 5, 9 };
          int posIncr[] = { 1, 2 };
          assertAnalyzesTo(ca, sentence, result, startOffsets, endOffsets, posIncr);
        }
      

      junit.framework.AssertionFailedError: posIncrement 1 expected:<2> but was:<91975314>
      at junit.framework.Assert.fail(Assert.java:47)
      at junit.framework.Assert.failNotEquals(Assert.java:280)
      at junit.framework.Assert.assertEquals(Assert.java:64)
      at junit.framework.Assert.assertEquals(Assert.java:198)
      at org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:83)
      ...

      Attachments

        1. LUCENE-2014_branch.patch
          3 kB
          Robert Muir
        2. LUCENE-2014.patch
          2 kB
          Robert Muir
        3. LUCENE-2014.patch
          1 kB
          Robert Muir

        Activity

          rcmuir Robert Muir added a comment -

          This patch only contains a test case demonstrating the problem.

          uschindler Uwe Schindler added a comment -

          Maybe we should now use BaseTokenStreamTestCase (which no longer uses the old/new TokenStream API) to test all Version constants. That is easy in 3.0 (because Version is an enum now), and you can iterate using for (Version v : Version.values()).

          I proposed this already for Highlighter (see other issue).

          Is it a problem in StopWordFilter?

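          For reference, a minimal sketch of the per-Version loop Uwe suggests (the test name is made up, and the expectations are copied from testChineseStopWords2 in the description; a real test might need per-version expected values, since analyzer behavior can change with Version):

          public void testChineseStopWordsAllVersions() throws Exception {
            for (Version v : Version.values()) {
              Analyzer ca = new SmartChineseAnalyzer(v); /* will load stopwords */
              assertAnalyzesTo(ca, "Title:San",
                  new String[] { "titl", "san" },
                  new int[] { 0, 6 },
                  new int[] { 5, 9 },
                  new int[] { 1, 2 });
            }
          }
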
          rcmuir Robert Muir added a comment -

          Is it a problem in StopWordFilter?

          I don't really know where it is, to tell you the truth... I spent a little time trying to create an English test case for StopFilter, but couldn't reproduce it there.

          smartcn doesn't even touch position increment attributes, so it's really weird...

          uschindler Uwe Schindler added a comment -

          I do not see the problem, from StopFilter:

          @Override
          public final boolean incrementToken() throws IOException {
            // return the first non-stop word found
            int skippedPositions = 0;
            while (input.incrementToken()) {
              if (!stopWords.contains(termAtt.termBuffer(), 0, termAtt.termLength())) {
                if (enablePositionIncrements) {
                  posIncrAtt.setPositionIncrement(posIncrAtt.getPositionIncrement() + skippedPositions);
                }
                return true;
              }
              skippedPositions += posIncrAtt.getPositionIncrement();
            }
            // reached EOS -- return false
            return false;
          }
          

          The problem can only be that the input filter returned some big posIncr for the stop word. The code seems very clear to me. Let's debug.

          rcmuir Robert Muir added a comment -

          Uwe, check this out.

          smartcn doesn't use PositionIncrementAttribute, but its tokenizer does call clearAttributes() as it should.

          But if I modify WordTokenFilter to set the position increment to 1:

          posIncAtt = addAttribute(PositionIncrementAttribute.class);
          ...
          posIncAtt.setPositionIncrement(1);

          then the test passes... basically an uninitialized-variable problem... but smartcn shouldn't have to do this, right?

          uschindler Uwe Schindler added a comment -

          Hm hm
          But the StopFilter also adds the attribute and therefore the clearAttributes call should clear it.

          I'll look into it.

          rcmuir Robert Muir added a comment -

          Duh, this is the problem, Uwe.

          WordTokenFilter is like a source of tokens, even though it is not a tokenizer.

          This is because smartcn's tokenizer just breaks out sentences... WordTokenFilter breaks these into words.

          So I think WordTokenFilter must call clearAttributes()...?

          rcmuir Robert Muir added a comment -

          This patch adds clearAttributes to the Chinese WordTokenFilter and fixes the issue.

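          For readers following along, a minimal sketch of the shape of the fix, not the committed patch: a filter that splits one big token into many acts as a source of new tokens, so it clears all attributes before populating the ones it knows about. The class name, the Word type, and the segment() helper are hypothetical stand-ins for smartcn's WordTokenFilter and its segmenter (Lucene 2.9-era attribute API):

          import java.io.IOException;
          import java.util.Collections;
          import java.util.Iterator;
          import java.util.List;

          import org.apache.lucene.analysis.TokenFilter;
          import org.apache.lucene.analysis.TokenStream;
          import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
          import org.apache.lucene.analysis.tokenattributes.TermAttribute;

          public final class SentenceSplittingFilter extends TokenFilter {
            private final TermAttribute termAtt = addAttribute(TermAttribute.class);
            private final OffsetAttribute offsetAtt = addAttribute(OffsetAttribute.class);
            private Iterator<Word> words; // words of the current sentence

            public SentenceSplittingFilter(TokenStream input) {
              super(input);
            }

            @Override
            public boolean incrementToken() throws IOException {
              while (words == null || !words.hasNext()) {
                if (!input.incrementToken()) {
                  return false; // no more sentences
                }
                words = segment(termAtt.term()).iterator();
              }
              Word w = words.next();
              // the crucial call from the patch: wipe stale attribute state
              // (e.g. a posIncr left over from the previous token) before
              // populating the attributes of the new word token
              clearAttributes();
              termAtt.setTermBuffer(w.text);
              offsetAtt.setOffset(w.start, w.end);
              return true;
            }

            // placeholder: the real filter runs smartcn's HHMM segmenter here
            private List<Word> segment(String sentence) {
              return Collections.emptyList();
            }

            /** Hypothetical word struct standing in for smartcn's SegToken. */
            private static final class Word {
              final String text;
              final int start, end;
              Word(String text, int start, int end) {
                this.text = text;
                this.start = start;
                this.end = end;
              }
            }
          }
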
          uschindler Uwe Schindler added a comment -

          This is the problem, you are right. I thought about that, too.

          The question is: why does the posIncr get such strange values even when the filter is the source of tokens? Nobody else modifies it?

          rcmuir Robert Muir added a comment -

          Uwe, yeah, the only thing modifying it should be StopFilter... so I can see the values being "kinda strange", but not as weird as what I see.

          I worry about this clearAttributes solution though; perhaps WordTokenFilter should use the captureState/restoreState API, like ThaiWordFilter does (a very similar analyzer).
          If I use capture/restoreState, this should not be a problem, right?

          And this way things like custom attributes would be preserved?

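          For context, a sketch of the capture/restoreState alternative Robert mentions (ThaiWordFilter-style), reusing the hypothetical names from the sketch above; it preserves custom attributes at the cost of copying the whole attribute state for every emitted word:

          private State sentenceState; // snapshot of the sentence token's attributes

          @Override
          public boolean incrementToken() throws IOException {
            while (words == null || !words.hasNext()) {
              if (!input.incrementToken()) {
                return false;
              }
              sentenceState = captureState(); // copies all attributes, custom ones included
              words = segment(termAtt.term()).iterator();
            }
            Word w = words.next();
            restoreState(sentenceState); // restore the snapshot for each word
            termAtt.setTermBuffer(w.text);
            offsetAtt.setOffset(w.start, w.end);
            // a real filter would still set posIncr explicitly for the second
            // and later words of a sentence
            return true;
          }
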
          uschindler Uwe Schindler added a comment -

          Hihi, I know where the strange values come from: it is the test in BaseTokenStreamTestCase itself that does it, to check for missing clearAttributes; see assertTokenStreamContents... It sets all attributes to bogus values before calling incrementToken. If you do not clear the attributes, the bogus values stay there.

          But the question is, why does IndexWriter fail (how does it fail?). Normally it should not be affected, as the posIncr stays 1.

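          A minimal sketch of that check (illustrative, not the exact BaseTokenStreamTestCase code; ts is the stream under test and posIncr the expected increments). The bogus constant is inferred: 91975314 in the failure above is exactly 2 × 45987657, which fits StopFilter adding the skipped stopword's stale increment to the following token's equally stale increment:

          PositionIncrementAttribute posIncrAtt =
              ts.getAttribute(PositionIncrementAttribute.class);
          for (int i = 0; i < output.length; i++) {
            // poison the attribute before each call: a correct token source
            // overwrites it, a filter missing clearAttributes() leaks it through
            posIncrAtt.setPositionIncrement(45987657);
            assertTrue("token " + i + " exists", ts.incrementToken());
            assertEquals("posIncrement " + i, posIncr[i], posIncrAtt.getPositionIncrement());
          }
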
          rcmuir Robert Muir added a comment -

          Maybe we should now use BaseTokenStreamTestCase (which no longer uses the old/new TokenStream API) to test all Version constants. That is easy in 3.0 (because Version is an enum now), and you can iterate using for (Version v : Version.values()).

          This might be a good idea, although the behavior of the analyzer could change depending upon Version. Maybe it is best to actually test the different possibilities explicitly?

          I think after this one is resolved, I will open a task as a first step to improve the tests of these analyzers to test posIncr as well, because I don't see it tested for similar cases like Thai.

          rcmuir Robert Muir added a comment - edited

          But the question is, why does IndexWriter fail (how does it fail?). Normally it should not be affected, as the posIncr stays 1.

          Oh, IndexWriter fails because of integer overflow with any large document (lots of posIncrs get added up, overflow, and create a negative posIncr), so the negative posIncr causes an exception.

          <edit>

          Uwe, I think this really happens especially because of the way smartcn works.
          smartcn creates individual tokens for each piece of punctuation (including things like whitespace) and puts these in the stopword list.
          So if you have a Chinese document with lots of spaces... you can imagine how it can add up and overflow.

          uschindler Uwe Schindler added a comment -

          Ah, understood: because nobody resets the posIncr back to 1, it adds up in a 2^n manner. StopFilter updates it to 2 because of a stopword; after that nobody resets it back to 1, so it becomes 2, 4, 8, ... bang, if more stopwords occur.

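          A standalone demo of that blow-up, just the skippedPositions arithmetic from the StopFilter snippet above, assuming one stopword between consecutive kept tokens:

          public class PosIncrOverflowDemo {
            public static void main(String[] args) {
              int posIncr = 1;
              for (int stopwords = 1; stopwords <= 32; stopwords++) {
                int skipped = posIncr;       // the skipped stopword carries the stale value
                posIncr = posIncr + skipped; // StopFilter: stale increment + skippedPositions
                System.out.println(stopwords + " stopwords -> posIncr = " + posIncr);
                if (posIncr < 0) {           // overflow: the negative increment IndexWriter rejects
                  break;
                }
              }
            }
          }

          After 31 stopwords the increment overflows to a negative value, which is why a large Chinese document, full of punctuation stopwords, reliably crashes IndexWriter.
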
          rcmuir Robert Muir added a comment -

          Uwe, exactly. So the only remaining question is: do you think I should change this filter to use the capture/restoreState API instead of clearAttributes?

          I guess the only advantage would be that it would preserve any custom attributes or payloads that someone might add after the SentenceTokenizer but before the WordTokenFilter, propagating them down to the individual words.

          uschindler Uwe Schindler added a comment - edited

          I worry about this clearAttributes solution though; perhaps WordTokenFilter should use the captureState/restoreState API, like ThaiWordFilter does (a very similar analyzer).

          If I use capture/restoreState, this should not be a problem, right?

          I think the filter is fine how it is at the moment. The problem is only the missing clearAttributes when you produce more than one token out of one big one (the sentence). There is no need for captureState, because the tokens are new ones. If somebody adds custom attributes, they would be cleared, but wouldn't that be correct?

          I guess the only advantage would be that it would preserve any custom attributes or payloads that someone might add after the SentenceTokenizer but before the WordTokenFilter, propagating them down to the individual words.

          Does it make sense to insert a filter between the two? The transition from sentence tokens to word tokens creates totally different tokens; how could a payload or other custom attribute work correctly here? Normally such payload filters should be inserted after the WordTokenFilter. The problem with capture/restoreState is additional copy cost for nothing (the long sentence token is copied again and again and always reset to the current word).

          rcmuir Robert Muir added a comment -

          I think the filter is fine how it is at the moment. The problem is only the missing clearAttributes when you produce more than one token out of one big one (the sentence). There is no need for captureState, because the tokens are new ones. If somebody adds custom attributes, they would be cleared, but wouldn't that be correct?

          Not really sure, that's why I asked you.

          I guess for now, it's good enough to fix it so it doesn't crash IndexWriter.

          I will commit soon.

          uschindler Uwe Schindler added a comment -

          I will commit soon.

          +1

          And how about 2.9.1?

          rcmuir Robert Muir added a comment -

          And how about 2.9.1?

          I will upload and test a patch against the 2.9 branch (can you commit it for me?).

          uschindler Uwe Schindler added a comment -

          ok.

          rcmuir Robert Muir added a comment -

          Committed revision 830871 to trunk.

          I will test this against 2.9 and upload a patch.

          Thanks for your help Uwe.

          uschindler Uwe Schindler added a comment -

          No prob. I also forgot about the bogus values set by BaseTokenStreamTestCase... But there is no good way to test/document this.

          rcmuir Robert Muir added a comment -

          patch against 2.9 branch

          mikemccand Michael McCandless added a comment -

          Guys, how serious is this issue? Should we respin 2.9.1?

          uschindler Uwe Schindler added a comment -

          I merged your changes into 2.9. I can commit, no need for a patch!

          rcmuir Robert Muir added a comment -

          Mike, it's up to you.

          I was just analyzing some not-ridiculously-large Chinese texts from Gutenberg, when I hit the issue.

          The problem is that smartcn indexes punctuation as individual tokens but filters them out with StopFilter (its stopword list is all punctuation).
          This means it makes heavy use of StopFilter compared to other analyzers.

          uschindler Uwe Schindler added a comment -

          I also merged BaseTokenStreamTestCase back, because the bogus-values setter was missing in 2.9. Now the tests produce the same results.

          Will commit soon.

          rcmuir Robert Muir added a comment - edited

          I also merged BaseTokenStreamTestCase back, because the bogus-values setter was missing in 2.9. Now the tests produce the same results.

          Good deal... I didn't test the bug with the JUnit test against 2.9, but my IndexWriter threw the exception if I used Version.LUCENE_29 (with the RC2 jars), so I knew it was affected.

          uschindler Uwe Schindler added a comment -

          Committed in 2.9, revision: 830876

          I think you can close the issue. We should ask Mike to create a new RC; then we also have the other bug fixed in 2.9 (which I resolved yesterday). Mike then only has to move the CHANGES entries down to 2.9.1 in contrib/CHANGES.txt.

          The other problem still in 2.9 is the default for posIncr in StopFilter when the version is < 2.9, which is now always false for the no-arg StandardAnalyzer constructor and others.

          rcmuir Robert Muir added a comment -

          Thanks again, Uwe.

          rcmuir Robert Muir added a comment -

          Does it make sense to insert a filter between the two? The transition from sentence tokens to word tokens creates totally different tokens; how could a payload or other custom attribute work correctly here? Normally such payload filters should be inserted after the WordTokenFilter. The problem with capture/restoreState is additional copy cost for nothing (the long sentence token is copied again and again and always reset to the current word).

          I could imagine a use case where a person wants to keep the sentence information intact (perhaps to improve retrieval accuracy, or maybe just to restrict phrase queries to match within sentences).
          But I guess to some extent, Chinese phrase queries already work pretty intelligently with >= Version.LUCENE_29, because punctuation is a stopword and the position increments are adjusted.

          I agree about the copy cost though... best to leave it for now. But this is the way the Thai analyzer works.

          tomoko Tomoko Uchida added a comment -

          This issue was moved to GitHub issue: #3089.


          People

            Assignee:
            rcmuir Robert Muir
            Reporter:
            rcmuir Robert Muir
            Votes:
            0
            Watchers:
            0

            Dates

              Created:
              Updated:
              Resolved: