Lucene - Core
LUCENE-5218

background merge hit exception && Caused by: java.lang.ArrayIndexOutOfBoundsException

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 4.4
    • Fix Version/s: 4.5, 5.0
    • Component/s: core/index
    • Labels:
      None
    • Environment:

      Linux MMapDirectory.

    • Lucene Fields:
      New

      Description

      forceMerge(80)
      ==============================
      Caused by: java.io.IOException: background merge hit exception: _3h(4.4):c79921/2994 _3vs(4.4):c38658 _eq(4.4):c38586 _h1(4.4):c37370 _16k(4.4):c36591 _j4(4.4):c34316 _dx(4.4):c30550 _3m6(4.4):c30058 _dl(4.4):c28440 _d8(4.4):c19599 _dy(4.4):c1500/75 _h2(4.4):c1500 into _3vt [maxNumSegments=80]
      at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1714)
      at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1650)
      at com.xxx.yyy.engine.lucene.LuceneEngine.flushAndReopen(LuceneEngine.java:1295)
      ... 4 more
      Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
      at org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
      at org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
      at org.apache.lucene.codecs.DocValuesConsumer$2$1.setNext(DocValuesConsumer.java:239)
      at org.apache.lucene.codecs.DocValuesConsumer$2$1.hasNext(DocValuesConsumer.java:201)
      at org.apache.lucene.codecs.lucene42.Lucene42DocValuesConsumer.addBinaryField(Lucene42DocValuesConsumer.java:218)
      at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:110)
      at org.apache.lucene.codecs.DocValuesConsumer.mergeBinaryField(DocValuesConsumer.java:186)
      at org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:171)
      at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
      at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3772)
      at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3376)
      at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
      at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)

      ===============

      Attachments

      1. lucene44-LUCENE-5218.zip
        8.59 MB
        Littlestar
      2. LUCENE-5218.patch
        3 kB
        Michael McCandless

        Activity

        Adrien Grand made changes -
        Status Resolved [ 5 ] Closed [ 6 ]
        Adrien Grand added a comment -

        4.5 release -> bulk close

        ASF subversion and git services added a comment -

        Commit 1526579 from Michael McCandless in branch 'dev/trunk'
        [ https://svn.apache.org/r1526579 ]

        LUCENE-5218: add CHANGES

        ASF subversion and git services added a comment -

        Commit 1526577 from Michael McCandless in branch 'dev/branches/branch_4x'
        [ https://svn.apache.org/r1526577 ]

        LUCENE-5218: add CHANGES

        ASF subversion and git services added a comment -

        Commit 1526575 from Michael McCandless in branch 'dev/branches/lucene_solr_4_5'
        [ https://svn.apache.org/r1526575 ]

        LUCENE-5218: add CHANGES

        Adrien Grand made changes -
        Fix Version/s 4.5 [ 12324742 ]
        Fix Version/s 4.6 [ 12324999 ]
        Michael McCandless made changes -
        Status Open [ 1 ] Resolved [ 5 ]
        Fix Version/s 5.0 [ 12321663 ]
        Fix Version/s 4.6 [ 12324999 ]
        Resolution Fixed [ 1 ]
        Michael McCandless added a comment -

        I also committed the fix to 45x branch; if we re-spin a new RC then I'll update the fix version here.

        ASF subversion and git services added a comment -

        Commit 1526546 from Michael McCandless in branch 'dev/branches/lucene_solr_4_5'
        [ https://svn.apache.org/r1526546 ]

        LUCENE-5218: fix exception when trying to read a 0-byte BinaryDocValues field

        ASF subversion and git services added a comment -

        Commit 1526538 from Michael McCandless in branch 'dev/trunk'
        [ https://svn.apache.org/r1526538 ]

        LUCENE-5218: fix exception when trying to read a 0-byte BinaryDocValues field

        ASF subversion and git services added a comment -

        Commit 1526529 from Michael McCandless in branch 'dev/branches/branch_4x'
        [ https://svn.apache.org/r1526529 ]

        LUCENE-5218: fix exception when trying to read a 0-byte BinaryDocValues field

        Littlestar added a comment -

        Patch tested OK.
        Please commit it to trunk, branch_4x, and the 4.5 branch, thanks.

        Checking only these segments: _d8:
        44 of 54: name=_d8 docCount=19599
        codec=hybaseStd42x
        compound=true
        numFiles=3
        size (MB)=9.559
        diagnostics =

        {timestamp=1379167874407, mergeFactor=22, os.version=2.6.32-358.el6.x86_64, os=Linux, lucene.version=4.4.0 1504776 - sarowe - 2013-07-19 02:49:47, source=merge, os.arch=amd64, mergeMaxNumSegments=1, java.version=1.7.0_25, java.vendor=Oracle Corporation}

        no deletions
        test: open reader.........OK
        test: fields..............OK [29 fields]
        test: field norms.........OK [4 fields]
        test: terms, freq, prox...OK [289268 terms; 3096641 terms/docs pairs; 689694 tokens]
        test: stored fields.......OK [408046 total field count; avg 20.82 fields per doc]
        test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
        test: docvalues...........OK [0 total doc count; 13 docvalues fields]

        No problems were detected with this index.

        Michael McCandless made changes -
        Attachment LUCENE-5218.patch [ 12605084 ]
        Michael McCandless added a comment -

        Patch w/ test and fix; I fixed it slightly differently, just returning immediately when length == 0.
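        A minimal sketch of that "return immediately when length == 0" approach, for reference only (an illustration, not the committed patch), applied to PagedBytes.Reader.fillSlice:

        public void fillSlice(BytesRef b, long start, int length) {
          assert length >= 0: "length=" + length;
          assert length <= blockSize+1: "length=" + length;
          if (length == 0) {
            // Nothing to copy: when the last value is zero-length, start can sit
            // exactly at the end of the paged data (start >> blockBits == blocks.length),
            // so indexing blocks[] below would throw ArrayIndexOutOfBoundsException.
            b.length = 0;
            return;
          }
          final int index = (int) (start >> blockBits);
          final int offset = (int) (start & blockMask);
          b.length = length;
          if (blockSize - offset >= length) {
            // Within block
            b.bytes = blocks[index];
            b.offset = offset;
          } else {
            // Split across two blocks
            b.bytes = new byte[length];
            b.offset = 0;
            System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
            System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, length-(blockSize-offset));
          }
        }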

        Michael McCandless added a comment -

        Thanks Littlestar, I'm able to reproduce this with a small test case ... I'll add a patch shortly.

        Michael McCandless made changes -
        Assignee Michael McCandless [ mikemccand ]
        Littlestar added a comment -

        Maybe the binary doc value length = 0.
        My app converts Strings to byte[] and adds them as BinaryDocValues.

        The above patch works for me.
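        For illustration, a hypothetical reduction of that scenario against the Lucene 4.4 API (the class and field name "payload" are made up). It only shows how a zero-length binary doc value can get into the index; whether it actually trips the exception depends on the value's start offset landing exactly on a PagedBytes block boundary, as in the debug values Littlestar posted (start=131072, blockSize=65536):

        import org.apache.lucene.analysis.core.KeywordAnalyzer;
        import org.apache.lucene.document.BinaryDocValuesField;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.store.RAMDirectory;
        import org.apache.lucene.util.BytesRef;
        import org.apache.lucene.util.Version;

        public class ZeroLengthBinaryDV {
          public static void main(String[] args) throws Exception {
            RAMDirectory dir = new RAMDirectory();
            IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_44, new KeywordAnalyzer());
            IndexWriter writer = new IndexWriter(dir, iwc);

            Document doc = new Document();
            byte[] bytes = "".getBytes("UTF-8");  // empty String -> zero-length byte[]
            doc.add(new BinaryDocValuesField("payload", new BytesRef(bytes)));
            writer.addDocument(doc);

            writer.forceMerge(1);  // merging reads the binary doc values back
            writer.close();
            dir.close();
          }
        }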

        Michael McCandless added a comment -

        Hmm are you adding length=0 binary doc values? It sounds like this could be a bug in that case, when the start aligns with the block boundary.

        Littlestar added a comment - - edited
        public void fillSlice(BytesRef b, long start, int length) {
          assert length >= 0: "length=" + length;
          assert length <= blockSize+1;
          final int index = (int) (start >> blockBits);
          final int offset = (int) (start & blockMask);
          b.length = length;
          if (blockSize - offset >= length) {
            // Within block
            b.bytes = blocks[index];  // the java.lang.ArrayIndexOutOfBoundsException is thrown here
            b.offset = offset;
          } else {
            // Split
            b.bytes = new byte[length];
            b.offset = 0;
            System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
            System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, length-(blockSize-offset));
          }
        }
        

        I debugged into the code above. When the java.lang.ArrayIndexOutOfBoundsException occurs:
        b=[], start=131072, length=0
        index=2, offset=0, blockSize=65536, blockBits=16, blockMask=65535, blocks.length=2
        So index = (int)(131072 >> 16) = 2, which equals blocks.length, i.e. blocks[index] points one past the last block: start sits exactly at the end of the paged data and length is 0, so there is nothing to read.

        My patch:

        public void fillSlice(BytesRef b, long start, int length) {
          assert length >= 0: "length=" + length;
          assert length <= blockSize+1: "length=" + length;
          final int index = (int) (start >> blockBits);
          final int offset = (int) (start & blockMask);
          b.length = length;

        + if (index >= blocks.length) {
        +   // Outside block
        +   b.bytes = EMPTY_BYTES;
        +   b.offset = b.length = 0;
        + } else if (blockSize - offset >= length) {
            // Within block
            b.bytes = blocks[index];
            b.offset = offset;
          } else {
            // Split
            b.bytes = new byte[length];
            b.offset = 0;
            System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
            System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, length-(blockSize-offset));
          }
        }
        
        
        Littlestar made changes -
        Attachment lucene44-LUCENE-5218.zip [ 12604772 ]
        Littlestar added a comment - - edited

        I attached the test index and test code.
        Files in test_fail_index were deleted except those for segment "_d8". (The full index size is 5 GB with 54 segments; only segment "_d8" is needed for the test case.)

        java -cp ./bin;./lib/lucene-core-4.4.0.jar;./lib/lucene-codecs-4.4.0.jar org.apache.lucene.index.CheckIndex -segment _d8 ./test_fail_index

        Thanks.

        Littlestar added a comment - - edited

        org.apache.lucene.index.CheckIndex -segment _d8 /tmp/backup/indexdir
        failed again.

        How can I narrow it down? Thanks.
        I reduced the index to the minimal files containing _d8, and CheckIndex still fails with the above errors.
        Can I upload the following index files? (32 MB)

         
        segments_54d
        *.si
        _d8.cfe
        _d8.cfs
        
        Littlestar added a comment - - edited

        I can't narrow it down starting from an empty index,
        but I kept a backup of the failed index (5 GB).

        I ran core\src\java\org\apache\lucene\index\CheckIndex.java;
        maybe just one segment is broken.

        44 of 54: name=_d8 docCount=19599
        codec=hybaseStd42x
        compound=true
        numFiles=3
        size (MB)=9.559
        diagnostics =

        {timestamp=1379167874407, mergeFactor=22, os.version=2.6.32-358.el6.x86_64, os=Linux, lucene.version=4.4.0 1504776 - sarowe - 2013-07-19 02:49:47, source=merge, os.arch=amd64, mergeMaxNumSegments=1, java.version=1.7.0_25, java.vendor=Oracle Corporation}

        no deletions
        test: open reader.........OK
        test: fields..............OK [29 fields]
        test: field norms.........OK [4 fields]
        test: terms, freq, prox...OK [289268 terms; 3096641 terms/docs pairs; 689694 tokens]
        test: stored fields.......OK [408046 total field count; avg 20.82 fields per doc]
        test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
        test: docvalues...........ERROR [2]
        java.lang.ArrayIndexOutOfBoundsException: 2
        at org.apache.lucene.util.PagedBytes$Reader.fillSlice(PagedBytes.java:92)
        at org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer$6.get(Lucene42DocValuesProducer.java:267)
        at org.apache.lucene.index.CheckIndex.checkBinaryDocValues(CheckIndex.java:1316)
        at org.apache.lucene.index.CheckIndex.checkDocValues(CheckIndex.java:1420)
        at org.apache.lucene.index.CheckIndex.testDocValues(CheckIndex.java:1291)
        at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:615)
        at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1854)
        FAILED
        WARNING: fixIndex() would remove reference to this segment; full exception:
        java.lang.RuntimeException: DocValues test failed
        at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:628)
        at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1854)

        45 of 54: name=_3xn docCount=215407

        Michael McCandless added a comment -

        Can you reproduce this, starting from a clean index? If so, try to narrow it down, e.g. try not passing any of those extra JVM args and see if it still repros?

        It's also quite possible it's a real bug somewhere in how binary DV fields are written/read! So if you can boil it down to a smallish test case then we can better narrow it down.

        Littlestar added a comment -

        JVM args:
        -Xss256k -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:GCPauseIntervalMillis=1000 -XX:NewRatio=2 -XX:SurvivorRatio=1
        Maybe UseG1GC has a bug?

        Michael McCandless added a comment -

        Don't use 7u40: there is apparently a JVM bug that can cause index corruption like this (LUCENE-5212).

        But 7u25 should be safe. If you use only 7u25, and start from a new index, you can reproduce this exception? Can you run CheckIndex on the resulting index and post the output?

        Littlestar added a comment - - edited

        My app continuously inserts records, maybe 10-10000 records per second.
        The Lucene index ends up with a lot of small segments, so I call forceMerge(80) before each call.
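        A rough sketch of that indexing pattern (the helper class and its arguments are hypothetical), just to show where the forceMerge(80) from the stack trace fits:

        import java.io.IOException;
        import java.util.List;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.index.IndexWriter;

        // Hypothetical helper mirroring the pattern described above: continuously
        // added documents produce many small segments, so the segment count is
        // capped with forceMerge(80) before committing.
        public final class FlushHelper {
          public static void flushAndMerge(IndexWriter writer, List<Document> batch) throws IOException {
            for (Document doc : batch) {
              writer.addDocument(doc);  // records arrive at roughly 10-10000 docs per second
            }
            writer.forceMerge(80);      // merge down to at most 80 segments (maxNumSegments=80)
            writer.commit();
          }
        }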

        Littlestar added a comment - - edited

        java version "1.7.0_25"
        I also built JDK 7u40 from openjdk-7u40-fcs-src-b43-26_aug_2013.zip.
        Both JDKs show the same problem.

        Michael McCandless added a comment -

        Which JVM are you using?

        Littlestar added a comment - - edited

        PagedBytes.java#fillSlice
        Maybe the start value is wrong?

         
        public void fillSlice(BytesRef b, long start, int length) {
          assert length >= 0: "length=" + length;
          assert length <= blockSize+1;
          final int index = (int) (start >> blockBits);
          final int offset = (int) (start & blockMask);
          b.length = length;
          if (blockSize - offset >= length) {
            // Within block
            b.bytes = blocks[index];
            b.offset = offset;
          } else {
            // Split
            b.bytes = new byte[length];
            b.offset = 0;
            System.arraycopy(blocks[index], offset, b.bytes, 0, blockSize-offset);
            System.arraycopy(blocks[1+index], 0, b.bytes, blockSize-offset, length-(blockSize-offset));
          }
        }
        
        
        Littlestar made changes -
        Description edited
        Littlestar created issue -

          People

          • Assignee:
            Michael McCandless
            Reporter:
            Littlestar
          • Votes:
            0
            Watchers:
            4
