SOLR-4909: Solr and IndexReader Re-opening on Replication Slave

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 4.3
    • Fix Version/s: 4.5, 6.0
    • Component/s: replication (java), search
    • Labels: None

      Description

      I've been experimenting with caching filter data per segment in Solr using a CachingWrapperFilter & FilteredQuery within a custom query parser (as suggested by Yonik Seeley in SOLR-3763) and encountered situations where the value of getCoreCacheKey() on the AtomicReader for each segment can change for a given segment on disk when the searcher is reopened. As CachingWrapperFilter uses the value of the segment's getCoreCacheKey() as the key in the cache, there are situations where the data cached on that segment is not reused when the segment on disk is still part of the index. This affects the Lucene field cache and field value caches as well as they are cached per segment.
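The cache-key behavior described above can be illustrated without Lucene. The `Segment` class below is a hypothetical stand-in for a segment reader, not a Lucene API: `getCoreCacheKey()` returns an opaque object compared by identity, so if reopening recreates that object, per-segment cache entries are lost even though the segment on disk is unchanged.

```java
import java.util.IdentityHashMap;
import java.util.Map;

public class CoreCacheKeyDemo {
    // Hypothetical stand-in for a segment reader; the instance itself plays
    // the role of the identity-compared core cache key.
    static final class Segment {
        final String name;
        Segment(String name) { this.name = name; }
        Object coreCacheKey() { return this; }
    }

    public static void main(String[] args) {
        // Per-segment cache keyed by identity, as CachingWrapperFilter does.
        Map<Object, String> cache = new IdentityHashMap<>();

        Segment seg = new Segment("_0");
        cache.put(seg.coreCacheKey(), "cached filter bits");

        // Reopen that reuses the same core object: cache hit.
        System.out.println(cache.containsKey(seg.coreCacheKey())); // true

        // Reopen that recreates the core object (same data on disk,
        // new instance): the identity key differs, so the cache misses.
        Segment reopened = new Segment("_0");
        System.out.println(cache.containsKey(reopened.coreCacheKey())); // false
    }
}
```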

      When Solr first starts it opens the searcher's underlying DirectoryReader in StandardIndexReaderFactory.newReader by calling DirectoryReader.open(indexDir, termInfosIndexDivisor), and the reader is subsequently reopened in SolrCore.openNewSearcher by calling DirectoryReader.openIfChanged(currentReader, writer.get(), true). The act of reopening the reader with the writer when it was first opened without a writer results in the value of getCoreCacheKey() changing on each of the segments even though some of the segments have not changed. Depending on the role of the Solr server, this has different effects:

      • On a SolrCloud node or free-standing index and search server the segment cache is invalidated during the first DirectoryReader reopen - subsequent reopens use the same IndexWriter instance and as such the value of getCoreCacheKey() on each segment does not change so the cache is retained.
      • For a master-slave replication setup, the segment cache invalidation occurs on the slave during every replication, as the index is reopened using a new IndexWriter instance, which changes the value of getCoreCacheKey() on each segment.

      I can think of a few approaches to alter the re-opening behavior to allow reuse of segment level caches in both cases, and I'd like to get some input on other ideas before digging in:

      • To change the cloud node/standalone first commit issue it might be possible to create the UpdateHandler and IndexWriter before the DirectoryReader, and use the writer to open the reader. There is a comment in the SolrCore constructor by Yonik Seeley that the searcher should be opened before the update handler so that may not be an acceptable approach.
      • To change the behavior of a slave in a replication setup, one solution would be to not open a writer from the SnapPuller when the new index is retrieved if the core is enabled as a slave only. The writer is needed on a server configured as both master and slave that is functioning as a replication repeater, so downstream slaves can see the changes in the index and retrieve them.

      I'll attach a unit test that demonstrates the behavior of reopening the DirectoryReader and its effects on the value of getCoreCacheKey. My assumption is that the behavior of Lucene during the various reader reopen operations is correct and that the changes are necessary on the Solr side of things.

      1. SOLR-4909_confirm_keys.patch
        19 kB
        Michael Garski
      2. SOLR-4909_fix.patch
        4 kB
        Michael Garski
      3. SOLR-4909_v2.patch
        9 kB
        Michael Garski
      4. SOLR-4909_v3.patch
        12 kB
        Michael Garski
      5. SOLR-4909.patch
        21 kB
        Robert Muir
      6. SOLR-4909.patch
        16 kB
        Robert Muir
      7. SOLR-4909-demo.patch
        6 kB
        Michael Garski

          Activity

          Michael Garski added a comment -

          Attaching unit test that demonstrates the effects on getCoreCacheKey() when the reader is opened in different ways. There are no asserts, just printlns on the segments of the reader. It is not meant to be merged into the code base, only to demonstrate. The patch was created on the lucene_solr_4_3 branch, revision 1490006

          Robert Muir added a comment -

          this analysis is correct: it's the same basic issue as SOLR-4764

          Michael Garski added a comment -

          Thanks Robert - I'll update the name of this issue to address the replication slave case as SOLR-4764 addresses the NRT case. I should have a patch for that in the next day or two.

          Mark Miller added a comment -

          The likely fix for SOLR-4764 is to just open the IW right away - I imagine that will solve this case as well.

          Michael Garski added a comment -

          Opening the writer during core initialization and using that to open the reader will not solve the replication case. Currently after the index changes are retrieved the writer is closed and reopened in SnapPuller.openNewWriterAndSearcher to be aware of the changes just pulled in from the master. When a reader is re-opened with a different writer the value of getCoreCacheKey changes for each segment resulting in a loss of any per-segment caches.

          An instance configured only as a replication slave is essentially read-only... should it even have a writer instance?

          Mark Miller added a comment -

          should it even have a writer instance?

          Doesn't really matter - the advantages certainly outweigh any cost.

          Currently after the index changes are retrieved the writer is closed and reopened in SnapPuller.openNewWriterAndSearcher to be aware of the changes just pulled in from the master.

          This depends - it doesn't reopen the writer to be aware of any changes if the same index dir is used. It opens a new indexwriter when the index directory is completely changed/moved - and I don't see that going away anytime soon.

          Michael Garski added a comment -

          It opens a new indexwriter when the index directory is completely changed/moved - and I don't see that going away anytime soon.

          That makes sense - new physical directory, new writer - I would not expect that to change.

          it doesn't reopen the writer to be aware of any changes if the same index dir is used

          That's not the behavior that is occurring when index deltas from the master are applied to the existing index directory. Here is a trace of the calls made in that case:

          SnapPuller.fetchLatestIndex(SolrCore core, forceReplication = false)
          SnapPuller.openNewWriterAndSearcher(isFullCopyNeeded = false) [isFullCopyNeeded is false as the index deltas are applied to the existing index directory]
          DirectUpdateHandler2.newIndexWriter(rollback = false) [isFullCopyNeeded is passed in as the value of the rollback parameter]
          DefaultSolrCoreState.newIndexWriter(SolrCore core, rollback = false)
          With the value of rollback == false the writer is now closed and a new one is created, resulting in the loss of all segment-level caches.

          It appears that when isFullCopyNeeded == false, the call to DefaultSolrCoreState.newIndexWriter should not be made; however, if that is changed to not open a new writer, a handful of the TestReplicationHandler tests fail.

          Mark Miller added a comment -

          Ah, right. This is how we are reopening the writer on the latest commit. It may be a bit more difficult, but there is the possibility it could be addressed.

          Michael Garski added a comment -

          Thanks for confirming my results Mark. I'll dig deeper into the test failures and come up with a few approaches to stop the loss of segment level caches on read-only slaves after replication.

          Mark Miller added a comment -

          As far as I remember, we used to commit to do this - which meant the same IndexWriter - I think I turned it into a reopen of the IW so that we wouldn't have a commit on the slave and cause versions/generations to no longer match the master (this type of thing was causing other problems). I guess ideally, we would be able to not commit, but reopen the latest commit point as if we had committed.

          Michael Garski added a comment -

          In experimenting with a fix I altered the SnapPuller to only open a new writer if it has moved to a new index directory (isFullCopyNeeded == true) or if the instance is configured to be a replication master, which made all of the existing tests pass except for doTestIndexAndConfigReplication. The failure occurs when comparing the index version retrieved from the replication handler via the commits in the 'details' command and the value returned from 'indexversion' command - indexversion returns the proper version however the details do not contain all of the commits as the IndexDeletionPolicy is not aware of them. I'm not sure what the potential side effects of this would be on a read-only slave.
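The approach described above — keeping the existing writer unless the index moved to a new directory — can be sketched with plain Java. `Writer` and `openWriter` below are hypothetical stand-ins for illustration, not Solr or Lucene APIs:

```java
public class WriterReuseDemo {
    // Hypothetical stand-in for an IndexWriter; only identity matters here.
    static final class Writer {}

    // Hypothetical helper modeling the experiment: only replace the writer
    // when the index moved to a new directory (a full copy was pulled).
    static Writer openWriter(Writer current, boolean isFullCopyNeeded) {
        if (isFullCopyNeeded || current == null) {
            return new Writer(); // new directory (or first open): new writer
        }
        return current; // deltas applied in place: keep writer, keep caches
    }

    public static void main(String[] args) {
        Writer w = openWriter(null, false);
        // Delta replication into the same directory: the same writer survives,
        // so readers reopened from it would keep their core cache keys.
        System.out.println(openWriter(w, false) == w); // true
        // Full copy into a new directory: a new writer is required.
        System.out.println(openWriter(w, true) == w);  // false
    }
}
```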

          Mark Miller added a comment -

          It seems like it would be nicer if the solution worked for 'repeaters' as well, but that could solve things for the slave case - the likely side effect is confusion if a user sees the numbers are different when they should align - I don't like that so much. But currently you have to commit or reopen the writer to pick up the newest commit.

          Michael Garski added a comment -

          Attached a fix that corrects the issue. Here is an overview of the fix:

          Simply having the slave not have a writer instance is not feasible as then old commit points would continue to pile up on the slave, eventually exhausting disk space. The writer is required to delete the old commit points via the deletion policy, and the only way for the writer to be aware of the new commit points retrieved from the master is to open a new writer, and if the reader is reopened with a different writer instance the segment-level caches are lost.

          To change this behavior, the reader is disconnected from the writer when explicitly configured to do so with a new parameter in the indexConfig section of the Solr config named createReaderFromWriter, which defaults to true to preserve the current behavior. If the value is explicitly set to false, which would normally only be done on a read-only slave, the reader is always initialized and re-opened from a directory instance and not a writer instance.

          There is logic in SolrCore.openNewSearcher to open a new reader rather than re-open should the underlying directory instance in the current reader not match that of the new index writer as that means that a full copy of the index was downloaded into a new directory, as would happen during replication if the slave's version was ahead of the master's.

          The patch was created on the lucene_solr_4_3 branch with all tests passing & I can create versions for other branches if needed.

          Mark Miller added a comment -

          the reader is always initialized and re-opened from a directory instance and not a writer instance.

          Have to consider this carefully considering SOLR-4764 likely aims to drop opening from a directory at all.

          Michael Garski added a comment -

          SOLR-4764 likely aims to drop opening from a directory at all.

          Could SOLR-4764 use the same config logic to determine how to open/re-open the reader? Default behavior would be to open the reader from the writer (as necessary for NRT), but explicitly configured non-NRT instances would not open from the writer. Short of adding a way to re-open an index writer on a new commit point without resulting in dumping existing segment caches, I'm not sure how else the replication slave case can be addressed.

          Steve Rowe added a comment -

          Bulk move 4.4 issues to 4.5 and 5.0

          Michael Garski added a comment -

          I've updated the patch to handle the changes made for SOLR-4764 (SOLR-4909_v2.patch). It works the same as the original patch I attached.

          Robert Muir added a comment -

          Hi Michael:

          So the idea here is an explicit option that allows not reopening from the indexwriter for these replication slaves (because a new IW is created when replication happens?)

          This piece concerns me:

          There is logic in SolrCore.openNewSearcher to open a new reader rather than re-open should the underlying directory instance in the current reader not match that of the new index writer as that means that a full copy of the index was downloaded into a new directory, as would happen during replication if the slave's version was ahead of the master's.

          +            // during a replication that pulls the complete index into a new physical directory
          +            // the reader cannot be reopened and must be newly opened using the same directory as the writer
          +            if(writer != null && !currentReader.directory().equals(writer.get().getDirectory())) {
          

          Are you sure this really does what you want? I don't think anybody implements/tests equals() on Directory implementations, and if so I'm not sure what the semantics would be. Looking at other stuff around this code that tries to do similar things, it seems they are comparing strings (representing the directory path).
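Robert's point can be demonstrated without Lucene: unless a class overrides equals(), comparison falls back to object identity, so two Directory-like objects wrapping the same on-disk path are never equal, while comparing their path strings works. `Dir` below is a hypothetical stand-in, not Lucene's Directory:

```java
import java.util.Objects;

public class DirectoryCompareDemo {
    // Hypothetical stand-in for a Directory implementation that, like most
    // real ones, does not override equals().
    static final class Dir {
        final String path;
        Dir(String path) { this.path = path; }
    }

    public static void main(String[] args) {
        Dir a = new Dir("/var/solr/data/index");
        Dir b = new Dir("/var/solr/data/index"); // same location, new instance

        // Default Object.equals(): identity comparison, false here even
        // though both instances point at the same path.
        System.out.println(a.equals(b)); // false

        // Comparing the path strings, as nearby Solr code does, behaves
        // as intended.
        System.out.println(Objects.equals(a.path, b.path)); // true
    }
}
```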

          Robert Muir added a comment -

          By the way (since I now look at your new config variable and I feel I somewhat made this situation worse/more confusing with SOLR-4764), I think it's not ideal to have two config variables:
          1. reopenReaders
          2. createReaderFromWriter

          Otherwise this would create 4 possibilities of behavior, and I don't see a use case for 2 of them. (Even if you are not using NRT and opening straight from the directory, why would you not want to reuse the same segments when it's possible?)

          I think it would be much easier if there was just a single config variable like "nrt=true/false" that determines whether new readers are opened from the directory or the indexwriter (IndexReaderFactory.newReader(Directory) vs IndexReaderFactory.newReader(IndexWriter)), and otherwise DirectoryReader.doOpenIfChanged(existingReader) is always called (which does the right thing because it remembers its 'type').

          This could also prevent further user confusion: e.g. if nrt=false, errors should be issued if someone tries to do softcommit or configure autosoftcommit in solrconfig.xml.

          Michael Garski added a comment -

          Thanks for the feedback Robert Muir! The use of a single config variable would be the simplest fix & I'll update my patch this week.

          Michael Garski added a comment -

          Updated patch attached (SOLR-4909_v3.patch)

          I re-named the 'reopenReaders' in the index config to 'nrtMode'. When nrtMode is set to true (the default), readers are opened from the writer. When set to false, readers are (re)opened from the directory.

          Patch applies to branch_4x.
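Based on the description above, the new setting would live in the indexConfig section of solrconfig.xml; a sketch (element placement assumed from the patch description):

```xml
<!-- solrconfig.xml (sketch): on a read-only replication slave, disable
     NRT mode so readers are (re)opened from the directory, preserving
     per-segment caches across replications -->
<indexConfig>
  <nrtMode>false</nrtMode>
</indexConfig>
```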

          Robert Muir added a comment -

          Thanks Michael!

          I added some tests to your patch (just expanded TestNRTOpen a bit, and added a non-NRT version of it called TestNonNRTOpen). I will think about some more tests to add (I wanted to get it back to you for now), like asserting it's actually fixing what you want to fix and that segments are shared. But for now I wanted some simple explicit tests in both cases for all the hair here, e.g. that we aren't going backwards on core reload, etc.

          This seemed to find a directory leak during core reload: I think it was due to the logic in the patch:

          if(getSolrConfig().nrtMode) {
            // if in NRT mode, need to open from the previous writer
            return DirectoryReader.open(iw, true);
          } else {
            // if not NRT, need to create a new reader from the directory
            String indexDir = getDirectoryFactory().normalize(getNewIndexDir());
            return DirectoryReader.open(directoryFactory.get(indexDir, DirContext.DEFAULT, getSolrConfig().indexConfig.lockType));
          }
          

          I changed the logic to the following and it seems to resolve the issue, since this code is only invoked when the iw != null anyway:

          if(getSolrConfig().nrtMode) {
            // if in NRT mode, need to open from the previous writer
            return DirectoryReader.open(iw, true);
          } else {
            // if not NRT, need to create a new reader from the directory
            return DirectoryReader.open(iw.getDirectory());
          }
          

          I think this is still technically wrong: since we are opening "new readers" we should be calling indexReaderFactory.newReader methods here?

          Michael Garski added a comment -

          Thanks Robert Muir, is the updated patch you attached for a different issue? It contains changes for TestPostingsHighlighterRanking.java and PassageScorer.java...

          Robert Muir added a comment -

          Uh oh... it's likely I screwed this up. Lemme fix

          Robert Muir added a comment -

          attached SOLR-4909.patch not LUCENE!

          Michael Garski added a comment -

          Thanks for the feedback Robert, I'll look into the additional tests as well.

          Michael Garski added a comment -

          I've updated the patch to include the initial directory opened via the core's indexReaderFactory & included a test that verifies the value of the core cache key's hash code after a commit.

          Robert Muir added a comment -

          Thanks Michael: at a glance the patch looks good to me.

          I wonder if we can improve the test: I'm a bit concerned that with random merge policies it might sporadically fail. Maybe we can change the test to use LogDocMergePolicy in its configuration and explicitly assert the segment structure.

          I'll take a closer look as soon as I have a chance: it's not your fault, the code around here is just a bit scary.

          Robert Muir added a comment -

          Updated patch: I beefed up tests for both the NRT and non-NRT cases.

          This is ready.

          ASF subversion and git services added a comment -

          Commit 1521556 from Robert Muir in branch 'dev/trunk'
          [ https://svn.apache.org/r1521556 ]

          SOLR-4909: Use DirectoryReader.openIfChanged in non-NRT mode

          ASF subversion and git services added a comment -

          Commit 1521563 from Robert Muir in branch 'dev/branches/branch_4x'
          [ https://svn.apache.org/r1521563 ]

          SOLR-4909: Use DirectoryReader.openIfChanged in non-NRT mode

          Robert Muir added a comment -

          Thank you Michael!

          Adrien Grand added a comment -

          4.5 release -> bulk close


            People

            • Assignee: Unassigned
            • Reporter: Michael Garski
            • Votes: 1
            • Watchers: 4
