Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 7.0, 6.5
    • Component/s: None
    • Labels:
      None
    • Lucene Fields:
      New

      Description

Today, flushed segments built by an index writer with an index sort specified are not sorted. The merge is responsible for sorting these segments, potentially together with others that are already sorted (resulting from a previous merge).
I'd like to investigate the cost of sorting the segment directly during the flush. This could make merges faster, since there are some cheap optimizations that can only be done if all segments to be merged are sorted.
For instance, the merge of the points could use the bulk merge instead of rebuilding the points from scratch.
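
For context, index sorting is configured through IndexWriterConfig; a minimal sketch (the "timestamp" field and the sort itself are illustrative):

    import java.nio.file.Paths;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.search.Sort;
    import org.apache.lucene.search.SortField;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    // Open a writer with an index sort; every flushed and merged segment
    // must keep its documents ordered by this sort.
    Directory dir = FSDirectory.open(Paths.get("/tmp/sorted-index"));
    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
    iwc.setIndexSort(new Sort(new SortField("timestamp", SortField.Type.LONG)));
    IndexWriter writer = new IndexWriter(dir, iwc);
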
I made a small prototype which sorts the segment on flush here:
      https://github.com/apache/lucene-solr/compare/master...jimczi:flush_sort

The idea is simple: for points, norms, doc values and terms I use the SortingLeafReader implementation to translate the values that we have in RAM into a sorted enumeration for the writers.
For stored fields I use a two-pass scheme where the documents are first written to disk unsorted and then copied to another file in the correct order. I use the same stored fields format for the two steps and just remove the file produced by the first pass at the end of the process.
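
A rough sketch of the two-pass idea, not the prototype's actual code (tmpSegmentInfo, copyVisitor and the surrounding plumbing are hypothetical):

    // Pass 1: stored fields are flushed to a temporary segment file in
    // arrival (docID) order, using the regular codec format.
    StoredFieldsWriter tmpWriter = codec.storedFieldsFormat().fieldsWriter(dir, tmpSegmentInfo, context);
    // ... documents are written here as they arrive during indexing ...
    tmpWriter.finish(fieldInfos, maxDoc);
    tmpWriter.close();

    // Pass 2: read the unsorted file back, rewrite it in index-sort order,
    // then delete the first-pass file.
    StoredFieldsReader tmpReader = codec.storedFieldsFormat().fieldsReader(dir, tmpSegmentInfo, fieldInfos, context);
    StoredFieldsWriter sortedWriter = codec.storedFieldsFormat().fieldsWriter(dir, segmentInfo, context);
    for (int newDocID = 0; newDocID < maxDoc; newDocID++) {
      sortedWriter.startDocument();
      // the visitor re-adds each stored field of the old document
      tmpReader.visitDocument(sortMap.newToOld(newDocID), copyVisitor);
      sortedWriter.finishDocument();
    }
    sortedWriter.finish(fieldInfos, maxDoc);
    IOUtils.close(tmpReader, sortedWriter);
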
This prototype has no implementation for index sorting that uses term vectors yet. I'll add this later if the tests are good enough.
Speaking of testing, I tried this branch with Michael McCandless's benchmark scripts and compared master with index sorting against my branch with index sorting on flush. I tried with sparsetaxis and wikipedia, and the first results are weird. When I use the SerialScheduler and only one thread to write the docs, index sorting on flush is slower. But when I use two threads, sorting on flush is much faster, even with the SerialScheduler. I'll continue to run the tests in order to be able to share something more meaningful.

The tests are passing except one about concurrent DV updates. I don't know this part at all so I have not fixed the test yet. I don't even know if we can make it work with index sorting.

Michael McCandless I would love to have your feedback on the prototype. Could you please take a look? I am sure there are plenty of bugs ... but I think it's a good start to evaluate the feasibility of this feature.

        Issue Links

          Activity

          jim.ferenczi Jim Ferenczi added a comment -

          I ran the test from a clean state and I can see a nice improvement with the sparsetaxis use case.

          I use https://github.com/mikemccand/luceneutil/blob/master/src/python/sparsetaxis/runBenchmark.py and compare two checkouts of Lucene, one with my branch and the other with master.
          For the master branch I have:

          838.0 sec:  20.0 M docs;  23.9 K docs/sec
          

          ... vs the branch with the flush sort:

           612.2 sec:  20.0 M docs;  32.7 K docs/sec
          

I reproduce the same diff on each run.

          mikemccand Michael McCandless added a comment -

Thanks Jim Ferenczi, I also see comparable speedups on the taxis benchmark. I'll have a look at the change! It looks like a doozie.

          mikemccand Michael McCandless added a comment -

This is a nice approach! Basically, the codec remains unaware that
index sorting is happening, which is the right way to do it. Instead,
the indexing chain takes care of it. And to build the doc comparators
you take advantage of the in-heap buffered doc values.

          I like that to sort stored fields, you are still just using the codec
          APIs, writing to temp files, then using the codec to read the stored
          fields back for sorting.

          I also like how you were able to re-use the SortingXXX from
          SortingLeafReader. Later on we can maybe optimize some of these;
          e.g. SortingFields and CachedXXXDVs should be able to take
          advantage of the fact that the things they are sorting are all already
          in heap (the indexing buffer), the way you did with
          MutableSortingPointValues (cool).

          Can we rename freezed to frozen in BinaryDocValuesWriter?
          But: why would freezed ever be true when we call flush?
          Shouldn't it only be called once, even in the sorting case?

          I think the 6.x back port here is going to be especially tricky

          Can we block creating a SortingLeafReader now (make its
constructor private)? We now only ever use its inner classes, I think?
          And it is a dangerous class in the first place... if we can do that,
          maybe we rename it SortingCodecUtils or something, just for its
          inner classes.

          Do any of the exceptions tests for IndexWriter get angry? Seems like
          if we hit an IOException e.g. during the renaming that
          SortingStoredFieldsConsumer.flush does we may leave undeleted
          files? Hmm or perhaps IW takes care of that by wrapping the directory
          itself...

          Can't you just pass sortMap::newToOld directly (method reference)
          instead of making the lambda here?:

                writer.sort(state.segmentInfo.maxDoc(), mergeReader, state.fieldInfos,
                    (docID) -> (sortMap.newToOld(docID)));
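
That is, the equivalent call would be:

    writer.sort(state.segmentInfo.maxDoc(), mergeReader, state.fieldInfos,
        sortMap::newToOld);
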
          
          jim.ferenczi Jim Ferenczi added a comment -

          Thanks Mike,

          Can we rename freezed to frozen in BinaryDocValuesWriter?
          But: why would freezed ever be true when we call flush?
          Shouldn't it only be called once, even in the sorting case?

          This is a leftover that is not needed. The naming was wrong and it's useless so I removed it.

          I also like how you were able to re-use the SortingXXX from
          SortingLeafReader. Later on we can maybe optimize some of these;
          e.g. SortingFields and CachedXXXDVs should be able to take
          advantage of the fact that the things they are sorting are all already
          in heap (the indexing buffer), the way you did with
          MutableSortingPointValues (cool).

Totally agree, we can revisit later and see if we can optimize memory. I think it's already an optimization vs master in terms of memory usage, since we only "sort" the segment to be flushed instead of all "unsorted" segments during the merge.

          Can we block creating a SortingLeafReader now (make its
          constructor private)? We only now ever use its inner classes I think?
          And it is a dangerous class in the first place... if we can do that,
          maybe we rename it SortingCodecUtils or something, just for its
          inner classes.

We still need to wrap unsorted segments during the merge for BWC, so SortingLeafReader should remain. I have no idea when we can remove it, since indices built on older versions should still be compatible with this new one?

          Do any of the exceptions tests for IndexWriter get angry? Seems like
          if we hit an IOException e.g. during the renaming that
          SortingStoredFieldsConsumer.flush does we may leave undeleted
          files? Hmm or perhaps IW takes care of that by wrapping the directory
          itself...

          Honestly I have no idea. I will dig.

          Can't you just pass sortMap::newToOld directly (method reference)
          instead of making the lambda here?:

          Indeed, thanks.

          I think the 6.x back port here is going to be especially tricky

I bet, but as it is, the main part is done by reusing the SortingLeafReader inner classes that already exist in 6.x.

          I've also removed a nocommit in the AssertingLiveDocsFormat that now checks live docs even when they are sorted.

          jim.ferenczi Jim Ferenczi added a comment -

          I pushed another iteration to https://github.com/apache/lucene-solr/compare/master...jimczi:flush_sort

          I cleaned up the nocommit and added the implementation for sorting term vectors.

          Do any of the exceptions tests for IndexWriter get angry? Seems like
          if we hit an IOException e.g. during the renaming that
          SortingStoredFieldsConsumer.flush does we may leave undeleted
          files? Hmm or perhaps IW takes care of that by wrapping the directory
          itself...

          I added an abort method on the StoredFieldsWriter which deletes the remaining temporary files and did the same for the SortingTermVectorsConsumer.
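
For illustration, a minimal sketch of such an abort, assuming the consumer tracks its temporary output (tmpWriter, tmpDirectory and tmpFileName are illustrative names, not the patch's actual fields):

    // Best-effort cleanup: close the in-progress writer and delete the
    // temporary file left behind by the first (unsorted) pass.
    @Override
    public void abort() {
      IOUtils.closeWhileHandlingException(tmpWriter);
      IOUtils.deleteFilesIgnoringExceptions(tmpDirectory, tmpFileName);
    }
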

Michael McCandless can you take a look?

          mikemccand Michael McCandless added a comment -

          Thanks Jim Ferenczi, I'll have a look.

          mikemccand Michael McCandless added a comment -

          We still need to wrap unsorted segments during the merge for BWC so SortingLeafReader should remain.

          OK I agree, oh well

          The latest squashed commit looks great; it passes tests and precommit for me. I'll test with sparse taxis benchmark too, but this looks ready!

I noticed you also optimized merging of stored fields in the sorted case, when field infos are congruent (common): the docIDs are permuted to the sort order while the already serialized bytes are simply bulk-copied.

          I'll wait a day or so before committing to give others a chance to review; it's a large change.

I think we should push first to master, and let that bake some, and in the meantime work out the challenging 6.x back port?

          mikemccand Michael McCandless added a comment -

Wow, this patch brings the "sparse sorted" indexing time down from 448.5 seconds on master as of a few days ago, to 299.8 seconds, a 33% speedup! Nice.

          jim.ferenczi Jim Ferenczi added a comment -

          We still need to wrap unsorted segments during the merge for BWC so SortingLeafReader should remain.

We can still rewrite it to a SortingCodecReader and remove the SlowCodecReaderWrapper, but that's another issue.

          I think we should push first to master, and let that bake some, and in the mean time work out the challenging 6.x back port?

          Agreed. I'll create a branch for the back port in my repo.

          I'll wait a day or so before committing to give others a chance to review; it's a large change.

That's awesome Michael McCandless! Thanks for the review and testing.

          jpountz Adrien Grand added a comment -

          Some questions/comments:

          • CompressingStoredFieldsWriter.sort should always have a CompressingStoredFieldsReader as an input, since the codec cannot change in the middle of the flush, so I think we should be able to skip the instanceof check?
• It would personally help me to have comments, e.g. in MergeState.maybeSortReaders, that the indexSort==null case may only happen for BWC reasons. Maybe we should also assert that if index sorting is configured, then the non-sorted segments can only have 6.2 or 6.3 as a version.

          Thanks for working on this change!

          jim.ferenczi Jim Ferenczi added a comment -

          CompressingStoredFieldsWriter.sort should always have a CompressingStoredFieldsReader as an input, since the codec cannot change in the middle of the flush, so I think we should be able to skip the instanceof check?

That's true for the only call we make to this new API, but since it's public it could be called with a different fields reader in another use case? I am not happy that I had to add this new public API to the StoredFieldsReader, but it's the only way to make this optimized for the compressing case.

It would personally help me to have comments, e.g. in MergeState.maybeSortReaders, that the indexSort==null case may only happen for BWC reasons. Maybe we should also assert that if index sorting is configured, then the non-sorted segments can only have 6.2 or 6.3 as a version.

Agreed, I'll add an assert for the non-sorted case. I'll also add a comment to make it clear that indexSort==null is handled for BWC reasons in maybeSortReaders.

          Thanks for having a look Adrien Grand

          jpountz Adrien Grand added a comment -

          I am not happy that I had to add this new public API in the StoredFieldsReader but it's the only way to make this optimized for the compressing case.

I was thinking about it too, and I suspect the optimization does not bring much in the case where blocks contain multiple documents (i.e. small docs), since I would expect the bottleneck to be that sorting through the stored fields format keeps decompressing 16KB blocks for every single document. Maybe we should not try to reuse the codec's stored fields format for the temporary stored fields, and rather do the buffering in memory or on disk with a custom format that has faster random access? I would expect it to be faster in many cases, and it would allow us to get rid of this new API.

          jim.ferenczi Jim Ferenczi added a comment -

This new API is maybe a premature optimization that should not be part of this change. What about removing the API and rolling back to a non-optimized copy that "visits" each doc and copies it the way the StoredFieldsReader does? This way the function would be private to the StoredFieldsConsumer. We can still add the optimization you're describing later, but it could be confusing if the writes of the index writer are not compressed the same way as the other stored fields writes?

          jpountz Adrien Grand added a comment -

          +1

          jim.ferenczi Jim Ferenczi added a comment -

I pushed another commit that removes the specialized API for sorting a StoredFieldsWriter. This is now done directly in the StoredFieldsConsumer with a custom CopyVisitor (copied from MergeVisitor).
I've also added some asserts that check that unsorted segments were built with a version prior to Lucene 7.0. We'll need to change the assert when this gets backported to 6.x. I could not add the assert in maybeSortReaders because IndexWriter.addIndexes uses the merge to add indices that could be unsorted. I don't know if this should be allowed or not, but we can revisit this later. Other than that, I think it's ready!
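
As a rough illustration of the visitor approach (heavily simplified: sortedWriter is a hypothetical name, only two field types are shown, and the real CopyVisitor is adapted from MergeVisitor):

    // Each stored field visited on the unsorted reader is re-added to the
    // document currently open on the sorted writer.
    StoredFieldVisitor copyVisitor = new StoredFieldVisitor() {
      @Override
      public Status needsField(FieldInfo fieldInfo) {
        return Status.YES; // copy every field
      }
      @Override
      public void binaryField(FieldInfo fieldInfo, byte[] value) throws IOException {
        sortedWriter.writeField(fieldInfo, new StoredField(fieldInfo.name, value));
      }
      @Override
      public void stringField(FieldInfo fieldInfo, byte[] value) throws IOException {
        sortedWriter.writeField(fieldInfo,
            new StoredField(fieldInfo.name, new String(value, StandardCharsets.UTF_8)));
      }
      // intField, longField, floatField and doubleField would forward the same way
    };
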

          jpountz Adrien Grand added a comment -

          Thanks, the diff looks good to me!

          IndexWriter.addIndexes uses the merge to add indices that could be unsorted

          I think we should look into forbidding that (in a different issue).

          mikemccand Michael McCandless added a comment -

          Patch looks great to me!

I think (later, separate issue) we could use a more naive stored fields (and term vectors) format for the temp files written at flush ... a format that does no compression, just writes bytes to disk, and maybe has a simple in-memory array pointing to the offset in the file for each document ... this format would be package private to oal.index. Later! This patch is great; progress, not perfection.
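
A sketch of that idea under stated assumptions (serializedDoc, sortMap and the file handling are hypothetical; no such format exists in this patch):

    // Write raw, uncompressed document bytes to a temp file, keeping an
    // in-memory offsets array for O(1) random access per docID.
    long[] offsets = new long[maxDoc + 1];
    IndexOutput out = dir.createTempOutput("sortdocs", "tmp", IOContext.DEFAULT);
    for (int docID = 0; docID < maxDoc; docID++) {
      byte[] docBytes = serializedDoc(docID); // hypothetical serializer
      offsets[docID] = out.getFilePointer();
      out.writeBytes(docBytes, docBytes.length);
    }
    offsets[maxDoc] = out.getFilePointer();
    out.close();

    // Copy documents back out in index-sort order, seeking via the offsets.
    IndexInput in = dir.openInput(out.getName(), IOContext.READONCE);
    for (int newDocID = 0; newDocID < maxDoc; newDocID++) {
      int oldDocID = sortMap.newToOld(newDocID);
      in.seek(offsets[oldDocID]);
      byte[] doc = new byte[(int) (offsets[oldDocID + 1] - offsets[oldDocID])];
      in.readBytes(doc, 0, doc.length);
      // ... re-add 'doc' through the real stored fields writer ...
    }
    in.close();
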

          I think we should look into forbidding that (in a different issue).

          +1

          I'll merge this soon to master so we can get it baking ...

          jpountz Adrien Grand added a comment -

I think (later, separate issue) we could use a more naive stored fields (and term vectors) format for the temp files written at flush ... a format that does no compression, just writes bytes to disk, and maybe has a simple in-memory array pointing to the offset in the file for each document ...

          +1

          jim.ferenczi Jim Ferenczi added a comment -
Thanks Adrien Grand and Michael McCandless!
          jira-bot ASF subversion and git services added a comment -

          Commit 4ccb9fbd2bbc3afd075aa4bc2b6118f845ea4726 in lucene-solr's branch refs/heads/master from Mike McCandless
          [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4ccb9fb ]

          LUCENE-7579: sort segments at flush too

          mikemccand Michael McCandless added a comment -

Thank you Jim Ferenczi! I just pushed your last (squashed) commits to master ... let's let it bake for a while. Maybe you can work on the 6.x back port in the meantime.

          jim.ferenczi Jim Ferenczi added a comment -

          Maybe you can work on the 6.x back port in the meantime

I am on it!

          mikemccand Michael McCandless added a comment -

          Wow this change gave a big jump in indexing throughput when index sorting is used: https://home.apache.org/~mikemccand/lucenebench/sparseResults.html#index_throughput

          You can see the flush time went up but the merge time went way down.

          jim.ferenczi Jim Ferenczi added a comment -

          I have a candidate branch for the backport in 6.x:
          https://github.com/apache/lucene-solr/compare/branch_6x...jimczi:flush_sort_6x?expand=1

I had to adapt the code to use the random-access DocValues, so it's more of a rewrite than a back port.

Adrien Grand Michael McCandless could you please take a look?

          jpountz Adrien Grand added a comment -

Thanks Jim, I just had a look. It looks good overall. This backport makes me realize how much better master is by taking doc values APIs in its consumers rather than iterables of numbers or BytesRefs!

          • In NumericDocValuesWriter and SortedNumericDocValuesWriter, I think it'd be cleaner to set finalValues in finish than in flush
• Do we really need the count method on LongSelector? On a related note, it seems to me that we could save some copy-pasting by extracting the logic that gets a long value from a given doc id (see the sketch after this list)? Right now, for all sort fields we duplicate twice the logic that first checks docsWithField to return the missing value.
          • OrdSelector looks unused
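
For illustration only, the duplicated missing-value logic could collapse into one docID-to-value function per field; a hypothetical sketch (docsWithField, values and missingValue stand in for the writer's internals, and the patch itself settled on a LongToIntFunction rather than this exact type):

    // Resolve a document's sort value in one place; both comparator
    // directions reuse this instead of re-checking docsWithField.
    final java.util.function.IntToLongFunction docValue = docID ->
        docsWithField.get(docID) ? values.get(docID) : missingValue;
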

I know we let it through on master, but now that I look at them again, I don't like the catch Throwable blocks we have around abort(). Can we get rid of them?

          jim.ferenczi Jim Ferenczi added a comment -

Thanks @jpountz!
          I pushed another iteration that hopefully addresses your comments.
          finalValues are now set in finish rather than in flush or getDocComparator.
          The LongSelector has been replaced by a LongToIntFunction and the duplicated code is removed.
          I've also removed the catch Throwable when we abort the stored fields consumer.

          jim.ferenczi Jim Ferenczi added a comment -

          Michael McCandless I think the branch for the backport in the 6x branch is ready:
          https://github.com/apache/lucene-solr/compare/branch_6x...jimczi:flush_sort_6x?expand=1
Can you take a look?

          mikemccand Michael McCandless added a comment -

Oh yes I will have a look! Sorry for the delay! And since 6.4 is now branched, we should push this to 6.x for the future 6.5 release.

          jim.ferenczi Jim Ferenczi added a comment -

Thanks Mike! Yes, branch_6x: 6.5 is the target.

          mikemccand Michael McCandless added a comment -

          this backport makes me realize how much better master is by taking doc values APIs in its consumers rather than iterables of numbers or BytesRefs!

          ++

I know we let it through on master, but now that I look at them again, I don't like the catch Throwable blocks we have around abort(). Can we get rid of them?

          Let's be sure to fix this (and other feedback here) in master too?

          Can you upgrade this assert in IndexWriter.java to instead throw a CorruptIndexException?

          +        } else if (segmentIndexSort == null) {
          +          // Flushed segments are not sorted if they were built with a version prior to 6.4.0
          +          assert info.info.getVersion().onOrAfter(Version.LUCENE_6_4_0) == false;
          

Maybe that's overly paranoid, but I want to make sure we can safely assume this going forward: no segment should ever be unsorted if you are using an index sort.
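
A sketch of that upgrade following the diff above (the message string is illustrative):

    } else if (segmentIndexSort == null) {
      // Segments may only be unsorted if built before sorting on flush
      // landed; a newer unsorted segment means the index is broken.
      if (info.info.getVersion().onOrAfter(Version.LUCENE_6_4_0)) {
        throw new CorruptIndexException("segment created with version "
            + info.info.getVersion() + " has no index sort", info.info.toString());
      }
    }
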

          In SortingLeafReader.java a small typo (fo BWC -> for BWC):

          * {@link Sort}. This is package private and is only used by Lucene fo BWC when it needs to merge
          

          Otherwise this looks great! It's a big change ... let's push it for jenkins to chew on! Thank you Jim Ferenczi.

          jim.ferenczi Jim Ferenczi added a comment -

I've modified the version check for sorted segments:

          onOrAfter(Version.LUCENE_6_5_0)
          jira-bot ASF subversion and git services added a comment -

          Commit 7d96f9f7981dbadda837b5b2cacc3855d19f71aa in lucene-solr's branch refs/heads/branch_6x from Mike McCandless
          [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7d96f9f ]

          LUCENE-7579: sort segments at flush too

          Segments are now also sorted during flush, and merging
          on a sorted index is substantially faster by using some of the same
          bulk merge optimizations that non-sorted merging uses

          jira-bot ASF subversion and git services added a comment -

          Commit 8f5b5a393d94500e6c7a8beff54e010c45c3b0e3 in lucene-solr's branch refs/heads/branch_6x from Jim Ferenczi
          [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8f5b5a3 ]

          LUCENE-7579: sort segments at flush too

          Segments are now also sorted during flush, and merging
          on a sorted index is substantially faster by using some of the same
          bulk merge optimizations that non-sorted merging uses

          (cherry picked from commit 4ccb9fb)

          jira-bot ASF subversion and git services added a comment -

          Commit d73e3fb05c917739bcf0899171a024897d1b0269 in lucene-solr's branch refs/heads/branch_6x from Jim Ferenczi
          [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d73e3fb ]

          LUCENE-7579: fix 6.x backport compilation errors


            People

            • Assignee:
              Unassigned
• Reporter:
              jim.ferenczi Jim Ferenczi
• Votes:
  2
• Watchers:
  7

              Dates

              • Created:
                Updated:
                Resolved:

                Development