Cassandra / CASSANDRA-4937

CRAR improvements (object cache + CompressionMetadata chunk offset storage moved off-heap).

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Fix Version/s: 2.0 beta 1
    • Component/s: None
    • Labels:

      Description

      After a good amount of testing on one of our clusters, it was found that in order to improve read latency we need to minimize the allocation rate that compression involves; doing so reduces GC pressure (as well as heap usage) and substantially decreases latency on read-heavy workloads.

      I have also discovered that the RAR skip-cache behavior harms performance in situations where reads run in parallel with a compaction working on relatively big SSTable files (a few GB and more). The attached patch removes the possibility of skipping cache for compressed files (I can also add changes to RAR to remove the skip-cache functionality as a separate patch).

      Attachments

      1. 4937-v3.txt (32 kB, Jonathan Ellis)
      2. CASSANDRA-4937.patch (14 kB, Pavel Yaskevich)
      3. CASSANDRA-4937-trunk.patch (31 kB, Pavel Yaskevich)
      4. CASSANDRA-4937-v4.patch (34 kB, Pavel Yaskevich)

        Activity

        Pavel Yaskevich added a comment -

        As part of the RAR changes, instead of actively skipping cache while a file is still in use, we would only release it when the SSTable is no longer needed (in SSTableReader.releaseReference(), before data/index cleanup).
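
        For illustration, a minimal sketch of that release-time pattern, with hypothetical names (the real logic lives in SSTableReader): the page cache for a file is dropped only once the last reference is released, rather than while readers are still using it.

        {code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only; all names are hypothetical, not Cassandra's actual API.
class RefCountedTable
{
    private final AtomicInteger references = new AtomicInteger(1);
    private final String dataFile;

    RefCountedTable(String dataFile) { this.dataFile = dataFile; }

    boolean acquireReference()
    {
        while (true)
        {
            int n = references.get();
            if (n <= 0)
                return false; // already released
            if (references.compareAndSet(n, n + 1))
                return true;
        }
    }

    void releaseReference()
    {
        if (references.decrementAndGet() == 0)
        {
            // the SSTable is no longer needed: advise the kernel to drop its
            // pages now, instead of skipping cache while it was still in use
            dropPageCache(dataFile);
            // ... data/index cleanup would follow here
        }
    }

    private static void dropPageCache(String path)
    {
        // stand-in for a posix_fadvise(DONTNEED)-style call on the file
    }
}
        {code}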

        Jonathan Ellis added a comment -

        we need to minimize the allocation rate that compression involves

        How much memory do you see CM using, prior to this change?

        the RAR skip-cache behavior harms performance in situations where reads run in parallel with compaction

        But it's still the case that we absolutely want to keep compaction of cold data from thrashing the cache. What if we used the read metrics we collect to decide whether to skip cache instead of turning it off entirely?

        Feedback on the code:

        • I'm not sure how much the CRAR "objectCache" can help if we trash the queue on reference release. This would be more useful if we (1) made it per-SSTR instead of global and (2) allowed a CRAR to be used by more than one request. Either way, let's split this out into a separate patch – I think this should be 1.2-only.
        • CM.close is added but I don't see it called anywhere
        • If we're going to move CompressionMetadata off-heap, why not use a single Memory object instead of BLA? This should also be 1.2.

        ISTM this needs to be split into three patches – objectCache, off-heap CM, and cache skipping.

        Pavel Yaskevich added a comment -

        How much memory do you see CM using, prior to this change?

        I'm not sure how much the CRAR "objectCache" can help if we trash the queue on reference release. This would be more useful if we (1) made it per-SSTR instead of global and (2) allowed a CRAR to be used by more than one request. Either way, let's split this out into a separate patch – I think this should be 1.2-only.

        It depends on the data size per SSTable, but the overhead is about 20MB per 1GB of data, which would definitely be promoted to the old generation and kept there. That is not the biggest problem, though. The real problem with the compression allocation rate is that we open one CRAR per row read (CompressedSegmentedFile.getSegment), which allocates 128KB (two 64KB compression/decompression buffers) plus the small additional overhead of the checksum buffer and fields. That is what the "objectCache" is here to solve: it was determined that even when up to 12 SSTables are involved per read, we wouldn't have more than "concurrent_reads" CRARs per SSTable in the cache.
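
        A minimal sketch of that pooling idea, with hypothetical names (the actual patch wires this into CRAR and CompressedSegmentedFile): readers are pooled per file name, returned to the queue on close, and the whole queue is discarded when the sstable dies.

        {code:java}
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative sketch: pool expensive readers (each carrying ~128KB of
// compression buffers) per file instead of allocating one per row read.
class ReaderPool<R>
{
    private final ConcurrentHashMap<String, Queue<R>> cache = new ConcurrentHashMap<>();

    // reuse an idle pooled reader if there is one; otherwise the caller opens a new one
    R borrow(String fileName)
    {
        Queue<R> q = cache.get(fileName);
        return q == null ? null : q.poll();
    }

    // close() on the reader returns it here instead of freeing its buffers
    void release(String fileName, R reader)
    {
        cache.computeIfAbsent(fileName, k -> new ConcurrentLinkedQueue<>()).add(reader);
    }

    // called on the SSTable's last dereference: drop the whole queue so the
    // pooled readers (and their buffers) become garbage-collectable
    void invalidate(String fileName)
    {
        cache.remove(fileName);
    }
}
        {code}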

        the RAR skip-cache behavior harms performance in situations where reads run in parallel with compaction

        The problem with the code we have right now is that it doesn't actually skip the blocks that compaction read; it drops the whole file's page cache at a fixed interval (128MB), so when you have long-running compactions (throttled, or big data files, for example) normal reads hit cold data very frequently. The only thing working correctly right now in terms of skipping cache is SW. I have done some benchmarks with and without cache skipping, and they show that page replacement done by the kernel is much better than our skip suggestions via recent read latency histograms.
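
        For reference, the cache skipping under discussion boils down to posix_fadvise(2). A rough JNA sketch, purely illustrative (Cassandra's real binding lives in its CLibrary utility class; the constants below are the Linux values):

        {code:java}
import com.sun.jna.Native;

final class FAdvise
{
    static { Native.register("c"); } // bind the native method below to libc

    static final int POSIX_FADV_WILLNEED = 3; // preheat pages
    static final int POSIX_FADV_DONTNEED = 4; // drop pages

    // advice applies to [offset, offset + len) of the open file descriptor
    static native int posix_fadvise(int fd, long offset, long len, int advice);

    // e.g. drop the whole file from the page cache once the SSTable is dead
    static void dropCache(int fd)
    {
        posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED); // len == 0 means "to end of file"
    }
}
        {code}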

        CM.close is added but I don't see it called anywhere

        It should be called on CRAR.deallocate(); sorry, I must have missed that when I was merging. I will fix that and update the patch.

        If we're going to move CompressionMetadata off-heap, why not use a single Memory object instead of BLA? This should also be 1.2.

        Yes, that could be done as a single object; I just didn't want to remove the historical paging change, especially since the actual overhead of paging is very low.

        Pavel Yaskevich added a comment -

        Updated the patch with the missing CRAR metadata.close().

        Jonathan Ellis added a comment -

        Broke the CompressionMetadata chunk changes into CASSANDRA-4941.

        Still don't like the global objectCache code. A per-SSTR queue (no Map) would be better. Or what about an SSTR ThreadLocal?

        The problem with the code we have right now is that it doesn't actually skip the blocks that compaction read; it drops the whole file's page cache at a fixed interval (128MB), so when you have long-running compactions (throttled, or big data files, for example) normal reads hit cold data very frequently.

        Isn't the right fix then to only drop the recently-read blocks during sequential i/o?

        Pavel Yaskevich added a comment -

        Still don't like the global objectCache code. A per-SSTR queue (no Map) would be better. Or what about an SSTR ThreadLocal?

        The reason it's done via queues associated with file names is that we want to get rid of CRAR instances as soon as we're done with individual SSTables, and I don't want to involve SSTR there beyond just cleaning the queues upon termination.

        Isn't the right fix then to only drop the recently-read blocks during sequential i/o?

        We can't, because it would require the caller (e.g. compaction) to know how others are using the files; it could remove cache out from under somebody else anyway, and we would use too many system calls if we did that.

        Jonathan Ellis added a comment -

        The reason it's done via queues associated with file names is that we want to get rid of CRAR instances as soon as we're done with individual SSTables

        Still feels like a hack to me... Will think about alternatives.

        We can't, because it would require the caller (e.g. compaction) to know how others are using the files

        But isn't that the point, that we do know that compaction only* does sequential i/o?

        * not quite, but close enough for this purpose
        Pavel Yaskevich added a comment -

        Still feels like a hack to me... Will think about alternatives.

        You are welcome to think about how to do that without involving more than one file; I have exploited the lhf (low-hanging fruit) possibility.

        But isn't that the point, that we do know that compaction only* does sequential i/o?

        Even so, we would need to expose a RAR method for skipping cache to the SSTable iterators, so they can hint RAR to ask the OS to drop the cache after they are done with a row, for example. That would generate too many system calls if you have small rows (or we would have to somehow batch the hints, which would require coordination between sstable iterators), and we would also have to teach every other component that wants to skip cache how to do so (e.g. streaming). That piecemeal approach creates nothing but complexity, because reads would still suffer even if we changed the granularity of the hints.

        Jonathan Ellis added a comment -

        Back up a second ... how are you opening multiple CRAR for the same sstable in a single request? Secondary index reads?

        Pavel Yaskevich added a comment -

        You are welcome to think about how to do that without involving more than one file; I have exploited the lhf (low-hanging fruit) possibility

        That probably sounded confusing. I didn't mean that we are opening the same file multiple times per request; I was trying to say that I wanted to keep the caching changes as local to CRAR as possible, without involving other "files" (or classes) in handling that cache, which allowed us to preserve the open/close calls without pulling anything CRAR-specific into file handling.

        Jonathan Ellis added a comment -

        But doesn't your caching only last for a single request? What is the point, then, if we're not opening the same file multiple times?

        Pavel Yaskevich added a comment -

        If you are talking about the CRAR object cache, then it lasts until the SSTableReader has no more references and the SSTable itself is cleaned up and prepared for deletion. Upon calling the close method on a CRAR, every caller returns the instance to the queue to be reused by other threads (readers, compaction, streaming, etc.).

        Jonathan Ellis added a comment -

        We can't [only drop the recently-read blocks during sequential i/o], because it would require the caller (e.g. compaction) to know how others are using the files; it could remove cache out from under somebody else anyway, and we would use too many system calls if we did that.

        You mean because RandomAccessReader doesn't know if we've been seeking back and forth? Technically that is true, but it looks like we only actually create a RandomAccessReader with skipCache=true when we're doing sequential reads. So I'd be okay with just assuming it's sequential in reBuffer if skipCache=true.

        If you wanted to make it more robust you could subclass RAR to MostlySequentialRAR (still need seek for when compaction needs two passes on wide rows) and only do skipCache in that subclass.

        This feels like a good compromise to me, and it's a lot better than what we have now for your use case: if an sstable is in the page cache, we'd only dontneed it gradually, once, instead of repeatedly with each trySkipCache. And it doesn't take us back to the Dark Ages of totally stomping the page cache with cold data during compaction.
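
        A minimal sketch of that idea under the stated assumption (access is sequential), with hypothetical names: track how far we have read and gradually drop only the bytes left behind, at a fixed interval.

        {code:java}
import java.io.IOException;
import java.io.RandomAccessFile;

// Illustrative, simplified stand-in for a "MostlySequentialRAR" subclass.
class MostlySequentialReader implements AutoCloseable
{
    private static final long DROP_INTERVAL = 128L << 20; // drop behind us every 128MB

    private final RandomAccessFile file;
    private long lastDropped = 0;

    MostlySequentialReader(String path) throws IOException
    {
        file = new RandomAccessFile(path, "r");
    }

    int read(byte[] buffer) throws IOException
    {
        int n = file.read(buffer);
        long pos = file.getFilePointer();
        if (pos - lastDropped >= DROP_INTERVAL)
        {
            // drop only [lastDropped, pos): blocks we have consumed and left behind
            // (stand-in for posix_fadvise(fd, lastDropped, pos - lastDropped, DONTNEED))
            lastDropped = pos;
        }
        return n;
    }

    public void close() throws IOException { file.close(); }
}
        {code}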

        Pavel Yaskevich added a comment -

        This feels like a good compromise to me, and it's a lot better than what we have now for your use case: if an sstable is in the page cache, we'd only dontneed it gradually, once, instead of repeatedly with each trySkipCache. And it doesn't take us back to the Dark Ages of totally stomping the page cache with cold data during compaction.

        I agree that it would be better than what we have, but we would still be degrading read performance if we did that; at least the impact wouldn't be as significant as it is right now. Practice shows that the kernel is pretty efficient at reclaiming once-used pages when in need, and we still have the writer that skips cache. The only 100% correct place to skip cache for SSTables is after they are compacted and before they are put on the delete queue, which would give us a good kernel cache margin for the incoming SSTable.

        I have actually experimented with preheating rows if we have compaction_preheat_key_cache enabled: for each of the keys in the preheat cache, I WILLNEEDed the first block of the row in the data file. That showed some good results and doesn't overcommit the cache, because it uses the memory freed by the previous (compacted) sstables.
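
        A minimal sketch of that preheat experiment, with hypothetical names (keyPositions standing in for the preheated key cache), reusing the FAdvise sketch from earlier in the thread:

        {code:java}
import java.util.Map;

class Preheater
{
    private static final long PAGE_SIZE = 4096;

    // for each cached key, WILLNEED just the first page of its row in the new
    // data file; adaptive readahead takes over if the read turns sequential
    static void preheat(int fd, Map<byte[], Long> keyPositions)
    {
        for (long rowOffset : keyPositions.values())
        {
            long pageStart = (rowOffset / PAGE_SIZE) * PAGE_SIZE;
            FAdvise.posix_fadvise(fd, pageStart, PAGE_SIZE, FAdvise.POSIX_FADV_WILLNEED);
        }
    }
}
        {code}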

        Jonathan Ellis added a comment -

        Practice shows that the kernel is pretty efficient at reclaiming once-used pages when in need, and we still have the writer that skips cache

        Shouldn't we just take out cache-skipping on [C]RAR entirely then?

        I have actually experimented with preheating rows if we have compaction_preheat_key_cache enabled

        Why does this work better than the old mincore approach we tried?

        Pavel Yaskevich added a comment -

        Shouldn't we just take out cache-skipping on [C]RAR entirely then?

        Sorry, we probably had some communication issues; that is exactly what I propose and what the attached patch does. I can make a separate patch that just removes page skipping from the read path and does it only when the SSTable's life ends.

        Why does this work better than the old mincore approach we tried?

        Basically because it doesn't try to be smart about anything: it just preheats the first page of each row listed in that cache, which gives room for adaptive readahead.

        Jonathan Ellis added a comment -

        A separate patch would be good, since the rest has already been split out and committed. Thanks!

        Pavel Yaskevich added a comment -

        The patch (based on trunk) removes skip-cache from the readers, adds cache-drop advice on the SSTableReader's last dereference, and preheats if the key cache was enabled.

        Jonathan Ellis added a comment -

        Thanks, looking good.

        v3 attached; it cleans up file i/o to rethrow as FSReadError, and only preheats row data if 90% of the rows in the sstable are under the page size that we're fadvising.
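
        A minimal sketch of that guard, with hypothetical inputs (the real patch would derive this from the sstable's row-size statistics rather than a raw array):

        {code:java}
class PreheatHeuristic
{
    // preheat only when at least 90% of rows fit within the page being fadvised
    static boolean shouldPreheat(long[] rowSizes, long pageSize)
    {
        if (rowSizes.length == 0)
            return false;
        int small = 0;
        for (long size : rowSizes)
            if (size <= pageSize)
                small++;
        return small >= 0.9 * rowSizes.length;
    }
}
        {code}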

        Pavel Yaskevich added a comment -

        preheats row data if 90% of the rows in the sstable are under the page size that we're fadvising.

        I see the reason to do that if we have big rows (with the index promoted to the Index component, so we don't touch the first page of a row) and we don't know where we would be hitting them, but here is why I don't think that 90% is a good idea:

        • we don't know the distribution of those big rows, so if we have a small row sharing a page with a big row it's still good to preheat, as we read on a page basis.
        • if we preheat a first page that we didn't actually need, it would be evicted by the kernel automatically, with adaptive read-ahead for example.
        • if rows grow over time, it would cause a sudden change (flip-flop) in behavior/latencies.
        • even if 90% are bigger than the page size, it's quite possible that the keys we actually have in the cache are in the other 10%.
        Jonathan Ellis added a comment -

        90% is much more likely to be useful than blindly WILLNEEDing everything, though, which would be a shot in the foot for wide-row HDD deployments (of which there are many).

        In my mind the sane options are:

        • Don't bother preheating
        • Attempt to preheat only rows where it will do some good
        Pavel Yaskevich added a comment -

        90% by itself doesn't give the heuristic enough information, so I see two options here: 1) make it a config option, disabled by default; 2) don't bother doing it at all.

        Jonathan Ellis added a comment -

        90% by itself doesn't give the heuristic enough information

        I don't think I buy that. We make similar assumptions all over the place (bloom filters, tombstone compaction, redundant requests...)

        Of course it's not going to be right all the time; it just needs to be right 10x as often as it's wrong, for us to win.

        Pavel Yaskevich added a comment -

        If that is an optimization for big rows, then it renders the feature useless when we have a combination of big and small rows mixed together, because it doesn't take into account what is actually in the key cache. It also doesn't take into account how the big rows are distributed inside the SSTable, so if they share a page with a few small rows (which are in the page cache), we're effectively missing out on the benefits that the preheat feature gives us.

        I'm -1 on committing it with that heuristic, because it's bad from an operations perspective: once the 90% switch flips, it could result in a sudden degradation of latencies that would be impossible to understand without actually knowing the code, even if the subset of data actually read is still smaller than the page size.

        Jonathan Ellis added a comment -

        This is useless for wide rows, so what I am trying to do is keep us from wasting seeks in that case.

        Pavel Yaskevich added a comment -

        I understand that, and I'm suggesting to make it an option instead, so people are aware of the trade-off. Also, the main idea of preheating the first page was to minimize the latency of the random -> sequential I/O transition inside the row, even if it's bigger than one page (also pretty useful in 1.1, where there are no promoted indexes), for rows that we know we were hitting before.

        Jonathan Ellis added a comment -

        I don't see any point in making it an option. Users are typically going to have a less accurate idea of what is going on than we do with sstable stats.

        Pavel Yaskevich added a comment -

        I just don't want to create a situation where latencies suddenly spike and stay in that state with no indication of what is going on (the explanation being that rows exceeded the page threshold and preheat was turned off).

        Jonathan Ellis added a comment -

        What do you think, Yuki Morishita?

        Yuki Morishita added a comment -

        Honestly, I cannot judge which is better; it seems to depend on the case. So I like the "make it a config option, disabled by default" idea that Pavel suggested before.

        Jonathan Ellis added a comment -

        All right. I'm still not a fan, but I am outvoted.

        Pavel Yaskevich added a comment -

        Ok, attaching v4 with a config option (disabled by default).
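
        Assuming the option lands in cassandra.yaml under the name mentioned in the review below (preheat_kernel_page_cache), it would presumably look like:

        {code}
# hint the kernel to preheat the first page of each hot row after compaction;
# disabled by default, trading extra seeks for warmer reads on small rows
preheat_kernel_page_cache: false
        {code}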

        Yuki Morishita added a comment -

        The patch does not cache keys at compaction when preheat_kernel_page_cache is false, and that causes KeyCacheTest to fail.
        Otherwise looks good.

        Pavel Yaskevich added a comment - edited

        There is only one place where shouldPreheatPageCache() is used (CompactionTask:233); it shouldn't in any way prevent the key cache from being populated after compaction. That could only be done by the getPreheatKeyCache() flag in the config.

        Pavel Yaskevich added a comment -

        Ah, I see how it happens, will fix.
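
        For illustration only, one plausible shape of the fix, with hypothetical names: key cache population must happen unconditionally, with only the page-cache advice gated behind the flag.

        {code:java}
class CompactionFinisher
{
    private final boolean preheatPageCache; // preheat_kernel_page_cache

    CompactionFinisher(boolean preheatPageCache) { this.preheatPageCache = preheatPageCache; }

    void onRowWritten(byte[] key, long rowPosition)
    {
        cacheKey(key, rowPosition);        // always populate the key cache
        if (preheatPageCache)
            preheatFirstPage(rowPosition); // only the kernel advice is optional
    }

    private void cacheKey(byte[] key, long position) { /* populate key cache */ }
    private void preheatFirstPage(long position) { /* WILLNEED the row's first page */ }
}
        {code}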

        Pavel Yaskevich added a comment -

        Re-attached v4 with key cache population fix and KeyCacheTest changed to use assertEquals instead of 'assert'.

        Yuki Morishita added a comment -

        +1

        Pavel Yaskevich added a comment -

        Committed.

        Jonathan Ellis added a comment -

        Reverted from 1.2, kept in trunk as planned above.


          People

          • Assignee: Pavel Yaskevich
          • Reporter: Pavel Yaskevich
          • Reviewer: Yuki Morishita
          • Votes: 0
          • Watchers: 3
