Description
There are a few lines in `ThreadCache` that I think should be optimized. `sizeBytes` is called at least once, and potentially many times, in every `put`, and it is linear in the number of caches (= number of state stores, so typically proportional to the number of tasks). That means that with every additional task, every put gets a little slower. Compare the throughput of TIME_ROCKS on trunk (green graph):
By comparison, the throughput of TIME_ROCKS is 20% higher when a constant-time `sizeBytes` implementation is used:
The same seems to apply to the MEM backend (initial throughput >8000 instead of 6000); however, I cannot run the same benchmark there because memory fills up too quickly.
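For illustration, here is a minimal sketch of the idea; this is not the actual `ThreadCache` code, and the class and method names are hypothetical. Instead of summing the size of every underlying cache on each `sizeBytes` call, a running total is maintained and updated whenever an entry's size changes, so reads become constant time.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of constant-time size tracking (names are illustrative only).
public class CacheSizeTracker {
    private final Map<String, AtomicLong> sizePerCache = new ConcurrentHashMap<>();
    private final AtomicLong totalBytes = new AtomicLong();

    // What a naive sizeBytes() amounts to: O(number of caches) on every call.
    public long sizeBytesLinear() {
        long total = 0L;
        for (final AtomicLong size : sizePerCache.values()) {
            total += size.get();
        }
        return total;
    }

    // Constant-time alternative: read a counter that is maintained incrementally.
    public long sizeBytesConstant() {
        return totalBytes.get();
    }

    // Called whenever an entry is added, overwritten, or evicted; delta may be negative.
    public void recordDelta(final String cacheName, final long delta) {
        sizePerCache.computeIfAbsent(cacheName, k -> new AtomicLong()).addAndGet(delta);
        totalBytes.addAndGet(delta);
    }
}
```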