Details
- Type: Bug
- Priority: Minor
- Status: Closed
- Resolution: Fixed
- 2021-10-DragonGate
Description
The compaction rate limiter becomes invalid when the deserialize-page compaction is used.
Before serialization, we need to estimate the serialized size and request that much throughput from the limiter:
Throughput = a * serializedSize
The limiter then blocks the compaction for a time proportional to the requested throughput.
Currently, we use getCurrentChunkSize in ChunkWriter, which returns the size of the already-serialized chunk, not an estimate. If we have not flushed yet, it returns 0.
Then Throughput = a * 0 = 0, so this compaction never blocks on the limiter, which means compaction throughput is not limited at all.
We should use estimateMaxSeriesMemSize in ChunkWriter instead, which returns a non-zero estimate of the chunk's size before it is flushed.
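The bug can be sketched as follows. This is a minimal, self-contained illustration, not IoTDB's actual code: `FakeChunkWriter` is a hypothetical stand-in whose two methods mirror the behavior the ticket describes for `getCurrentChunkSize` (0 before flush) and `estimateMaxSeriesMemSize` (a non-zero pre-flush estimate); the 8-bytes-per-point estimate is an arbitrary assumption.

```java
// Hypothetical stand-in for ChunkWriter, illustrating the two size methods
// from the ticket. Before flush(), the serialized size is still 0, but an
// estimate of the eventual size is already available.
class FakeChunkWriter {
    private long serializedSize = 0; // only becomes non-zero after flush()
    private long bufferedPoints = 0;

    void write(long point) {
        bufferedPoints++;
    }

    // Mirrors getCurrentChunkSize: the already-serialized size, 0 pre-flush.
    long getCurrentChunkSize() {
        return serializedSize;
    }

    // Mirrors estimateMaxSeriesMemSize: a non-zero estimate before flush
    // (assumed here to be 8 bytes per buffered point).
    long estimateMaxSeriesMemSize() {
        return bufferedPoints * 8;
    }

    void flush() {
        serializedSize += estimateMaxSeriesMemSize();
        bufferedPoints = 0;
    }
}

public class RateLimitDemo {
    public static void main(String[] args) {
        FakeChunkWriter writer = new FakeChunkWriter();
        for (int i = 0; i < 1000; i++) {
            writer.write(i);
        }

        // Buggy throughput request: serialized size is 0 before flush, so the
        // limiter would be asked for 0 permits and never block.
        long buggy = writer.getCurrentChunkSize();

        // Fixed throughput request: the estimate is non-zero, so the limiter
        // can actually throttle the compaction.
        long fixed = writer.estimateMaxSeriesMemSize();

        System.out.println("buggy = " + buggy); // prints buggy = 0
        System.out.println("fixed = " + fixed); // prints fixed = 8000
    }
}
```

Feeding `buggy` into the rate limiter is why compaction runs unthrottled; feeding `fixed` restores the intended `Throughput = a * serializedSize` behavior.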