Apache IoTDB / IOTDB-1850

The compaction rate limiter becomes invalid when using deserialize page compaction


Details

    • 2021-10-DragonGate

    Description

      The compaction rate limiter becomes invalid when deserialize page compaction is used.

      Before serializing a chunk, we need to estimate its serialized size and acquire that much throughput from the rate limiter:

       

      Throughput = a * serializedSize

       

      The amount of throughput acquired determines how long the limiter blocks the compaction thread; this blocking is what caps the overall compaction rate.
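      As a rough sketch of this scheme, assuming the limiter is a Guava RateLimiter (the CompactionThrottle class, its rate, and the slicing loop below are illustrative, not the actual IoTDB code):

        import com.google.common.util.concurrent.RateLimiter;

        public class CompactionThrottle {
          // Illustrative cap on compaction throughput: 16 MB/s.
          private final RateLimiter limiter = RateLimiter.create(16.0 * 1024 * 1024);

          // Block the caller until the limiter grants permits for estimatedBytes.
          // Guava's acquire(int) takes an int, so very large sizes are acquired
          // in Integer.MAX_VALUE-sized slices.
          public void acquire(long estimatedBytes) {
            while (estimatedBytes > Integer.MAX_VALUE) {
              limiter.acquire(Integer.MAX_VALUE);
              estimatedBytes -= Integer.MAX_VALUE;
            }
            if (estimatedBytes > 0) {
              limiter.acquire((int) estimatedBytes);
            }
          }
        }

      If the caller passes 0 bytes, acquire returns without blocking at all, which is exactly the failure mode described below.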

       

      Currently, we use getCurrentChunkSize in ChunkWriter, which is the size of the chunk data that has already been serialized, not an estimate. If the chunk has not been flushed yet, it returns 0.

       

      Then Throughput = a * 0 = 0, so this compaction never blocks on the limiter, which means the compaction throughput is not limited at all.

       

      We should use estimateMaxSeriesMemSize in ChunkWriter instead, which estimates the size of the buffered series data even before the chunk is flushed.
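      A minimal sketch of the fixed call site, reusing the illustrative CompactionThrottle above and assuming the compaction task flushes each ChunkWriter through a TsFileIOWriter (estimateMaxSeriesMemSize and getCurrentChunkSize are the methods named in this issue; writeToFileWriter and the import paths are assumptions about the tsfile API):

        import java.io.IOException;
        import org.apache.iotdb.tsfile.write.chunk.IChunkWriter;
        import org.apache.iotdb.tsfile.write.writer.TsFileIOWriter;

        public class ChunkFlusher {
          // Buggy version: getCurrentChunkSize() counts only bytes that are
          // already serialized, so it returns 0 for an unflushed chunk and the
          // limiter is charged nothing:
          //   throttle.acquire(chunkWriter.getCurrentChunkSize()); // acquires 0

          // Fixed version: estimate the size of the data buffered in the writer
          // before flushing, so the limiter is always charged a realistic cost.
          public void flushChunk(
              IChunkWriter chunkWriter, TsFileIOWriter writer, CompactionThrottle throttle)
              throws IOException {
            long estimatedSize = chunkWriter.estimateMaxSeriesMemSize();
            throttle.acquire(estimatedSize); // blocks according to the rate limit
            chunkWriter.writeToFileWriter(writer); // then actually serialize the chunk
          }
        }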


    People

      Assignee: Liuxuxin (ThuLiuxuxin)
      Reporter: 张凌哲 (surevil0820)
      Votes: 0
      Watchers: 2


    Time Tracking

      Original Estimate: 2h
      Remaining Estimate: 2h
      Time Spent: Not Specified