Description
With the Tiered Storage feature introduced in KIP-405, users can configure retention for the remote tier by time, by size, or both. The work of computing which log segments should be deleted based on the retention config is owned by the RemoteLogManager (RLM).
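For illustration, time- and size-based retention for a tiered topic can be expressed through the standard topic configs (a minimal sketch; the values below are purely illustrative):

```properties
# Hypothetical topic configs for a tiered topic; values are illustrative.
remote.storage.enable=true
retention.ms=604800000         # delete data (local + remote) older than 7 days
retention.bytes=1099511627776  # and/or cap the total log size at 1 TiB
local.retention.ms=3600000     # keep only the last hour on local disk
```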
To determine which remote segments are eligible for deletion under the size-based retention config, RLM needs to compute total_remote_log_size, i.e. the total size of the log available in the remote tier for that topic-partition. RLM could use RemoteLogMetadataManager.listRemoteLogSegments() to fetch the metadata of all remote segments and then aggregate their sizes via RemoteLogSegmentMetadata.segmentSizeInBytes() to find the total log size stored in the remote tier.
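A minimal sketch of that aggregation (not the actual RLM code; the helper class and method names are illustrative, only the RemoteLogMetadataManager / RemoteLogSegmentMetadata APIs are from KIP-405):

```java
import java.util.Iterator;
import org.apache.kafka.common.TopicIdPartition;
import org.apache.kafka.server.log.remote.storage.RemoteLogMetadataManager;
import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata;
import org.apache.kafka.server.log.remote.storage.RemoteStorageException;

class RemoteLogSizeExample {
    // Sum the sizes of all remote segments for one topic-partition.
    static long totalRemoteLogSize(RemoteLogMetadataManager rlmm, TopicIdPartition tp)
            throws RemoteStorageException {
        long totalSize = 0L;
        Iterator<RemoteLogSegmentMetadata> segments = rlmm.listRemoteLogSegments(tp);
        while (segments.hasNext()) {
            // One metadata entry per segment, so the scan is O(num_remote_segments).
            totalSize += segments.next().segmentSizeInBytes();
        }
        return totalSize;
    }
}
```

This is exactly the linear scan whose cost is discussed next.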
The above method iterates over the metadata of every remote segment, i.e. it is O(num_remote_segments) on each execution of the RLM thread. Since the main purpose of tiered storage is storing large amounts of data, we expect num_remote_segments to be large, and a frequent linear scan could be expensive (depending on the underlying storage used by the RemoteLogMetadataManager).
Segment offloads and segment deletions run together in the same task, and a fixed-size thread pool is shared among all topic-partitions. Slow calculation of total_remote_log_size could therefore result in a loss of availability, as demonstrated in the following scenario:
- Calculation of total_remote_log_size is slow, so the threads in the thread pool stay busy with segment deletions
- Segment offloads are delayed (since they run together with deletions)
- The local disk fills up, since a local segment can only be deleted after it has been offloaded
- If the local disk becomes completely full, Kafka fails
Details are in the KIP: https://cwiki.apache.org/confluence/display/KAFKA/KIP-852%3A+Optimize+calculation+of+size+for+log+in+remote+tier