Apache Ozone / HDDS-3630 Merge rocksdb in datanode
HDDS-7321

Auto RocksDB small SST file compaction


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.4.0
    • Component/s: None

    Description

      RocksDB has built-in auto compaction, triggered by the total file size of each level.

      Once a level's total file size exceeds its threshold, RocksDB schedules a compaction in the background.
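
      For illustration, a minimal sketch of these size-based triggers using the RocksDB Java API (the option names are real RocksDB options, but the values are arbitrary examples, not Ozone's settings):

        import org.rocksdb.Options;

        // Arbitrary example values, not Ozone's actual configuration.
        Options options = new Options()
            // L0 -> L1 compaction starts once L0 holds this many files.
            .setLevel0FileNumCompactionTrigger(4)
            // Target total size of L1; deeper levels grow by the multiplier.
            .setMaxBytesForLevelBase(256L * 1024 * 1024)
            .setMaxBytesForLevelMultiplier(10);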


      When replicating containers between datanodes, the current implementation uses RocksDB's SstFileWriter to export container metadata to individual SST files, and RocksDB's ingestExternalFile to import those SST files directly into the target datanode's RocksDB. If the imported container metadata keys don't overlap with keys in other SST files (under the Merge RocksDB design the container ID is used as the prefix of each metadata key, so this holds most of the time), the ingested SST file is kept as-is and is never compacted with the other existing SST files.
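
      A minimal sketch of that export/import flow with the RocksDB Java API (the paths, key, and value below are hypothetical; the real Ozone code path differs):

        import java.util.Collections;
        import org.rocksdb.EnvOptions;
        import org.rocksdb.IngestExternalFileOptions;
        import org.rocksdb.Options;
        import org.rocksdb.RocksDB;
        import org.rocksdb.RocksDBException;
        import org.rocksdb.SstFileWriter;

        public class ContainerReplicationSketch {
          public static void main(String[] args) throws RocksDBException {
            // Source datanode: dump one container's metadata to a standalone SST file.
            try (Options options = new Options();
                 EnvOptions envOptions = new EnvOptions();
                 SstFileWriter writer = new SstFileWriter(envOptions, options)) {
              writer.open("/tmp/container-42.sst");  // hypothetical path
              // Keys are prefixed with the container ID, so they rarely
              // overlap with keys from other containers.
              writer.put("42|block-1".getBytes(), "blockMeta".getBytes());
              writer.finish();
            }

            // Target datanode: ingest the file directly, bypassing the memtable.
            try (Options options = new Options().setCreateIfMissing(true);
                 RocksDB db = RocksDB.open(options, "/tmp/target-db");
                 IngestExternalFileOptions ingest = new IngestExternalFileOptions()) {
              db.ingestExternalFile(
                  Collections.singletonList("/tmp/container-42.sst"), ingest);
            }
          }
        }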

      In the worst case, if thousands or tens of thousands of containers are imported on one datanode, there can be tens of thousands of small SST files under one RocksDB instance, or across all the RocksDB instances of one datanode. By default, RocksDB places no limit on open files. Tens of thousands of small SST files can exhaust the process's open-file quota, and service stability will be impacted.
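
      The relevant knob here is RocksDB's max_open_files option, whose default of -1 means unlimited; a sketch of capping it (the value is an arbitrary example):

        import org.rocksdb.Options;

        // Default is -1 (unlimited). A positive cap makes RocksDB keep at
        // most this many table files open, at the cost of table-cache churn.
        Options options = new Options().setMaxOpenFiles(1024);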

      This task aims to provide a way to automatically compact these small SST files into larger merged ones. The compaction will, of course, have some impact on user data read/write performance on the datanode.
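
      A minimal sketch of one possible approach, run periodically per RocksDB instance (the class name, thresholds, and full-range compaction are assumptions for illustration; the actual HDDS-7321 implementation may differ):

        import java.util.List;
        import org.rocksdb.LiveFileMetaData;
        import org.rocksdb.RocksDB;
        import org.rocksdb.RocksDBException;

        public final class SmallSstFileCompactor {
          // Hypothetical thresholds for illustration.
          private static final long SMALL_FILE_BYTES = 2L * 1024 * 1024;
          private static final int SMALL_FILE_LIMIT = 64;

          /** Merge accumulated small SST files via a full-range compaction. */
          public static void maybeCompact(RocksDB db) throws RocksDBException {
            List<LiveFileMetaData> files = db.getLiveFilesMetaData();
            long smallFiles = files.stream()
                .filter(f -> f.size() < SMALL_FILE_BYTES)
                .count();
            if (smallFiles > SMALL_FILE_LIMIT) {
              db.compactRange();  // competes with foreground reads/writes
            }
          }
        }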

Attachments

Issue Links

Activity

People

    Assignee: Sammi Chen
    Reporter: Sammi Chen
    Votes: 0
    Watchers: 2

Dates

    Created:
    Updated:
    Resolved: