In a Kudu cluster whose tablet servers have 9 data directories, each backed by a separate HDD (spinning disk), and 3 maintenance manager threads, I noticed a long period (2 hours or so) of 100% IO saturation on one drive, followed by another long period of 100% IO saturation on a different drive.
It turned out that all 3 maintenance threads were hammering the same data directory for a long time (which was the reason for the 100% IO saturation on the backing drive). Then they switched to another data directory, saturating the IO there. That led to extremes like fsync taking tens of seconds to complete. With a higher number of data directories and a higher number of maintenance threads, this may become even more extreme.
It would be nice to schedule compactions and flushes so they are spread across the available data directories, if possible.
Also, it would be great to establish a limit on concurrent compactions/flushes per data directory, so that even with a higher number of data directories it would be possible to prevent all the flushing/compacting threads from hammering a single one.
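To illustrate the idea, here is a minimal sketch of a per-directory concurrency cap combined with least-loaded directory selection. This is purely hypothetical pseudocode for the proposal, not Kudu's actual maintenance manager API; all names (`DirAwareScheduler`, `pick_dir`, `max_per_dir`) are made up for illustration.

```python
import threading


class DirAwareScheduler:
    """Toy sketch: admit a maintenance op only if its target data
    directory is under a concurrency cap, preferring the directory
    with the fewest ops already in flight. Hypothetical, not Kudu's API."""

    def __init__(self, data_dirs, max_per_dir=1):
        self.max_per_dir = max_per_dir
        self.active = {d: 0 for d in data_dirs}  # in-flight ops per dir
        self.lock = threading.Lock()

    def pick_dir(self, candidate_dirs):
        """Return the least-loaded candidate directory that is still
        under the cap, or None if all candidates are saturated."""
        with self.lock:
            eligible = [d for d in candidate_dirs
                        if self.active[d] < self.max_per_dir]
            if not eligible:
                return None  # defer the op instead of piling on one disk
            chosen = min(eligible, key=lambda d: self.active[d])
            self.active[chosen] += 1
            return chosen

    def release(self, d):
        """Call when the compaction/flush touching directory d finishes."""
        with self.lock:
            self.active[d] -= 1
```

With 3 directories and `max_per_dir=1`, three concurrent ops land on three distinct drives, and a fourth is deferred (`pick_dir` returns `None`) rather than queuing behind an already-saturated disk.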
Another approach might be switching from the multi-directory structure to some volume-based approach, where the filesystem or a RAID controller takes care of fanning out the IO across the multitude of drives backing the volume.