(There's a previous version of this ticket, which was very wrong about the actual cause. Original is quoted below)
In org.apache.cassandra.db.ColumnFamilyStore, the function scrubDataDirectories loops over all sstables and, for each sstable, cleans temporary files from its directory.
Since there are many sstables in a directory, this ends up cleaning the same directory many times.
When using LeveledCompactionStrategy on a data set that is ~4TB per node, you can easily end up with 200k files.
With N sstables each triggering a scan of the same N-file directory, scrubDataDirectories becomes an N*N operation, which ends up taking an hour (or more).
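To make the cost concrete, here is a minimal sketch (not the real ColumnFamilyStore code; all names are hypothetical) of the pattern described above: each sstable triggers a full scan of its parent directory, so N sstables sharing one directory cost N*N file visits.

```java
import java.util.ArrayList;
import java.util.List;

public class ScrubCost {
    static long fileVisits = 0;

    // Stand-in for File.listFiles(filter): touches every entry in the
    // directory, then applies the filter to each one.
    static List<String> listFiles(List<String> dir) {
        List<String> matches = new ArrayList<>();
        for (String f : dir) {
            fileVisits++;
            if (f.startsWith("tmp-")) matches.add(f);
        }
        return matches;
    }

    public static void main(String[] args) {
        int n = 1000; // 200k in the real cluster
        List<String> dir = new ArrayList<>();
        for (int i = 0; i < n; i++) dir.add("sstable-" + i);

        // scrubDataDirectories-style loop: one full directory scan per sstable
        for (int i = 0; i < n; i++) listFiles(dir);

        System.out.println(fileVisits); // n * n visits for a single directory
    }
}
```

At n = 1000 this is already a million file visits; at the 200k files mentioned above it is 4 * 10^10, which is in line with the hour-long startup.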
(At this point I should probably point out that no, I am not sure about that. At all. But I do know this takes an hour and jstack blames this function)
As promised, original ticket below:
A Cassandra cluster of ours has nodes with up to 4TB of data in a single table using leveled compaction, amounting to 200k files. While upgrading from 2.2.6 to 3.0.7 we noticed that it took a while to restart a node. And by "a while" I mean we measured it at more than 60 minutes.
jstack shows something interesting:
Going by the source of File.listFiles, it first puts every file in the directory into an array and only then applies the filter.
This is actually a known Java issue from 1999: http://bugs.java.com/view_bug.do?bug_id=4285834 – their "solution" was to introduce new APIs in Java 7 (java.nio.file). I guess that makes listFiles deprecated for larger directories (like when using LeveledCompactionStrategy).
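For reference, a sketch of the Java 7 alternative: Files.newDirectoryStream iterates directory entries lazily and applies a glob filter per entry, instead of materializing the whole listing in an array first (the file names here are made up for the demo).

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class TmpFileScan {
    public static void main(String[] args) throws IOException {
        // Throwaway directory standing in for a data directory
        Path dir = Files.createTempDirectory("sstables");
        Files.createFile(dir.resolve("mc-1-big-Data.db"));
        Files.createFile(dir.resolve("tmp-mc-2-big-Data.db"));

        // The glob is matched entry-by-entry as the stream is consumed;
        // no up-front array of 200k entries.
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, "tmp-*")) {
            for (Path p : stream) {
                System.out.println(p.getFileName());
            }
        }
    }
}
```

This avoids the per-call allocation cost, though the quadratic behavior described above would of course also need the per-sstable rescan loop fixed.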
tl;dr: because Cassandra uses java.io.File.listFiles, service startup can take an hour for larger data sets.