Some algorithms in Spark ML (e.g. LogisticRegression, LinearRegression, and I believe now KMeans) handle persistence internally: they check whether the input dataset is cached and, if it is not, cache it for performance.
However, the check is done via dataset.rdd.getStorageLevel == StorageLevel.NONE. This is always true: even when the dataset itself is cached, dataset.rdd produces a newly created RDD that is not cached.
Hence, if the input dataset is already cached, the data ends up being cached twice, which is wasteful.
To see this:
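For example, in spark-shell (a sketch; assumes a SparkSession named spark is in scope):

```scala
import org.apache.spark.storage.StorageLevel

val df = spark.range(10).toDF()
df.persist(StorageLevel.MEMORY_AND_DISK)

// The Dataset itself reports its storage level correctly...
assert(df.storageLevel == StorageLevel.MEMORY_AND_DISK)

// ...but df.rdd returns a fresh, uncached RDD, so the internal
// check sees StorageLevel.NONE and caches the data a second time.
assert(df.rdd.getStorageLevel == StorageLevel.NONE)
```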
Before SPARK-16063, there was no way to check the storage level of the input Dataset, but now there is, so the checks should be migrated to use dataset.storageLevel.
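A migrated check might look like the following sketch (fitWithCaching and its unpersist bookkeeping are illustrative helpers, not the actual Spark ML implementation):

```scala
import org.apache.spark.sql.Dataset
import org.apache.spark.storage.StorageLevel

def fitWithCaching[T](dataset: Dataset[T])(fit: Dataset[T] => Unit): Unit = {
  // Use dataset.storageLevel (available since SPARK-16063) instead of
  // dataset.rdd.getStorageLevel, so an already-cached input is not
  // cached a second time.
  val handlePersistence = dataset.storageLevel == StorageLevel.NONE
  if (handlePersistence) dataset.persist(StorageLevel.MEMORY_AND_DISK)
  try fit(dataset)
  finally if (handlePersistence) dataset.unpersist()
}
```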