Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Won't Fix
- Affects Version: 1.1.0
- Fix Version: None
Description
When the number of features is not known in advance, it can be quite helpful to create sparse vectors using HashingTF.transform. However, KMeans converts the cluster centers to dense vectors (https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/clustering/KMeans.scala#L307), which leads to OutOfMemoryError on high-dimensional feature spaces (even with a small k).
Is there any way to keep the vectors sparse?
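As a rough back-of-envelope sketch (not part of the original report), the following illustrates why densifying the centers blows up: it assumes HashingTF's default feature space of 2^20 and illustrative values for k and per-document sparsity.

```scala
// Rough memory estimate: why densifying KMeans centers can cause OOM.
// numFeatures, k, and the non-zero count below are illustrative assumptions.
object DenseCenterMemory {
  // A dense vector stores 8 bytes per Double for every feature slot.
  def denseBytes(numFeatures: Long): Long = 8L * numFeatures

  // A sparse vector stores roughly 12 bytes per non-zero entry
  // (4-byte Int index + 8-byte Double value), ignoring object overhead.
  def sparseBytes(numNonZeros: Long): Long = 12L * numNonZeros

  def main(args: Array[String]): Unit = {
    val numFeatures = 1L << 20 // HashingTF's default feature space
    val k = 100L               // a modest number of clusters
    // k dense centers: 100 * 8 MB = ~800 MB of doubles alone.
    println(s"dense centers:  ~${k * denseBytes(numFeatures) / (1 << 20)} MB")
    // k sparse centers with ~1000 non-zeros each: ~1 MB total.
    println(s"sparse centers: ~${k * sparseBytes(1000) / 1024} KB")
  }
}
```

Even though each input document touches only a few thousand hash buckets, every center materialized as a dense array pays for all 2^20 slots, which is why the failure appears even with a small k.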
Issue Links
- is duplicated by SPARK-12861: Changes to support KMeans with large feature space (Resolved)