Description
The second element in each row of "stats" is a list with one Vector per document in the mini-batch. These are collected to the driver at this line:
https://github.com/apache/spark/blob/5743c6476dbef50852b7f9873112a2d299966ebd/mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala#L456
We should not collect those to the driver. Rather, we should do the necessary maps and aggregations in a distributed manner. This will involve modifying the Dirichlet expectation implementation. (This JIRA should be done by someone knowledgeable about online LDA and Spark.)
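As a sketch of the proposed change, the per-document sufficient statistics can be summed with a distributed aggregation instead of being materialized on the driver. The example below is hypothetical plain Scala standing in for an RDD (on an actual RDD this would be `treeAggregate` in place of `collect`); the object and method names are illustrative, not from the Spark codebase.

```scala
// Hypothetical sketch: fold per-document vectors into one elementwise sum
// instead of collecting every Vector to the driver and summing there.
// With Spark, the same pattern is rdd.treeAggregate(zero)(seqOp, combOp).
object DistributedSumSketch {
  type Vec = Array[Double]

  // combOp: elementwise sum of two vectors of equal dimension
  def add(a: Vec, b: Vec): Vec = a.zip(b).map { case (x, y) => x + y }

  // Aggregate per-document vectors without materializing them all at once.
  def aggregate(docs: Iterator[Vec], dim: Int): Vec =
    docs.foldLeft(Array.fill(dim)(0.0))(add)

  def main(args: Array[String]): Unit = {
    val docs = Iterator(Array(1.0, 2.0), Array(3.0, 4.0), Array(5.0, 6.0))
    val sum = aggregate(docs, 2)
    println(sum.mkString(","))  // prints 9.0,12.0
  }
}
```

The key property is that the combine step is associative and commutative, so the sum can be computed on the executors and only the final `dim`-length vector reaches the driver.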
Attachments
Issue Links
- blocks: SPARK-22111 OnlineLDAOptimizer should filter out empty documents beforehand (Resolved)
- is required by: SPARK-5572 LDA improvement listing (Resolved)