Description
For data scientists and statisticians, PCA is of little use unless they can estimate the proportion of variance explained by keeping the top k principal components (see the 'Explained variance' section here for the math details: https://inst.eecs.berkeley.edu/~ee127a/book/login/l_sym_pca.html). Estimating this requires only the eigenvalues of the covariance matrix.
The eigenvalues are already computed during PCA model fitting, but they are not returned to the caller; as it stands, PCA in Spark ML is therefore of very limited practical use.
For details, see these Stack Overflow questions:
- http://stackoverflow.com/questions/33428589/pyspark-and-pca-how-can-i-extract-the-eigenvectors-of-this-pca-how-can-i-calcu/ (pyspark)
- http://stackoverflow.com/questions/33559599/spark-pca-top-components (Scala)
and this blog post: http://www.nodalpoint.com/pca-in-spark-1-5/
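As a minimal sketch of the computation the ticket asks for (plain Python, not any Spark API; the function name and example eigenvalues are hypothetical), the proportion of variance explained by the top k components is simply the sum of the k largest eigenvalues of the covariance matrix divided by the sum of all eigenvalues:

```python
def explained_variance(eigenvalues, k):
    """Fraction of total variance captured by the top k principal
    components, given the eigenvalues of the covariance matrix."""
    vals = sorted(eigenvalues, reverse=True)  # largest first
    return sum(vals[:k]) / sum(vals)

# Hypothetical eigenvalues of a 4-dimensional covariance matrix:
eigs = [4.0, 2.0, 1.0, 1.0]
print(explained_variance(eigs, 2))  # (4 + 2) / 8 = 0.75
```

This is exactly why exposing the eigenvalues (or the ratios directly) from the fitted model would be enough: no additional fitting work is needed, only returning values that are already computed.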
Issue Links
- relates to SPARK-14661 Trim PCAModel by required explained variance (Resolved)
- relates to SPARK-12349 Make spark.ml PCAModel load backwards compatible (Resolved)
- links to