SPARK-5905

Note requirements for certain RowMatrix methods in docs


    Details

    • Type: Documentation
    • Status: Resolved
    • Priority: Trivial
    • Resolution: Fixed
    • Affects Version/s: 1.3.0
    • Fix Version/s: 1.6.0
    • Component/s: Documentation, MLlib
    • Labels: None

      Description

      From mbofb's comment on PR https://github.com/apache/spark/pull/4680:

      The descriptions of RowMatrix.computeSVD and mllib-dimensionality-reduction.html should be more precise/explicit about the m x n matrix. From the current description one could conclude that n refers to the rows. According to http://math.stackexchange.com/questions/191711/how-many-rows-and-columns-are-in-an-m-x-n-matrix, that way of describing a matrix is only used in particular domains. As a reader interested in applying SVD, I would prefer the more common m x n convention of rows x columns (e.g. http://en.wikipedia.org/wiki/Matrix_%28mathematics%29), which is also used in http://en.wikipedia.org/wiki/Latent_semantic_analysis and in the ARPACK manual:
      “
      N Integer. (INPUT) - Dimension of the eigenproblem.
      NEV Integer. (INPUT) - Number of eigenvalues of OP to be computed. 0 < NEV < N.
      NCV Integer. (INPUT) - Number of columns of the matrix V (less than or equal to N).
      ”
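
      A minimal Scala sketch, assuming a Spark shell with a SparkContext sc and made-up data, showing that in RowMatrix each Vector is one row, so m is the number of vectors (rows) and n is the vector dimension (columns):

      import org.apache.spark.mllib.linalg.Vectors
      import org.apache.spark.mllib.linalg.distributed.RowMatrix

      // Each Vector is one row: m = 4 rows, n = 3 columns.
      val rows = sc.parallelize(Seq(
        Vectors.dense(1.0, 2.0, 3.0),
        Vectors.dense(4.0, 5.0, 6.0),
        Vectors.dense(7.0, 8.0, 9.0),
        Vectors.dense(10.0, 11.0, 12.0)))
      val mat = new RowMatrix(rows)

      // computeSVD(k, ...) returns U (m x k), s (k singular values), V (n x k).
      val svd = mat.computeSVD(2, computeU = true)
      println(s"m = ${mat.numRows()}, n = ${mat.numCols()}")  // m = 4, n = 3
      println(s"V is ${svd.V.numRows} x ${svd.V.numCols}")    // 3 x 2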
      
      Description of RowMatrix.computeSVD and mllib-dimensionality-reduction.html:
      "We assume n is smaller than m." Is this just a recommendation or a hard requirement? The condition does not seem to be checked, and it does not cause an IllegalArgumentException: the computation finishes even when the vectors have a higher dimension than the number of vectors (n > m).
      
      Description of RowMatrix.computePrincipalComponents, or RowMatrix in general:
      I got an exception:
      java.lang.IllegalArgumentException: Argument with more than 65535 cols: 7949273
      at org.apache.spark.mllib.linalg.distributed.RowMatrix.checkNumColumns(RowMatrix.scala:131)
      at org.apache.spark.mllib.linalg.distributed.RowMatrix.computeCovariance(RowMatrix.scala:318)
      at org.apache.spark.mllib.linalg.distributed.RowMatrix.computePrincipalComponents(RowMatrix.scala:373)
      It would be nice if this 65535-column restriction were mentioned in the doc (if it still applies in 1.3).
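
      As a workaround sketch until the doc mentions the limit (the safePCA helper name is made up), one could check numCols before calling computePrincipalComponents, mirroring the checkNumColumns threshold in the stack trace above:

      import org.apache.spark.mllib.linalg.Matrix
      import org.apache.spark.mllib.linalg.distributed.RowMatrix

      // computeCovariance (used by computePrincipalComponents) materializes the
      // n x n covariance matrix locally, hence the column limit in the trace.
      def safePCA(mat: RowMatrix, k: Int): Matrix = {
        val n = mat.numCols()
        require(n <= 65535, s"computePrincipalComponents needs <= 65535 columns, got $n")
        mat.computePrincipalComponents(k)
      }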
      

            People

            • Assignee:
              srowen Sean R. Owen
            • Reporter:
              mengxr Xiangrui Meng
            • Votes:
              0
            • Watchers:
              5
