SPARK-31976 (sub-task of SPARK-30641: ML algs blockify input vectors)

use MemoryUsage to control the size of block


    Details

    • Type: Sub-task
    • Status: In Progress
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.1.0
    • Fix Version/s: None
    • Component/s: ML, PySpark
    • Labels: None
    • Target Version/s: None

      Description

      According to the performance tests in https://issues.apache.org/jira/browse/SPARK-31783, the performance gain is mainly related to the nnz (number of non-zeros) of a block.

      So it may be reasonable to control the size of a block by its memory usage, instead of by the number of rows.
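
      For illustration, a minimal sketch of such a per-vector memory estimate; the MemoryUsage helper and its constants are hypothetical, not an existing Spark API:

      import org.apache.spark.ml.linalg.{DenseVector, SparseVector, Vector}

      // Hypothetical helper: rough JVM footprint of one vector, counting
      // 8 bytes per Double value, 4 bytes per Int index, plus a small
      // allowance for object/array headers.
      object MemoryUsage {
        def vectorBytes(v: Vector): Long = v match {
          case s: SparseVector => 12L * s.numActives + 32L
          case d: DenseVector  => 8L * d.size + 16L
        }
      }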

       

      note1: the param blockSize is already used in ALS and MLP to stack vectors (which are expected to be dense);

      note2: we may refer to Strategy.maxMemoryInMB in the tree models;
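
      For context, a short usage sketch of the two existing knobs mentioned above (the values shown are the defaults; the MLP layer sizes are illustrative):

      import org.apache.spark.ml.classification.MultilayerPerceptronClassifier
      import org.apache.spark.mllib.tree.configuration.Strategy

      // MLP stacks rows into blocks of a fixed row count, regardless of
      // how much memory each row actually takes.
      val mlp = new MultilayerPerceptronClassifier()
        .setLayers(Array(4, 5, 3))
        .setBlockSize(128)

      // Tree models budget by memory instead: maxMemoryInMB caps the
      // memory used for histogram aggregation.
      val strategy = Strategy.defaultStrategy("Classification")
      strategy.maxMemoryInMB = 256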

       

      There may be two ways to implement this:

      1. Compute the sparsity of the input vectors ahead of training (this can be done together with other statistics computation, so it may need no extra pass), and infer a reasonable number of vectors to stack per block (see the first sketch after this list);

      2. Stack the input vectors adaptively, by monitoring the memory usage within a block (see the second sketch after this list).
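
      A minimal sketch of way 1, assuming the average nnz is already known from a statistics pass; inferRowsPerBlock and its parameters are hypothetical names, not an existing API:

      // Infer a fixed rows-per-block count that fits a memory budget.
      def inferRowsPerBlock(numFeatures: Int, avgNnz: Double, maxMemoryInMB: Int): Int = {
        // ~12 bytes per stored entry of a sparse row (8-byte value + 4-byte
        // index); a dense row costs at most 8 bytes per feature.
        val bytesPerRow = math.min(12.0 * avgNnz, 8.0 * numFeatures)
        val budgetBytes = maxMemoryInMB.toLong * 1024L * 1024L
        math.max(1, (budgetBytes / bytesPerRow).toInt)
      }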
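
      And a minimal sketch of way 2, stacking adaptively within each partition; adaptiveBlocks is hypothetical and reuses the vectorBytes estimator sketched earlier:

      import org.apache.spark.ml.linalg.Vector

      // Keep stacking rows until the estimated block size reaches the budget,
      // so dense and sparse inputs end up with similarly sized blocks.
      def adaptiveBlocks(rows: Iterator[Vector], maxBlockBytes: Long): Iterator[Seq[Vector]] =
        new Iterator[Seq[Vector]] {
          def hasNext: Boolean = rows.hasNext
          def next(): Seq[Vector] = {
            val buf = scala.collection.mutable.ArrayBuffer.empty[Vector]
            var bytes = 0L
            while (rows.hasNext && (buf.isEmpty || bytes < maxBlockBytes)) {
              val v = rows.next()
              buf += v
              bytes += MemoryUsage.vectorBytes(v) // estimator sketched above
            }
            buf.toSeq
          }
        }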


            People

            • Assignee: Unassigned
            • Reporter: podongfeng (zhengruifeng)
            • Votes: 0
            • Watchers: 6
