Spark / SPARK-26410

Support per Pandas UDF configuration


Details

    • Type: New Feature
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.1.0
    • Fix Version/s: None
    • Component/s: PySpark
    • Labels: None

    Description

      We use a "maxRecordsPerBatch" conf to control batch sizes. However, the "right" batch size usually depends on the task itself. It would be nice if users could configure the batch size when they declare the Pandas UDF.
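As a rough illustration of what the existing global conf (`spark.sql.execution.arrow.maxRecordsPerBatch`) controls: the rows of a partition are split into fixed-size batches before each batch is handed to the Pandas UDF. A minimal pure-Python sketch of that chunking step (not Spark code):

```python
def split_into_batches(rows, max_records):
    """Yield lists of at most `max_records` consecutive rows, mimicking how
    rows are grouped into Arrow batches before a Pandas UDF is invoked."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == max_records:
            yield batch
            batch = []
    if batch:  # trailing partial batch
        yield batch

sizes = [len(b) for b in split_into_batches(range(10), 3)]
print(sizes)  # [3, 3, 3, 1]
```

A per-UDF setting would let each UDF pick its own `max_records` instead of sharing this one session-wide value.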

      This is orthogonal to SPARK-23258 (using max buffer size instead of row count).

      Besides the API, we should also discuss how to merge Pandas UDFs with different configurations. For example,

      df.select(predict1(col("features")), predict2(col("features")))
      

      where predict1 requests 100 rows per batch while predict2 requests 120 rows per batch.
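One conceivable strategy (a sketch only, not anything Spark implements) is to keep a single input stream and rebatch it per consumer: buffer rows across incoming batch boundaries and re-slice them to each UDF's requested size. The function and sizes below are hypothetical:

```python
def rebatch(batches, target_size):
    """Re-slice an iterable of row batches into batches of `target_size`
    rows, buffering across input boundaries; the last batch may be short."""
    buf = []
    for batch in batches:
        buf.extend(batch)
        while len(buf) >= target_size:
            yield buf[:target_size]
            buf = buf[target_size:]
    if buf:
        yield buf

# Suppose the shared scan produces 100-row batches (predict1's request),
# but predict2 asked for 120 rows per batch: 300 rows in, re-sliced out.
incoming = [list(range(i, i + 100)) for i in range(0, 300, 100)]
sizes = [len(b) for b in rebatch(incoming, 120)]
print(sizes)  # [120, 120, 60]
```

The cost of such a scheme is the extra buffering (and copying) whenever the requested sizes do not divide each other, which is part of what the merge discussion would need to weigh.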

      cc: icexelloss bryanc holdenk hyukjin.kwon ueshin smilegator


          People

            Assignee: Unassigned
            Reporter: Xiangrui Meng (mengxr)
            Votes: 0
            Watchers: 4
