Details
- Type: Improvement
- Status: Reopened
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 3.1.0
- Fix Version/s: None
- Component/s: None
Description
Currently, when executing a scalar pandas_udf or calling toPandas(), the Arrow record batches are split once the record count reaches the maximum configured by "spark.sql.execution.arrow.maxRecordsPerBatch". This is not ideal because the number of columns is not taken into account: with many columns, a single batch can still be large enough to cause OOMs. An alternative approach could be to look at the size of the Arrow buffers being used and cap each batch at a certain byte size.
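As a rough illustration (not from this issue; the DataFrame and column names below are made up), the sketch shows how a row-count cap ignores row width: each batch holds the same number of rows whether the DataFrame has 2 columns or 1,000, so per-batch memory grows linearly with column count.

{code:python}
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

# Batches are cut every 10,000 rows, no matter how wide each row is.
spark.conf.set("spark.sql.execution.arrow.maxRecordsPerBatch", "10000")
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")

# A hypothetical wide DataFrame: 1,000 double columns. One batch is then
# ~10,000 rows * 1,000 cols * 8 bytes ~= 80 MB of Arrow buffers, versus
# ~160 KB for a 2-column frame at the same setting.
wide_df = spark.range(1_000_000).select(
    *[(col("id") * i).cast("double").alias(f"c{i}") for i in range(1000)]
)
pdf = wide_df.toPandas()  # each Arrow batch holds maxRecordsPerBatch rows
{code}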
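And a minimal sketch of the size-based alternative, written against pyarrow's public Table API rather than Spark internals (the function name and the 64 MB default are assumptions for illustration, not anything proposed in this issue): estimate bytes per row from the table's total buffer size, then derive a row chunk size that keeps each batch under a byte cap.

{code:python}
import pyarrow as pa

def to_size_capped_batches(table: pa.Table, max_batch_bytes: int = 64 << 20):
    """Split a Table into RecordBatches capped by approximate byte size,
    not by a fixed row count."""
    if table.num_rows == 0:
        return table.to_batches()
    # Approximate bytes per row from the total size of the Arrow buffers.
    bytes_per_row = max(1, table.nbytes // table.num_rows)
    rows_per_batch = max(1, max_batch_bytes // bytes_per_row)
    return table.to_batches(max_chunksize=rows_per_batch)

# Example: a 100-column table yields far fewer rows per batch than a
# 2-column one, keeping each batch near the same memory footprint.
wide = pa.table({f"c{i}": [float(j) for j in range(100_000)] for i in range(100)})
batches = to_size_capped_batches(wide, max_batch_bytes=8 << 20)
{code}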