Description
Currently, SparkR's `createDataFrame` uses `1` for `numPartitions` by default, which isn't realistic for larger local data. It should use a larger default number of partitions.
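As an illustrative sketch (assuming the SparkR `createDataFrame`, `getNumPartitions`, and `numPartitions` parameter behave as shown; the example data is arbitrary), the current default places all local data into a single partition unless `numPartitions` is passed explicitly:

```r
library(SparkR)
sparkR.session()

# With the current default, the local data ends up in a single partition,
# regardless of its size.
df <- createDataFrame(mtcars)
getNumPartitions(df)  # 1

# Workaround today: pass numPartitions explicitly.
df4 <- createDataFrame(mtcars, numPartitions = 4)
getNumPartitions(df4)  # 4
```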
In PySpark, the input data is chunked into batches of `spark.sql.execution.arrow.maxRecordsPerBatch` rows; SparkR should follow the same approach.
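A minimal sketch of what such a default could look like, assuming the chunk size is taken from `spark.sql.execution.arrow.maxRecordsPerBatch` (whose default is 10000) as in PySpark; the helper function below is hypothetical, not the actual implementation:

```r
# Hypothetical helper: derive a default numPartitions from the number of rows
# and the Arrow batch size, mirroring PySpark's chunking behavior.
defaultNumPartitions <- function(numRows, maxRecordsPerBatch = 10000L) {
  max(1L, as.integer(ceiling(numRows / maxRecordsPerBatch)))
}

defaultNumPartitions(32)      # 1  (small data still fits in one partition)
defaultNumPartitions(100000)  # 10 (100k rows / 10k rows per batch)
```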
Issue resolved by pull request 41307
https://github.com/apache/spark/pull/41307