I have a job which ends up calling org.apache.spark.sql.Row.toString on every row in a massive dataset (the reasons for this are slightly odd and it's a bit non-trivial to change the job to avoid this step).
Row.toString is implemented by first materializing a WrappedArray of the Row's values (via toSeq) and then rendering that array with mkString. We might be able to get a small performance win by pipelining these steps: an imperative loop that appends each field to a StringBuilder as soon as it is retrieved, cutting out a few layers of Scala collections indirection and the intermediate array allocation.
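A rough sketch of the idea, using a hypothetical MiniRow trait so the example runs without a Spark dependency (the real change would go in org.apache.spark.sql.Row):

```scala
// Hypothetical stand-in for the relevant slice of Row's interface.
trait MiniRow {
  def length: Int
  def get(i: Int): Any
}

// Roughly the current approach: materialize a Seq, then mkString.
def toStringViaSeq(row: MiniRow): String =
  (0 until row.length).map(row.get).mkString("[", ",", "]")

// Proposed pipelined approach: append each field to a StringBuilder
// as soon as it is retrieved, skipping the intermediate collection.
def toStringPipelined(row: MiniRow): String = {
  val sb = new StringBuilder("[")
  var i = 0
  while (i < row.length) {
    if (i > 0) sb.append(',')
    sb.append(row.get(i))
    i += 1
  }
  sb.append(']').toString
}
```

Both versions should produce the same "[a,b,c]"-style output; the pipelined one just avoids building the intermediate WrappedArray per row.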