Okay, this is definitely more than a documentation bug, because PySpark and Scala behave differently when spark.default.parallelism isn't set by the user. I'm testing with a wordcount job.
PySpark: reduceByKey uses the value of sc.defaultParallelism. Locally, that value is the number of threads; on my Spark Standalone "cluster", which has a single node with a single core, the value is 2. If I set spark.default.parallelism, sc.defaultParallelism takes that value and reduceByKey uses it.
Scala: reduceByKey uses the number of partitions from my file/map stage and ignores sc.defaultParallelism. sc.defaultParallelism is set by the same logic as in PySpark (number of threads locally, 2 on my microcluster); it is just ignored. A sketch of how I'm checking this follows.
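For reference, here's a minimal sketch of how I'm observing this on the PySpark side (the master URL and input path are just placeholders for my setup); the equivalent Scala wordcount reports the map stage's partition count for the reduce stage instead:

```python
from pyspark import SparkContext

# Placeholder master/input; local[1] mimics my single-core setup.
sc = SparkContext("local[1]", "wordcount-parallelism-check")
print("sc.defaultParallelism  =", sc.defaultParallelism)

pairs = sc.textFile("/tmp/words.txt") \
          .flatMap(lambda line: line.split()) \
          .map(lambda w: (w, 1))
print("map-stage partitions   =", pairs.getNumPartitions())

counts = pairs.reduceByKey(lambda a, b: a + b)
# PySpark reports sc.defaultParallelism here; the same job in Scala
# reports the parent RDD's partition count instead.
print("reduce-stage partitions =", counts.getNumPartitions())
```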
I'm not sure which approach is correct. Scala works as described here: http://spark.apache.org/docs/latest/tuning.html
Spark automatically sets the number of “map” tasks to run on each file according to its size (though you can control it through optional parameters to SparkContext.textFile, etc), and for distributed “reduce” operations, such as groupByKey and reduceByKey, it uses the largest parent RDD’s number of partitions. You can pass the level of parallelism as a second argument (see the spark.PairRDDFunctions documentation), or set the config property spark.default.parallelism to change the default. In general, we recommend 2-3 tasks per CPU core in your cluster.
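For what it's worth, both knobs the guide mentions do work from PySpark. A quick sketch, where the values 4 and 8 are arbitrary examples rather than recommendations:

```python
from operator import add
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setMaster("local[1]")          # placeholder master
        .setAppName("explicit-parallelism")
        .set("spark.default.parallelism", "4"))
sc = SparkContext(conf=conf)

pairs = sc.textFile("/tmp/words.txt") \
          .flatMap(lambda line: line.split()) \
          .map(lambda w: (w, 1))

# Without a second argument, PySpark falls back to spark.default.parallelism (4).
# Passing numPartitions explicitly overrides the default.
counts = pairs.reduceByKey(add, numPartitions=8)
print(counts.getNumPartitions())  # 8
```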