The percentile_approx function in Spark SQL takes much longer than the previous Hive implementation for large data sets (7B rows grouped into 200k buckets, with the percentile computed per bucket). Tested with Spark 2.3.1 vs. Spark 2.1.0.
The code below finishes in around 24 minutes on Spark 2.1.0; on Spark 2.3.1 it does not finish at all within 2 hours. Also tried different accuracy values (5000, 1000, 500): with the new version the timing does improve on smaller data sets, but the speed difference remains insignificant.
Environment: AWS EMR with Spark 2.1.0, and AWS EMR with Spark 2.3.1.
spark-shell --conf spark.driver.memory=12g --conf spark.executor.memory=10g --conf spark.sql.shuffle.partitions=2000 --conf spark.default.parallelism=2000 --num-executors=75 --executor-cores=2
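The original code block was not included in this report, so as context, a minimal sketch of the kind of query described above might look like the following. The table name, column names, percentile value, and output path are all hypothetical placeholders, not taken from the report; percentile_approx(col, percentage, accuracy) is the built-in Spark SQL signature, and the accuracy argument is the one the report varied (5000, 1000, 500).

```scala
// Hypothetical reproduction sketch; table/column names and paths are illustrative.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("percentile_approx repro")
  .getOrCreate()

// Assume a table of roughly 7B rows with ~200k distinct bucket_id values.
spark.table("events").createOrReplaceTempView("events")

// One approximate percentile per bucket; the third argument is the
// accuracy parameter the report experimented with.
val result = spark.sql("""
  SELECT bucket_id,
         percentile_approx(value, 0.95, 5000) AS p95
  FROM events
  GROUP BY bucket_id
""")

result.write.mode("overwrite").parquet("hdfs:///tmp/percentile_out")
```

Run inside the spark-shell session launched with the configuration shown above; the same GROUP BY + percentile_approx shape is what regressed between Spark 2.1.0 and 2.3.1 according to the report.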