Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Versions: v2.3.0, v2.3.1, v2.4.0
- None
Description
In a comparison between the Spark and MR cubing engines, I noticed that the cuboid files generated by the Spark engine are 3x larger than MR's and take 4x more disk space on HDFS.
The reason is that "dfs.replication=2" does not take effect when Spark saves to HDFS, and by default Spark writes its output without compression.
The converted HFiles end up the same size and the query results are identical, so this difference can easily be overlooked.
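As a sketch of one possible remedy (the property names below are standard Spark/Hadoop configuration keys, assumed for illustration; they are not taken from the actual fix), the replication factor and output compression could be passed to the Spark cubing job via `spark.hadoop.*` properties, which Spark copies into the Hadoop `Configuration` used when writing to HDFS:

```properties
# Assumed example: forward HDFS replication and MapReduce-style output
# compression settings to the Spark job's Hadoop Configuration.
spark.hadoop.dfs.replication=2
spark.hadoop.mapreduce.output.fileoutputformat.compress=true
spark.hadoop.mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.DefaultCodec
```

Spark strips the `spark.hadoop.` prefix and applies the remainder to the job's Hadoop `Configuration`, so files the Spark engine writes to HDFS would then be replicated twice and compressed, matching the MR engine's output size more closely.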