Details
- Type: Sub-task
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: None
- Fix Version/s: None
Description
Currently, files generated by SparkHashTableSinkOperator for small tables are written directly to HDFS with a high replication factor. When a map join happens, the map join operator loads these files into hash tables. Since multiple partitions can be processed on the same worker node, reading the same set of files multiple times is not ideal. This can be improved by calling SparkContext.addFile() on these files and using SparkFiles.get() to download them to each worker node just once, as in the sketch below.
Please note that SparkFiles.get() is a static method. The code invoking it should live in a static method, and that method needs to be synchronized because it may be called from different threads.
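A minimal sketch of the proposed pattern, assuming a hypothetical helper class; SmallTableFiles and its method names are illustrative, not existing Hive code, while SparkContext.addFile() and SparkFiles.get() are the actual Spark APIs:
{code:java}
import org.apache.spark.SparkFiles;
import org.apache.spark.api.java.JavaSparkContext;

public final class SmallTableFiles {

    private SmallTableFiles() {}

    // Driver side: register a small-table file produced by
    // SparkHashTableSinkOperator so Spark ships it to each worker node once.
    public static void register(JavaSparkContext sc, String hdfsPath) {
        sc.addFile(hdfsPath);
    }

    // Executor side: resolve the local copy of the file. SparkFiles.get()
    // is static, and this wrapper is synchronized because map-join operators
    // running in different task threads may call it concurrently.
    public static synchronized String localPath(String fileName) {
        // Returns the path where Spark downloaded the file on this worker.
        return SparkFiles.get(fileName);
    }
}
{code}
With this pattern, each worker node downloads the small-table files once via Spark's file distribution mechanism instead of every map join task re-reading them from HDFS.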
Attachments
Issue Links
- depends upon
  - SPARK-4687 SparkContext#addFile doesn't keep file folder information (Resolved)
- is related to
  - HIVE-10302 Load small tables (for map join) in executor memory only once [Spark Branch] (Closed)