Details
- Type: Improvement
- Status: In Progress
- Priority: Major
- Resolution: Unresolved
Description
The current implementation of the hash-join node keeps in memory the hash table, the entire build-side input, and the entire probe-side input (i.e. the entire dataset). As a result, it will run out of memory and crash if the input dataset is larger than the memory available on the system.
By spilling to disk when memory starts to fill up, we can allow the hash-join node to process datasets larger than the available memory on the machine.
Issue Links
- supersedes
  - ARROW-14163 [C++] Naive spillover implementation for join (Closed)