Details
- Type: Sub-task
- Status: Open
- Priority: Major
- Resolution: Unresolved
Description
If we have statistics such as how many records an input has, or the size of the input, the Processor could use them to make better decisions. For example, in the case of a broadcast input from an order-by sample vertex, knowing the number of records in advance lets us initialize the hash map with that size and be more efficient. The same applies to skewed join, replicated join, etc. While order-by and skewed join have broadcast input coming from only one task, in a replicated join it comes from multiple tasks of the previous vertex. In that case an approximation would be good enough: if we know the number of inputs and the number of records in the inputs downloaded so far (completed tasks), we can extrapolate.
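A minimal sketch of the hash-map pre-sizing idea mentioned above. The class and method names (`PresizedMap`, `capacityFor`) are hypothetical, not part of any Tez/Pig API; the only real constant used is `java.util.HashMap`'s default 0.75 load factor, which we divide by so that the expected number of records fits without a resize:

```java
import java.util.HashMap;
import java.util.Map;

public class PresizedMap {
    // Compute an initial capacity so that `expectedRecords` entries fit
    // without triggering a rehash, given HashMap's default 0.75 load factor.
    // Capped at 2^30, the largest power-of-two table HashMap will allocate.
    static int capacityFor(long expectedRecords) {
        return (int) Math.min(1L << 30, (long) (expectedRecords / 0.75) + 1);
    }

    public static void main(String[] args) {
        long recordsFromStats = 1_000_000; // hypothetical count from input stats
        Map<String, Long> table = new HashMap<>(capacityFor(recordsFromStats));
        System.out.println(capacityFor(recordsFromStats));
    }
}
```

Without this, inserting a million records into a default-sized `HashMap` triggers roughly twenty incremental resizes, each rehashing every entry.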
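The extrapolation for the replicated-join case could look like the following sketch. Names (`RecordEstimate`, `estimateTotalRecords`) are illustrative assumptions, not an existing API; the logic simply scales the records seen in the completed inputs up to the total number of inputs:

```java
public class RecordEstimate {
    // Broadcast input for a replicated join is fed by many tasks of the
    // previous vertex. Once some of those inputs have been downloaded,
    // extrapolate the total record count from the completed fraction.
    static long estimateTotalRecords(long recordsSoFar, int completedInputs, int totalInputs) {
        if (completedInputs == 0) {
            return -1; // nothing downloaded yet; caller falls back to a default size
        }
        return Math.round((double) recordsSoFar / completedInputs * totalInputs);
    }

    public static void main(String[] args) {
        // 3 of 10 inputs downloaded, 300 records seen so far -> estimate ~1000
        System.out.println(estimateTotalRecords(300, 3, 10));
    }
}
```

The estimate is only an approximation (task outputs can be skewed), but as the description notes, an approximation is good enough for choosing an initial hash-map size.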
If we have some stats like how many records an input has or the size of the input, it would be good to make decisions in the Processor. For eg: In case of broadcast input from a order by sample vertex, if we know the number of records in advance we can initialize the hashmap with that size and be more efficient. Similarly for skewed join, replicated join, etc. While orderby and skewed join have broadcast input coming from only one task, in case of replicated join it would come from multiple tasks from the previous vertex. In that case an approximation would be good enough. i.e If we know the number of inputs and number of records in the inputs we have downloaded so far (completed tasks) we can extrapolate.