Details
Type: Sub-task
Status: Open
Priority: Major
Resolution: Unresolved
Description
Most of our parallelized units of work can be modeled as an HBase scan in Phoenix (as that is ultimately what gets executed for the client/server RPC). The scan is annotated with attributes that the coprocessor uses to drive its execution.
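A minimal sketch of that annotation pattern using the stock HBase client API. The attribute name "phoenix.example.filter" and the class are hypothetical and only for illustration, and the server-side lookup is shown as a plain method rather than a specific RegionObserver hook so the sketch stays HBase-version-agnostic:

{code:java}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class AnnotatedScanExample {

    // Client side: annotate the scan with an attribute that server-side
    // code (a coprocessor) can read back to drive how it runs the scan.
    static Scan buildScan() {
        Scan scan = new Scan();
        scan.setAttribute("phoenix.example.filter", Bytes.toBytes("status=ACTIVE"));
        return scan;
    }

    // Server side: in a real deployment this lookup happens inside a
    // coprocessor hook (e.g. a RegionObserver); shown here as a plain
    // method to keep the sketch independent of any HBase version.
    static String readAnnotation(Scan scan) {
        byte[] value = scan.getAttribute("phoenix.example.filter");
        return value == null ? null : Bytes.toString(value);
    }
}
{code}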
Our hash join is different, though: it makes two scans, both parallelized, that are coordinated by the client. The first scan reads the smaller side, and its results end up being cached on the region servers. The second scan then looks up each row in that cache and returns the joined rows.
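A simplified, process-local sketch of that two-phase pattern (build the smaller side into a hash table, then probe it while scanning the larger side). The class and method names are hypothetical, and a plain HashMap stands in for the server-side cache that Phoenix actually populates on the region servers:

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BroadcastHashJoinSketch {

    // Phase 1: scan the smaller ("build") side and publish it as a hash
    // table keyed by the join key. In Phoenix this cache lives on the
    // region servers; here a local map stands in for it.
    static Map<String, List<String>> buildCache(Iterable<String[]> buildRows) {
        Map<String, List<String>> cache = new HashMap<>();
        for (String[] row : buildRows) {                 // row = {joinKey, payload}
            cache.computeIfAbsent(row[0], k -> new ArrayList<>()).add(row[1]);
        }
        return cache;
    }

    // Phase 2: scan the larger ("probe") side; for each row, look up the
    // join key in the cache and emit the joined rows.
    static List<String> probe(Iterable<String[]> probeRows,
                              Map<String, List<String>> cache) {
        List<String> joined = new ArrayList<>();
        for (String[] row : probeRows) {                 // row = {joinKey, payload}
            List<String> matches = cache.get(row[0]);
            if (matches == null) {
                continue;                                // inner join: skip misses
            }
            for (String match : matches) {
                joined.add(row[1] + " | " + match);      // joined output row
            }
        }
        return joined;
    }
}
{code}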
How would this broadcast hash join be most appropriately modeled in the Phoenix+Drill+Calcite world?
There may not be a big win in using our broadcast join versus Drill's, as Drill's may be faster given the representation it uses.