Description
Currently, only SparkMapRecordHandler populates this information. However, on the Spark branch a HashTableSinkOperator can also appear in a ReduceWork, and it needs an ExecMapperContext to obtain a MapredLocalWork, so we need to do the same thing in SparkReduceRecordHandler as well.
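The pattern above can be sketched with self-contained stubs. The class names (MapredLocalWork, ExecMapperContext) mirror Hive's, but their bodies here are illustrative placeholders, not Hive's real implementations; ReduceRecordHandlerSketch and ContextDependentOperator are hypothetical stand-ins for SparkReduceRecordHandler and an operator such as HashTableSinkOperator:

```java
// Illustrative stubs only -- not Hive's actual classes.
class MapredLocalWork {
    final String name;
    MapredLocalWork(String name) { this.name = name; }
}

class ExecMapperContext {
    private MapredLocalWork localWork;
    void setLocalWork(MapredLocalWork w) { localWork = w; }
    MapredLocalWork getLocalWork() { return localWork; }
}

// Stand-in for an operator (e.g. HashTableSinkOperator) that
// reaches its MapredLocalWork through the ExecMapperContext.
class ContextDependentOperator {
    String describeLocalWork(ExecMapperContext ctx) {
        if (ctx == null || ctx.getLocalWork() == null) {
            return "missing"; // reduce side today: no context populated
        }
        return ctx.getLocalWork().name;
    }
}

// Stand-in for SparkReduceRecordHandler after the proposed change:
// it populates an ExecMapperContext the way SparkMapRecordHandler does.
class ReduceRecordHandlerSketch {
    private final ExecMapperContext execContext = new ExecMapperContext();

    void init(MapredLocalWork localWork) {
        execContext.setLocalWork(localWork);
    }

    ExecMapperContext getExecContext() { return execContext; }
}

public class Main {
    public static void main(String[] args) {
        ReduceRecordHandlerSketch handler = new ReduceRecordHandlerSketch();
        handler.init(new MapredLocalWork("small-table scan"));
        ContextDependentOperator op = new ContextDependentOperator();
        // With the context populated, the operator can find its local work.
        System.out.println(op.describeLocalWork(handler.getExecContext()));
    }
}
```

The point of the sketch is only the wiring: once the reduce-side handler populates the context, an operator in ReduceWork can look up its MapredLocalWork exactly as it does on the map side.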
Attachments