Details
- Type: Sub-task
- Status: Closed
- Priority: Major
- Resolution: Duplicate
Description
In the end, the Spark caller context written into the HDFS log will be associated with the task id, stage id, job id, application id, etc. Currently, however, a Task does not know anything about its job, so this patch passes the job id down to Task. That makes it easier for Spark users to identify tasks, especially if Spark supports multi-tenant environments in the future.
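For illustration, here is a minimal sketch of how such a caller-context string could be assembled from the application, job, stage, and task ids before being handed to HDFS. The field labels and ordering below are assumptions for illustration, not the exact format used by the patch; in a real deployment the string would be set via Hadoop's caller-context mechanism so it appears in the HDFS audit log.

```java
// Illustrative sketch only: build a combined caller-context string from
// Spark ids. The "JId"/"SId"/"TId" labels are hypothetical, not the
// actual format defined by this patch.
public class SparkCallerContext {
    public static String build(String appId, int jobId, int stageId, long taskId) {
        // In production this string would be passed to HDFS (e.g. through
        // Hadoop's caller-context support) rather than just returned.
        return "SPARK_" + appId
                + "_JId_" + jobId
                + "_SId_" + stageId
                + "_TId_" + taskId;
    }

    public static void main(String[] args) {
        System.out.println(build("application_1473908768790_1990", 2, 1, 14));
    }
}
```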
Attachments
Issue Links
- is part of SPARK-16757 Set up caller context to HDFS and Yarn (Resolved)