Description
Currently, when run in cluster mode on YARN, the Spark yarn.Client prints the application report to its logs so that users can easily view it. For example:
INFO yarn.Client:
	 client token: Token { kind: YARN_CLIENT_TOKEN, service: }
	 diagnostics: N/A
	 ApplicationMaster host: X.X.X.X
	 ApplicationMaster RPC port: 0
	 queue: default
	 start time: 1602782566027
	 final status: UNDEFINED
	 tracking URL: http://hostname:8888/proxy/application_<id>/
	 user: xkrogen
Typically, the tracking URL can be used to find the logs of the ApplicationMaster/driver while the application is running. Later, the Spark History Server can be used to track this information down, using the stdout/stderr links on the Executors page.
However, if the driver crashes before writing out a history file, the SHS may not be aware of the application at all, and thus contains no links to the driver logs. When this happens, it can be difficult for users to debug further, since they can't easily find their driver logs.
It is possible to reach the logs using the yarn logs command, but the average Spark user isn't aware of it and shouldn't have to be.
I propose adding, alongside the application report, some additional lines like:
Driver Logs (stdout): http://hostname:8042/node/containerlogs/container_<id>/xkrogen/stdout?start=-4096
Driver Logs (stderr): http://hostname:8042/node/containerlogs/container_<id>/xkrogen/stderr?start=-4096
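As a rough sketch of how such links could be assembled, the NodeManager web UI exposes container logs at a well-known path built from the node's HTTP address, the container ID, and the submitting user. The object and parameter names below are illustrative, not Spark's actual internals:

```scala
// Hypothetical helper sketching how yarn.Client might build driver log links
// from the ApplicationMaster container's NodeManager HTTP address.
object DriverLogLinks {
  // nodeHttpAddress: e.g. "hostname:8042"
  // containerId:     the AM container ID string
  // user:            the user the application runs as
  // Returns (stream name, URL) pairs for stdout and stderr; "?start=-4096"
  // asks the NodeManager for the last 4096 bytes of the log.
  def apply(nodeHttpAddress: String, containerId: String, user: String): Seq[(String, String)] =
    Seq("stdout", "stderr").map { stream =>
      (stream, s"http://$nodeHttpAddress/node/containerlogs/$containerId/$user/$stream?start=-4096")
    }
}
```

In practice the node address and container ID would come from the YARN application/container reports that the client already fetches when printing the report above.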
With this information available, users can jump straight to their driver logs, even if the driver crashed before the SHS became aware of the application. This has the additional benefit of providing one-click access to driver logs, which often contain useful information, instead of navigating through the Spark UI.