Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 2.4.3
- Fix Version/s: None
- Component/s: None
Description
In cluster mode, when driver memory is set via the spark.driver.memory configuration at run time, i.e. after the SparkSession has been created, Spark does not pick up the setting: the application master has already been launched by that point, so the driver runs with the default driver memory (1 GB).
However, the Environment tab of the Spark UI still shows driver memory as the value passed at run time, which makes this scenario harder to identify and debug. The UI should show the driver memory value that Spark is actually using for the job.
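A minimal sketch reproducing the reported behavior, assuming a cluster-mode spark-submit; the object name, app name, and memory values are illustrative:

```scala
import org.apache.spark.sql.SparkSession

object DriverMemoryRepro {
  def main(args: Array[String]): Unit = {
    // In cluster mode the driver JVM (inside the application master) is
    // already running before this code executes, so setting
    // spark.driver.memory here cannot resize it; the driver keeps the
    // default 1 GB heap.
    val spark = SparkSession.builder()
      .appName("driver-memory-repro")
      .config("spark.driver.memory", "4g") // too late in cluster mode
      .getOrCreate()

    // The configuration (and the UI Environment tab) reports 4g ...
    println("spark.driver.memory = " + spark.conf.get("spark.driver.memory"))
    // ... but the actual JVM heap reflects the 1 GB default:
    println("Max heap (bytes)    = " + Runtime.getRuntime.maxMemory())

    spark.stop()
  }
}
```

In cluster mode the driver can only be sized reliably through spark-submit, via the --driver-memory flag or --conf spark.driver.memory=..., both of which are read before the application master is launched.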