Currently, the AdaptiveScheduler fails the job if the ExecutionGraph creation fails. This can be problematic because the failure may be caused by a transient problem (e.g. the filesystem is temporarily unavailable). In that case, a job rescaling could lead to a job failure, which would be surprising for users. Instead, I would expect Flink to retry the ExecutionGraph creation.
One idea could be to consult the restart policy on how to treat the failure, i.e. whether to retry the ExecutionGraph creation or not.
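As a rough sketch of that idea, the creation attempt could be wrapped in a retry loop that asks a policy whether another attempt is allowed. Note that `RetryPolicy`, `FixedRetryPolicy`, and `createWithRetry` below are illustrative names, not Flink's actual interfaces (Flink's real restart strategy abstraction, `RestartBackoffTimeStrategy`, has a similar shape):

```java
import java.util.function.Supplier;

// Hypothetical sketch: retry a factory call (standing in for
// ExecutionGraph creation) for as long as a retry policy permits.
public class GraphCreationRetry {

    // Illustrative policy interface, loosely modeled on a
    // restart-backoff strategy; not an actual Flink type.
    interface RetryPolicy {
        boolean canRetry(Throwable failure);
        long backoffMillis();
    }

    /** Simple policy: retry up to maxAttempts times with a fixed backoff. */
    static class FixedRetryPolicy implements RetryPolicy {
        private final int maxAttempts;
        private int attempts;

        FixedRetryPolicy(int maxAttempts) {
            this.maxAttempts = maxAttempts;
        }

        @Override
        public boolean canRetry(Throwable failure) {
            return ++attempts < maxAttempts;
        }

        @Override
        public long backoffMillis() {
            return 0L;
        }
    }

    /** Keeps invoking the factory until it succeeds or the policy gives up. */
    static <T> T createWithRetry(Supplier<T> factory, RetryPolicy policy)
            throws Exception {
        while (true) {
            try {
                return factory.get();
            } catch (RuntimeException e) {
                if (!policy.canRetry(e)) {
                    throw e; // policy says the failure is not recoverable
                }
                Thread.sleep(policy.backoffMillis());
            }
        }
    }
}
```

With this shape, a transient filesystem hiccup during ExecutionGraph creation would simply consume one retry attempt instead of failing the whole job.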
One thing to keep in mind, though, is that some failures might be permanent (e.g. a wrongly specified savepoint path). In such a case we would ideally fail immediately. One way to address this problem could be to try to restore the savepoint once, when we create the AdaptiveScheduler.
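The eager-validation idea could look roughly like the following: check the savepoint path once at scheduler construction time, so a permanently wrong path fails the job right away instead of burning through retries. `validateSavepointPath` is a hypothetical helper for illustration, not a Flink API:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch: validate the savepoint path eagerly, at scheduler
// construction time. A path that cannot exist is treated as a permanent
// failure and surfaces immediately, before any retry loop is entered.
public class SavepointValidation {

    static void validateSavepointPath(String savepointPath) {
        if (savepointPath == null || savepointPath.isEmpty()) {
            throw new IllegalArgumentException(
                "Savepoint path must not be empty");
        }
        if (!Files.exists(Path.of(savepointPath))) {
            // Treated as permanent here: a missing path will not
            // become valid by retrying.
            throw new IllegalArgumentException(
                "Savepoint path does not exist: " + savepointPath);
        }
    }
}
```

A transient failure (e.g. the filesystem briefly unreachable) is harder to distinguish from a permanent one at this level, which is why doing the restore once up front, rather than classifying exceptions, may be the simpler approach.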