I have run both the LocalExecutor and the CeleryExecutor with a remote Postgres database.
On the initial run the DAG completed successfully, but on subsequent runs task execution halts. The UI reports the tasks as queued, but the executor never picks them up to process them. This leaves the DAGs stalled, and ultimately the only way forward was an `airflow resetdb` to get a fresh state, a restart of all the Airflow services (scheduler, worker, webserver & flower), and a flush of the Redis queue.
I'd like to help troubleshoot the cause, but I'll probably need assistance setting up the logs and reproducing the problem with debugging fully enabled. Once that's done, I'm hoping the fix will be more obvious.
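For reproducing with verbose logging, something like the following `airflow.cfg` fragment might be a starting point. This is a sketch, not a confirmed fix: the key names assume an Airflow 1.x layout (in 2.x these settings moved to a `[logging]` section), so please correct me if the version in play differs.

```ini
# airflow.cfg — sketch of verbose-logging settings (assumes Airflow 1.x key layout)
[core]
# raise the log level for the scheduler, workers, and task logs
logging_level = DEBUG
# also raise the webserver (Flask-AppBuilder) log level
fab_logging_level = DEBUG
```

After changing this, the scheduler and worker would need a restart, and running them in the foreground (`airflow scheduler` / `airflow worker` in separate terminals) should make it easier to capture the output around the moment tasks get stuck in the queue.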
Opening this ticket as I will start the work myself and post the logs as attachments, but I will need support from other developers once the logs are here.