When running a KubernetesPodOperator task with the KubernetesExecutor, the pod completes successfully, but Airflow sets the task instance state to failed.
- k8s_pods.png: the KubernetesExecutor pod and the KubernetesPodOperator pod both succeeded
- kubernetes_executor_logs: the KubernetesExecutor launched the KubernetesPodOperator pod successfully
- airflow_scheduler_logs: "Found matching task with current state failed"
- failed_task_instance: the failed task instance in the Airflow UI
- failed_dag_run: the failed DAG run in the Airflow UI
It seems that the metadata database is being updated with a task state of failed, but I'm not sure where that state is being changed. Here is the line where the KubernetesExecutor queries the database and finds a failed task.
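To illustrate what the executor's lookup sees, here is a minimal sketch of a state query against the metadata database. This is not the executor's actual code path (which goes through SQLAlchemy models); an in-memory SQLite table stands in for the `task_instance` table, and the DAG/task names are made up. The column names follow the default Airflow schema, but verify them against your own database.

```python
# Sketch: what a state lookup against the metadata database returns.
# Assumptions: a `task_instance` table with (dag_id, task_id,
# execution_date, state) columns; the dag_id/task_id values are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE task_instance (
           dag_id TEXT, task_id TEXT, execution_date TEXT, state TEXT)"""
)
# Simulate the symptom: the pod succeeded, but the row says 'failed'.
conn.execute(
    "INSERT INTO task_instance VALUES (?, ?, ?, ?)",
    ("my_dag", "my_pod_task", "2019-01-01T00:00:00", "failed"),
)

row = conn.execute(
    "SELECT state FROM task_instance WHERE dag_id = ? AND task_id = ?",
    ("my_dag", "my_pod_task"),
).fetchone()
# The scheduler acts on this state column, regardless of the pod's
# actual phase in Kubernetes.
print(row[0])
```

Running the same kind of query against the real metadata database (or checking the task instance details in the UI) would confirm whether the failed state is written there before or after the pod finishes.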