The job jar should be deliberately ill-packaged, meaning that we include too many dependencies in the user jar: the Scala library, Hadoop, and Flink itself should all be bundled, in order to verify that no class loading issues arise.
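Such a deliberately fat jar could be produced, for example, with Maven by simply not marking these dependencies as `provided`. This is only a sketch; the artifact IDs, Scala suffix, and version properties are illustrative placeholders that depend on the concrete build:

```xml
<!-- Sketch: deliberately bundle dependencies that a well-packaged job
     would mark as scope=provided. Versions/coordinates are placeholders. -->
<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.11</artifactId>
    <version>${flink.version}</version>
    <!-- NOT provided: Flink itself ends up in the user jar -->
  </dependency>
  <dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>${scala.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
  </dependency>
</dependencies>
```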
The general-purpose job should run with misbehavior enabled. Additionally, we should simulate at least the following failure scenarios:
- Kill Flink processes
- Kill connection to storage system for checkpoints and jobs
- Simulate network partition
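The scenarios above could be scripted roughly as follows. This is a sketch, not a definitive harness: the `TaskManagerRunner` class name varies across Flink versions, the HDFS NameNode port (8020) is an assumption, and the `iptables` approach presumes the test runs with sufficient privileges on Linux hosts:

```shell
#!/usr/bin/env bash
# Sketch of failure-injection helpers; process names, ports, and the
# iptables approach are assumptions about the concrete test environment.

# Kill a TaskManager process; the harness or HA setup should restart it.
# The main class name depends on the Flink version under test.
kill_taskmanager() {
  pkill -9 -f 'taskexecutor.TaskManagerRunner'
}

# Cut the connection to the checkpoint/job storage
# (assumes an HDFS NameNode listening on port 8020).
cut_storage_connection() {
  iptables -A OUTPUT -p tcp --dport 8020 -j DROP
}

# Simulate a network partition by dropping all traffic to/from a peer host.
partition_from() {
  local peer="$1"
  iptables -A INPUT  -s "$peer" -j DROP
  iptables -A OUTPUT -d "$peer" -j DROP
}

# Heal the partition / restore storage connectivity.
heal_network() {
  iptables -F INPUT
  iptables -F OUTPUT
}
```

In a real run, each helper would be invoked mid-job and followed by a check that the job recovers and produces correct results.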
We should run the test at least with the following state backend configuration: RocksDB with incremental, asynchronous snapshots, checkpointing to HDFS.
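That configuration could look like the following `flink-conf.yaml` fragment. The HDFS path is a placeholder, and note that RocksDB snapshots are asynchronous by default, so only the backend and incremental flag need to be set explicitly:

```yaml
# Sketch of a flink-conf.yaml fragment; the HDFS URI is a placeholder.
state.backend: rocksdb
state.backend.incremental: true
state.checkpoints.dir: hdfs://namenode:8020/flink/checkpoints
```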