I use ElasticsearchIO to write my data to Elasticsearch. However, the data comes from another platform and it is hard to check its validity up front. When we receive invalid data, we would like to simply skip it (even with batch inserts, we can afford to drop the whole batch). So I would like a way to register an exception-handling callback. As far as I can tell from the source code, the write function in ProcessElement just throws the exception, which stops my job.
On the direct runner I can catch the exception from pipeline.run().waitUntilFinish() and ungracefully force a rerun with a while loop. However, when the job is deployed to Flink this fails: Flink reports an exception saying it cannot optimize the job.
A mechanism that lets the user decide how to handle such exceptions is what I am asking for.
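To illustrate the kind of API I mean, here is a minimal, self-contained sketch (not Beam or ElasticsearchIO code; `TolerantBatchWriter`, `writeBatch`, and the `onError` callback are all hypothetical names of my own). Instead of throwing on the first bad document and failing the whole job, the writer hands each failing document to a user-supplied handler, which can log it, send it to a dead-letter sink, or just drop it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;

// Hypothetical sketch of the requested behavior: invalid documents are routed
// to a user-registered error handler instead of aborting the whole write.
public class TolerantBatchWriter {

    // "Writes" a batch of documents. Invalid ones (here: null or empty, as a
    // stand-in for a real rejection from Elasticsearch) go to onError; the
    // rest are written. Returns the list of successfully written documents.
    public static List<String> writeBatch(List<String> docs,
                                          BiConsumer<String, Exception> onError) {
        List<String> written = new ArrayList<>();
        for (String doc : docs) {
            try {
                if (doc == null || doc.isEmpty()) {
                    throw new IllegalArgumentException("invalid document");
                }
                written.add(doc); // stand-in for the actual index request
            } catch (Exception e) {
                onError.accept(doc, e); // user decides: log, dead-letter, ignore
            }
        }
        return written;
    }
}
```

In Beam terms, the equivalent would be the write transform exposing such a callback (or emitting failures to a side output) so that one bad record, or one rejected bulk request, does not take down the pipeline.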