Description
When using Spark Structured Streaming with Kafka and writing data to Hudi, jobs sometimes cannot keep up with the input rate and fail because the checkpointed Kafka offsets go out of range (i.e., the earliest Kafka messages have been cleaned up by the retention policy). When we then try to restart the job by clearing the previous checkpoint and consuming from the latest offset, we see that the batches are skipped by `HoodieStreamingSink`.
There is currently no way to restart these streams.
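For context, a minimal sketch of the kind of pipeline where this occurs is below. Broker address, topic name, table name, field names, and paths are all placeholders, not taken from the report. Note that the Spark Kafka source fails by default when checkpointed offsets have been deleted by retention (`failOnDataLoss` defaults to `true`), which is the failure mode described above; clearing the checkpoint and setting `startingOffsets` to `latest` is the restart attempt after which batches are skipped.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

object KafkaToHudiSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-hudi")
      .getOrCreate()

    // Read from Kafka. If retention deletes messages that the checkpointed
    // offsets still point at, the source raises an offset-out-of-range
    // error and the query fails (failOnDataLoss defaults to true).
    val input = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
      .option("subscribe", "events")                    // placeholder topic
      .option("startingOffsets", "latest")              // only applies to a fresh checkpoint
      .load()

    // Write to Hudi. Table/key/precombine settings are illustrative only.
    val query = input
      .selectExpr(
        "CAST(key AS STRING) AS _row_key",
        "CAST(value AS STRING) AS payload",
        "timestamp")
      .writeStream
      .format("hudi")
      .option("hoodie.table.name", "events_cow")
      .option("hoodie.datasource.write.recordkey.field", "_row_key")
      .option("hoodie.datasource.write.precombine.field", "timestamp")
      .option("checkpointLocation", "/tmp/checkpoints/events") // placeholder path
      .outputMode("append")
      .trigger(Trigger.ProcessingTime("1 minute"))
      .start("/tmp/hudi/events_cow")                           // placeholder path

    query.awaitTermination()
  }
}
```

After deleting `/tmp/checkpoints/events` and restarting with `startingOffsets=latest`, the issue reported here is that `HoodieStreamingSink` skips the new batches instead of writing them.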