If the HDFS Sink tries to close a file but the close fails (e.g. due to a timeout), the last block might not end up in the COMPLETE state. In this case block recovery should happen, but because the lease is still held by Flume, the NameNode will start the recovery process only after the hard limit of 1 hour expires.
Lease recovery can also be started manually with the hdfs debug recoverLease command.
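For reference, the manual command looks like this (the file path is illustrative, not from an actual run):

```shell
# Ask the NameNode to start lease recovery for a file stuck in
# UNDER_CONSTRUCTION state; retries a few times until the lease is released.
hdfs debug recoverLease -path /flume/events/FlumeData.1234567890.tmp -retries 5
```

This is only a workaround, though: it requires an operator to notice the stuck file and act on it, which is why an automatic recovery in the sink is preferable.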
To reproduce the issue I removed the close call from the BucketWriter (https://github.com/apache/flume/blob/trunk/flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/BucketWriter.java#L380) to simulate the failure, and used the following config:
After ingesting 6 events ("a" - "f"), 2 files were created on HDFS, as expected. However, events are missing when listing the contents in the Spark shell:
hdfs fsck also confirms that the blocks are still in UNDER_CONSTRUCTION state:
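(For reference, a check like the following can be used to find files whose blocks are still open for write; the path is illustrative:)

```shell
# -openforwrite lists files that still have a writer lease, i.e. whose
# last block has not been finalized; -blocks shows the block states.
hdfs fsck /flume/events -files -blocks -openforwrite
```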
These blocks need to be recovered by starting a lease recovery process on the NameNode (which then runs the block recovery as well). This can be triggered programmatically via the DFSClient.
Adding this call when the close fails solves the issue.
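A minimal sketch of such a recovery call is below. It is not the actual BucketWriter patch; the helper name and integration point are illustrative, and it uses the public DistributedFileSystem.recoverLease API (which delegates to the DFSClient) rather than the DFSClient directly:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class LeaseRecoveryHelper {

  /**
   * Asks the NameNode to start lease recovery for the given file, so that
   * its last block can be finalized without waiting for the 1 hour hard
   * limit on the lease to expire.
   *
   * @return true if the lease was released immediately (file closed);
   *         false if recovery was started and will complete asynchronously
   */
  static boolean recoverLease(FileSystem fs, Path path) throws IOException {
    if (fs instanceof DistributedFileSystem) {
      return ((DistributedFileSystem) fs).recoverLease(path);
    }
    // Non-HDFS file systems (e.g. local) have no lease concept.
    return true;
  }
}
```

In the failure scenario described above, the sketch would be invoked from the close-failure handler, so the NameNode begins recovery immediately instead of after the hard limit expires. (No automated test is attached since the snippet requires a live HDFS cluster and the Hadoop client libraries.)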