Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Fixed
- Affects Version: 1.7.0
Description
Automatic closing of BucketWriters (triggered when the open file count reaches hdfs.maxOpenFiles) and the file rolling thread can end up in a deadlock.
When HDFSEventSink creates a new BucketWriter, it acquires HDFSEventSink.sfWritersLock, and the close() invoked from sfWriters.removeEldestEntry then tries to lock the BucketWriter instance.
On the other hand, if the file is being rolled, BucketWriter.close(boolean) locks the BucketWriter instance first, and its close callback then tries to acquire sfWritersLock.
The chance of this deadlock is higher when hdfs.maxOpenFiles is set to a low value (e.g. 1).
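The opposing lock orders described above can be sketched as a minimal, self-contained Java program. This is an illustrative model only: the two paths are simulated with ReentrantLock and tryLock timeouts (so the program terminates instead of hanging), and the names sfWritersLock / bucketWriterLock merely stand in for Flume's sfWritersLock and the BucketWriter monitor; nothing here is Flume's actual code.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the two opposing lock orders in the report.
public class DeadlockSketch {
    static final ReentrantLock sfWritersLock = new ReentrantLock();
    static final ReentrantLock bucketWriterLock = new ReentrantLock();

    // Returns true if both paths time out waiting for their second lock,
    // i.e. the acquisition orders collide as in the reported deadlock.
    static boolean ordersCollide() throws InterruptedException {
        CountDownLatch bothHoldFirstLock = new CountDownLatch(2);
        final boolean[] timedOut = new boolean[2];

        // Path 1: sink creates a new BucketWriter -> takes sfWritersLock
        // first; the close() from removeEldestEntry then needs the
        // BucketWriter lock.
        Thread sinkPath = new Thread(() -> {
            sfWritersLock.lock();
            try {
                bothHoldFirstLock.countDown();
                bothHoldFirstLock.await();
                if (bucketWriterLock.tryLock(200, TimeUnit.MILLISECONDS)) {
                    bucketWriterLock.unlock();
                } else {
                    timedOut[0] = true; // would block forever with lock()
                }
            } catch (InterruptedException ignored) {
            } finally {
                sfWritersLock.unlock();
            }
        });

        // Path 2: roll thread runs close(boolean) -> takes the
        // BucketWriter lock first; the close callback then needs
        // sfWritersLock.
        Thread rollPath = new Thread(() -> {
            bucketWriterLock.lock();
            try {
                bothHoldFirstLock.countDown();
                bothHoldFirstLock.await();
                if (sfWritersLock.tryLock(200, TimeUnit.MILLISECONDS)) {
                    sfWritersLock.unlock();
                } else {
                    timedOut[1] = true; // would block forever with lock()
                }
            } catch (InterruptedException ignored) {
            } finally {
                bucketWriterLock.unlock();
            }
        });

        sinkPath.start();
        rollPath.start();
        sinkPath.join();
        rollPath.join();
        return timedOut[0] && timedOut[1];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(ordersCollide() ? "deadlock" : "no deadlock");
        // prints "deadlock": each thread holds its first lock while
        // waiting for the other's, so neither tryLock can succeed
    }
}
```

The usual fix for this class of bug is to make both paths acquire the locks in the same order (or to release the outer lock before calling into code that takes the other one), which is what removes the circular wait.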
Script to reproduce: https://gist.github.com/adenes/96503a6e737f9604ab3ee9397a5809ff
(place it in flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs)
The deadlock usually occurs within ~30 iterations.
Attachments