When the AbstractHDFSWriter fails to close a file (for example, because the close operation exceeds the callTimeout, or due to other HDFS issues), it leaves the file open and never tries again. Currently, the only way to close such files is to restart the Flume agent. There should be a configurable option that allows the sink to retry closing files that previously failed to close.
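The requested behavior can be sketched as a bounded retry loop: attempt the close, and on failure wait a configurable interval and try again, up to a configurable number of attempts. This is a minimal illustration of the pattern only, not Flume's actual implementation; the names `closeWithRetries`, `maxCloseTries`, and `retryIntervalMs` are hypothetical.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicInteger;

public class RetryCloser {

    /**
     * Attempt a close operation up to maxCloseTries times, sleeping
     * retryIntervalMs between attempts, instead of giving up after the
     * first failure. Returns true once the close succeeds, false if all
     * attempts are exhausted (the caller could then reschedule or alert).
     */
    public static boolean closeWithRetries(Callable<Boolean> closeOp,
                                           int maxCloseTries,
                                           long retryIntervalMs)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxCloseTries; attempt++) {
            try {
                if (closeOp.call()) {
                    return true; // file closed successfully
                }
            } catch (Exception e) {
                // close failed (e.g. a timeout); fall through and retry
            }
            if (attempt < maxCloseTries) {
                Thread.sleep(retryIntervalMs);
            }
        }
        return false; // retries exhausted; file is still open
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical close operation that fails twice, then succeeds.
        AtomicInteger calls = new AtomicInteger();
        Callable<Boolean> flaky = () -> calls.incrementAndGet() >= 3;
        boolean closed = closeWithRetries(flaky, 5, 10L);
        System.out.println(closed + " after " + calls.get() + " attempts");
    }
}
```

In a sink, the retry would typically run on a background scheduler rather than blocking the delivery thread, so that a hung NameNode call does not stall event processing.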
|Field|Original Value|New Value|
|Status|Patch Available|Resolved|
|Fix Version/s| |v1.5.0|
|Resolution| |Fixed|
|Remote Link| |This issue links to "Review (Web Link)"|
|Status|Open|Patch Available|
|Assignee| |Hari Shreedharan [ hshreedharan ]|