Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 1.4.0
- Fix Version/s: None
- Component/s: None
Description
The HDFS sink caches 5000 open files by default, which costs quite a lot of memory in total when using the LZO CompressedStream. Enabling the idleTimeout feature resolves this, but there appears to be a bug in that feature: when Flume is stopped, HDFSWriter does not cancel the idle scheduler, which can prevent Flume from stopping. I therefore extended the close() method in HDFSWriter as follows, and call it from HDFSEventSink when stopping the sink component:
/**
 * When stopping Flume, all schedulers should be canceled.
 *
 * @param cancelIdleCallback whether to cancel the scheduled idle callback
 * @throws IOException
 * @throws InterruptedException
 */
public void close(boolean cancelIdleCallback) throws IOException, InterruptedException {
  close();
  if (cancelIdleCallback) {
    if (idleFuture != null && !idleFuture.isDone()) {
      idleFuture.cancel(false); // do not cancel myself if running!
      idleFuture = null;
    }
  }
}
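For context, the pattern the patch relies on can be sketched outside Flume: a ScheduledFuture representing the idle-timeout callback is cancelled with mayInterruptIfRunning = false, so a callback that is currently executing (and may itself be the one invoking close) is never interrupted mid-run. The class and method names below are illustrative stand-ins, not Flume's actual classes:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Illustrative sketch (not Flume code): a writer that schedules an idle
// callback and cancels it on close, as the proposed patch does.
class IdleAwareWriter implements AutoCloseable {
    private final ScheduledExecutorService timer =
        Executors.newSingleThreadScheduledExecutor();
    private volatile ScheduledFuture<?> idleFuture;

    void open(long idleTimeoutSeconds) {
        // Schedule the idle-timeout action, e.g. closing an unused file.
        idleFuture = timer.schedule(
            () -> System.out.println("idle timeout fired"),
            idleTimeoutSeconds, TimeUnit.SECONDS);
    }

    @Override
    public void close() {
        // mayInterruptIfRunning = false: do not interrupt a callback that is
        // already running (it might be the one that called close()).
        if (idleFuture != null && !idleFuture.isDone()) {
            idleFuture.cancel(false);
            idleFuture = null;
        }
        // Shut the timer thread down so the JVM can exit instead of hanging,
        // which is the shutdown symptom described above.
        timer.shutdown();
    }

    boolean idleCancelled() {
        return idleFuture == null;
    }
}
```

Without the cancel step, the non-daemon scheduler thread keeps the process alive after the sink stops, which matches the "flume does not stop" behavior reported here.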
Issue Links
- duplicates FLUME-2305 (BucketWriter#close must cancel idleFuture) - Resolved