I have seen a scenario where an exception was thrown during HDFSEventSink.process: flush was called on a BucketWriter that had already been closed.
1) At the end of HDFSEventSink.process, we flush every bucket written to during the batch, once the channel returns null or the batch size is reached.
2) BucketWriter.flush does not check the isOpen flag.
3) Our time-based roll code assumes the next call on the bucket writer will be append, which checks the isOpen flag and re-opens the underlying writer.
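The missing check in point 2 can be illustrated with a simplified model. All class and method names below are illustrative stand-ins, not Flume's actual implementation:

```java
// Simplified model of the BucketWriter behavior described above.
// Names and structure are illustrative; the real Flume class differs.
public class BucketWriterSketch {
    private boolean isOpen = false;

    public void append(String event) {
        if (!isOpen) {   // append checks the flag and re-opens (point 3)
            open();
        }
        // ... write the event to the underlying HDFS writer ...
    }

    public void flushUnguarded() {
        // Point 2: flush has no isOpen check, so flushing a closed
        // writer fails instead of no-opping or re-opening.
        if (!isOpen) {
            throw new IllegalStateException("flush on closed BucketWriter");
        }
        // ... hflush/sync the underlying writer ...
    }

    public void flushGuarded() {
        // A guarded flush simply no-ops when the writer is closed.
        if (!isOpen) {
            return;
        }
        // ... hflush/sync the underlying writer ...
    }

    public void close() { isOpen = false; }  // time-based roll ends here
    private void open() { isOpen = true; }

    public static void main(String[] args) {
        BucketWriterSketch w = new BucketWriterSketch();
        w.append("e1");      // opens the writer
        w.close();           // time-based roll closes it
        boolean threw = false;
        try {
            w.flushUnguarded();
        } catch (IllegalStateException e) {
            threw = true;
        }
        System.out.println("unguarded flush threw: " + threw);
        w.flushGuarded();    // no exception on a closed writer
        System.out.println("guarded flush ok");
    }
}
```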
As such, I think what is happening is this:
1) HDFSEventSink.process appends to the bucket writer.
2) In BucketWriter, the time-based roll trips and closes the writer.
3) In HDFSEventSink.process, the channel returns null or the batch size is exceeded.
4) HDFSEventSink.process calls flush on the now-closed bucket writer, throwing the exception logged above.
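One defensive option, sketched against the same kind of simplified model (all names are assumed, not Flume's actual code), is for the sink-side flush loop to skip writers the roll has already closed, since the next append will re-open them anyway:

```java
import java.util.ArrayList;
import java.util.List;

public class SinkFlushSketch {
    // Minimal stand-in for BucketWriter; only what the sketch needs.
    static class Writer {
        boolean isOpen = true;
        void flush() {
            if (!isOpen) throw new IllegalStateException("closed");
            // ... hflush the underlying HDFS stream ...
        }
        void close() { isOpen = false; }
    }

    // Stand-in for the end of HDFSEventSink.process: flush every bucket
    // written to this batch, skipping any the time-based roll closed.
    static int flushAll(List<Writer> writersThisBatch) {
        int flushed = 0;
        for (Writer w : writersThisBatch) {
            if (!w.isOpen) continue;  // roll closed it; next append re-opens
            w.flush();
            flushed++;
        }
        return flushed;
    }

    public static void main(String[] args) {
        List<Writer> batch = new ArrayList<>();
        Writer a = new Writer();
        Writer b = new Writer();
        batch.add(a);
        batch.add(b);
        b.close();                      // step 2: time-based roll trips
        int flushed = flushAll(batch);  // no exception; only open writers flush
        System.out.println("flushed=" + flushed);
    }
}
```

Equivalently, the guard could live inside BucketWriter.flush itself; either way the flush-after-roll sequence stops throwing.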