Flume / FLUME-2922

HDFSSequenceFile Should Sync Writer

Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 1.6.0
    • Fix Version/s: 1.7.0
    • Component/s: Sinks+Sources
    • Labels: None

Description

There is a possibility of losing data with the current HDFS sequence file writer.

Internally, the `SequenceFile.Writer` buffers data and periodically syncs it to the underlying output stream. The exact mechanism depends on whether compression is enabled, but in both cases the key/values are appended to an internal buffer and only flushed to disk once that buffer reaches a certain size.
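
To make the failure mode concrete, here is a minimal sketch against the Hadoop 2.x `SequenceFile` API (the path, key/value types, and payload are invented for the example):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;

public class BufferedWriteDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical path for the example; in Flume this would be the sink's bucket path.
    Path path = new Path("/tmp/events.seq");
    SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.file(path),
        SequenceFile.Writer.keyClass(LongWritable.class),
        SequenceFile.Writer.valueClass(BytesWritable.class),
        SequenceFile.Writer.compression(SequenceFile.CompressionType.BLOCK));

    writer.append(new LongWritable(1L), new BytesWritable("event".getBytes()));

    // With BLOCK compression the record is still sitting in the writer's
    // in-memory block buffer. hflush() is forwarded to the underlying
    // stream, which has not been handed the record yet, so a crash at this
    // point loses the event.
    writer.hflush();

    writer.close();
  }
}
```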

Thus it is quite possible for Flume to lose messages if the agent crashes or is stopped before the internal buffer is flushed to disk.

The correct action is to force the writer to sync its internal buffers to the underlying `FSDataOutputStream` before calling hflush/sync on that stream.

Additionally, I believe we should be calling hsync instead of hflush. It's my understanding that hsync is more durable: hflush only guarantees the data has reached the DataNodes, while hsync forces it to disk on each DataNode. I believe those are the semantics we want here.
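
As a sketch of what that could look like (the class and field names below are illustrative, not the actual patch; it assumes the sink holds references to both the writer and its underlying stream):

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.io.SequenceFile;

public class DurableSequenceFileSync {
  private final SequenceFile.Writer writer;
  private final FSDataOutputStream outStream;

  public DurableSequenceFileSync(SequenceFile.Writer writer,
                                 FSDataOutputStream outStream) {
    this.writer = writer;
    this.outStream = outStream;
  }

  /** Push buffered records down to the stream, then make them durable. */
  public void sync() throws IOException {
    // SequenceFile.Writer.sync() writes a sync marker; for the
    // block-compressed writer it first flushes any buffered key/values
    // down to the underlying FSDataOutputStream.
    writer.sync();
    // hsync() forces the bytes to disk on the DataNodes, rather than
    // only guaranteeing they reached the DataNodes as hflush() does.
    outStream.hsync();
  }
}
```

The ordering matters: the writer's buffers have to reach the stream before the durability call, otherwise hflush/hsync has nothing new to persist.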

People

    • Assignee: Kevin Conaway
    • Reporter: Kevin Conaway
    • Votes: 0
    • Watchers: 5
