
FLUME-2181: Optionally disable File Channel fsyncs

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: v1.5.0
    • Component/s: None
    • Labels: None

      Description

      This will give File Channel performance a big boost, at the cost of possible data loss if a crash happens between checkpoints.

      Also, we should make it configurable, with the default set to false. If the user does not mind slight inconsistencies, this feature can be explicitly enabled through configuration, so if it is not configured, the behavior will be exactly as it is now.
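
      As an illustration only, a minimal sketch of how the option might be enabled programmatically (the property names fsyncPerTransaction and fsyncInterval are taken from the patch discussed below; the paths and the interval unit are assumptions):

      import org.apache.flume.Context;
      import org.apache.flume.channel.file.FileChannel;
      import org.apache.flume.conf.Configurables;

      // Illustrative sketch: a File Channel with per-transaction fsyncs
      // disabled, relying on a periodic background fsync instead.
      Context ctx = new Context();
      ctx.put("checkpointDir", "/var/flume/checkpoint"); // illustrative path
      ctx.put("dataDirs", "/var/flume/data");            // illustrative path
      ctx.put("fsyncPerTransaction", "false");           // opt in to the relaxed mode
      ctx.put("fsyncInterval", "5");                     // assumption: seconds between fsyncs
      FileChannel channel = new FileChannel();
      channel.setName("fileChannel");
      Configurables.configure(channel, ctx);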

      Attachments

      1. FLUME-2181.patch
        15 kB
        Hari Shreedharan
      2. FLUME-2181-1.patch
        45 kB
        Hari Shreedharan
      3. FLUME-2181-2.patch
        47 kB
        Brock Noland
      4. FLUME-2181-3.patch
        51 kB
        Hari Shreedharan


          Activity

          Hari Shreedharan added a comment -

          Initial patch, will add tests soon - but this is the basic patch to handle this

          Hari Shreedharan added a comment -

           Hmm, looks like there are a couple of changes that need to be made:

           1. Each of the files needs to be fsync-ed before the checkpoint is written out; otherwise it is possible that the checkpoint will have offsets to files that may not exist (see the sketch after this list).
           2. We additionally need to safeguard against a situation where the file containing the take for an event in another file is fsync-ed while the file containing the event itself is not (maybe because of timing, maybe because the system crashed before the event's file was fsync-ed, etc.). In that case, the take should really be ignored during a replay. (This is a problem during full replay - if a full checkpoint is available, the fix above handles it.)
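
           A minimal sketch of point 1, using plain java.nio rather than Flume's actual Log code (standalone illustration):

           import java.io.IOException;
           import java.io.RandomAccessFile;
           import java.nio.channels.FileChannel;

           // Sketch: force a data file's contents to disk before writing the
           // checkpoint that holds offsets into it.
           static void syncBeforeCheckpoint(RandomAccessFile dataFile) throws IOException {
             FileChannel ch = dataFile.getChannel();
             ch.force(false); // fsync the file data (false = metadata may be skipped)
             // ... only now write the checkpoint whose offsets point into dataFile ...
           }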
          Hari Shreedharan added a comment -

          (In 1 above, I meant to say that the offsets may not yet exist in the files - since they were not yet written out)

          Brock Noland added a comment -

          Hari,

           Two review items: syncExecutor can be null during close, and even if we don't sync, I do think we should flush, no? Otherwise a kill of the process can lose data even without a kill of the machine.

           Can you speak to the action the user will take when the channel is corrupt? From what I can tell, it's possible that holes develop in the file during a crash, possibly contained within an event or across event boundaries. Should this feature coincide with the ability to skip bad events in the logs and skip the end of the file during replay?

          Hari Shreedharan added a comment -

          Thanks for the feedback Brock.

          1. Yes, we must at least flush, but we do actually fsync during a close anyway. I will add a null check for syncExecutor.
           2. When the channel is corrupt (there is a partial event in the file or, as in case 2 above, there is an fsync-ed take for a not-yet-fsync-ed put), we basically need to ignore the take. In reality, all takes past that event are really not valid - we can check for this by simply checking the length of the file before doing a take, though this won't handle partial events. For partial events, we will actually need to read the event and have a separate exception to show that we read in a partial event. Does that approach sound good? Basically, skip partial or non-existent events and log it - as efficiently as we can.

           Brock Noland added a comment (edited) -

           1) The close won't occur if the process is killed with SIGKILL, so we need to flush explicitly.

          2)

           In reality, all takes past that event are really not valid

           Are we assuming that the file will be written to disk sequentially? I don't believe that will always be the case. For example, without any fsyncs the last half of a file could be written to disk before the first half. Thus it's possible that the first half of the file is corrupt but not the second half. How do we handle this case?

           Additionally, it's possible that the corruption occurs inside an event. That is, the event header is correct and the next event header is correct, but the inside of the event is all nulls. In this case we would be sending invalid data downstream.

          we can check for this by simply checking the length of the file before doing a take

           The files are pre-allocated, so I don't follow how this will work.

          Hari Shreedharan added a comment -

           1. Yes, that is correct. I am adding a flush at the end of each commit (sorry if I was not clear).
           2. We actually have exactly one sequential writer to each file, so all writes before a sync call get fsync-ed to disk (we can't make the first half of a file dirty after we fsync the second half, since all writes are sequential). Yes, it is possible that the OS flushes the pages corresponding to the second half before flushing the ones corresponding to the first half. So we will actually need to seek to each offset, read the buffer, and try to parse it to see if it makes sense. If it does not, then we assume that the event was not fully sync-ed. I forgot that the files are pre-allocated - so yes, seeking and parsing an event to see if it is corrupt seems to be the only way around it (see the sketch below).
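
           A rough sketch of that seek-and-parse idea (all names here are illustrative; the real parsing lives in LogFile and TransactionEventRecord):

           import java.io.IOException;
           import java.io.RandomAccessFile;

           // Seek to a recorded offset in the pre-allocated file and decide whether
           // a complete event was ever written there. A pre-allocated region that
           // was never written reads back as zeros, so an all-zero buffer is one
           // cheap signal; the real check must fully parse and checksum the record.
           static boolean eventLooksWritten(RandomAccessFile file, long offset, int len)
               throws IOException {
             file.seek(offset);
             byte[] buf = new byte[len];
             file.readFully(buf);
             for (byte b : buf) {
               if (b != 0) {
                 return true; // something was written; still needs a full parse to trust
               }
             }
             return false;    // never written (or zeroed): ignore this take on replay
           }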

          Brock Noland added a comment -

           We actually have exactly one sequential writer to each file, so all writes before a sync call get fsync-ed to disk (we can't make the first half of a file dirty after we fsync the second half, since all writes are sequential). Yes, it is possible that the OS flushes the pages corresponding to the second half before flushing the ones corresponding to the first half.

           Right - I was referring to the OS, which we have no control over.

           So we will actually need to seek to each offset, read the buffer, and try to parse it

           We'd use the checksum for this, right?

          Brock Noland added a comment -

           We'd use the checksum for this, right?

           We'll probably have to create a new exception, InvalidEventException or something, and throw it whenever an event is bad in the replay handler.
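
           The proposed type is just a checked exception; the patch that eventually landed calls it CorruptEventException (visible in the diffs below). Roughly:

           // Sketch of the proposed exception type; constructor details are illustrative.
           public class CorruptEventException extends Exception {
             public CorruptEventException(String message) {
               super(message);
             }
             public CorruptEventException(String message, Throwable cause) {
               super(message, cause);
             }
           }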

          Hari Shreedharan added a comment -

           We need to verify more than the checksum, since we currently only checksum the event body. These are the steps (sketched below):
           1. Decrypt the buffer - failed decryption means a corrupt event or bad credentials.
           2. Parse the protobuf - failed parsing (the protobuf library throws) means a corrupt event.
           3. Bad checksum - corrupt event.
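
           Stacked together, those steps might look roughly like this (the cipher, the generated protobuf class, and the CRC32 checksum are stand-ins for whatever Flume actually uses):

           import java.util.zip.CRC32;
           import javax.crypto.Cipher;
           import org.apache.flume.channel.file.proto.ProtosFactory;

           // Sketch: each layer that can fail maps to a corruption signal.
           static void validateEvent(byte[] raw, Cipher cipher, long storedChecksum)
               throws CorruptEventException {
             final byte[] plain;
             try {
               plain = cipher.doFinal(raw);                  // 1. decrypt
             } catch (Exception e) {
               throw new CorruptEventException(
                   "Decryption failed: corrupt event or bad credentials", e);
             }
             final ProtosFactory.Event event;
             try {
               event = ProtosFactory.Event.parseFrom(plain); // 2. parse the protobuf
             } catch (Exception e) {
               throw new CorruptEventException("Protobuf parsing failed: corrupt event", e);
             }
             CRC32 crc = new CRC32();
             crc.update(event.getBody().toByteArray());      // 3. checksum the body
             if (crc.getValue() != storedChecksum) {
               throw new CorruptEventException("Bad checksum: corrupt event");
             }
           }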

           Brock Noland added a comment (edited) -

          0. read header - bad event (bad file when doing replay)

          Hari Shreedharan added a comment -

           As of now, we already have a corruption check - but only for the event checksums. I am thinking of adding the check described above, and then allowing the channel to proceed if optional fsync is enabled, else killing the channel and forcing the user to run the tool (which is the current behavior). I am not entirely sure of this approach - I'd prefer to keep the behavior the same in both cases. Should we force the user to run the tool in either case?

          Hari Shreedharan added a comment -

          Brock Noland - What do you think of the idea I posted above?

          Hari Shreedharan added a comment -

           If during replay we hit a bad event, the sequential reader cannot really recover, because we don't know how to (unless we read byte by byte and try to parse the protobuf out of it). But in the previously discussed file format with sync markers, we should be able to. So I am not sure whether we should go the byte-by-byte reading and parsing route or the ignore-the-rest-of-the-file route.

          Brock Noland added a comment -

           Ideally I'd only like to implement this if we implement the new format, since it will have a built-in checksum.

           In the case of disabled fsyncs, I think both take() of a bad event and the sequential reader should just ignore bad data.

          Hari Shreedharan added a comment -

          I am planning to start doing the new format once I am done with this and the mirroring patch. I feel the new format is a riskier patch.

           The take can actually ignore a bad put (since we know offset + length), but for the sequential reader, ignoring bad data essentially means ignoring the rest of the file.

          Brock Noland added a comment -

          ignoring bad data essentially means ignoring the rest of the file

           Agreed... but when you turn this option on, you agree that losing data is acceptable. However, once we have the new format, I think we should go back and add a seekNext() method which can be called by the sequential reader in the case of bad data (rough sketch below). Your thoughts?
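
           Under the assumption that the new format interleaves a known marker byte sequence between events, seekNext() could be a forward scan for the next marker. A naive, purely illustrative sketch:

           import java.io.IOException;
           import java.io.RandomAccessFile;

           // Scan forward for the next sync marker so the sequential reader can
           // resume after bad data instead of dropping the rest of the file.
           // Naive scan; a real implementation would use a proper string search.
           static long seekNext(RandomAccessFile file, long from, byte[] marker)
               throws IOException {
             file.seek(from);
             int matched = 0;
             int b;
             while ((b = file.read()) != -1) {
               if (b == (marker[matched] & 0xFF)) {
                 matched++;
                 if (matched == marker.length) {
                   return file.getFilePointer(); // first byte after the marker
                 }
               } else {
                 matched = (b == (marker[0] & 0xFF)) ? 1 : 0; // restart the match
               }
             }
             return -1; // no further recoverable events in this file
           }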

          Hari Shreedharan added a comment -

          Patch that addresses the failure cases we discussed earlier in the jira.

          Hari Shreedharan added a comment -

           The previous patch had a couple of bugs in the tests. This should fix them.

          Brock Noland added a comment -

           Thank you very much for the new exception types! Can you make an RB item with the next update?

          I think this is wrong. I think we don't want to throw an IOException when fsyncPerTransaction == true. We should also have a test for this particular condition.

          -      open = false;
          -      throw new IOException("Corrupt event found. Please run File Channel " +
          -        "Integrity tool.", ex);
          +      if (fsyncPerTransaction) {
          +        open = false;
          +        throw new IOException("Corrupt event found. Please run File Channel " +
          +          "Integrity tool.", ex);
          +      }
          +      throw ex;
          

          I think the below has to catch throwable because scheduled executor does and then eats it.

          +            try {
          +              sync();
          +            } catch (Exception ex) {
          +              LOG.error("Data file, " + getFile().toString() + " could not " +
          +                "be synced to disk due to an error.", ex);
          +            }
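
           A sketch of the suggested fix - a ScheduledExecutorService suppresses further runs of a periodic task once it throws, so the task body should catch Throwable rather than Exception:

           +            try {
           +              sync();
           +            } catch (Throwable t) {
           +              LOG.error("Data file, " + getFile().toString() + " could not " +
           +                "be synced to disk due to an error.", t);
           +            }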
          

          if(LOG.isDebugEnabled()) can be added here:

          +        LOG.debug("No events written to file, " + getFile().toString() +
          +          " in last " + fsyncInterval + " or since last commit.");
          

           Preconditions.checkNotNull:

          +          syncExecutor.shutdown(); // No need to wait for it to shutdown.
          

           Is this really what we want? I think we want to throw an exception regardless of fsyncPerTransaction.

          +        if(operation != OP_RECORD) {
          +          if (!fsyncPerTransaction) {
          +            throw new CorruptEventException("Operation code is invalid. File " +
          +              "is corrupt. Please run File Channel Integrity tool.");
          +          }
          +        }
          
          Hari Shreedharan added a comment -

           If fsyncPerTransaction is true, then it is the current behavior. I am not throwing any exception if a read ends up hitting a corrupt header or event when fsyncPerTransaction is not true (that is, each transaction commit does not cause an fsync; only the periodic fsync happens). The first one you mentioned is what I intended - don't throw if fsyncPerTransaction is false (if you meant that throwing an IOException is wrong, we can change that to a CorruptEventException). The last one is a bug, we should throw an exception only if fsyncPerTransaction is true.

          Brock Noland added a comment -

          The first one you mentioned is what I intended

           Yep, makes sense.

          The last one is a bug, we should throw an exception only if fsyncPerTransaction is true.

           I think in the case of operation != OP_RECORD we always want to throw. That is just bad data in both cases.
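
           In code, the agreed-on shape (a sketch; the message text is from the earlier diff):

           if (operation != OP_RECORD) {
             // Bad data in both modes: throw regardless of fsyncPerTransaction.
             throw new CorruptEventException("Operation code is invalid. File " +
                 "is corrupt. Please run File Channel Integrity tool.");
           }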

          Hari Shreedharan added a comment -

          OK. Thanks. I will post a new patch on RB with these changes.

          Brock Noland added a comment -

          Hari Shreedharan can you post an updated patch? I'd like to get this in.

          Brock Noland added a comment -

          Hari Shreedharan I had some spare time so I rebased the patch on trunk. The tests pass as well.

          [INFO] Apache Flume ...................................... SUCCESS [1.074s]
          [INFO] Flume NG SDK ...................................... SUCCESS [1:15.330s]
          [INFO] Flume NG Configuration ............................ SUCCESS [1.417s]
          [INFO] Flume NG Core ..................................... SUCCESS [6:45.411s]
          [INFO] Flume NG Sinks .................................... SUCCESS [0.060s]
          [INFO] Flume NG HDFS Sink ................................ SUCCESS [1:20.956s]
          [INFO] Flume NG IRC Sink ................................. SUCCESS [1.247s]
          [INFO] Flume NG Channels ................................. SUCCESS [0.067s]
          [INFO] Flume NG JDBC channel ............................. SUCCESS [2:07.739s]
          [INFO] Flume NG file-based channel ....................... SUCCESS [11:07.351s]
          [INFO] Flume NG Spillable Memory channel ................. SUCCESS [3:44.141s]
          [INFO] Flume NG Node ..................................... SUCCESS [27.442s]
          [INFO] Flume NG Embedded Agent ........................... SUCCESS [11.879s]
          [INFO] Flume NG HBase Sink ............................... SUCCESS [3:18.274s]
          [INFO] Flume NG ElasticSearch Sink ....................... SUCCESS [42.384s]
          [INFO] Flume NG Morphline Solr Sink ...................... SUCCESS [13.304s]
          [INFO] Flume Sources ..................................... SUCCESS [0.030s]
          [INFO] Flume Scribe Source ............................... SUCCESS [0.601s]
          [INFO] Flume JMS Source .................................. SUCCESS [13.362s]
          [INFO] Flume Twitter Source .............................. SUCCESS [0.906s]
          [INFO] Flume legacy Sources .............................. SUCCESS [0.028s]
          [INFO] Flume legacy Avro source .......................... SUCCESS [1.510s]
          [INFO] Flume legacy Thrift Source ........................ SUCCESS [2.302s]
          [INFO] Flume NG Clients .................................. SUCCESS [0.026s]
          [INFO] Flume NG Log4j Appender ........................... SUCCESS [23.146s]
          [INFO] Flume NG Tools .................................... SUCCESS [6.739s]
          [INFO] Flume NG distribution ............................. SUCCESS [7.759s]
          [INFO] Flume NG Integration Tests ........................ SUCCESS [53.966s]
          [INFO] ------------------------------------------------------------------------
          [INFO] BUILD SUCCESS
          [INFO] ------------------------------------------------------------------------
          [INFO] Total time: 33:08.887s
          [INFO] Finished at: Thu Apr 10 18:19:13 CDT 2014
          [INFO] Final Memory: 267M/973M
          [INFO] ------------------------------------------------------------------------
          
          Hari Shreedharan added a comment -

          Thanks Brock. I will update the patch addressing your comments early next week.

          Hari Shreedharan added a comment -

           Here is an updated patch which fixes the issues Brock mentioned above, and also adds a test to verify that the channel dies if corrupt events are detected with fsync-per-transaction enabled, and does not fail if fsync-per-transaction is disabled.

          Brock Noland added a comment -

          +1

          Thank you Hari! I will run tests and commit as soon as I can.

          Hari Shreedharan added a comment -

          Brock Noland - Thanks for the review! Once you commit this one, I will push the release related changes.

          ASF subversion and git services added a comment -

          Commit 6115e7d6d611d2b82dc2583b95a13d4c0886a93f in flume's branch refs/heads/trunk from Brock Noland
          [ https://git-wip-us.apache.org/repos/asf?p=flume.git;h=6115e7d ]

          FLUME-2181 - Optionally disable File Channel fsyncs (Hari via Brock)

          ASF subversion and git services added a comment -

          Commit 7cf6746ed63018188b530118535c144ff6682201 in flume's branch refs/heads/flume-1.5 from Brock Noland
          [ https://git-wip-us.apache.org/repos/asf?p=flume.git;h=7cf6746 ]

          FLUME-2181 - Optionally disable File Channel fsyncs (Hari via Brock)

          Brock Noland added a comment -

          Thank you Hari! I committed this to trunk and flume-1.5!

          Hudson added a comment -

          FAILURE: Integrated in flume-trunk #633 (See https://builds.apache.org/job/flume-trunk/633/)
          FLUME-2181 - Optionally disable File Channel fsyncs (Hari via Brock) (brock: http://git-wip-us.apache.org/repos/asf/flume/repo?p=flume.git&a=commit&h=6115e7d6d611d2b82dc2583b95a13d4c0886a93f)

          • flume-ng-channels/flume-file-channel/src/test/java/org/apache/flume/channel/file/TestLog.java
          • flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/Log.java
          • flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/ReplayHandler.java
          • flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/encryption/DecryptionFailureException.java
          • flume-ng-channels/flume-file-channel/src/test/java/org/apache/flume/channel/file/TestLogFile.java
          • flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/LogFile.java
          • flume-tools/src/main/java/org/apache/flume/tools/FileChannelIntegrityTool.java
          • flume-tools/src/test/java/org/apache/flume/tools/TestFileChannelIntegrityTool.java
          • flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/FileChannel.java
          • flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/LogFileV2.java
          • flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/LogFileFactory.java
          • flume-ng-channels/flume-file-channel/src/test/java/org/apache/flume/channel/file/TestUtils.java
          • flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/encryption/AESCTRNoPaddingProvider.java
          • flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/LogFileV3.java
          • flume-ng-channels/flume-file-channel/src/test/java/org/apache/flume/channel/file/TestFileChannel.java
          • flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/CheckpointRebuilder.java
          • flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/FileChannelConfiguration.java
          • flume-ng-channels/flume-file-channel/src/test/java/org/apache/flume/channel/file/TestCheckpointRebuilder.java
          • flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/TransactionEventRecord.java

            People

            • Assignee: Hari Shreedharan
            • Reporter: Hari Shreedharan
            • Votes: 0
            • Watchers: 4
