Kafka / KAFKA-1112

Broker cannot start itself after Kafka is killed with -9

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 0.8.0, 0.8.1
    • Fix Version/s: 0.8.1
    • Component/s: log
    • Labels:
      None

      Description

      When I kill Kafka with -9, the broker cannot restart because of corrupted index files. I think Kafka should try to delete/rebuild the indexes itself without manual intervention.

      Attachments

      1. KAFKA-1112-v4.patch
        11 kB
        Jun Rao
      2. KAFKA-1112-v3.patch
        14 kB
        Jay Kreps
      3. KAFKA-1112-v2.patch
        11 kB
        Jay Kreps
      4. KAFKA-1112-v1.patch
        10 kB
        Jay Kreps
      5. KAFKA-1112.out
        3 kB
        Guozhang Wang

        Issue Links

          Activity

          Denis Serduik added a comment -

          We've also faced this behavior. Moreover, it doesn't fail right after startup; it starts listening for requests (at least it opens the port) before checking the index and syncing its internal state. Thus it is kind of difficult to figure out from a startup script whether Kafka actually started, without adding some ugly sleeps.

          Neha Narkhede added a comment -

          Stack trace -

          [2013-11-01 17:46:02,685] INFO Loading log 'foo-4' (kafka.log.LogManager)
          [2013-11-01 17:46:04,898] FATAL Fatal error during KafkaServerStable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
          java.lang.IllegalArgumentException: requirement failed: Corrupt index found, index file (/mnt/u001/temp/kafka-logs/foo-4/00000000000000000000.index) has non-zero size but the last offset is 0 and the base offset is 0
          at scala.Predef$.require(Predef.scala:145)
          at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:161)
          at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:160)
          at scala.collection.Iterator$class.foreach(Iterator.scala:631)
          at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:474)
          at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
          at scala.collection.JavaConversions$JCollectionWrapper.foreach(JavaCo

          Guozhang Wang added a comment -

          Attached another stack trace with more debugging turned on. The root cause is that when we load the index file, the initial size is set to the limit of the file, and hence the position points at the last entry. In most cases the last entry will be 0, and the log recovery process will skip recovery since it appears the latest offset is smaller than the checkpoint. But the sanity check then finds that the number of entries is non-zero (actually the maximum number of entries) while the last offset equals the base offset, because reading the last entry gives a relative offset of 0.

          Denis Serduik added a comment -

          This could be related to KAFKA-757.

          Guozhang Wang added a comment -

          Yes. We ran into this bug before and tried to fix it, but it seems we still missed some cases.

          Jay Kreps added a comment -

          The way the check was supposed to work was this: if the last offset in the file is recoveryPoint-1, then skip the recovery (because the whole file is flushed). The way this was implemented was by using the last entry in the index to find the final message.

          Overall I feel this is a bit of a hack, but we wanted to separate out the "fsync is async" feature from a full incremental recovery implementation that only recovers unflushed data.

          The immediate problem was that we broke the short circuit by adding code to try to handle a corner case: what if the log is truncated after a flush and hence the end of the log is < the recovery point? This was just totally broken, and we were short-circuiting out of the check in virtually all cases, including when the index was corrupt.

          This issue wasn't caught because there was a bug in the log corruption unit test that gave a false pass on all index corruptions.

          The fix is the following:
          1. Fix the logical bug
          2. Add LogSegment.needsRecovery() which is a more paranoid version of what we were doing before that attempts to be safe regardless of any index or log corruption that may have occurred (a rough sketch follows below). Having this method here is a little hacky but probably okay until we get a full incremental recovery impl.
          3. Fix the unit test that covers this.

          Jun Rao added a comment -

          Thanks for the patch. A few comments.

          1. I am a bit concerned about depending on a potentially corrupted index to look for recoveryPoint - 1 in LogSegment.needsRecovery(). If the index points to an arbitrary position in the FileMessageSet, the offset value that FileMessageSet.searchFor() finds is garbage. If that value happens to be larger than targetOffset, we will assume that we have found targetOffset, but in fact we haven't.

          2. LogTest.testCorruptLog(): Is the println statement needed?

          3. Could you rebase?

          Neha Narkhede added a comment -

          Thanks for the patch. A few comments -

          1. In the most common case of needsRecovery, the position of the last entry will be zero. In this case, we will search the entire log segment up until the recovery point. This will slow down server startup but probably only when we really need recovery.
          2. LogSegment: We have to be carefully => We have to be careful
          3. Log: If sanityCheck throws an exception, can we automatically invoke index rebuild instead of bailing out?
          4. Could you rebase?
          5. Could you give the patch review tool a spin? The setup is minimal and we can save time for this and future reviews - https://cwiki.apache.org/confluence/display/KAFKA/Kafka+patch+review+tool#Kafkapatchreviewtool-1.Setup
          Usage:
          python kafka-patch-review.py -j KAFKA-1112 -b trunk

          Jay Kreps added a comment -

          Actually I am attempting to cover every possible case here, so the only case that should go through (skip recovery) is the one where the offset of the final message is exactly recoveryPoint-1. Notice that the unit test actually runs through 50 cases of random garbage appended to the index, so assuming that test is right I think this does work.

          Guozhang Wang added a comment -

          Regarding Jun's comment #1, I am more concerned about using the searchFor function on a FileMessageSet that might be corrupted. From the code, it seems that if the FileMessageSet is corrupted, searchFor may never return because the position variable may stop increasing monotonically.

          David Lao added a comment -

          I'm hitting this exact issue. For what it's worth, the corrupt index file consists of 00's for the entire file.

          David Lao added a comment -

          Jay, can you provide a patch for the 0.8 branch as well? Thanks.

          Jay Kreps added a comment -

          Added a new patch that addresses issues raised.

          Jun
          1. I don't think this is true. The check is for exact match.
          2. Removed.
          3. Done

          Neha
          1. I think I am handling this--in the case of zero we don't do a full scan.
          2. Ack, fixed.
          3. Well, the sanity check is POST recovery. So if we have a corrupt index after recovery we have a bug; I don't think we should automatically try to recover from this (that would be another recovery).
          4. done
          5. Yeah, haven't had time yet.

          Jun Rao added a comment -

          Here is what I find confusing.

          The patch relies on a potentially corrupted index to find the right starting position in the segment file. What if the starting position given by the last index entry is corrupted? Then the position could point to the middle of a message in the segment file, and the offset value we read from the segment file could be anything. If that value happens to match recoveryPoint - 1, we could think no recovery is needed even though the segment file is actually corrupted.

          Similarly, even if the index file is not corrupted, the segment file could still be corrupted before recoveryPoint - 1 (since the unit of flushing is a page). It's also possible that we read a corrupted piece of data as the offset that happens to match recoveryPoint - 1, and therefore incorrectly think that recovery is not needed.

          Jay Kreps added a comment -

          David, this should not be happening in 0.8. If it is, I suspect it is a different problem that causes the same bad outcome. Are you seeing this on 0.8? If so, how reproducible is it?

          Jay Kreps added a comment -

          Jun, this is true.

          However, if you think about it, recovery of the log has the same problem. We read a message and then compare it to its CRC. The CRC is a 32-bit number, so it could certainly match a corrupted message by chance.

          In this case we compare to a 64-bit number, so this should be less likely. But in reality there are many rare events here: (1) we hard crash, (2) the hard crash leads to corruption, (3) the corruption of the index points to a location that exactly matches the recovery offset.

          In general I think people's concern with this approach is that it is just kind of hacky. I agree with this complaint and am somewhat disappointed with this set of changes overall.

          I will post a slightly more paranoid version of the check, and then let's discuss that.

          Jay Kreps added a comment -

          Okay here is a maximally paranoid patch.

          Guozhang Wang added a comment -

          How about we go back to using the clean shutdown file to decide whether recovery is needed, and if recovery is needed, use the recovery point to reduce the recovery overhead?

          Jay Kreps added a comment -

          Yeah, I would not be opposed to that as an alternative. Both are really a hack.

          I guess the question is: what should the end state be?

          Jun Rao added a comment -

          Thinking about this a bit more. The end state is that we want to only recover the portion of the log segment from the recovery point, instead of recovering the whole log segment. The dilemma is that we are not sure what portion of the index is valid. Scanning from the beginning of the log segment defeats the purpose of incremental recovery. One possible solution is to checkpoint an index recovery point, in addition to the recovery offset per log. The index recovery point is the # of valid index entries in the segment to which the recovery offset belongs. This way, on startup, we will be sure that the data in the last valid index entry is not corrupted and we can use it to quickly locate the recovery offset in the log file.

          Guozhang Wang added a comment -

          Did some research online about fsync, and it seems fsync can be reliable even with the disk's block-write behavior since it is sequential, which means that even if the file system crashes during an fsync we should not expect random behavior.

          Neha Narkhede added a comment -

          Jun Rao, this approach seems reasonable unless I'm missing any caveats in Log. Jay Kreps, what do you think?

          Jay Kreps added a comment -

          Yeah, at a high level there are a couple of things we could do:
          1. Non-incremental
             a. Harden the current approach (what the attached patches do)
             b. Use the clean shutdown file
          2. Implement incremental recovery (what Jun is proposing)

          All of these are good. 1a is implemented, but is arguably gross. I am open to 1b or 2 or a short-term/long-term thing.

          For 2 I think the details to figure out would be
          1. OffsetCheckpoint is shared, so adding the position to that file will impact other use cases; how will that be handled?
          2. I suspect that if we want to move to positions we should do something like (file, log_position, index_position) rather than a mixture of logical and physical.
          3. We need to ensure that log compaction is thought through. This could cause the physical position to change. That could be fine but we need to reason through it.
          4. We need to ensure that we handle truncation which implies that a position X could be stable, then deleted, then rewritten differently without flush. This may be fine we just have to think it through.

          Jun Rao added a comment -

          Ok, so it seems that the end state is not that simple and may need some more thought. I took patch v3, removed the recovery part in LogSegment, and replaced it with the simpler approach using the clean shutdown file (sketched below).

          Neha Narkhede added a comment -

          Thanks for the patch, Jun! Overall, it looks good (+1). A few minor comments that you can address on check-in -

          1. Log

          • okay we need to actually recovery this log => okay we need to actually recover this log

          2. OffsetIndex

          • In sanityCheck(), one error message prints the index file's absolute path and another prints only the name. Can we standardize on one? It is better to print the entire path since we can have more than one data directory.
          Jay Kreps added a comment -

          +1 lgtm.

          Jun Rao added a comment -

          Thanks for the reviews. Committed to trunk after addressing Neha's comments.

          Drew Goya added a comment -

          Commenting here as requested.

          After migrating a cluster from 0.8.0 to 0.8.1 (trunk/87efda7) I had a few brokers that wouldn't come up.

          This is the exception I ran into; I was able to fix it by deleting the /data/kafka/logs/Events2-124/ directory. That directory contained a non-zero-size index file and a zero-size log file. I had a bunch of these directories scattered around the cluster. I suspect they were left over from partition reassignment failures which happened when the cluster was at 0.8.0.

          [2013-12-18 02:40:37,163] FATAL Fatal error during KafkaServerStable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
          java.lang.IllegalArgumentException: requirement failed: Corrupt index found, index file (/data/kafka/logs/Events2-124/00000000000000000000.index) has non-zero size but the last offset is 0 and the base offset is 0
          at scala.Predef$.require(Predef.scala:145)
          at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:160)
          at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:159)
          at scala.collection.Iterator$class.foreach(Iterator.scala:631)
          at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:474)
          at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
          at scala.collection.JavaConversions$JCollectionWrapper.foreach(JavaConversions.scala:495)
          at kafka.log.Log.loadSegments(Log.scala:159)
          at kafka.log.Log.<init>(Log.scala:64)
          at kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:120)
          at kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$3.apply(LogManager.scala:115)
          at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
          at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
          at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:115)
          at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:107)
          at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
          at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
          at kafka.log.LogManager.loadLogs(LogManager.scala:107)
          at kafka.log.LogManager.<init>(LogManager.scala:59)

          Alexis Midon added a comment -

          Hello,

          I suffered from the same error using Kafka 0.8.1. Should I reopen this issue or create a new one?

          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,696 INFO main kafka.server.KafkaServer.info - [Kafka Server 847605514], starting
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,698 INFO main kafka.server.KafkaServer.info - [Kafka Server 847605514], Connecting to zookeeper on zk-main0.XXX:2181,zk-main1.XXX:2181,zk-main2.XXXX:2181/production/kafka/main
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,708 INFO ZkClient-EventThread-14-zk-main0.XXX.com:2181,zk-main1.XXX.com:2181,zk-main2.XXX.com:2181,zk-main3.XXX.com:2181,zk-main4.XXX.com:2181/production/kafka/main org.I0Itec.zkclient.ZkEventThread.run - Starting ZkClient event thread.
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,714 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:zookeeper.version=3.3.3-1203054, built on 11/17/2011 05:47 GMT
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,714 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:host.name=i-6b948138.inst.aws.airbnb.com
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,714 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:java.version=1.7.0_55
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,715 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:java.vendor=Oracle Corporation
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,715 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:java.home=/usr/lib/jvm/jre-7-oracle-x64/jre
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,715 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:java.class.path=libs/snappy-java-1.0.5.jar:libs/scala-library-2.10.1.jar:libs/slf4j-api-1.7.2.jar:libs/jopt-simple-3.2.jar:libs/metrics-annotation-2.2.0.jar:libs/log4j-1.2.15.jar:libs/kafka_2.10-0.8.1.jar:libs/zkclient-0.3.jar:libs/zookeeper-3.3.4.jar:libs/metrics-core-2.2.0.jar
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,715 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,716 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:java.io.tmpdir=/tmp
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,716 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:java.compiler=<NA>
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,716 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:os.name=Linux
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,716 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:os.arch=amd64
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,717 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:os.version=3.2.0-61-virtual
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,717 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:user.name=kafka
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,717 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:user.home=/srv/kafka
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,717 INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client environment:user.dir=/srv/kafka/kafka_2.10-0.8.1
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,718 INFO main org.apache.zookeeper.ZooKeeper.<init> - Initiating client connection, connectString=zk-main0.XXX.com:2181,zk-main1.XXX.com:2181,zk-main2.XXX.com:2181,zk-main3.XXX.com:2181,zk-main4.XXX.com:2181/production/kafka/main sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@4758af63
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,733 INFO main-SendThread() org.apache.zookeeper.ClientCnxn.startConnect - Opening socket connection to server zk-main1.XXX.com/10.12.135.61:2181
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,738 INFO main-SendThread(zk-main1.XXX.com:2181) org.apache.zookeeper.ClientCnxn.primeConnection - Socket connection established to zk-main1.XXX.com/10.12.135.61:2181, initiating session
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,745 INFO main-SendThread(zk-main1.XXX.com:2181) org.apache.zookeeper.ClientCnxn.readConnectResult - Session establishment complete on server zk-main1.XXX.com/10.12.135.61:2181, sessionid = 0x646838f07761601, negotiated timeout = 6000
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,747 INFO main-EventThread org.I0Itec.zkclient.ZkClient.processStateChanged - zookeeper state changed (SyncConnected)
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,961 INFO main kafka.log.LogManager.info - Found clean shutdown file. Skipping recovery for all logs in data directory '/mnt/kafka_logs'
          2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,962 INFO main kafka.log.LogManager.info - Loading log 'flog-30'
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg 2014-07-11 - 00:53:18,349 FATAL main kafka.server.KafkaServerStartable.fatal - Fatal error during KafkaServerStable startup. Prepare to shutdown
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg java.lang.IllegalArgumentException: - requirement failed: Corrupt index found, index file (/mnt/kafka_logs/flog-30/00000000000121158146.index) has non-zero size but the last offset is 121158146 and the base offset is 121158146
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at scala.Predef$.require(Predef.scala:233)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.log.OffsetIndex.sanityCheck(OffsetIndex.scala:352)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:159)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:158)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.log.Log.loadSegments(Log.scala:158)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.log.Log.<init>(Log.scala:64)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$4.apply(LogManager.scala:118)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$4.apply(LogManager.scala:113)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:105)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:113)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:105)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.log.LogManager.loadLogs(LogManager.scala:105)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.log.LogManager.<init>(LogManager.scala:57)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:275)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.server.KafkaServer.startup(KafkaServer.scala:72)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.Kafka$.main(Kafka.scala:46)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.emerg  -    at kafka.Kafka.main(Kafka.scala)
          2014-07-11T00:53:18+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:18,351 INFO main kafka.server.KafkaServer.info - [Kafka Server 847605514], shutting down
          2014-07-11T00:53:18+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:18,353 INFO ZkClient-EventThread-14-zk-main0.XXX.com:2181,zk-main1.XXX.com:2181,zk-main2.XXX.com:2181,zk-main3.XXX.com:2181,zk-main4.XXX.com:2181/production/kafka/main org.I0Itec.zkclient.ZkEventThread.run - Terminate ZkClient event thread.
          

            People

            • Assignee:
              Jay Kreps
              Reporter:
              Kane Kim
            • Votes:
              3
              Watchers:
              12
