2016-11-22 16:17:19.558 WARN 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Failed to commit StreamTask 0_1 state:
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:600)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:498)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1104)
    at org.apache.kafka.streams.processor.internals.StreamTask.commitOffsets(StreamTask.java:297)
    at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:281)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitOne(StreamThread.java:576)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:562)
    at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:538)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:456)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
2016-11-22 16:17:19.558 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Committing task 0_3
2016-11-22 16:17:19.848 WARN 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Failed to commit StreamTask 0_3 state:
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:600)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:498)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1104)
    at org.apache.kafka.streams.processor.internals.StreamTask.commitOffsets(StreamTask.java:297)
    at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:281)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitOne(StreamThread.java:576)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:562)
    at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:538)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:456)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
2016-11-22 16:17:19.857 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Committing task 0_5
2016-11-22 16:17:20.148 WARN 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Failed to commit StreamTask 0_5 state:
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:600)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:498)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1104)
    at org.apache.kafka.streams.processor.internals.StreamTask.commitOffsets(StreamTask.java:297)
    at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:281)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitOne(StreamThread.java:576)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:562)
    at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:538)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:456)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
2016-11-22 16:17:20.150 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Committing task 0_0
2016-11-22 16:17:20.953 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Committing task 0_2
2016-11-22 16:17:25.457 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Committing task 0_4
2016-11-22 16:17:27.940 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Committing all tasks because the commit interval 30000ms has elapsed
2016-11-22 16:17:27.952 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Committing task 0_1
2016-11-22 16:17:27.952 WARN 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Failed to commit StreamTask 0_1 state:
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:600)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:498)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1104)
    at org.apache.kafka.streams.processor.internals.StreamTask.commitOffsets(StreamTask.java:297)
    at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:281)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitOne(StreamThread.java:576)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:562)
    at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:538)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:456)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
2016-11-22 16:17:27.952 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Committing task 0_3
2016-11-22 16:17:27.953 WARN 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Failed to commit StreamTask 0_3 state:
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:600)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:498)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1104)
    at org.apache.kafka.streams.processor.internals.StreamTask.commitOffsets(StreamTask.java:297)
    at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:281)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitOne(StreamThread.java:576)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:562)
    at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:538)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:456)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
2016-11-22 16:17:27.953 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Committing task 0_5
2016-11-22 16:17:27.953 WARN 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Failed to commit StreamTask 0_5 state:
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:600)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:498)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1104)
    at org.apache.kafka.streams.processor.internals.StreamTask.commitOffsets(StreamTask.java:297)
    at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:281)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitOne(StreamThread.java:576)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:562)
    at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:538)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:456)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
2016-11-22 16:17:27.953 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Committing task 0_0
2016-11-22 16:17:28.143 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Committing task 0_2
2016-11-22 16:17:28.262 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Committing task 0_4
2016-11-22 16:17:28.465 INFO 56830 --- [StreamThread-1895] o.a.k.c.consumer.internals.Fetcher : Fetch offset 99933120 is out of range for partition message-events-deduplicated-0, resetting offset
2016-11-22 16:17:28.465 INFO 56830 --- [StreamThread-1895] o.a.k.c.consumer.internals.Fetcher : Fetch offset 97338361 is out of range for partition message-events-deduplicated-4, resetting offset
2016-11-22 16:17:28.465 INFO 56830 --- [StreamThread-1895] o.a.k.c.consumer.internals.Fetcher : Fetch offset 101722183 is out of range for partition message-events-deduplicated-2, resetting offset
2016-11-22 16:17:28.663 INFO 56830 --- [StreamThread-1895] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [message-events-3, message-events-1, message-events-deduplicated-3, message-events-5, message-events-deduplicated-5, message-events-deduplicated-1] for group message-events-deduplication-1
2016-11-22 16:17:28.663 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] partitions [[message-events-3, message-events-1, message-events-deduplicated-3, message-events-5, message-events-deduplicated-5, message-events-deduplicated-1]] revoked at the beginning of consumer rebalance.
2016-11-22 16:17:28.663 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Committing consumer offsets of task 0_1
2016-11-22 16:17:28.691 ERROR 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Failed while executing StreamTask 0_1 duet to commit consumer offsets:
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:600)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:498)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1104)
    at org.apache.kafka.streams.processor.internals.StreamTask.commitOffsets(StreamTask.java:297)
    at org.apache.kafka.streams.processor.internals.StreamThread$3.apply(StreamThread.java:359)
    at org.apache.kafka.streams.processor.internals.StreamThread.performOnAllTasks(StreamThread.java:328)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitOffsets(StreamThread.java:355)
    at org.apache.kafka.streams.processor.internals.StreamThread.shutdownTasksAndState(StreamThread.java:297)
    at org.apache.kafka.streams.processor.internals.StreamThread.access$900(StreamThread.java:69)
    at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsRevoked(StreamThread.java:143)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare(ConsumerCoordinator.java:336)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:303)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:407)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
2016-11-22 16:17:28.722 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Removing all active tasks [[0_1, 0_3, 0_5]]
2016-11-22 16:17:28.722 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Removing all standby tasks [[0_0, 0_2, 0_4]]
2016-11-22 16:17:28.722 ERROR 56830 --- [StreamThread-1895] o.a.k.c.c.internals.ConsumerCoordinator : User provided listener org.apache.kafka.streams.processor.internals.StreamThread$1 for group message-events-deduplication-1 failed on partition revocation
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:600)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:498)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1104)
    at org.apache.kafka.streams.processor.internals.StreamTask.commitOffsets(StreamTask.java:297)
    at org.apache.kafka.streams.processor.internals.StreamThread$3.apply(StreamThread.java:359)
    at org.apache.kafka.streams.processor.internals.StreamThread.performOnAllTasks(StreamThread.java:328)
    at org.apache.kafka.streams.processor.internals.StreamThread.commitOffsets(StreamThread.java:355)
    at org.apache.kafka.streams.processor.internals.StreamThread.shutdownTasksAndState(StreamThread.java:297)
    at org.apache.kafka.streams.processor.internals.StreamThread.access$900(StreamThread.java:69)
    at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsRevoked(StreamThread.java:143)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare(ConsumerCoordinator.java:336)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:303)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:407)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
2016-11-22 16:17:28.722 INFO 56830 --- [StreamThread-1895] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group message-events-deduplication-1
2016-11-22 16:17:31.877 INFO 56830 --- [StreamThread-1895] o.a.k.c.c.internals.AbstractCoordinator : Successfully joined group message-events-deduplication-1 with generation 3166
2016-11-22 16:17:31.879 INFO 56830 --- [StreamThread-1895] o.a.k.c.c.internals.ConsumerCoordinator : Setting newly assigned partitions [message-events-3, message-events-1, message-events-deduplicated-3, message-events-5, message-events-deduplicated-5, message-events-deduplicated-1] for group message-events-deduplication-1
2016-11-22 16:17:31.879 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] New partitions [[message-events-3, message-events-1, message-events-deduplicated-3, message-events-5, message-events-deduplicated-5, message-events-deduplicated-1]] assigned at the end of consumer rebalance.
2016-11-22 16:17:31.879 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Creating active task 0_1 with assigned partitions [[message-events-1, message-events-deduplicated-1]]
2016-11-22 16:17:31.892 INFO 56830 --- [StreamThread-1895] o.a.k.s.processor.internals.StreamTask : task [0_1] Initializing state stores
2016-11-22 16:17:31.893 WARN 56830 --- [StreamThread-1895] o.a.k.s.state.internals.RocksDBStore : Using TTL (18000seconds) with store message-events-deduplicated-store.
2016-11-22 16:17:32.158 ERROR 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Failed to create an active task %s:
org.apache.kafka.streams.errors.ProcessorStateException: Error opening store message-events-deduplicated-store at location /kafka-streams/message-events-deduplication/message-events-deduplication-1/0_1/rocksdb/message-events-deduplicated-store
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:203)
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:163)
    at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:168)
    at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:85)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:62)
    at org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:81)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:120)
    at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:633)
    at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:660)
    at org.apache.kafka.streams.processor.internals.StreamThread.access$100(StreamThread.java:69)
    at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:124)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:228)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:313)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:407)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
Caused by: org.rocksdb.RocksDBException: IO error: lock /kafka-streams/message-events-deduplication/message-events-deduplication-1/0_1/rocksdb/message-events-deduplicated-store/LOCK: No locks available
    at org.rocksdb.TtlDB.open(Native Method)
    at org.rocksdb.TtlDB.open(TtlDB.java:87)
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:200)
    ... 18 common frames omitted
2016-11-22 16:17:32.159 ERROR 56830 --- [StreamThread-1895] o.a.k.c.c.internals.ConsumerCoordinator : User provided listener org.apache.kafka.streams.processor.internals.StreamThread$1 for group message-events-deduplication-1 failed on partition assignment
org.apache.kafka.streams.errors.ProcessorStateException: Error opening store message-events-deduplicated-store at location /kafka-streams/message-events-deduplication/message-events-deduplication-1/0_1/rocksdb/message-events-deduplicated-store
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:203)
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:163)
    at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:168)
    at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:85)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:62)
    at org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:81)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:120)
    at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:633)
    at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:660)
    at org.apache.kafka.streams.processor.internals.StreamThread.access$100(StreamThread.java:69)
    at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:124)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:228)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:313)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:407)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
Caused by: org.rocksdb.RocksDBException: IO error: lock /kafka-streams/message-events-deduplication/message-events-deduplication-1/0_1/rocksdb/message-events-deduplicated-store/LOCK: No locks available
    at org.rocksdb.TtlDB.open(Native Method)
    at org.rocksdb.TtlDB.open(TtlDB.java:87)
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:200)
    ... 18 common frames omitted
2016-11-22 16:17:32.163 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Shutting down
2016-11-22 16:17:32.163 INFO 56830 --- [StreamThread-1895] o.a.k.clients.producer.KafkaProducer : Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
2016-11-22 16:17:32.166 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Removing all active tasks [[]]
2016-11-22 16:17:32.166 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Removing all standby tasks [[]]
2016-11-22 16:17:32.166 INFO 56830 --- [StreamThread-1895] o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-1895] Stream thread shutdown complete
2016-11-22 16:17:32.166 ERROR 56830 --- [StreamThread-1895] o.i.b.streams.app.AbstractStreamApp : Error in stream app: message-events-deduplication. Thread: StreamThread-1895
org.apache.kafka.streams.errors.StreamsException: stream-thread [StreamThread-1895] Failed to rebalance
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:410)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error opening store message-events-deduplicated-store at location /kafka-streams/message-events-deduplication/message-events-deduplication-1/0_1/rocksdb/message-events-deduplicated-store
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:203)
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:163)
    at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:168)
    at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:85)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:62)
    at org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:81)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:120)
    at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:633)
    at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:660)
    at org.apache.kafka.streams.processor.internals.StreamThread.access$100(StreamThread.java:69)
    at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:124)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:228)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:313)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:407)
    ... 1 common frames omitted
Caused by: org.rocksdb.RocksDBException: IO error: lock /kafka-streams/message-events-deduplication/message-events-deduplication-1/0_1/rocksdb/message-events-deduplicated-store/LOCK: No locks available
    at org.rocksdb.TtlDB.open(Native Method)
    at org.rocksdb.TtlDB.open(TtlDB.java:87)
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:200)
    ... 18 common frames omitted
2016-11-22 16:17:32.166 INFO 56830 --- [StreamThread-1895] o.i.b.streams.app.AbstractStreamApp : About to restart stream app 'message-events-deduplication' after transient exception: AbstractStreamApp.AnalyzedException(exception=org.apache.kafka.streams.errors.ProcessorStateException: Error opening store message-events-deduplicated-store at location /kafka-streams/message-events-deduplication/message-events-deduplication-1/0_1/rocksdb/message-events-deduplicated-store, transientException=true, errorWithStateStore=true)
2016-11-22 16:17:32.166 INFO 56830 --- [StreamThread-1895] o.i.b.streams.app.AbstractStreamApp : Restart mode: restartStreamApp, environment: Environment(autostart=false, production=true, profile=production, test=false)
2016-11-22 16:17:32.609 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Stopping stream app 'message-events-deduplication'.
2016-11-22 16:17:32.619 INFO 56830 --- [app-restart-automatic] org.apache.kafka.streams.KafkaStreams : Stopped Kafka Stream process
2016-11-22 16:17:32.619 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:32.784 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:32.785 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:33.067 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:43.230 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:43.230 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:43.230 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:43.420 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:43.421 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:43.421 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Cleaning state store for streams app-id 'message-events-deduplication-1' at location: /kafka-streams/message-events-deduplication
2016-11-22 16:17:44.660 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Deleted: /kafka-streams/message-events-deduplication/message-events-deduplication-1
2016-11-22 16:17:44.660 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Starting stream app 'message-events-deduplication'.
2016-11-22 16:17:44.660 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:44.660 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:44.660 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:44.660 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:44.660 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:44.660 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:44.660 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:44.660 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:44.660 INFO 56830 --- [app-restart-automatic] o.i.b.streams.app.AbstractStreamApp : Closing state store 'message-events-deduplicated-store'
2016-11-22 16:17:44.668 INFO 56830 --- [app-restart-automatic] org.apache.kafka.streams.StreamsConfig : StreamsConfig values:
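The CommitFailedException repeated throughout the log suggests two remedies: give each poll() cycle more time (max.poll.interval.ms) or hand back fewer records per poll() (max.poll.records). A minimal sketch of how those consumer settings could be collected for a Streams app follows; the class name, bootstrap address, and the concrete values are illustrative assumptions, not taken from the log (only the application id and the 30000ms commit interval appear above), and the right values depend on actual per-record processing time.

```java
import java.util.Properties;

// Hedged sketch: property keys are the standard Kafka consumer/streams config
// names cited by the exception; the values here are illustrative guesses.
public class DeduplicationAppConfig {
    public static Properties streamsProperties() {
        Properties props = new Properties();
        props.put("application.id", "message-events-deduplication-1"); // group id seen in the log
        props.put("bootstrap.servers", "localhost:9092");              // placeholder address
        // Widen the allowed gap between poll() calls so slow batches do not
        // trigger a rebalance (default in this Kafka era was 300000 ms).
        props.put("max.poll.interval.ms", "600000");
        // Return smaller batches from poll() so each batch is processed
        // well inside that window (default was 500 records).
        props.put("max.poll.records", "100");
        // Commit more often than the 30000 ms interval shown in the log,
        // so less uncommitted work accumulates between rebalances.
        props.put("commit.interval.ms", "10000");
        return props;
    }

    public static void main(String[] args) {
        streamsProperties().forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```

These properties would be passed to the StreamsConfig / KafkaStreams constructor; tightening them trades throughput for rebalance stability, which is usually the right trade for a deduplication job like this one.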