[KAFKA-8441] Flaky Test RegexSourceIntegrationTest#testRegexMatchesTopicsAWhenCreated


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Invalid
    • Affects Version/s: 2.3.0
    • Fix Version/s: None
    • Component/s: streams, unit tests

    Description

      Stacktrace:

      java.lang.AssertionError: Condition not met within timeout 30000. Topics not deleted after 30000 milli seconds.
      	at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:375)
      	at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:352)
      	at org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.deleteTopicsAndWait(EmbeddedKafkaCluster.java:265)
      	at org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.deleteAndRecreateTopics(EmbeddedKafkaCluster.java:288)
      	at org.apache.kafka.streams.integration.RegexSourceIntegrationTest.setUp(RegexSourceIntegrationTest.java:118)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:498)
      	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
      	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
      	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
      	at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
      	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
      	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
      	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
      	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
      	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
      	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
      	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
      	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
      	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
      	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
      	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
      	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
      	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
      	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
      	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
      	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
      	at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
      	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
      	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
      	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
      	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
      	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
      	at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:498)
      	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
      	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
      	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
      	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
      	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
      	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
      	at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:498)
      	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
      	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
      	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:175)
      	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:157)
      	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:404)
      	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
      	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
      	at java.lang.Thread.run(Thread.java:748)
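
      For context, the timeout above fires in the test's setUp: EmbeddedKafkaCluster.deleteAndRecreateTopics deletes the test topics and then polls, via TestUtils.waitForCondition with a 30-second budget, until the broker no longer lists them. A minimal standalone sketch of that wait follows, using only the public AdminClient API; the bootstrap address and topic names are taken from the broker log further down, the class name is made up, and the loop only approximates waitForCondition rather than reproducing the actual helper.

      import java.util.Arrays;
      import java.util.HashSet;
      import java.util.Properties;
      import java.util.Set;
      import org.apache.kafka.clients.admin.AdminClient;
      import org.apache.kafka.clients.admin.AdminClientConfig;

      public class WaitForTopicDeletionSketch {
          public static void main(String[] args) throws Exception {
              Properties props = new Properties();
              // Bootstrap address of the embedded broker, from the log below.
              props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:39958");
              Set<String> toDelete = new HashSet<>(Arrays.asList("topic-1", "topic-2"));
              try (AdminClient admin = AdminClient.create(props)) {
                  admin.deleteTopics(toDelete).all().get();
                  // Same 30 s budget as TestUtils.waitForCondition in the trace above.
                  long deadline = System.currentTimeMillis() + 30_000L;
                  while (System.currentTimeMillis() < deadline) {
                      Set<String> remaining = new HashSet<>(admin.listTopics().names().get());
                      remaining.retainAll(toDelete);
                      if (remaining.isEmpty()) {
                          return; // condition met: broker no longer lists the topics
                      }
                      Thread.sleep(100L); // poll interval; waitForCondition polls similarly
                  }
              }
              throw new AssertionError("Topics not deleted after 30000 milli seconds.");
          }
      }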
      

      Standard Error:

      Exception in thread "regex-source-integration-test-f66ad22b-dd62-4c81-be1d-21c34a86ee59-StreamThread-1" org.apache.kafka.streams.errors.TopologyException: Invalid topology: Topic foo is already matched for another regex pattern foo.* and hence cannot be matched to this regex pattern f.* any more.
      	at org.apache.kafka.streams.processor.internals.InternalTopologyBuilder$SourceNodeFactory.getTopics(InternalTopologyBuilder.java:255)
      	at org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.setRegexMatchedTopicsToSourceNodes(InternalTopologyBuilder.java:1067)
      	at org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.updateSubscriptions(InternalTopologyBuilder.java:1214)
      	at org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.updateSubscribedTopics(InternalTopologyBuilder.java:1876)
      	at org.apache.kafka.streams.processor.internals.TaskManager.updateSubscriptionsFromMetadata(TaskManager.java:402)
      	at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.subscription(StreamsPartitionAssignor.java:347)
      	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.metadata(ConsumerCoordinator.java:186)
      	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest(AbstractCoordinator.java:513)
      	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.initiateJoinGroup(AbstractCoordinator.java:462)
      	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:414)
      	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:358)
      	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:353)
      	at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1251)
      	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1216)
      	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1201)
      	at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:941)
      	at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:850)
      	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:805)
      	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:774)
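
      The stream thread in this run dies because two source patterns in the topology both match the newly created topic foo, and Kafka Streams only detects the overlap when the consumer subscription is refreshed against live topic metadata. A minimal sketch of a topology that runs into the same TopologyException follows; the pattern literals foo.* and f.* come from the error message above, and the class name is made up.

      import java.util.regex.Pattern;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.Topology;

      public class OverlappingPatternsSketch {
          public static void main(String[] args) {
              StreamsBuilder builder = new StreamsBuilder();
              // Two source nodes whose patterns both match a topic named "foo".
              builder.stream(Pattern.compile("foo.*"));
              builder.stream(Pattern.compile("f.*"));
              Topology topology = builder.build();
              // Building succeeds: the conflict only surfaces at runtime. Once a
              // topic "foo" exists, setRegexMatchedTopicsToSourceNodes assigns it
              // to the first pattern and throws TopologyException for the second,
              // which kills the StreamThread as shown in the stack trace above.
              System.out.println(topology.describe());
          }
      }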
      

       

      Standard Output:

      [2019-05-25 04:48:44,792] INFO Created server with tickTime 800 minSessionTimeout 1600 maxSessionTimeout 16000 datadir /tmp/kafka-8825395051688662620/version-2 snapdir /tmp/kafka-3471244370955335496/version-2 (org.apache.zookeeper.server.ZooKeeperServer:174)
      [2019-05-25 04:48:44,793] INFO binding to port /127.0.0.1:0 (org.apache.zookeeper.server.NIOServerCnxnFactory:89)
      [2019-05-25 04:48:44,827] INFO KafkaConfig values: 
      	advertised.host.name = null
      	advertised.listeners = null
      	advertised.port = null
      	alter.config.policy.class.name = null
      	alter.log.dirs.replication.quota.window.num = 11
      	alter.log.dirs.replication.quota.window.size.seconds = 1
      	authorizer.class.name = 
      	auto.create.topics.enable = true
      	auto.leader.rebalance.enable = true
      	background.threads = 10
      	broker.id = 0
      	broker.id.generation.enable = true
      	broker.rack = null
      	client.quota.callback.class = null
      	compression.type = producer
      	connection.failed.authentication.delay.ms = 100
      	connections.max.idle.ms = 600000
      	connections.max.reauth.ms = 0
      	control.plane.listener.name = null
      	controlled.shutdown.enable = true
      	controlled.shutdown.max.retries = 3
      	controlled.shutdown.retry.backoff.ms = 5000
      	controller.socket.timeout.ms = 30000
      	create.topic.policy.class.name = null
      	default.replication.factor = 1
      	delegation.token.expiry.check.interval.ms = 3600000
      	delegation.token.expiry.time.ms = 86400000
      	delegation.token.master.key = null
      	delegation.token.max.lifetime.ms = 604800000
      	delete.records.purgatory.purge.interval.requests = 1
      	delete.topic.enable = true
      	fetch.purgatory.purge.interval.requests = 1000
      	group.initial.rebalance.delay.ms = 0
      	group.max.session.timeout.ms = 1800000
      	group.max.size = 2147483647
      	group.min.session.timeout.ms = 0
      	host.name = localhost
      	inter.broker.listener.name = null
      	inter.broker.protocol.version = 2.3-IV1
      	kafka.metrics.polling.interval.secs = 10
      	kafka.metrics.reporters = []
      	leader.imbalance.check.interval.seconds = 300
      	leader.imbalance.per.broker.percentage = 10
      	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
      	listeners = null
      	log.cleaner.backoff.ms = 15000
      	log.cleaner.dedupe.buffer.size = 2097152
      	log.cleaner.delete.retention.ms = 86400000
      	log.cleaner.enable = true
      	log.cleaner.io.buffer.load.factor = 0.9
      	log.cleaner.io.buffer.size = 524288
      	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
      	log.cleaner.max.compaction.lag.ms = 9223372036854775807
      	log.cleaner.min.cleanable.ratio = 0.5
      	log.cleaner.min.compaction.lag.ms = 0
      	log.cleaner.threads = 1
      	log.cleanup.policy = [delete]
      	log.dir = /tmp/junit30138112216706354/junit261031386903701928
      	log.dirs = null
      	log.flush.interval.messages = 9223372036854775807
      	log.flush.interval.ms = null
      	log.flush.offset.checkpoint.interval.ms = 60000
      	log.flush.scheduler.interval.ms = 9223372036854775807
      	log.flush.start.offset.checkpoint.interval.ms = 60000
      	log.index.interval.bytes = 4096
      	log.index.size.max.bytes = 10485760
      	log.message.downconversion.enable = true
      	log.message.format.version = 2.3-IV1
      	log.message.timestamp.difference.max.ms = 9223372036854775807
      	log.message.timestamp.type = CreateTime
      	log.preallocate = false
      	log.retention.bytes = -1
      	log.retention.check.interval.ms = 300000
      	log.retention.hours = 168
      	log.retention.minutes = null
      	log.retention.ms = null
      	log.roll.hours = 168
      	log.roll.jitter.hours = 0
      	log.roll.jitter.ms = null
      	log.roll.ms = null
      	log.segment.bytes = 1073741824
      	log.segment.delete.delay.ms = 60000
      	max.connections = 2147483647
      	max.connections.per.ip = 2147483647
      	max.connections.per.ip.overrides = 
      	max.incremental.fetch.session.cache.slots = 1000
      	message.max.bytes = 1000000
      	metric.reporters = []
      	metrics.num.samples = 2
      	metrics.recording.level = INFO
      	metrics.sample.window.ms = 30000
      	min.insync.replicas = 1
      	num.io.threads = 8
      	num.network.threads = 3
      	num.partitions = 1
      	num.recovery.threads.per.data.dir = 1
      	num.replica.alter.log.dirs.threads = null
      	num.replica.fetchers = 1
      	offset.metadata.max.bytes = 4096
      	offsets.commit.required.acks = -1
      	offsets.commit.timeout.ms = 5000
      	offsets.load.buffer.size = 5242880
      	offsets.retention.check.interval.ms = 600000
      	offsets.retention.minutes = 10080
      	offsets.topic.compression.codec = 0
      	offsets.topic.num.partitions = 50
      	offsets.topic.replication.factor = 1
      	offsets.topic.segment.bytes = 104857600
      	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
      	password.encoder.iterations = 4096
      	password.encoder.key.length = 128
      	password.encoder.keyfactory.algorithm = null
      	password.encoder.old.secret = null
      	password.encoder.secret = null
      	port = 0
      	principal.builder.class = null
      	producer.purgatory.purge.interval.requests = 1000
      	queued.max.request.bytes = -1
      	queued.max.requests = 500
      	quota.consumer.default = 9223372036854775807
      	quota.producer.default = 9223372036854775807
      	quota.window.num = 11
      	quota.window.size.seconds = 1
      	replica.fetch.backoff.ms = 1000
      	replica.fetch.max.bytes = 1048576
      	replica.fetch.min.bytes = 1
      	replica.fetch.response.max.bytes = 10485760
      	replica.fetch.wait.max.ms = 500
      	replica.high.watermark.checkpoint.interval.ms = 5000
      	replica.lag.time.max.ms = 10000
      	replica.socket.receive.buffer.bytes = 65536
      	replica.socket.timeout.ms = 30000
      	replication.quota.window.num = 11
      	replication.quota.window.size.seconds = 1
      	request.timeout.ms = 30000
      	reserved.broker.max.id = 1000
      	sasl.client.callback.handler.class = null
      	sasl.enabled.mechanisms = [GSSAPI]
      	sasl.jaas.config = null
      	sasl.kerberos.kinit.cmd = /usr/bin/kinit
      	sasl.kerberos.min.time.before.relogin = 60000
      	sasl.kerberos.principal.to.local.rules = [DEFAULT]
      	sasl.kerberos.service.name = null
      	sasl.kerberos.ticket.renew.jitter = 0.05
      	sasl.kerberos.ticket.renew.window.factor = 0.8
      	sasl.login.callback.handler.class = null
      	sasl.login.class = null
      	sasl.login.refresh.buffer.seconds = 300
      	sasl.login.refresh.min.period.seconds = 60
      	sasl.login.refresh.window.factor = 0.8
      	sasl.login.refresh.window.jitter = 0.05
      	sasl.mechanism.inter.broker.protocol = GSSAPI
      	sasl.server.callback.handler.class = null
      	security.inter.broker.protocol = PLAINTEXT
      	socket.receive.buffer.bytes = 102400
      	socket.request.max.bytes = 104857600
      	socket.send.buffer.bytes = 102400
      	ssl.cipher.suites = []
      	ssl.client.auth = none
      	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
      	ssl.endpoint.identification.algorithm = https
      	ssl.key.password = null
      	ssl.keymanager.algorithm = SunX509
      	ssl.keystore.location = null
      	ssl.keystore.password = null
      	ssl.keystore.type = JKS
      	ssl.principal.mapping.rules = [DEFAULT]
      	ssl.protocol = TLS
      	ssl.provider = null
      	ssl.secure.random.implementation = null
      	ssl.trustmanager.algorithm = PKIX
      	ssl.truststore.location = null
      	ssl.truststore.password = null
      	ssl.truststore.type = JKS
      	transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
      	transaction.max.timeout.ms = 900000
      	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
      	transaction.state.log.load.buffer.size = 5242880
      	transaction.state.log.min.isr = 2
      	transaction.state.log.num.partitions = 50
      	transaction.state.log.replication.factor = 3
      	transaction.state.log.segment.bytes = 104857600
      	transactional.id.expiration.ms = 604800000
      	unclean.leader.election.enable = false
      	zookeeper.connect = 127.0.0.1:39817
      	zookeeper.connection.timeout.ms = null
      	zookeeper.max.in.flight.requests = 10
      	zookeeper.session.timeout.ms = 10000
      	zookeeper.set.acl = false
      	zookeeper.sync.time.ms = 2000
       (kafka.server.KafkaConfig:346)
      [2019-05-25 04:48:44,828] INFO starting (kafka.server.KafkaServer:66)
      [2019-05-25 04:48:44,828] INFO Connecting to zookeeper on 127.0.0.1:39817 (kafka.server.KafkaServer:66)
      [2019-05-25 04:48:44,829] INFO [ZooKeeperClient Kafka server] Initializing a new session to 127.0.0.1:39817. (kafka.zookeeper.ZooKeeperClient:66)
      [2019-05-25 04:48:44,829] INFO Initiating client connection, connectString=127.0.0.1:39817 sessionTimeout=10000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@5018c07d (org.apache.zookeeper.ZooKeeper:442)
      [2019-05-25 04:48:44,832] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient:66)
      [2019-05-25 04:48:44,835] INFO Opening socket connection to server localhost/127.0.0.1:39817. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:1025)
      [2019-05-25 04:48:44,836] INFO Accepted socket connection from /127.0.0.1:50092 (org.apache.zookeeper.server.NIOServerCnxnFactory:222)
      [2019-05-25 04:48:44,836] INFO Socket connection established to localhost/127.0.0.1:39817, initiating session (org.apache.zookeeper.ClientCnxn:879)
      [2019-05-25 04:48:44,836] INFO Client attempting to establish new session at /127.0.0.1:50092 (org.apache.zookeeper.server.ZooKeeperServer:949)
      [2019-05-25 04:48:44,837] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog:216)
      [2019-05-25 04:48:44,837] INFO Established session 0x100aaf4d5d70000 with negotiated timeout 10000 for client /127.0.0.1:50092 (org.apache.zookeeper.server.ZooKeeperServer:694)
      [2019-05-25 04:48:44,839] INFO Session establishment complete on server localhost/127.0.0.1:39817, sessionid = 0x100aaf4d5d70000, negotiated timeout = 10000 (org.apache.zookeeper.ClientCnxn:1299)
      [2019-05-25 04:48:44,841] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient:66)
      [2019-05-25 04:48:44,867] INFO Got user-level KeeperException when processing sessionid:0x100aaf4d5d70000 type:create cxid:0x2 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers (org.apache.zookeeper.server.PrepRequestProcessor:653)
      [2019-05-25 04:48:44,877] INFO Got user-level KeeperException when processing sessionid:0x100aaf4d5d70000 type:create cxid:0x6 zxid:0x7 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config (org.apache.zookeeper.server.PrepRequestProcessor:653)
      [2019-05-25 04:48:44,880] INFO Got user-level KeeperException when processing sessionid:0x100aaf4d5d70000 type:create cxid:0x9 zxid:0xa txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin (org.apache.zookeeper.server.PrepRequestProcessor:653)
      [2019-05-25 04:48:44,890] INFO Got user-level KeeperException when processing sessionid:0x100aaf4d5d70000 type:create cxid:0x15 zxid:0x15 txntype:-1 reqpath:n/a Error Path:/cluster Error:KeeperErrorCode = NoNode for /cluster (org.apache.zookeeper.server.PrepRequestProcessor:653)
      [2019-05-25 04:48:44,891] INFO Cluster ID = RbC14HrzQwetftC44Ozhrw (kafka.server.KafkaServer:66)
      [2019-05-25 04:48:44,891] WARN No meta.properties file under dir /tmp/junit30138112216706354/junit261031386903701928/meta.properties (kafka.server.BrokerMetadataCheckpoint:70)
      [2019-05-25 04:48:44,894] INFO KafkaConfig values: (identical to the values logged above) (kafka.server.KafkaConfig:346)
      [2019-05-25 04:48:44,896] INFO KafkaConfig values: (identical to the values logged above) (kafka.server.KafkaConfig:346)
      [2019-05-25 04:48:44,900] INFO Loading logs. (kafka.log.LogManager:66)
      [2019-05-25 04:48:44,900] INFO Logs loading complete in 0 ms. (kafka.log.LogManager:66)
      [2019-05-25 04:48:44,901] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager:66)
      [2019-05-25 04:48:44,902] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager:66)
      [2019-05-25 04:48:44,902] INFO Starting the log cleaner (kafka.log.LogCleaner:66)
      [2019-05-25 04:48:44,913] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2019-05-25 04:48:44,913] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2019-05-25 04:48:44,913] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2019-05-25 04:48:44,951] INFO Awaiting socket connections on localhost:39958. (kafka.network.Acceptor:66)
      [2019-05-25 04:48:44,951] WARN [AdminClient clientId=adminclient-9] Connection to node 0 (localhost/127.0.0.1:37976) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:748)
      [2019-05-25 04:48:44,953] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner:66)
      [2019-05-25 04:48:44,972] INFO [SocketServer brokerId=0] Created data-plane acceptor and processors for endpoint : EndPoint(localhost,0,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer:66)
      [2019-05-25 04:48:44,972] INFO [SocketServer brokerId=0] Started 1 acceptor threads for data-plane (kafka.network.SocketServer:66)
      [2019-05-25 04:48:44,988] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:48:45,008] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:48:45,021] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient:66)
      [2019-05-25 04:48:45,022] INFO Stat of the created znode at /brokers/ids/0 is: 24,24,1558759725021,1558759725021,1,0,0,72245562574307328,190,0,24
       (kafka.zk.KafkaZkClient:66)
      [2019-05-25 04:48:45,022] INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(localhost,39958,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 24 (kafka.zk.KafkaZkClient:66)
      [2019-05-25 04:48:45,022] WARN No meta.properties file under dir /tmp/junit30138112216706354/junit261031386903701928/meta.properties (kafka.server.BrokerMetadataCheckpoint:70)
      [2019-05-25 04:48:45,043] INFO [ExpirationReaper-0-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:48:45,043] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:48:45,044] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler:66)
      [2019-05-25 04:48:45,351] INFO SessionTrackerImpl exited loop! (org.apache.zookeeper.server.SessionTrackerImpl:163)
      [2019-05-25 04:48:46,156] WARN [AdminClient clientId=adminclient-9] Connection to node 0 (localhost/127.0.0.1:37976) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:748)
      [2019-05-25 04:48:46,654] INFO [ControllerEventThread controllerId=0] Starting (kafka.controller.ControllerEventManager$ControllerEventThread:66)
      [2019-05-25 04:48:46,655] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:48:46,655] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:48:46,656] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:48:46,657] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator:66)
      [2019-05-25 04:48:46,657] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator:66)
      [2019-05-25 04:48:46,657] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager:66)
      [2019-05-25 04:48:46,658] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient:66)
      [2019-05-25 04:48:46,658] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager:66)
      [2019-05-25 04:48:46,659] INFO [Controller id=0] 0 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,659] INFO [Controller id=0] Registering handlers (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,660] INFO [Controller id=0] Deleting log dir event notifications (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,660] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator:66)
      [2019-05-25 04:48:46,660] INFO [Controller id=0] Deleting isr change notifications (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,660] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator:66)
      [2019-05-25 04:48:46,660] INFO [Controller id=0] Initializing controller context (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,661] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager:66)
      [2019-05-25 04:48:46,662] INFO [Controller id=0] Initialized broker epochs cache: Map(0 -> 24) (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,663] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread:66)
      [2019-05-25 04:48:46,664] INFO [RequestSendThread controllerId=0] Starting (kafka.controller.RequestSendThread:66)
      [2019-05-25 04:48:46,664] INFO [Controller id=0] Partitions being reassigned: Map() (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,665] INFO [Controller id=0] Currently active brokers in the cluster: Set(0) (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,665] INFO [Controller id=0] Currently shutting brokers in the cluster: Set() (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,665] INFO [Controller id=0] Current list of topics in the cluster: Set() (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,665] INFO [Controller id=0] Fetching topic deletions in progress (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,665] INFO [SocketServer brokerId=0] Started data-plane processors for 1 acceptors (kafka.network.SocketServer:66)
      [2019-05-25 04:48:46,665] INFO [Controller id=0] List of topics to be deleted:  (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,665] INFO [Controller id=0] List of topics ineligible for deletion:  (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,665] INFO [Controller id=0] Initializing topic deletion manager (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,665] INFO Kafka version: 5.3.0-ccs-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser:117)
      [2019-05-25 04:48:46,666] INFO [Topic Deletion Manager 0] Initializing manager with initial deletions: Set(), initial ineligible deletions: Set() (kafka.controller.TopicDeletionManager:66)
      [2019-05-25 04:48:46,666] INFO Kafka commitId: a9f6e87b7820377c (org.apache.kafka.common.utils.AppInfoParser:118)
      [2019-05-25 04:48:46,666] INFO Kafka startTimeMs: 1558759724792 (org.apache.kafka.common.utils.AppInfoParser:119)
      [2019-05-25 04:48:46,666] INFO [Controller id=0] Sending update metadata request (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,666] INFO [KafkaServer id=0] started (kafka.server.KafkaServer:66)
      [2019-05-25 04:48:46,666] INFO [ReplicaStateMachine controllerId=0] Initializing replica state (kafka.controller.ZkReplicaStateMachine:66)
      [2019-05-25 04:48:46,666] INFO [ReplicaStateMachine controllerId=0] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine:66)
      [2019-05-25 04:48:46,666] INFO [ReplicaStateMachine controllerId=0] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine:66)
      [2019-05-25 04:48:46,666] INFO [PartitionStateMachine controllerId=0] Initializing partition state (kafka.controller.ZkPartitionStateMachine:66)
      [2019-05-25 04:48:46,666] INFO [PartitionStateMachine controllerId=0] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine:66)
      [2019-05-25 04:48:46,667] INFO AdminClientConfig values: 
      	bootstrap.servers = [localhost:39958]
      	client.dns.lookup = default
      	client.id = 
      	connections.max.idle.ms = 300000
      	metadata.max.age.ms = 300000
      	metric.reporters = []
      	metrics.num.samples = 2
      	metrics.recording.level = INFO
      	metrics.sample.window.ms = 30000
      	receive.buffer.bytes = 65536
      	reconnect.backoff.max.ms = 1000
      	reconnect.backoff.ms = 50
      	request.timeout.ms = 120000
      	retries = 5
      	retry.backoff.ms = 100
      	sasl.client.callback.handler.class = null
      	sasl.jaas.config = null
      	sasl.kerberos.kinit.cmd = /usr/bin/kinit
      	sasl.kerberos.min.time.before.relogin = 60000
      	sasl.kerberos.service.name = null
      	sasl.kerberos.ticket.renew.jitter = 0.05
      	sasl.kerberos.ticket.renew.window.factor = 0.8
      	sasl.login.callback.handler.class = null
      	sasl.login.class = null
      	sasl.login.refresh.buffer.seconds = 300
      	sasl.login.refresh.min.period.seconds = 60
      	sasl.login.refresh.window.factor = 0.8
      	sasl.login.refresh.window.jitter = 0.05
      	sasl.mechanism = GSSAPI
      	security.protocol = PLAINTEXT
      	send.buffer.bytes = 131072
      	ssl.cipher.suites = null
      	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
      	ssl.endpoint.identification.algorithm = https
      	ssl.key.password = null
      	ssl.keymanager.algorithm = SunX509
      	ssl.keystore.location = null
      	ssl.keystore.password = null
      	ssl.keystore.type = JKS
      	ssl.protocol = TLS
      	ssl.provider = null
      	ssl.secure.random.implementation = null
      	ssl.trustmanager.algorithm = PKIX
      	ssl.truststore.location = null
      	ssl.truststore.password = null
      	ssl.truststore.type = JKS
       (org.apache.kafka.clients.admin.AdminClientConfig:346)
      [2019-05-25 04:48:46,667] INFO [Controller id=0] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,667] INFO [Controller id=0] Removing partitions Set() from the list of reassigned partitions in zookeeper (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,667] INFO [Controller id=0] No more partitions need to be reassigned. Deleting zk path /admin/reassign_partitions (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,668] INFO [Controller id=0] Partitions undergoing preferred replica election:  (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,668] INFO [Controller id=0] Partitions that completed preferred replica election:  (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,667] INFO [RequestSendThread controllerId=0] Controller 0 connected to localhost:39958 (id: 0 rack: null) for sending state change requests (kafka.controller.RequestSendThread:66)
      [2019-05-25 04:48:46,668] INFO [Controller id=0] Skipping preferred replica election for partitions due to topic deletion:  (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,669] INFO [Controller id=0] Resuming preferred replica election for partitions:  (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,669] INFO [Controller id=0] Starting preferred replica leader election for partitions  (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,668] INFO Kafka version: 5.3.0-ccs-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser:117)
      [2019-05-25 04:48:46,669] INFO Kafka commitId: a9f6e87b7820377c (org.apache.kafka.common.utils.AppInfoParser:118)
      [2019-05-25 04:48:46,669] INFO Kafka startTimeMs: 1558759726668 (org.apache.kafka.common.utils.AppInfoParser:119)
      [2019-05-25 04:48:46,670] INFO Got user-level KeeperException when processing sessionid:0x100aaf4d5d70000 type:multi cxid:0x38 zxid:0x1c txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election (org.apache.zookeeper.server.PrepRequestProcessor:596)
      [2019-05-25 04:48:46,670] INFO [Controller id=0] Starting the controller scheduler (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,673] INFO Creating topic topic-1 with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient:66)
      [2019-05-25 04:48:46,674] INFO Got user-level KeeperException when processing sessionid:0x100aaf4d5d70000 type:setData cxid:0x3e zxid:0x1d txntype:-1 reqpath:n/a Error Path:/config/topics/topic-1 Error:KeeperErrorCode = NoNode for /config/topics/topic-1 (org.apache.zookeeper.server.PrepRequestProcessor:653)
      [2019-05-25 04:48:46,676] INFO [Controller id=0] New topics: [Set(topic-1)], deleted topics: [Set()], new partition replica assignment [Map(topic-1-0 -> Vector(0))] (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,676] INFO [Controller id=0] New partition creation callback for topic-1-0 (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:46,679] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(topic-1-0) (kafka.server.ReplicaFetcherManager:66)
      [2019-05-25 04:48:46,695] INFO [Log partition=topic-1-0, dir=/tmp/junit30138112216706354/junit261031386903701928] Loading producer state till offset 0 with message format version 2 (kafka.log.Log:66)
      [2019-05-25 04:48:46,696] INFO [Log partition=topic-1-0, dir=/tmp/junit30138112216706354/junit261031386903701928] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log:66)
      [2019-05-25 04:48:46,697] INFO Created log for partition topic-1-0 in /tmp/junit30138112216706354/junit261031386903701928 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.3-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager:66)
      [2019-05-25 04:48:46,697] INFO [Partition topic-1-0 broker=0] No checkpointed highwatermark is found for partition topic-1-0 (kafka.cluster.Partition:66)
      [2019-05-25 04:48:46,697] INFO Replica loaded for partition topic-1-0 with initial high watermark 0 (kafka.cluster.Replica:66)
      [2019-05-25 04:48:46,697] INFO [Partition topic-1-0 broker=0] topic-1-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition:66)
      [2019-05-25 04:48:47,031] INFO AdminClientConfig values: (identical to the values logged above) (org.apache.kafka.clients.admin.AdminClientConfig:346)
      [2019-05-25 04:48:47,031] INFO Kafka version: 5.3.0-ccs-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser:117)
      [2019-05-25 04:48:47,031] INFO Kafka commitId: a9f6e87b7820377c (org.apache.kafka.common.utils.AppInfoParser:118)
      [2019-05-25 04:48:47,032] INFO Kafka startTimeMs: 1558759727031 (org.apache.kafka.common.utils.AppInfoParser:119)
      [2019-05-25 04:48:47,048] INFO Creating topic topic-2 with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient:66)
      [2019-05-25 04:48:47,049] INFO Got user-level KeeperException when processing sessionid:0x100aaf4d5d70000 type:setData cxid:0x48 zxid:0x23 txntype:-1 reqpath:n/a Error Path:/config/topics/topic-2 Error:KeeperErrorCode = NoNode for /config/topics/topic-2 (org.apache.zookeeper.server.PrepRequestProcessor:653)
      [2019-05-25 04:48:47,052] INFO [Controller id=0] New topics: [Set(topic-2)], deleted topics: [Set()], new partition replica assignment [Map(topic-2-0 -> Vector(0))] (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:47,052] INFO [Controller id=0] New partition creation callback for topic-2-0 (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:47,077] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(topic-2-0) (kafka.server.ReplicaFetcherManager:66)
      [2019-05-25 04:48:47,078] INFO [Log partition=topic-2-0, dir=/tmp/junit30138112216706354/junit261031386903701928] Loading producer state till offset 0 with message format version 2 (kafka.log.Log:66)
      [2019-05-25 04:48:47,079] INFO [Log partition=topic-2-0, dir=/tmp/junit30138112216706354/junit261031386903701928] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log:66)
      [2019-05-25 04:48:47,079] INFO Created log for partition topic-2-0 in /tmp/junit30138112216706354/junit261031386903701928 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.3-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager:66)
      [2019-05-25 04:48:47,260] WARN [AdminClient clientId=adminclient-9] Connection to node 0 (localhost/127.0.0.1:37976) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:748)
      [2019-05-25 04:48:47,308] INFO [Partition topic-2-0 broker=0] No checkpointed highwatermark is found for partition topic-2-0 (kafka.cluster.Partition:66)
      [2019-05-25 04:48:47,308] INFO Replica loaded for partition topic-2-0 with initial high watermark 0 (kafka.cluster.Replica:66)
      [2019-05-25 04:48:47,308] INFO [Partition topic-2-0 broker=0] topic-2-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition:66)
      [2019-05-25 04:48:47,505] INFO AdminClientConfig values: 
      	bootstrap.servers = [localhost:39958]
	... (remaining AdminClientConfig values identical to the dump above) ...
       (org.apache.kafka.clients.admin.AdminClientConfig:346)
      [2019-05-25 04:48:47,506] INFO Kafka version: 5.3.0-ccs-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser:117)
      [2019-05-25 04:48:47,506] INFO Kafka commitId: a9f6e87b7820377c (org.apache.kafka.common.utils.AppInfoParser:118)
      [2019-05-25 04:48:47,506] INFO Kafka startTimeMs: 1558759727506 (org.apache.kafka.common.utils.AppInfoParser:119)
      [2019-05-25 04:48:47,523] INFO Creating topic topic-A with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient:66)
      [2019-05-25 04:48:47,531] INFO Got user-level KeeperException when processing sessionid:0x100aaf4d5d70000 type:setData cxid:0x52 zxid:0x29 txntype:-1 reqpath:n/a Error Path:/config/topics/topic-A Error:KeeperErrorCode = NoNode for /config/topics/topic-A (org.apache.zookeeper.server.PrepRequestProcessor:653)
      [2019-05-25 04:48:47,543] INFO [Controller id=0] New topics: [Set(topic-A)], deleted topics: [Set()], new partition replica assignment [Map(topic-A-0 -> Vector(0))] (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:47,543] INFO [Controller id=0] New partition creation callback for topic-A-0 (kafka.controller.KafkaController:66)
      [2019-05-25 04:48:47,546] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(topic-A-0) (kafka.server.ReplicaFetcherManager:66)
      [2019-05-25 04:48:47,553] INFO [Log partition=topic-A-0, dir=/tmp/junit30138112216706354/junit261031386903701928] Loading producer state till offset 0 with message format version 2 (kafka.log.Log:66)
      [2019-05-25 04:48:47,554] INFO [Log partition=topic-A-0, dir=/tmp/junit30138112216706354/junit261031386903701928] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log:66)
      [2019-05-25 04:48:47,555] INFO Created log for partition topic-A-0 in /tmp/junit30138112216706354/junit261031386903701928 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.3-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager:66)
      [2019-05-25 04:48:47,555] INFO [Partition topic-A-0 broker=0] No checkpointed highwatermark is found for partition topic-A-0 (kafka.cluster.Partition:66)
      [2019-05-25 04:48:47,555] INFO Replica loaded for partition topic-A-0 with initial high watermark 0 (kafka.cluster.Replica:66)
      [2019-05-25 04:48:47,555] INFO [Partition topic-A-0 broker=0] 
      ...[truncated 390986 chars]...
	... (remainder of this AdminClientConfig dump identical to the values above) ...
       (org.apache.kafka.clients.admin.AdminClientConfig:346)
      [2019-05-25 04:51:12,283] INFO Kafka version: 5.3.0-ccs-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser:117)
      [2019-05-25 04:51:12,283] INFO Kafka commitId: a9f6e87b7820377c (org.apache.kafka.common.utils.AppInfoParser:118)
      [2019-05-25 04:51:12,283] INFO Kafka startTimeMs: 1558759872283 (org.apache.kafka.common.utils.AppInfoParser:119)
      [2019-05-25 04:51:12,304] INFO [Controller id=0] Starting topic deletion for topics outputTopic (kafka.controller.KafkaController:66)
      [2019-05-25 04:51:12,307] INFO [Topic Deletion Manager 0] Handling deletion for topics outputTopic (kafka.controller.TopicDeletionManager:66)
      [2019-05-25 04:51:12,307] INFO [Topic Deletion Manager 0] Deletion of topic outputTopic (re)started (kafka.controller.TopicDeletionManager:66)
      [2019-05-25 04:51:12,307] INFO [Topic Deletion Manager 0] Topic deletion callback for outputTopic (kafka.controller.TopicDeletionManager:66)
      [2019-05-25 04:51:12,308] INFO [Topic Deletion Manager 0] Partition deletion callback for outputTopic-0 (kafka.controller.TopicDeletionManager:66)
      [2019-05-25 04:51:12,309] INFO [GroupMetadataManager brokerId=0] Group 817833dc-0931-4e0c-a55a-c1cdc59e816c transitioned to Dead in generation 2 (kafka.coordinator.group.GroupMetadataManager:66)
      [2019-05-25 04:51:12,316] INFO [GroupCoordinator 0]: Removed 1 offsets associated with deleted partitions: outputTopic-0. (kafka.coordinator.group.GroupCoordinator:66)
      [2019-05-25 04:51:12,321] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(outputTopic-0) (kafka.server.ReplicaFetcherManager:66)
      [2019-05-25 04:51:12,321] INFO [ReplicaAlterLogDirsManager on broker 0] Removed fetcher for partitions Set(outputTopic-0) (kafka.server.ReplicaAlterLogDirsManager:66)
      [2019-05-25 04:51:12,322] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(outputTopic-0) (kafka.server.ReplicaFetcherManager:66)
      [2019-05-25 04:51:12,322] INFO [ReplicaAlterLogDirsManager on broker 0] Removed fetcher for partitions Set(outputTopic-0) (kafka.server.ReplicaAlterLogDirsManager:66)
      [2019-05-25 04:51:12,323] INFO The cleaning for partition outputTopic-0 is aborted and paused (kafka.log.LogCleaner:66)
      [2019-05-25 04:51:12,323] INFO The cleaning for partition outputTopic-0 is aborted (kafka.log.LogCleaner:66)
      [2019-05-25 04:51:12,350] INFO Log for partition outputTopic-0 is renamed to /tmp/junit30138112216706354/junit261031386903701928/outputTopic-0.6b3c9a880e504a60826e12fa80c3be57-delete and is scheduled for deletion (kafka.log.LogManager:66)
      [2019-05-25 04:51:12,351] INFO [Topic Deletion Manager 0] Handling deletion for topics outputTopic (kafka.controller.TopicDeletionManager:66)
      [2019-05-25 04:51:12,360] INFO [Topic Deletion Manager 0] Deletion of topic outputTopic successfully completed (kafka.controller.TopicDeletionManager:66)
      [2019-05-25 04:51:12,365] INFO [Controller id=0] New topics: [Set()], deleted topics: [Set()], new partition replica assignment [Map()] (kafka.controller.KafkaController:66)
      [2019-05-25 04:51:12,436] INFO AdminClientConfig values: 
	bootstrap.servers = [localhost:39958]
	... (remaining values identical to the AdminClientConfig dumps above) ...
       (org.apache.kafka.clients.admin.AdminClientConfig:346)
      [2019-05-25 04:51:12,437] INFO Kafka version: 5.3.0-ccs-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser:117)
      [2019-05-25 04:51:12,437] INFO Kafka commitId: a9f6e87b7820377c (org.apache.kafka.common.utils.AppInfoParser:118)
      [2019-05-25 04:51:12,437] INFO Kafka startTimeMs: 1558759872437 (org.apache.kafka.common.utils.AppInfoParser:119)
      [2019-05-25 04:51:12,476] INFO Creating topic outputTopic with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient:66)
      [2019-05-25 04:51:12,477] INFO Got user-level KeeperException when processing sessionid:0x100aaf4d5d70000 type:setData cxid:0xcdf zxid:0x11f txntype:-1 reqpath:n/a Error Path:/config/topics/outputTopic Error:KeeperErrorCode = NoNode for /config/topics/outputTopic (org.apache.zookeeper.server.PrepRequestProcessor:653)
      [2019-05-25 04:51:12,479] INFO [Controller id=0] New topics: [Set(outputTopic)], deleted topics: [Set()], new partition replica assignment [Map(outputTopic-0 -> Vector(0))] (kafka.controller.KafkaController:66)
      [2019-05-25 04:51:12,479] INFO [Controller id=0] New partition creation callback for outputTopic-0 (kafka.controller.KafkaController:66)
      [2019-05-25 04:51:12,483] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(outputTopic-0) (kafka.server.ReplicaFetcherManager:66)
      [2019-05-25 04:51:12,492] INFO [Log partition=outputTopic-0, dir=/tmp/junit30138112216706354/junit261031386903701928] Loading producer state till offset 0 with message format version 2 (kafka.log.Log:66)
      [2019-05-25 04:51:12,493] INFO [Log partition=outputTopic-0, dir=/tmp/junit30138112216706354/junit261031386903701928] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log:66)
      [2019-05-25 04:51:12,494] INFO Created log for partition outputTopic-0 in /tmp/junit30138112216706354/junit261031386903701928 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.3-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager:66)
      [2019-05-25 04:51:12,494] INFO Replica loaded for partition outputTopic-0 with initial high watermark 0 (kafka.cluster.Replica:66)
      [2019-05-25 04:51:12,495] INFO [Partition outputTopic-0 broker=0] outputTopic-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition:66)
      [2019-05-25 04:51:12,517] INFO StreamsConfig values: 
      	application.id = regex-source-integration-test
      	application.server = 
      	bootstrap.servers = [localhost:39958]
      	buffered.records.per.partition = 1000
      	cache.max.bytes.buffering = 0
      	client.id = 
      	commit.interval.ms = 100
      	connections.max.idle.ms = 540000
      	default.deserialization.exception.handler = class org.apache.kafka.streams.errors.LogAndFailExceptionHandler
      	default.key.serde = class org.apache.kafka.common.serialization.Serdes$StringSerde
      	default.production.exception.handler = class org.apache.kafka.streams.errors.DefaultProductionExceptionHandler
      	default.timestamp.extractor = class org.apache.kafka.streams.processor.FailOnInvalidTimestamp
      	default.value.serde = class org.apache.kafka.common.serialization.Serdes$StringSerde
      	max.task.idle.ms = 0
      	metadata.max.age.ms = 1000
      	metric.reporters = []
      	metrics.num.samples = 2
      	metrics.recording.level = DEBUG
      	metrics.sample.window.ms = 30000
      	num.standby.replicas = 0
      	num.stream.threads = 1
      	partition.grouper = class org.apache.kafka.streams.processor.DefaultPartitionGrouper
      	poll.ms = 100
      	processing.guarantee = at_least_once
      	receive.buffer.bytes = 32768
      	reconnect.backoff.max.ms = 1000
      	reconnect.backoff.ms = 50
      	replication.factor = 1
      	request.timeout.ms = 40000
      	retries = 0
      	retry.backoff.ms = 100
      	rocksdb.config.setter = null
      	security.protocol = PLAINTEXT
      	send.buffer.bytes = 131072
      	state.cleanup.delay.ms = 600000
      	state.dir = /tmp/kafka-5537357715057480745
      	topology.optimization = none
      	upgrade.from = null
      	windowstore.changelog.additional.retention.ms = 86400000
       (org.apache.kafka.streams.StreamsConfig:346)
      [2019-05-25 04:51:12,531] INFO AdminClientConfig values: 
      	bootstrap.servers = [localhost:39958]
      	client.dns.lookup = default
      	client.id = regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-admin
      	connections.max.idle.ms = 300000
      	metadata.max.age.ms = 1000
      	metric.reporters = []
      	metrics.num.samples = 2
      	metrics.recording.level = DEBUG
      	metrics.sample.window.ms = 30000
      	receive.buffer.bytes = 65536
      	reconnect.backoff.max.ms = 1000
      	reconnect.backoff.ms = 50
      	request.timeout.ms = 120000
      	retries = 5
      	retry.backoff.ms = 100
	... (remaining sasl.* and ssl.* values identical to the AdminClientConfig dumps above) ...
       (org.apache.kafka.clients.admin.AdminClientConfig:346)
      [2019-05-25 04:51:12,541] INFO Kafka version: 5.3.0-ccs-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser:117)
      [2019-05-25 04:51:12,541] INFO Kafka commitId: a9f6e87b7820377c (org.apache.kafka.common.utils.AppInfoParser:118)
      [2019-05-25 04:51:12,541] INFO Kafka startTimeMs: 1558759872541 (org.apache.kafka.common.utils.AppInfoParser:119)
      [2019-05-25 04:51:12,543] INFO stream-thread [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1] Creating restore consumer client (org.apache.kafka.streams.processor.internals.StreamThread:609)
      [2019-05-25 04:51:12,543] INFO ConsumerConfig values: 
      	allow.auto.create.topics = true
      	auto.commit.interval.ms = 5000
      	auto.offset.reset = none
      	bootstrap.servers = [localhost:39958]
      	check.crcs = true
      	client.dns.lookup = default
      	client.id = regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-restore-consumer
      	client.rack = 
      	connections.max.idle.ms = 540000
      	default.api.timeout.ms = 60000
      	enable.auto.commit = false
      	exclude.internal.topics = true
      	fetch.max.bytes = 52428800
      	fetch.max.wait.ms = 500
      	fetch.min.bytes = 1
      	group.id = null
      	group.instance.id = null
      	heartbeat.interval.ms = 3000
      	interceptor.classes = []
      	internal.leave.group.on.close = false
      	isolation.level = read_uncommitted
      	key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
      	max.partition.fetch.bytes = 1048576
      	max.poll.interval.ms = 300000
      	max.poll.records = 1000
      	metadata.max.age.ms = 1000
      	metric.reporters = []
      	metrics.num.samples = 2
      	metrics.recording.level = DEBUG
      	metrics.sample.window.ms = 30000
      	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
      	receive.buffer.bytes = 65536
      	reconnect.backoff.max.ms = 1000
      	reconnect.backoff.ms = 50
      	request.timeout.ms = 30000
      	retry.backoff.ms = 100
      	sasl.client.callback.handler.class = null
      	sasl.jaas.config = null
      	sasl.kerberos.kinit.cmd = /usr/bin/kinit
      	sasl.kerberos.min.time.before.relogin = 60000
      	sasl.kerberos.service.name = null
      	sasl.kerberos.ticket.renew.jitter = 0.05
      	sasl.kerberos.ticket.renew.window.factor = 0.8
      	sasl.login.callback.handler.class = null
      	sasl.login.class = null
      	sasl.login.refresh.buffer.seconds = 300
      	sasl.login.refresh.min.period.seconds = 60
      	sasl.login.refresh.window.factor = 0.8
      	sasl.login.refresh.window.jitter = 0.05
      	sasl.mechanism = GSSAPI
      	security.protocol = PLAINTEXT
      	send.buffer.bytes = 131072
      	session.timeout.ms = 10000
      	ssl.cipher.suites = null
      	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
      	ssl.endpoint.identification.algorithm = https
      	ssl.key.password = null
      	ssl.keymanager.algorithm = SunX509
      	ssl.keystore.location = null
      	ssl.keystore.password = null
      	ssl.keystore.type = JKS
      	ssl.protocol = TLS
      	ssl.provider = null
      	ssl.secure.random.implementation = null
      	ssl.trustmanager.algorithm = PKIX
      	ssl.truststore.location = null
      	ssl.truststore.password = null
      	ssl.truststore.type = JKS
      	value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
       (org.apache.kafka.clients.consumer.ConsumerConfig:346)
      [2019-05-25 04:51:12,545] INFO Kafka version: 5.3.0-ccs-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser:117)
      [2019-05-25 04:51:12,545] INFO Kafka commitId: a9f6e87b7820377c (org.apache.kafka.common.utils.AppInfoParser:118)
      [2019-05-25 04:51:12,545] INFO Kafka startTimeMs: 1558759872545 (org.apache.kafka.common.utils.AppInfoParser:119)
      [2019-05-25 04:51:12,546] INFO stream-thread [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1] Creating shared producer client (org.apache.kafka.streams.processor.internals.StreamThread:619)
      [2019-05-25 04:51:12,546] INFO ProducerConfig values: 
      	acks = 1
      	batch.size = 16384
      	bootstrap.servers = [localhost:39958]
      	buffer.memory = 33554432
      	client.dns.lookup = default
      	client.id = regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-producer
      	compression.type = none
      	connections.max.idle.ms = 540000
      	delivery.timeout.ms = 120000
      	enable.idempotence = false
      	interceptor.classes = []
      	key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
      	linger.ms = 100
      	max.block.ms = 60000
      	max.in.flight.requests.per.connection = 5
      	max.request.size = 1048576
      	metadata.max.age.ms = 1000
      	metric.reporters = []
      	metrics.num.samples = 2
      	metrics.recording.level = DEBUG
      	metrics.sample.window.ms = 30000
      	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
      	receive.buffer.bytes = 32768
      	reconnect.backoff.max.ms = 1000
      	reconnect.backoff.ms = 50
      	request.timeout.ms = 30000
      	retries = 2147483647
      	retry.backoff.ms = 100
      	sasl.client.callback.handler.class = null
      	sasl.jaas.config = null
      	sasl.kerberos.kinit.cmd = /usr/bin/kinit
      	sasl.kerberos.min.time.before.relogin = 60000
      	sasl.kerberos.service.name = null
      	sasl.kerberos.ticket.renew.jitter = 0.05
      	sasl.kerberos.ticket.renew.window.factor = 0.8
      	sasl.login.callback.handler.class = null
      	sasl.login.class = null
      	sasl.login.refresh.buffer.seconds = 300
      	sasl.login.refresh.min.period.seconds = 60
      	sasl.login.refresh.window.factor = 0.8
      	sasl.login.refresh.window.jitter = 0.05
      	sasl.mechanism = GSSAPI
      	security.protocol = PLAINTEXT
      	send.buffer.bytes = 131072
      	ssl.cipher.suites = null
      	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
      	ssl.endpoint.identification.algorithm = https
      	ssl.key.password = null
      	ssl.keymanager.algorithm = SunX509
      	ssl.keystore.location = null
      	ssl.keystore.password = null
      	ssl.keystore.type = JKS
      	ssl.protocol = TLS
      	ssl.provider = null
      	ssl.secure.random.implementation = null
      	ssl.trustmanager.algorithm = PKIX
      	ssl.truststore.location = null
      	ssl.truststore.password = null
      	ssl.truststore.type = JKS
      	transaction.timeout.ms = 60000
      	transactional.id = null
      	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
       (org.apache.kafka.clients.producer.ProducerConfig:346)
      [2019-05-25 04:51:12,548] INFO Kafka version: 5.3.0-ccs-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser:117)
      [2019-05-25 04:51:12,549] INFO Kafka commitId: a9f6e87b7820377c (org.apache.kafka.common.utils.AppInfoParser:118)
      [2019-05-25 04:51:12,549] INFO Kafka startTimeMs: 1558759872548 (org.apache.kafka.common.utils.AppInfoParser:119)
      [2019-05-25 04:51:12,550] INFO stream-thread [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1] Creating consumer client (org.apache.kafka.streams.processor.internals.StreamThread:662)
      [2019-05-25 04:51:12,550] INFO ConsumerConfig values: 
      	allow.auto.create.topics = true
      	auto.commit.interval.ms = 5000
      	auto.offset.reset = earliest
      	bootstrap.servers = [localhost:39958]
      	check.crcs = true
      	client.dns.lookup = default
      	client.id = regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-consumer
      	client.rack = 
      	connections.max.idle.ms = 540000
      	default.api.timeout.ms = 60000
      	enable.auto.commit = false
      	exclude.internal.topics = true
      	fetch.max.bytes = 52428800
      	fetch.max.wait.ms = 500
      	fetch.min.bytes = 1
      	group.id = regex-source-integration-test
      	group.instance.id = null
      	heartbeat.interval.ms = 3000
      	interceptor.classes = []
      	internal.leave.group.on.close = false
      	isolation.level = read_uncommitted
      	key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
      	max.partition.fetch.bytes = 1048576
      	max.poll.interval.ms = 300000
      	max.poll.records = 1000
      	metadata.max.age.ms = 1000
      	metric.reporters = []
      	metrics.num.samples = 2
      	metrics.recording.level = DEBUG
      	metrics.sample.window.ms = 30000
      	partition.assignment.strategy = [org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor]
	... (remaining values identical to the restore-consumer ConsumerConfig dump above) ...
       (org.apache.kafka.clients.consumer.ConsumerConfig:346)
      [2019-05-25 04:51:12,558] WARN The configuration 'admin.retries' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:354)
      [2019-05-25 04:51:12,559] WARN The configuration 'admin.retry.backoff.ms' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:354)
      [2019-05-25 04:51:12,559] INFO Kafka version: 5.3.0-ccs-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser:117)
      [2019-05-25 04:51:12,559] INFO Kafka commitId: a9f6e87b7820377c (org.apache.kafka.common.utils.AppInfoParser:118)
      [2019-05-25 04:51:12,560] INFO Kafka startTimeMs: 1558759872559 (org.apache.kafka.common.utils.AppInfoParser:119)
      [2019-05-25 04:51:12,560] INFO stream-client [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3] State transition from CREATED to REBALANCING (org.apache.kafka.streams.KafkaStreams:263)
      [2019-05-25 04:51:12,561] INFO stream-thread [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1] Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
      [2019-05-25 04:51:12,561] INFO stream-thread [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1] State transition from CREATED to STARTING (org.apache.kafka.streams.processor.internals.StreamThread:212)
      [2019-05-25 04:51:12,562] INFO StreamsConfig values: 
      	application.id = a88be74d-1716-4883-ac51-489e22829dae
      	application.server = 
      	bootstrap.servers = [localhost:9091]
      	buffered.records.per.partition = 1000
      	cache.max.bytes.buffering = 10485760
      	client.id = 
      	commit.interval.ms = 30000
      	connections.max.idle.ms = 540000
      	default.deserialization.exception.handler = class org.apache.kafka.streams.errors.LogAndFailExceptionHandler
      	default.key.serde = class org.apache.kafka.common.serialization.Serdes$ByteArraySerde
      	default.production.exception.handler = class org.apache.kafka.streams.errors.DefaultProductionExceptionHandler
      	default.timestamp.extractor = class org.apache.kafka.streams.processor.FailOnInvalidTimestamp
      	default.value.serde = class org.apache.kafka.common.serialization.Serdes$ByteArraySerde
      	max.task.idle.ms = 0
      	metadata.max.age.ms = 300000
      	metric.reporters = []
      	metrics.num.samples = 2
      	metrics.recording.level = DEBUG
      	metrics.sample.window.ms = 30000
      	num.standby.replicas = 0
      	num.stream.threads = 1
      	partition.grouper = class org.apache.kafka.streams.processor.DefaultPartitionGrouper
      	poll.ms = 100
      	processing.guarantee = at_least_once
      	receive.buffer.bytes = 32768
      	reconnect.backoff.max.ms = 1000
      	reconnect.backoff.ms = 50
      	replication.factor = 1
      	request.timeout.ms = 40000
      	retries = 0
      	retry.backoff.ms = 100
      	rocksdb.config.setter = null
      	security.protocol = PLAINTEXT
      	send.buffer.bytes = 131072
      	state.cleanup.delay.ms = 600000
      	state.dir = /tmp/kafka-6277650282384042061
      	topology.optimization = none
      	upgrade.from = null
      	windowstore.changelog.additional.retention.ms = 86400000
       (org.apache.kafka.streams.StreamsConfig:346)
      [2019-05-25 04:51:12,567] INFO [Consumer clientId=regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-consumer, groupId=regex-source-integration-test] Subscribed to pattern: 'topic-\d+' (org.apache.kafka.clients.consumer.KafkaConsumer:1027)
      [2019-05-25 04:51:12,578] INFO [Consumer clientId=regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-consumer, groupId=regex-source-integration-test] Cluster ID: RbC14HrzQwetftC44Ozhrw (org.apache.kafka.clients.Metadata:266)
      [2019-05-25 04:51:12,578] INFO [Consumer clientId=regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-consumer, groupId=regex-source-integration-test] Discovered group coordinator localhost:39958 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:728)
      [2019-05-25 04:51:12,587] INFO [Consumer clientId=regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-consumer, groupId=regex-source-integration-test] Revoking previously assigned partitions [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:477)
      [2019-05-25 04:51:12,587] INFO stream-thread [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1] State transition from STARTING to PARTITIONS_REVOKED (org.apache.kafka.streams.processor.internals.StreamThread:212)
      [2019-05-25 04:51:12,588] INFO [Consumer clientId=regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer:1068)
      [2019-05-25 04:51:12,588] INFO stream-thread [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1] partition revocation took 1 ms.
      	suspended active tasks: []
      	suspended standby tasks: [] (org.apache.kafka.streams.processor.internals.StreamThread:328)
      [2019-05-25 04:51:12,588] INFO [Consumer clientId=regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-consumer, groupId=regex-source-integration-test] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:505)
      [2019-05-25 04:51:12,607] INFO [Consumer clientId=regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-consumer, groupId=regex-source-integration-test] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:505)
      [2019-05-25 04:51:12,611] INFO [GroupCoordinator 0]: Preparing to rebalance group regex-source-integration-test in state PreparingRebalance with old generation 7 (__consumer_offsets-13) (reason: Adding new member regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-consumer-93b0aee9-d6ca-4ed4-a5c4-74da2a47ee36 with group instanceid None) (kafka.coordinator.group.GroupCoordinator:66)
      [2019-05-25 04:51:12,652] INFO [Producer clientId=regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-producer] Cluster ID: RbC14HrzQwetftC44Ozhrw (org.apache.kafka.clients.Metadata:266)
      [2019-05-25 04:51:12,668] INFO StreamsConfig values: 
      	application.id = e39dbd7b-cba4-4696-b1d9-55c85826d98c
	... (values identical to the preceding StreamsConfig dump) ...
      	state.dir = /tmp/kafka-1465298165731955315
      	topology.optimization = none
      	upgrade.from = null
      	windowstore.changelog.additional.retention.ms = 86400000
       (org.apache.kafka.streams.StreamsConfig:346)
      [2019-05-25 04:51:12,669] INFO stream-client [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3] State transition from REBALANCING to PENDING_SHUTDOWN (org.apache.kafka.streams.KafkaStreams:263)
      [2019-05-25 04:51:12,670] INFO stream-thread [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1] Informed to shut down (org.apache.kafka.streams.processor.internals.StreamThread:1192)
      [2019-05-25 04:51:12,670] INFO stream-thread [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1] State transition from PARTITIONS_REVOKED to PENDING_SHUTDOWN (org.apache.kafka.streams.processor.internals.StreamThread:212)
      [2019-05-25 04:51:12,770] INFO stream-thread [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1] Shutting down (org.apache.kafka.streams.processor.internals.StreamThread:1206)
      [2019-05-25 04:51:12,774] INFO [Consumer clientId=regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions (org.apache.kafka.clients.consumer.KafkaConsumer:1068)
      [2019-05-25 04:51:12,774] INFO [Producer clientId=regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-producer] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer:1153)
      [2019-05-25 04:51:16,573] INFO [GroupCoordinator 0]: Member regex-source-integration-test-0f4fe191-11be-48f5-9b08-dcf086cde3b7-StreamThread-1-consumer-a9df38c4-76ed-42d3-860f-cec8700df0e1 in group regex-source-integration-test has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator:66)
      [2019-05-25 04:51:16,574] INFO [GroupCoordinator 0]: Stabilized group regex-source-integration-test generation 8 (__consumer_offsets-13) (kafka.coordinator.group.GroupCoordinator:66)
      [2019-05-25 04:51:16,578] INFO Creating topic regex-source-integration-test-testStateStore-changelog with configuration {cleanup.policy=compact} and initial partition assignment Map(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient:66)
      [2019-05-25 04:51:16,578] INFO Got user-level KeeperException when processing sessionid:0x100aaf4d5d70000 type:setData cxid:0xce9 zxid:0x125 txntype:-1 reqpath:n/a Error Path:/config/topics/regex-source-integration-test-testStateStore-changelog Error:KeeperErrorCode = NoNode for /config/topics/regex-source-integration-test-testStateStore-changelog (org.apache.zookeeper.server.PrepRequestProcessor:653)
      [2019-05-25 04:51:16,581] INFO [Controller id=0] New topics: [Set(regex-source-integration-test-testStateStore-changelog)], deleted topics: [Set()], new partition replica assignment [Map(regex-source-integration-test-testStateStore-changelog-0 -> Vector(0))] (kafka.controller.KafkaController:66)
      [2019-05-25 04:51:16,581] INFO [Controller id=0] New partition creation callback for regex-source-integration-test-testStateStore-changelog-0 (kafka.controller.KafkaController:66)
      [2019-05-25 04:51:16,592] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(regex-source-integration-test-testStateStore-changelog-0) (kafka.server.ReplicaFetcherManager:66)
      [2019-05-25 04:51:17,750] INFO [Log partition=regex-source-integration-test-testStateStore-changelog-0, dir=/tmp/junit30138112216706354/junit261031386903701928] Loading producer state till offset 0 with message format version 2 (kafka.log.Log:66)
      [2019-05-25 04:51:17,751] INFO [Log partition=regex-source-integration-test-testStateStore-changelog-0, dir=/tmp/junit30138112216706354/junit261031386903701928] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log:66)
      [2019-05-25 04:51:17,751] INFO Created log for partition regex-source-integration-test-testStateStore-changelog-0 in /tmp/junit30138112216706354/junit261031386903701928 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.3-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager:66)
      [2019-05-25 04:51:17,764] INFO [Partition regex-source-integration-test-testStateStore-changelog-0 broker=0] No checkpointed highwatermark is found for partition regex-source-integration-test-testStateStore-changelog-0 (kafka.cluster.Partition:66)
      [2019-05-25 04:51:17,764] INFO Replica loaded for partition regex-source-integration-test-testStateStore-changelog-0 with initial high watermark 0 (kafka.cluster.Replica:66)
      [2019-05-25 04:51:17,764] INFO [Partition regex-source-integration-test-testStateStore-changelog-0 broker=0] regex-source-integration-test-testStateStore-changelog-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition:66)
      [2019-05-25 04:51:17,775] INFO stream-thread [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-consumer] Assigned tasks to clients as {52d96703-af46-4961-a090-dcca05b473f3=[activeTasks: ([0_0]) standbyTasks: ([]) assignedTasks: ([0_0]) prevActiveTasks: ([]) prevStandbyTasks: ([]) prevAssignedTasks: ([]) capacity: 1]}. (org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor:636)
      [2019-05-25 04:51:17,777] INFO [GroupCoordinator 0]: Assignment received from leader for group regex-source-integration-test for generation 8 (kafka.coordinator.group.GroupCoordinator:66)
      [2019-05-25 04:51:17,780] INFO [Consumer clientId=regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1-consumer, groupId=regex-source-integration-test] Successfully joined group with generation 8 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:469)
      [2019-05-25 04:51:17,785] INFO stream-thread [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1] State transition from PENDING_SHUTDOWN to DEAD (org.apache.kafka.streams.processor.internals.StreamThread:212)
      [2019-05-25 04:51:17,785] INFO stream-thread [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3-StreamThread-1] Shutdown complete (org.apache.kafka.streams.processor.internals.StreamThread:1226)
      [2019-05-25 04:51:17,786] INFO stream-client [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3] State transition from PENDING_SHUTDOWN to NOT_RUNNING (org.apache.kafka.streams.KafkaStreams:263)
      [2019-05-25 04:51:17,786] INFO stream-client [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3] Streams client stopped completely (org.apache.kafka.streams.KafkaStreams:898)
      [2019-05-25 04:51:17,787] INFO stream-client [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3] Already in the pending shutdown state, wait to complete shutdown (org.apache.kafka.streams.KafkaStreams:850)
      [2019-05-25 04:51:17,787] INFO stream-client [regex-source-integration-test-52d96703-af46-4961-a090-dcca05b473f3] Streams client stopped completely (org.apache.kafka.streams.KafkaStreams:898)
      [2019-05-25 04:51:17,789] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer:66)
      [2019-05-25 04:51:17,789] INFO [KafkaServer id=0] Starting controlled shutdown (kafka.server.KafkaServer:66)
      [2019-05-25 04:51:17,797] INFO [Controller id=0] Shutting down broker 0 (kafka.controller.KafkaController:66)
      [2019-05-25 04:51:17,801] INFO [KafkaServer id=0] Controlled shutdown succeeded (kafka.server.KafkaServer:66)
      [2019-05-25 04:51:17,801] INFO [/config/changes-event-process-thread]: Shutting down (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread:66)
      [2019-05-25 04:51:17,802] INFO [/config/changes-event-process-thread]: Stopped (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread:66)
      [2019-05-25 04:51:17,802] INFO [/config/changes-event-process-thread]: Shutdown completed (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread:66)
      [2019-05-25 04:51:17,803] INFO [SocketServer brokerId=0] Stopping socket server request processors (kafka.network.SocketServer:66)
      [2019-05-25 04:51:17,805] INFO [SocketServer brokerId=0] Stopped socket server request processors (kafka.network.SocketServer:66)
      [2019-05-25 04:51:17,805] INFO [data-plane Kafka Request Handler on Broker 0], shutting down (kafka.server.KafkaRequestHandlerPool:66)
      [2019-05-25 04:51:17,807] INFO [data-plane Kafka Request Handler on Broker 0], shut down completely (kafka.server.KafkaRequestHandlerPool:66)
      [2019-05-25 04:51:17,809] INFO [KafkaApi-0] Shutdown complete. (kafka.server.KafkaApis:66)
      [2019-05-25 04:51:17,809] INFO [ExpirationReaper-0-topic]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:17,984] INFO [ExpirationReaper-0-topic]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:17,984] INFO [ExpirationReaper-0-topic]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:17,985] INFO [TransactionCoordinator id=0] Shutting down. (kafka.coordinator.transaction.TransactionCoordinator:66)
      [2019-05-25 04:51:17,987] INFO [ProducerId Manager 0]: Shutdown complete: last producerId assigned 0 (kafka.coordinator.transaction.ProducerIdManager:66)
      [2019-05-25 04:51:17,987] INFO [Transaction State Manager 0]: Shutdown complete (kafka.coordinator.transaction.TransactionStateManager:66)
      [2019-05-25 04:51:17,987] INFO [Transaction Marker Channel Manager 0]: Shutting down (kafka.coordinator.transaction.TransactionMarkerChannelManager:66)
      [2019-05-25 04:51:17,988] INFO [Transaction Marker Channel Manager 0]: Stopped (kafka.coordinator.transaction.TransactionMarkerChannelManager:66)
      [2019-05-25 04:51:17,988] INFO [Transaction Marker Channel Manager 0]: Shutdown completed (kafka.coordinator.transaction.TransactionMarkerChannelManager:66)
      [2019-05-25 04:51:17,988] INFO [TransactionCoordinator id=0] Shutdown complete. (kafka.coordinator.transaction.TransactionCoordinator:66)
      [2019-05-25 04:51:17,988] INFO [GroupCoordinator 0]: Shutting down. (kafka.coordinator.group.GroupCoordinator:66)
      [2019-05-25 04:51:17,990] INFO [ExpirationReaper-0-Heartbeat]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,176] INFO [ExpirationReaper-0-Heartbeat]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,177] INFO [ExpirationReaper-0-Heartbeat]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,177] INFO [ExpirationReaper-0-Rebalance]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,264] INFO [ExpirationReaper-0-Rebalance]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,264] INFO [ExpirationReaper-0-Rebalance]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,264] INFO [GroupCoordinator 0]: Shutdown complete. (kafka.coordinator.group.GroupCoordinator:66)
      [2019-05-25 04:51:18,265] INFO [ReplicaManager broker=0] Shutting down (kafka.server.ReplicaManager:66)
      [2019-05-25 04:51:18,265] INFO [LogDirFailureHandler]: Shutting down (kafka.server.ReplicaManager$LogDirFailureHandler:66)
      [2019-05-25 04:51:18,265] INFO [LogDirFailureHandler]: Stopped (kafka.server.ReplicaManager$LogDirFailureHandler:66)
      [2019-05-25 04:51:18,266] INFO [LogDirFailureHandler]: Shutdown completed (kafka.server.ReplicaManager$LogDirFailureHandler:66)
      [2019-05-25 04:51:18,266] INFO [ReplicaFetcherManager on broker 0] shutting down (kafka.server.ReplicaFetcherManager:66)
      [2019-05-25 04:51:18,266] INFO [ReplicaFetcherManager on broker 0] shutdown completed (kafka.server.ReplicaFetcherManager:66)
      [2019-05-25 04:51:18,266] INFO [ReplicaAlterLogDirsManager on broker 0] shutting down (kafka.server.ReplicaAlterLogDirsManager:66)
      [2019-05-25 04:51:18,267] INFO [ReplicaAlterLogDirsManager on broker 0] shutdown completed (kafka.server.ReplicaAlterLogDirsManager:66)
      [2019-05-25 04:51:18,267] INFO [ExpirationReaper-0-Fetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,299] INFO [ExpirationReaper-0-Fetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,299] INFO [ExpirationReaper-0-Fetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,300] INFO [ExpirationReaper-0-Produce]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,399] INFO [ExpirationReaper-0-Produce]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,400] INFO [ExpirationReaper-0-Produce]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,403] INFO [ExpirationReaper-0-DeleteRecords]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,472] INFO [ExpirationReaper-0-DeleteRecords]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,473] INFO [ExpirationReaper-0-DeleteRecords]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,473] INFO [ExpirationReaper-0-ElectPreferredLeader]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,494] INFO [ExpirationReaper-0-ElectPreferredLeader]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,494] INFO [ExpirationReaper-0-ElectPreferredLeader]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2019-05-25 04:51:18,502] INFO [ReplicaManager broker=0] Shut down completely (kafka.server.ReplicaManager:66)
      [2019-05-25 04:51:18,503] INFO Shutting down. (kafka.log.LogManager:66)
      [2019-05-25 04:51:18,503] INFO Shutting down the log cleaner. (kafka.log.LogCleaner:66)
      [2019-05-25 04:51:18,503] INFO [kafka-log-cleaner-thread-0]: Shutting down (kafka.log.LogCleaner:66)
      [2019-05-25 04:51:18,504] INFO [kafka-log-cleaner-thread-0]: Stopped (kafka.log.LogCleaner:66)
      [2019-05-25 04:51:18,504] INFO [kafka-log-cleaner-thread-0]: Shutdown completed (kafka.log.LogCleaner:66)
      [2019-05-25 04:51:18,527] INFO [ProducerStateManager partition=topic-Z-0] Writing producer snapshot at offset 1 (kafka.log.ProducerStateManager:66)
      [2019-05-25 04:51:18,551] INFO [ProducerStateManager partition=fa-0] Writing producer snapshot at offset 1 (kafka.log.ProducerStateManager:66)
      [2019-05-25 04:51:18,579] INFO [ProducerStateManager partition=topic-C-0] Writing producer snapshot at offset 1 (kafka.log.ProducerStateManager:66)
      [2019-05-25 04:51:18,609] INFO [ProducerStateManager partition=foo-0] Writing producer snapshot at offset 1 (kafka.log.ProducerStateManager:66)
      [2019-05-25 04:51:18,610] INFO [ProducerStateManager partition=topic-2-0] Writing producer snapshot at offset 1 (kafka.log.ProducerStateManager:66)
      [2019-05-25 04:51:18,614] INFO [ProducerStateManager partition=topic-Y-0] Writing producer snapshot at offset 1 (kafka.log.ProducerStateManager:66)
      [2019-05-25 04:51:18,645] INFO [ProducerStateManager partition=__consumer_offsets-13] Writing producer snapshot at offset 19 (kafka.log.ProducerStateManager:66)
      [2019-05-25 04:51:18,655] INFO [ProducerStateManager partition=topic-1-0] Writing producer snapshot at offset 1 (kafka.log.ProducerStateManager:66)
      [2019-05-25 04:51:18,692] INFO [ProducerStateManager partition=__consumer_offsets-29] Writing producer snapshot at offset 6 (kafka.log.ProducerStateManager:66)
      [2019-05-25 04:51:18,707] INFO [ProducerStateManager partition=topic-A-0] Writing producer snapshot at offset 1 (kafka.log.ProducerStateManager:66)
      [2019-05-25 04:51:18,727] INFO [ProducerStateManager partition=__consumer_offsets-10] Writing producer snapshot at offset 5 (kafka.log.ProducerStateManager:66)
      [2019-05-25 04:51:18,777] INFO Shutdown complete. (kafka.log.LogManager:66)
      [2019-05-25 04:51:18,778] INFO [ControllerEventThread controllerId=0] Shutting down (kafka.controller.ControllerEventManager$ControllerEventThread:66)
      [2019-05-25 04:51:18,779] INFO [ControllerEventThread controllerId=0] Stopped (kafka.controller.ControllerEventManager$ControllerEventThread:66)
      [2019-05-25 04:51:18,788] INFO [ControllerEventThread controllerId=0] Shutdown completed (kafka.controller.ControllerEventManager$ControllerEventThread:66)
      [2019-05-25 04:51:18,789] INFO [PartitionStateMachine controllerId=0] Stopped partition state machine (kafka.controller.ZkPartitionStateMachine:66)
      [2019-05-25 04:51:18,789] INFO [ReplicaStateMachine controllerId=0] Stopped replica state machine (kafka.controller.ZkReplicaStateMachine:66)
      [2019-05-25 04:51:18,789] INFO [RequestSendThread controllerId=0] Shutting down (kafka.controller.RequestSendThread:66)
      [2019-05-25 04:51:18,790] INFO [RequestSendThread controllerId=0] Stopped (kafka.controller.RequestSendThread:66)
      [2019-05-25 04:51:18,790] INFO [RequestSendThread controllerId=0] Shutdown completed (kafka.controller.RequestSendThread:66)
      [2019-05-25 04:51:18,792] INFO [Controller id=0] Resigned (kafka.controller.KafkaController:66)
      [2019-05-25 04:51:18,792] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient:66)
      [2019-05-25 04:51:18,793] INFO Processed session termination for sessionid: 0x100aaf4d5d70000 (org.apache.zookeeper.server.PrepRequestProcessor:487)
      [2019-05-25 04:51:18,806] INFO Closed socket connection for client /127.0.0.1:50092 which had sessionid 0x100aaf4d5d70000 (org.apache.zookeeper.server.NIOServerCnxn:1056)
      [2019-05-25 04:51:18,807] INFO EventThread shut down for session: 0x100aaf4d5d70000 (org.apache.zookeeper.ClientCnxn:522)
      [2019-05-25 04:51:18,807] INFO Session: 0x100aaf4d5d70000 closed (org.apache.zookeeper.ZooKeeper:693)
      [2019-05-25 04:51:18,808] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient:66)
      [2019-05-25 04:51:18,808] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2019-05-25 04:51:18,961] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2019-05-25 04:51:18,961] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2019-05-25 04:51:18,962] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2019-05-25 04:51:19,959] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2019-05-25 04:51:19,959] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2019-05-25 04:51:19,960] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2019-05-25 04:51:19,961] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2019-05-25 04:51:19,961] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2019-05-25 04:51:19,962] INFO [SocketServer brokerId=0] Shutting down socket server (kafka.network.SocketServer:66)
      [2019-05-25 04:51:19,995] INFO [SocketServer brokerId=0] Shutdown completed (kafka.network.SocketServer:66)
      [2019-05-25 04:51:20,011] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer:66)
      [2019-05-25 04:51:20,037] INFO shutting down (org.apache.zookeeper.server.ZooKeeperServer:502)
      [2019-05-25 04:51:20,037] INFO Shutting down (org.apache.zookeeper.server.SessionTrackerImpl:226)
      [2019-05-25 04:51:20,038] INFO Shutting down (org.apache.zookeeper.server.PrepRequestProcessor:769)
      [2019-05-25 04:51:20,038] INFO Shutting down (org.apache.zookeeper.server.SyncRequestProcessor:208)
      [2019-05-25 04:51:20,038] INFO SyncRequestProcessor exited! (org.apache.zookeeper.server.SyncRequestProcessor:186)
      [2019-05-25 04:51:20,038] INFO PrepRequestProcessor exited loop! (org.apache.zookeeper.server.PrepRequestProcessor:144)
      [2019-05-25 04:51:20,043] INFO shutdown of request processor complete (org.apache.zookeeper.server.FinalRequestProcessor:430)
      [2019-05-25 04:51:20,044] INFO NIOServerCnxn factory exited run method (org.apache.zookeeper.server.NIOServerCnxnFactory:249)
      

      People

        Assignee: Unassigned
        Reporter: Bruno Cadonna
        Votes: 0
        Watchers: 2
