KAFKA-9742

StandbyTaskEOSIntegrationTest broken
    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: streams
    • Labels: None

      Description

      Test failed on a PR build last night:

      org.apache.kafka.streams.integration.StandbyTaskEOSIntegrationTest.surviveWithOneTaskAsStandbyFailing

      java.lang.AssertionError
       at org.junit.Assert.fail(Assert.java:87)
       at org.junit.Assert.assertTrue(Assert.java:42)
       at org.junit.Assert.assertTrue(Assert.java:53)
       at org.apache.kafka.streams.integration.StandbyTaskEOSIntegrationTest.surviveWithOneTaskAsStandby(StandbyTaskEOSIntegrationTest.java:98)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:498)
       at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
       at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
       at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
       at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
       at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
       at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
       at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
       at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
       at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
       at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
       at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
       at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
       at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
       at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
       at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
       at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
       at org.junit.rules.RunRules.evaluate(RunRules.java:20)
       at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
       at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
       at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
       at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
       at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
       at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
       at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
       at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:498)
       at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
       at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
       at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
       at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
       at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
       at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
       at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:498)
       at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
       at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
       at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
       at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
       at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:412)
       at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
       at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
       at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
       at java.lang.Thread.run(Thread.java:748) 
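The assertion at StandbyTaskEOSIntegrationTest.java:98 is a bare assertTrue, so the failure carries no message or diagnostic state. A hypothetical sketch (not the actual test code, and a stand-in for Kafka's own TestUtils.waitForCondition) of a message-bearing polling check that makes this kind of timing-dependent assertion easier to diagnose:

```java
import java.util.function.BooleanSupplier;

public class WaitForCondition {

    // Poll a condition until it holds or the timeout elapses; on timeout,
    // fail with a descriptive message instead of a bare AssertionError.
    static void waitForCondition(BooleanSupplier condition, long timeoutMs, String message)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError(message);
            }
            Thread.sleep(100);  // back off between polls
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Hypothetical condition that becomes true after ~300 ms,
        // standing in for "the standby task has caught up".
        waitForCondition(
            () -> System.currentTimeMillis() - start > 300,
            5_000,
            "standby task never caught up within the timeout");
        System.out.println("condition met");
    }
}
```

Compared with a one-shot assertTrue, a polling check with a message both tolerates the broker/rebalance timing variance visible in the logs below and reports what was being waited for when it does fail.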

      Standard Output

      [2020-03-21 00:05:31,344] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$:31)
      [2020-03-21 00:05:31,396] INFO Server environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,396] INFO Server environment:host.name=asf937.gq1.ygridcore.net (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,396] INFO Server environment:java.version=1.8.0_241 (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,396] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,396] INFO Server environment:java.home=/usr/local/asfpackages/java/jdk1.8.0_241/jre (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,396] INFO Server environment:java.class.path=/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/streams/build/classes/java/test:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/streams/build/resources/test:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/streams/build/classes/java/main:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/streams/build/resources/main:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/streams/test-utils/build/libs/kafka-streams-test-utils-2.6.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/clients/build/classes/java/test:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/clients/build/resources/test:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/core/build/classes/java/test:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/core/build/classes/scala/test:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/core/build/resources/test:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/streams/build/libs/kafka-streams-2.6.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/core/build/libs/kafka_2.12-2.6.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/connect/json/build/libs/connect-json-2.6.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/connect/api/build/libs/connect-api-2.6.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/clients/build/libs/kafka-clients-2.6.0-SNAPSHOT.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.yammer.metrics/metrics-core/2.2.0/f82c035cfa786d3cbec362c38c22a5f5b1bc8724/metrics-core-2.2.0.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.typesafe.scala-logging/scala-logging_2.12/3.9.
2/b1f19bc6774e01debf09bf5f564ad3613687bf49/scala-logging_2.12-3.9.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.5.7/12bdf55ba8be7fc891996319d37f35eaad7e63ea/zookeeper-3.5.7.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-api/1.7.30/b5a4b6d16ab13e34a88fae84c35cd5d68cac922c/slf4j-api-1.7.30.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.rocksdb/rocksdbjni/5.18.4/def7af83920ad2c39eb452f6ef9603777d899ea0/rocksdbjni-5.18.4.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/log4j/log4j/1.2.17/5af35056b4d257e4b64b9e8069c0746e8b08629f/log4j-1.2.17.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-module-junit4/2.0.5/c922fc29c82664e06466a7ce1face1661d688255/powermock-module-junit4-2.0.5.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-module-junit4-common/2.0.5/d02a42a4cc6d9229a11b1bc5c37a3f5f2c342d0a/powermock-module-junit4-common-2.0.5.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/junit/junit/4.13/e49ccba652b735c93bd6e6f59760d8254cf597dd/junit-4.13.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-api-easymock/2.0.5/a4bca999c461a2787026ce161846affba451fee9/powermock-api-easymock-2.0.5.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.easymock/easymock/4.1/e19506d19d84e8db90d864696282d6981c002e74/easymock-4.1.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.bouncycastle/bcpkix-jdk15on/1.64/3dac163e20110817d850d17e0444852a6d7d0bd7/bcpkix-jdk15on-1.64.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.hamcrest/hamcrest/2.2/1820c0968dba3a11a1b30669bb1f01978a91dedc/hamcrest-2.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.luben/zstd-jni/1.4.4-7/f7e9d149c0182968cc2a8706d3ffe82f5c9f01eb/zstd-jni-1.4.4-7.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.lz4/lz4-java/1.7.1/c4d931ef8ad2c9c35d65b231a33e61428472d0da/lz4-java-1.7.1.jar:/home/jenkins/.gradle/caches/modules-2/file
s-2.1/org.xerial.snappy/snappy-java/1.1.7.3/241bb74a1eb37d68a4e324a4bc3865427de0a62d/snappy-java-1.1.7.3.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.datatype/jackson-datatype-jdk8/2.10.2/dca8c8ab85eaabefe021e2f1ac777f3a6b16a3cb/jackson-datatype-jdk8-2.10.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.module/jackson-module-scala_2.12/2.10.2/435902f7ac8f01468265c44bd4100b92c6f29663/jackson-module-scala_2.12-2.10.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.dataformat/jackson-dataformat-csv/2.10.2/b80d499bd4853c784ffd9112aee2ecf5817c28be/jackson-dataformat-csv-2.10.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.module/jackson-module-paranamer/2.10.2/cfd83c1efb7ebfd83aafa5d22fc760a9d94c2a67/jackson-module-paranamer-2.10.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.10.2/528de95f198afafbcfb0c09d2e43b6e0ea663ec/jackson-databind-2.10.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/net.sf.jopt-simple/jopt-simple/5.0.4/4fdac2fbe92dfad86aa6e9301736f6b4342a3f5c/jopt-simple-5.0.4.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.scala-lang.modules/scala-collection-compat_2.12/2.1.3/17ec3eeaba48b3f3e402ecfe22287761fb5c29b7/scala-collection-compat_2.12-2.1.3.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.scala-lang.modules/scala-java8-compat_2.12/0.9.0/9525fb6bbf54a9caf0f7e1b65b261215b02fe939/scala-java8-compat_2.12-0.9.0.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.scala-lang/scala-reflect/2.12.11/7695010d1f4309a9c4b65be33528e382869ab3c4/scala-reflect-2.12.11.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.scala-lang/scala-library/2.12.11/1a0634714a956c1aae9abefc83acaf6d4eabfa7d/scala-library-2.12.11.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/commons-cli/commons-cli/1.4/c51c00206bb913cd8612b24abd9fa98ae89719b1/commons-cli-1.4.jar:/home/jenkins/.gradl
e/caches/modules-2/files-2.1/org.hamcrest/hamcrest-core/1.3/42a25dc3219429f0e5d060061f71acb49bf010a0/hamcrest-core-1.3.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-api-support/2.0.5/f7e9d65624f55c9c15ebd89a3a8770d1bb21e49c/powermock-api-support-2.0.5.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-core/2.0.5/d5d5ca75413883e00595185d79714e0911c7358e/powermock-core-2.0.5.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-reflect/2.0.5/6bca328201936519e08bb1d8fdf37c0a3d7075d0/powermock-reflect-2.0.5.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.objenesis/objenesis/3.1/48f12deaae83a8dfc3775d830c9fd60ea59bbbca/objenesis-3.1.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/cglib/cglib-nodep/3.2.9/27ca91ebc2b82f844e62a7ba8c2c1fdf9b84fa80/cglib-nodep-3.2.9.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.bouncycastle/bcprov-jdk15on/1.64/1467dac1b787b5ad2a18201c0c281df69882259e/bcprov-jdk15on-1.64.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-annotations/2.10.2/3a13b6105946541b8d4181a0506355b5fae63260/jackson-annotations-2.10.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-core/2.10.2/73d4322a6bda684f676a2b5fe918361c4e5c7cca/jackson-core-2.10.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper-jute/3.5.7/1270f80b08904499a6839a2ee1800da687ad96b4/zookeeper-jute-3.5.7.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.yetus/audience-annotations/0.5.0/55762d3191a8d6610ef46d11e8cb70c7667342a3/audience-annotations-0.5.0.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-handler/4.1.45.Final/51071ba9977cce64e3a58e6f2f6326bbb7e5bc7f/netty-handler-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport-native-epoll/4.1.45.Final/cf153257db449b6a74adb64fbd2903542af55892/netty-transport-native-epoll
-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.thoughtworks.paranamer/paranamer/2.8/619eba74c19ccf1da8ebec97a2d7f8ba05773dd6/paranamer-2.8.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.45.Final/8c768728a3e82c3cef62a7a2c8f52ae8d777bac9/netty-codec-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport-native-unix-common/4.1.45.Final/49f9fa4b7fe7d3e562666d050049541b86822549/netty-transport-native-unix-common-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport/4.1.45.Final/b7d8f2645e330bd66cd4f28f155eba605e0c8758/netty-transport-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-buffer/4.1.45.Final/bac54338074540c4f3241a3d92358fad5df89ba/netty-buffer-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-resolver/4.1.45.Final/9e77bdc045d33a570dabf9d53192ea954bb195d7/netty-resolver-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-common/4.1.45.Final/5cf5e448d44ddf53d00f2fc4047c2a7aceaa7087/netty-common-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/net.bytebuddy/byte-buddy/1.9.10/211a2b4d3df1eeef2a6cacf78d74a1f725e7a840/byte-buddy-1.9.10.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/net.bytebuddy/byte-buddy-agent/1.9.10/9674aba5ee793e54b864952b001166848da0f26b/byte-buddy-agent-1.9.10.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.javassist/javassist/3.25.0-GA/442dc1f9fd520130bd18da938622f4f9b2e5fba3/javassist-3.25.0-GA.jar (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,397] INFO Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,397] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,397] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,398] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,398] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,398] INFO Server environment:os.version=4.15.0-76-generic (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,398] INFO Server environment:user.name=jenkins (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,398] INFO Server environment:user.home=/home/jenkins (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,398] INFO Server environment:user.dir=/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/streams (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,398] INFO Server environment:os.memory.free=209MB (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,398] INFO Server environment:os.memory.max=1820MB (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,399] INFO Server environment:os.memory.total=292MB (org.apache.zookeeper.server.ZooKeeperServer:109)
      [2020-03-21 00:05:31,404] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog:115)
      [2020-03-21 00:05:31,442] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase:117)
      [2020-03-21 00:05:31,444] INFO minSessionTimeout set to 1600 (org.apache.zookeeper.server.ZooKeeperServer:938)
      [2020-03-21 00:05:31,445] INFO maxSessionTimeout set to 16000 (org.apache.zookeeper.server.ZooKeeperServer:947)
      [2020-03-21 00:05:31,445] INFO Created server with tickTime 800 minSessionTimeout 1600 maxSessionTimeout 16000 datadir /tmp/kafka-1304592028848891536/version-2 snapdir /tmp/kafka-5436111649949570329/version-2 (org.apache.zookeeper.server.ZooKeeperServer:166)
      [2020-03-21 00:05:31,465] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 3 selector thread(s), 48 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory:673)
      [2020-03-21 00:05:31,472] INFO binding to port /127.0.0.1:0 (org.apache.zookeeper.server.NIOServerCnxnFactory:686)
      [2020-03-21 00:05:31,485] INFO Snapshotting: 0x0 to /tmp/kafka-5436111649949570329/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog:404)
      [2020-03-21 00:05:31,491] INFO Snapshotting: 0x0 to /tmp/kafka-5436111649949570329/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog:404)
      [2020-03-21 00:05:32,120] INFO KafkaConfig values: 
      	advertised.host.name = null
      	advertised.listeners = null
      	advertised.port = null
      	alter.config.policy.class.name = null
      	alter.log.dirs.replication.quota.window.num = 11
      	alter.log.dirs.replication.quota.window.size.seconds = 1
      	authorizer.class.name = 
      	auto.create.topics.enable = true
      	auto.leader.rebalance.enable = true
      	background.threads = 10
      	broker.id = 0
      	broker.id.generation.enable = true
      	broker.rack = null
      	client.quota.callback.class = null
      	compression.type = producer
      	connection.failed.authentication.delay.ms = 100
      	connections.max.idle.ms = 600000
      	connections.max.reauth.ms = 0
      	control.plane.listener.name = null
      	controlled.shutdown.enable = true
      	controlled.shutdown.max.retries = 3
      	controlled.shutdown.retry.backoff.ms = 5000
      	controller.socket.timeout.ms = 30000
      	create.topic.policy.class.name = null
      	default.replication.factor = 1
      	delegation.token.expiry.check.interval.ms = 3600000
      	delegation.token.expiry.time.ms = 86400000
      	delegation.token.master.key = null
      	delegation.token.max.lifetime.ms = 604800000
      	delete.records.purgatory.purge.interval.requests = 1
      	delete.topic.enable = true
      	fetch.max.bytes = 57671680
      	fetch.purgatory.purge.interval.requests = 1000
      	group.initial.rebalance.delay.ms = 0
      	group.max.session.timeout.ms = 1800000
      	group.max.size = 2147483647
      	group.min.session.timeout.ms = 0
      	host.name = localhost
      	inter.broker.listener.name = null
      	inter.broker.protocol.version = 2.5-IV0
      	kafka.metrics.polling.interval.secs = 10
      	kafka.metrics.reporters = []
      	leader.imbalance.check.interval.seconds = 300
      	leader.imbalance.per.broker.percentage = 10
      	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
      	listeners = null
      	log.cleaner.backoff.ms = 15000
      	log.cleaner.dedupe.buffer.size = 2097152
      	log.cleaner.delete.retention.ms = 86400000
      	log.cleaner.enable = true
      	log.cleaner.io.buffer.load.factor = 0.9
      	log.cleaner.io.buffer.size = 524288
      	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
      	log.cleaner.max.compaction.lag.ms = 9223372036854775807
      	log.cleaner.min.cleanable.ratio = 0.5
      	log.cleaner.min.compaction.lag.ms = 0
      	log.cleaner.threads = 1
      	log.cleanup.policy = [delete]
      	log.dir = /tmp/junit8727345289613412077/junit1648642492482663308
      	log.dirs = null
      	log.flush.interval.messages = 9223372036854775807
      	log.flush.interval.ms = null
      	log.flush.offset.checkpoint.interval.ms = 60000
      	log.flush.scheduler.interval.ms = 9223372036854775807
      	log.flush.start.offset.checkpoint.interval.ms = 60000
      	log.index.interval.bytes = 4096
      	log.index.size.max.bytes = 10485760
      	log.message.downconversion.enable = true
      	log.message.format.version = 2.5-IV0
      	log.message.timestamp.difference.max.ms = 9223372036854775807
      	log.message.timestamp.type = CreateTime
      	log.preallocate = false
      	log.retention.bytes = -1
      	log.retention.check.interval.ms = 300000
      	log.retention.hours = 168
      	log.retention.minutes = null
      	log.retention.ms = null
      	log.roll.hours = 168
      	log.roll.jitter.hours = 0
      	log.roll.jitter.ms = null
      	log.roll.ms = null
      	log.segment.bytes = 1073741824
      	log.segment.delete.delay.ms = 60000
      	max.connections = 2147483647
      	max.connections.per.ip = 2147483647
      	max.connections.per.ip.overrides = 
      	max.incremental.fetch.session.cache.slots = 1000
      	message.max.bytes = 1000000
      	metric.reporters = []
      	metrics.num.samples = 2
      	metrics.recording.level = INFO
      	metrics.sample.window.ms = 30000
      	min.insync.replicas = 1
      	num.io.threads = 8
      	num.network.threads = 3
      	num.partitions = 1
      	num.recovery.threads.per.data.dir = 1
      	num.replica.alter.log.dirs.threads = null
      	num.replica.fetchers = 1
      	offset.metadata.max.bytes = 4096
      	offsets.commit.required.acks = -1
      	offsets.commit.timeout.ms = 5000
      	offsets.load.buffer.size = 5242880
      	offsets.retention.check.interval.ms = 600000
      	offsets.retention.minutes = 10080
      	offsets.topic.compression.codec = 0
      	offsets.topic.num.partitions = 5
      	offsets.topic.replication.factor = 1
      	offsets.topic.segment.bytes = 104857600
      	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
      	password.encoder.iterations = 4096
      	password.encoder.key.length = 128
      	password.encoder.keyfactory.algorithm = null
      	password.encoder.old.secret = null
      	password.encoder.secret = null
      	port = 0
      	principal.builder.class = null
      	producer.purgatory.purge.interval.requests = 1000
      	queued.max.request.bytes = -1
      	queued.max.requests = 500
      	quota.consumer.default = 9223372036854775807
      	quota.producer.default = 9223372036854775807
      	quota.window.num = 11
      	quota.window.size.seconds = 1
      	replica.fetch.backoff.ms = 1000
      	replica.fetch.max.bytes = 1048576
      	replica.fetch.min.bytes = 1
      	replica.fetch.response.max.bytes = 10485760
      	replica.fetch.wait.max.ms = 500
      	replica.high.watermark.checkpoint.interval.ms = 5000
      	replica.lag.time.max.ms = 30000
      	replica.selector.class = null
      	replica.socket.receive.buffer.bytes = 65536
      	replica.socket.timeout.ms = 30000
      	replication.quota.window.num = 11
      	replication.quota.window.size.seconds = 1
      	request.timeout.ms = 30000
      	reserved.broker.max.id = 1000
      	sasl.client.callback.handler.class = null
      	sasl.enabled.mechanisms = [GSSAPI]
      	sasl.jaas.config = null
      	sasl.kerberos.kinit.cmd = /usr/bin/kinit
      	sasl.kerberos.min.time.before.relogin = 60000
      	sasl.kerberos.principal.to.local.rules = [DEFAULT]
      	sasl.kerberos.service.name = null
      	sasl.kerberos.ticket.renew.jitter = 0.05
      	sasl.kerberos.ticket.renew.window.factor = 0.8
      	sasl.login.callback.handler.class = null
      	sasl.login.class = null
      	sasl.login.refresh.buffer.seconds = 300
      	sasl.login.refresh.min.period.seconds = 60
      	sasl.login.refresh.window.factor = 0.8
      	sasl.login.refresh.window.jitter = 0.05
      	sasl.mechanism.inter.broker.protocol = GSSAPI
      	sasl.server.callback.handler.class = null
      	security.inter.broker.protocol = PLAINTEXT
      	security.providers = null
      	socket.receive.buffer.bytes = 102400
      	socket.request.max.bytes = 104857600
      	socket.send.buffer.bytes = 102400
      	ssl.cipher.suites = []
      	ssl.client.auth = none
      	ssl.enabled.protocols = [TLSv1.2]
      	ssl.endpoint.identification.algorithm = https
      	ssl.key.password = null
      	ssl.keymanager.algorithm = SunX509
      	ssl.keystore.location = null
      	ssl.keystore.password = null
      	ssl.keystore.type = JKS
      	ssl.principal.mapping.rules = DEFAULT
      	ssl.protocol = TLSv1.2
      	ssl.provider = null
      	ssl.secure.random.implementation = null
      	ssl.trustmanager.algorithm = PKIX
      	ssl.truststore.location = null
      	ssl.truststore.password = null
      	ssl.truststore.type = JKS
      	transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
      	transaction.max.timeout.ms = 900000
      	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
      	transaction.state.log.load.buffer.size = 5242880
      	transaction.state.log.min.isr = 2
      	transaction.state.log.num.partitions = 50
      	transaction.state.log.replication.factor = 3
      	transaction.state.log.segment.bytes = 104857600
      	transactional.id.expiration.ms = 604800000
      	unclean.leader.election.enable = false
      	zookeeper.clientCnxnSocket = null
      	zookeeper.connect = 127.0.0.1:42351
      	zookeeper.connection.timeout.ms = null
      	zookeeper.max.in.flight.requests = 10
      	zookeeper.session.timeout.ms = 10000
      	zookeeper.set.acl = false
      	zookeeper.ssl.cipher.suites = null
      	zookeeper.ssl.client.enable = false
      	zookeeper.ssl.crl.enable = false
      	zookeeper.ssl.enabled.protocols = null
      	zookeeper.ssl.endpoint.identification.algorithm = HTTPS
      	zookeeper.ssl.keystore.location = null
      	zookeeper.ssl.keystore.password = null
      	zookeeper.ssl.keystore.type = null
      	zookeeper.ssl.ocsp.enable = false
      	zookeeper.ssl.protocol = TLSv1.2
      	zookeeper.ssl.truststore.location = null
      	zookeeper.ssl.truststore.password = null
      	zookeeper.ssl.truststore.type = null
      	zookeeper.sync.time.ms = 2000
       (kafka.server.KafkaConfig:347)
      [2020-03-21 00:05:32,159] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util:79)
      [2020-03-21 00:05:32,269] INFO starting (kafka.server.KafkaServer:66)
      [2020-03-21 00:05:32,270] INFO Connecting to zookeeper on 127.0.0.1:42351 (kafka.server.KafkaServer:66)
      [2020-03-21 00:05:32,307] INFO [ZooKeeperClient Kafka server] Initializing a new session to 127.0.0.1:42351. (kafka.zookeeper.ZooKeeperClient:66)
      [2020-03-21 00:05:32,318] INFO Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,318] INFO Client environment:host.name=asf937.gq1.ygridcore.net (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,318] INFO Client environment:java.version=1.8.0_241 (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,318] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,318] INFO Client environment:java.home=/usr/local/asfpackages/java/jdk1.8.0_241/jre (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,318] INFO Client environment:java.class.path=/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/streams/build/classes/java/test:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/streams/build/resources/test:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/streams/build/classes/java/main:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/streams/build/resources/main:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/streams/test-utils/build/libs/kafka-streams-test-utils-2.6.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/clients/build/classes/java/test:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/clients/build/resources/test:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/core/build/classes/java/test:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/core/build/classes/scala/test:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/core/build/resources/test:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/streams/build/libs/kafka-streams-2.6.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/core/build/libs/kafka_2.12-2.6.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/connect/json/build/libs/connect-json-2.6.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/connect/api/build/libs/connect-api-2.6.0-SNAPSHOT.jar:/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/clients/build/libs/kafka-clients-2.6.0-SNAPSHOT.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.30/c21f55139d8141d2231214fb1feaf50a1edca95e/slf4j-log4j12-1.7.30.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.yammer.metrics/metrics-core/2.2.0/f82c035cfa786d3cbec362c38c22a5f5b1bc8724/metrics-core-2.2.0.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.typesafe.scala-logging/scala-logging_2.12/3.9.
2/b1f19bc6774e01debf09bf5f564ad3613687bf49/scala-logging_2.12-3.9.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.5.7/12bdf55ba8be7fc891996319d37f35eaad7e63ea/zookeeper-3.5.7.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-api/1.7.30/b5a4b6d16ab13e34a88fae84c35cd5d68cac922c/slf4j-api-1.7.30.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.rocksdb/rocksdbjni/5.18.4/def7af83920ad2c39eb452f6ef9603777d899ea0/rocksdbjni-5.18.4.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/log4j/log4j/1.2.17/5af35056b4d257e4b64b9e8069c0746e8b08629f/log4j-1.2.17.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-module-junit4/2.0.5/c922fc29c82664e06466a7ce1face1661d688255/powermock-module-junit4-2.0.5.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-module-junit4-common/2.0.5/d02a42a4cc6d9229a11b1bc5c37a3f5f2c342d0a/powermock-module-junit4-common-2.0.5.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/junit/junit/4.13/e49ccba652b735c93bd6e6f59760d8254cf597dd/junit-4.13.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-api-easymock/2.0.5/a4bca999c461a2787026ce161846affba451fee9/powermock-api-easymock-2.0.5.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.easymock/easymock/4.1/e19506d19d84e8db90d864696282d6981c002e74/easymock-4.1.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.bouncycastle/bcpkix-jdk15on/1.64/3dac163e20110817d850d17e0444852a6d7d0bd7/bcpkix-jdk15on-1.64.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.hamcrest/hamcrest/2.2/1820c0968dba3a11a1b30669bb1f01978a91dedc/hamcrest-2.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.luben/zstd-jni/1.4.4-7/f7e9d149c0182968cc2a8706d3ffe82f5c9f01eb/zstd-jni-1.4.4-7.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.lz4/lz4-java/1.7.1/c4d931ef8ad2c9c35d65b231a33e61428472d0da/lz4-java-1.7.1.jar:/home/jenkins/.gradle/caches/modules-2/file
s-2.1/org.xerial.snappy/snappy-java/1.1.7.3/241bb74a1eb37d68a4e324a4bc3865427de0a62d/snappy-java-1.1.7.3.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.datatype/jackson-datatype-jdk8/2.10.2/dca8c8ab85eaabefe021e2f1ac777f3a6b16a3cb/jackson-datatype-jdk8-2.10.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.module/jackson-module-scala_2.12/2.10.2/435902f7ac8f01468265c44bd4100b92c6f29663/jackson-module-scala_2.12-2.10.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.dataformat/jackson-dataformat-csv/2.10.2/b80d499bd4853c784ffd9112aee2ecf5817c28be/jackson-dataformat-csv-2.10.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.module/jackson-module-paranamer/2.10.2/cfd83c1efb7ebfd83aafa5d22fc760a9d94c2a67/jackson-module-paranamer-2.10.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.10.2/528de95f198afafbcfb0c09d2e43b6e0ea663ec/jackson-databind-2.10.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/net.sf.jopt-simple/jopt-simple/5.0.4/4fdac2fbe92dfad86aa6e9301736f6b4342a3f5c/jopt-simple-5.0.4.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.scala-lang.modules/scala-collection-compat_2.12/2.1.3/17ec3eeaba48b3f3e402ecfe22287761fb5c29b7/scala-collection-compat_2.12-2.1.3.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.scala-lang.modules/scala-java8-compat_2.12/0.9.0/9525fb6bbf54a9caf0f7e1b65b261215b02fe939/scala-java8-compat_2.12-0.9.0.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.scala-lang/scala-reflect/2.12.11/7695010d1f4309a9c4b65be33528e382869ab3c4/scala-reflect-2.12.11.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.scala-lang/scala-library/2.12.11/1a0634714a956c1aae9abefc83acaf6d4eabfa7d/scala-library-2.12.11.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/commons-cli/commons-cli/1.4/c51c00206bb913cd8612b24abd9fa98ae89719b1/commons-cli-1.4.jar:/home/jenkins/.gradl
e/caches/modules-2/files-2.1/org.hamcrest/hamcrest-core/1.3/42a25dc3219429f0e5d060061f71acb49bf010a0/hamcrest-core-1.3.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-api-support/2.0.5/f7e9d65624f55c9c15ebd89a3a8770d1bb21e49c/powermock-api-support-2.0.5.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-core/2.0.5/d5d5ca75413883e00595185d79714e0911c7358e/powermock-core-2.0.5.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-reflect/2.0.5/6bca328201936519e08bb1d8fdf37c0a3d7075d0/powermock-reflect-2.0.5.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.objenesis/objenesis/3.1/48f12deaae83a8dfc3775d830c9fd60ea59bbbca/objenesis-3.1.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/cglib/cglib-nodep/3.2.9/27ca91ebc2b82f844e62a7ba8c2c1fdf9b84fa80/cglib-nodep-3.2.9.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.bouncycastle/bcprov-jdk15on/1.64/1467dac1b787b5ad2a18201c0c281df69882259e/bcprov-jdk15on-1.64.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-annotations/2.10.2/3a13b6105946541b8d4181a0506355b5fae63260/jackson-annotations-2.10.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-core/2.10.2/73d4322a6bda684f676a2b5fe918361c4e5c7cca/jackson-core-2.10.2.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper-jute/3.5.7/1270f80b08904499a6839a2ee1800da687ad96b4/zookeeper-jute-3.5.7.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.yetus/audience-annotations/0.5.0/55762d3191a8d6610ef46d11e8cb70c7667342a3/audience-annotations-0.5.0.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-handler/4.1.45.Final/51071ba9977cce64e3a58e6f2f6326bbb7e5bc7f/netty-handler-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport-native-epoll/4.1.45.Final/cf153257db449b6a74adb64fbd2903542af55892/netty-transport-native-epoll
-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/com.thoughtworks.paranamer/paranamer/2.8/619eba74c19ccf1da8ebec97a2d7f8ba05773dd6/paranamer-2.8.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.45.Final/8c768728a3e82c3cef62a7a2c8f52ae8d777bac9/netty-codec-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport-native-unix-common/4.1.45.Final/49f9fa4b7fe7d3e562666d050049541b86822549/netty-transport-native-unix-common-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport/4.1.45.Final/b7d8f2645e330bd66cd4f28f155eba605e0c8758/netty-transport-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-buffer/4.1.45.Final/bac54338074540c4f3241a3d92358fad5df89ba/netty-buffer-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-resolver/4.1.45.Final/9e77bdc045d33a570dabf9d53192ea954bb195d7/netty-resolver-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/io.netty/netty-common/4.1.45.Final/5cf5e448d44ddf53d00f2fc4047c2a7aceaa7087/netty-common-4.1.45.Final.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/net.bytebuddy/byte-buddy/1.9.10/211a2b4d3df1eeef2a6cacf78d74a1f725e7a840/byte-buddy-1.9.10.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/net.bytebuddy/byte-buddy-agent/1.9.10/9674aba5ee793e54b864952b001166848da0f26b/byte-buddy-agent-1.9.10.jar:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.javassist/javassist/3.25.0-GA/442dc1f9fd520130bd18da938622f4f9b2e5fba3/javassist-3.25.0-GA.jar (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,319] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,320] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,320] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,320] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,320] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,320] INFO Client environment:os.version=4.15.0-76-generic (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,320] INFO Client environment:user.name=jenkins (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,320] INFO Client environment:user.home=/home/jenkins (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,321] INFO Client environment:user.dir=/home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.12/streams (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,321] INFO Client environment:os.memory.free=199MB (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,321] INFO Client environment:os.memory.max=1820MB (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,321] INFO Client environment:os.memory.total=292MB (org.apache.zookeeper.ZooKeeper:109)
      [2020-03-21 00:05:32,327] INFO Initiating client connection, connectString=127.0.0.1:42351 sessionTimeout=10000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@f4c59db (org.apache.zookeeper.ZooKeeper:868)
      [2020-03-21 00:05:32,333] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket:237)
      [2020-03-21 00:05:32,348] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn:1653)
      [2020-03-21 00:05:32,351] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient:66)
      [2020-03-21 00:05:32,362] INFO Opening socket connection to server localhost/127.0.0.1:42351. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:1112)
      [2020-03-21 00:05:32,365] INFO Socket connection established, initiating session, client: /127.0.0.1:43096, server: localhost/127.0.0.1:42351 (org.apache.zookeeper.ClientCnxn:959)
      [2020-03-21 00:05:32,383] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog:218)
      [2020-03-21 00:05:32,401] INFO Session establishment complete on server localhost/127.0.0.1:42351, sessionid = 0x100fb9d3b070000, negotiated timeout = 10000 (org.apache.zookeeper.ClientCnxn:1394)
      [2020-03-21 00:05:32,408] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient:66)
      [2020-03-21 00:05:33,163] INFO Cluster ID = -OYQb-c6TdOzUjNdfG2ngA (kafka.server.KafkaServer:66)
      [2020-03-21 00:05:33,182] WARN No meta.properties file under dir /tmp/junit8727345289613412077/junit1648642492482663308/meta.properties (kafka.server.BrokerMetadataCheckpoint:70)
      [2020-03-21 00:05:33,299] INFO KafkaConfig values: 
      	advertised.host.name = null
      	advertised.listeners = null
      	advertised.port = null
      	alter.config.policy.class.name = null
      	alter.log.dirs.replication.quota.window.num = 11
      	alter.log.dirs.replication.quota.window.size.seconds = 1
      	authorizer.class.name = 
      	auto.create.topics.enable = true
      	auto.leader.rebalance.enable = true
      	background.threads = 10
      	broker.id = 0
      	broker.id.generation.enable = true
      	broker.rack = null
      	client.quota.callback.class = null
      	compression.type = producer
      	connection.failed.authentication.delay.ms = 100
      	connections.max.idle.ms = 600000
      	connections.max.reauth.ms = 0
      	control.plane.listener.name = null
      	controlled.shutdown.enable = true
      	controlled.shutdown.max.retries = 3
      	controlled.shutdown.retry.backoff.ms = 5000
      	controller.socket.timeout.ms = 30000
      	create.topic.policy.class.name = null
      	default.replication.factor = 1
      	delegation.token.expiry.check.interval.ms = 3600000
      	delegation.token.expiry.time.ms = 86400000
      	delegation.token.master.key = null
      	delegation.token.max.lifetime.ms = 604800000
      	delete.records.purgatory.purge.interval.requests = 1
      	delete.topic.enable = true
      	fetch.max.bytes = 57671680
      	fetch.purgatory.purge.interval.requests = 1000
      	group.initial.rebalance.delay.ms = 0
      	group.max.session.timeout.ms = 1800000
      	group.max.size = 2147483647
      	group.min.session.timeout.ms = 0
      	host.name = localhost
      	inter.broker.listener.name = null
      	inter.broker.protocol.version = 2.5-IV0
      	kafka.metrics.polling.interval.secs = 10
      	kafka.metrics.reporters = []
      	leader.imbalance.check.interval.seconds = 300
      	leader.imbalance.per.broker.percentage = 10
      	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
      	listeners = null
      	log.cleaner.backoff.ms = 15000
      	log.cleaner.dedupe.buffer.size = 2097152
      	log.cleaner.delete.retention.ms = 86400000
      	log.cleaner.enable = true
      	log.cleaner.io.buffer.load.factor = 0.9
      	log.cleaner.io.buffer.size = 524288
      	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
      	log.cleaner.max.compaction.lag.ms = 9223372036854775807
      	log.cleaner.min.cleanable.ratio = 0.5
      	log.cleaner.min.compaction.lag.ms = 0
      	log.cleaner.threads = 1
      	log.cleanup.policy = [delete]
      	log.dir = /tmp/junit8727345289613412077/junit1648642492482663308
      	log.dirs = null
      	log.flush.interval.messages = 9223372036854775807
      	log.flush.interval.ms = null
      	log.flush.offset.checkpoint.interval.ms = 60000
      	log.flush.scheduler.interval.ms = 9223372036854775807
      	log.flush.start.offset.checkpoint.interval.ms = 60000
      	log.index.interval.bytes = 4096
      	log.index.size.max.bytes = 10485760
      	log.message.downconversion.enable = true
      	log.message.format.version = 2.5-IV0
      	log.message.timestamp.difference.max.ms = 9223372036854775807
      	log.message.timestamp.type = CreateTime
      	log.preallocate = false
      	log.retention.bytes = -1
      	log.retention.check.interval.ms = 300000
      	log.retention.hours = 168
      	log.retention.minutes = null
      	log.retention.ms = null
      	log.roll.hours = 168
      	log.roll.jitter.hours = 0
      	log.roll.jitter.ms = null
      	log.roll.ms = null
      	log.segment.bytes = 1073741824
      	log.segment.delete.delay.ms = 60000
      	max.connections = 2147483647
      	max.connections.per.ip = 2147483647
      	max.connections.per.ip.overrides = 
      	max.incremental.fetch.session.cache.slots = 1000
      	message.max.bytes = 1000000
      	metric.reporters = []
      	metrics.num.samples = 2
      	metrics.recording.level = INFO
      	metrics.sample.window.ms = 30000
      	min.insync.replicas = 1
      	num.io.threads = 8
      	num.network.threads = 3
      	num.partitions = 1
      	num.recovery.threads.per.data.dir = 1
      	num.replica.alter.log.dirs.threads = null
      	num.replica.fetchers = 1
      	offset.metadata.max.bytes = 4096
      	offsets.commit.required.acks = -1
      	offsets.commit.timeout.ms = 5000
      	offsets.load.buffer.size = 5242880
      	offsets.retention.check.interval.ms = 600000
      	offsets.retention.minutes = 10080
      	offsets.topic.compression.codec = 0
      	offsets.topic.num.partitions = 5
      	offsets.topic.replication.factor = 1
      	offsets.topic.segment.bytes = 104857600
      	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
      	password.encoder.iterations = 4096
      	password.encoder.key.length = 128
      	password.encoder.keyfactory.algorithm = null
      	password.encoder.old.secret = null
      	password.encoder.secret = null
      	port = 0
      	principal.builder.class = null
      	producer.purgatory.purge.interval.requests = 1000
      	queued.max.request.bytes = -1
      	queued.max.requests = 500
      	quota.consumer.default = 9223372036854775807
      	quota.producer.default = 9223372036854775807
      	quota.window.num = 11
      	quota.window.size.seconds = 1
      	replica.fetch.backoff.ms = 1000
      	replica.fetch.max.bytes = 1048576
      	replica.fetch.min.bytes = 1
      	replica.fetch.response.max.bytes = 10485760
      	replica.fetch.wait.max.ms = 500
      	replica.high.watermark.checkpoint.interval.ms = 5000
      	replica.lag.time.max.ms = 30000
      	replica.selector.class = null
      	replica.socket.receive.buffer.bytes = 65536
      	replica.socket.timeout.ms = 30000
      	replication.quota.window.num = 11
      	replication.quota.window.size.seconds = 1
      	request.timeout.ms = 30000
      	reserved.broker.max.id = 1000
      	sasl.client.callback.handler.class = null
      	sasl.enabled.mechanisms = [GSSAPI]
      	sasl.jaas.config = null
      	sasl.kerberos.kinit.cmd = /usr/bin/kinit
      	sasl.kerberos.min.time.before.relogin = 60000
      	sasl.kerberos.principal.to.local.rules = [DEFAULT]
      	sasl.kerberos.service.name = null
      	sasl.kerberos.ticket.renew.jitter = 0.05
      	sasl.kerberos.ticket.renew.window.factor = 0.8
      	sasl.login.callback.handler.class = null
      	sasl.login.class = null
      	sasl.login.refresh.buffer.seconds = 300
      	sasl.login.refresh.min.period.seconds = 60
      	sasl.login.refresh.window.factor = 0.8
      	sasl.login.refresh.window.jitter = 0.05
      	sasl.mechanism.inter.broker.protocol = GSSAPI
      	sasl.server.callback.handler.class = null
      	security.inter.broker.protocol = PLAINTEXT
      	security.providers = null
      	socket.receive.buffer.bytes = 102400
      	socket.request.max.bytes = 104857600
      	socket.send.buffer.bytes = 102400
      	ssl.cipher.suites = []
      	ssl.client.auth = none
      	ssl.enabled.protocols = [TLSv1.2]
      	ssl.endpoint.identification.algorithm = https
      	ssl.key.password = null
      	ssl.keymanager.algorithm = SunX509
      	ssl.keystore.location = null
      	ssl.keystore.password = null
      	ssl.keystore.type = JKS
      	ssl.principal.mapping.rules = DEFAULT
      	ssl.protocol = TLSv1.2
      	ssl.provider = null
      	ssl.secure.random.implementation = null
      	ssl.trustmanager.algorithm = PKIX
      	ssl.truststore.location = null
      	ssl.truststore.password = null
      	ssl.truststore.type = JKS
      	transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
      	transaction.max.timeout.ms = 900000
      	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
      	transaction.state.log.load.buffer.size = 5242880
      	transaction.state.log.min.isr = 2
      	transaction.state.log.num.partitions = 50
      	transaction.state.log.replication.factor = 3
      	transaction.state.log.segment.bytes = 104857600
      	transactional.id.expiration.ms = 604800000
      	unclean.leader.election.enable = false
      	zookeeper.clientCnxnSocket = null
      	zookeeper.connect = 127.0.0.1:42351
      	zookeeper.connection.timeout.ms = null
      	zookeeper.max.in.flight.requests = 10
      	zookeeper.session.timeout.ms = 10000
      	zookeeper.set.acl = false
      	zookeeper.ssl.cipher.suites = null
      	zookeeper.ssl.client.enable = false
      	zookeeper.ssl.crl.enable = false
      	zookeeper.ssl.enabled.protocols = null
      	zookeeper.ssl.endpoint.identification.algorithm = HTTPS
      	zookeeper.ssl.keystore.location = null
      	zookeeper.ssl.keystore.password = null
      	zookeeper.ssl.keystore.type = null
      	zookeeper.ssl.ocsp.enable = false
      	zookeeper.ssl.protocol = TLSv1.2
      	zookeeper.ssl.truststore.location = null
      	zookeeper.ssl.truststore.password = null
      	zookeeper.ssl.truststore.type = null
      	zookeeper.sync.time.ms = 2000
       (kafka.server.KafkaConfig:347)
      ...[truncated 198295 chars]...
      achine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:58)
      	at kafka.controller.KafkaController.onReplicasBecomeOffline(KafkaController.scala:450)
      	at kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:418)
      	at kafka.controller.KafkaController.processBrokerChange(KafkaController.scala:1398)
      	at kafka.controller.KafkaController.process(KafkaController.scala:1834)
      	at kafka.controller.QueuedEvent.process(ControllerEventManager.scala:52)
      	at kafka.controller.ControllerEventManager$ControllerEventThread.process$1(ControllerEventManager.scala:128)
      	at kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:131)
      	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
      	at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
      	at kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:131)
      	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
      [2020-03-21 00:05:58,344] ERROR [Controller id=2 epoch=2] Controller 2 epoch 2 failed to change state for partition __consumer_offsets-0 from OfflinePartition to OnlinePartition (state.change.logger:76)
      kafka.common.StateChangeFailedException: Failed to elect leader for partition __consumer_offsets-0 under strategy OfflinePartitionLeaderElectionStrategy(false)
      	at kafka.controller.ZkPartitionStateMachine.$anonfun$doElectLeaderForPartitions$7(PartitionStateMachine.scala:427)
      	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
      	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
      	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
      	at kafka.controller.ZkPartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:424)
      	at kafka.controller.ZkPartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:335)
      	at kafka.controller.ZkPartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:236)
      	at kafka.controller.ZkPartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:157)
      	at kafka.controller.PartitionStateMachine.triggerOnlineStateChangeForPartitions(PartitionStateMachine.scala:73)
      	at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:58)
      	at kafka.controller.KafkaController.onReplicasBecomeOffline(KafkaController.scala:450)
      	at kafka.controller.KafkaController.onBrokerFailure(KafkaController.scala:418)
      	at kafka.controller.KafkaController.processBrokerChange(KafkaController.scala:1398)
      	at kafka.controller.KafkaController.process(KafkaController.scala:1834)
      	at kafka.controller.QueuedEvent.process(ControllerEventManager.scala:52)
      	at kafka.controller.ControllerEventManager$ControllerEventThread.process$1(ControllerEventManager.scala:128)
      	at kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:131)
      	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
      	at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
      	at kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:131)
      	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
      [2020-03-21 00:05:58,351] INFO [Controller id=2] Updated broker epochs cache: Map(2 -> 59) (kafka.controller.KafkaController:66)
      [2020-03-21 00:05:58,423] INFO Session: 0x100fb9d3b070001 closed (org.apache.zookeeper.ZooKeeper:1422)
      [2020-03-21 00:05:58,423] INFO EventThread shut down for session: 0x100fb9d3b070001 (org.apache.zookeeper.ClientCnxn:524)
      [2020-03-21 00:05:58,423] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient:66)
      [2020-03-21 00:05:58,424] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:05:59,186] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:05:59,186] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:05:59,187] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:05:59,817] INFO [Controller id=2] Processing automatic preferred replica leader election (kafka.controller.KafkaController:66)
      [2020-03-21 00:05:59,820] INFO [Controller id=2] Starting replica leader election (PREFERRED) for partitions  triggered by AutoTriggered (kafka.controller.KafkaController:66)
      [2020-03-21 00:05:59,821] INFO [Controller id=2] Starting replica leader election (PREFERRED) for partitions  triggered by AutoTriggered (kafka.controller.KafkaController:66)
      [2020-03-21 00:06:00,187] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:06:00,187] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:06:00,188] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:06:01,187] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:06:01,187] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:06:01,189] INFO [SocketServer brokerId=1] Shutting down socket server (kafka.network.SocketServer:66)
      [2020-03-21 00:06:01,229] INFO [SocketServer brokerId=1] Shutdown completed (kafka.network.SocketServer:66)
      [2020-03-21 00:06:01,231] INFO [KafkaServer id=1] shut down completed (kafka.server.KafkaServer:66)
      [2020-03-21 00:06:01,234] INFO [KafkaServer id=2] shutting down (kafka.server.KafkaServer:66)
      [2020-03-21 00:06:01,235] INFO [KafkaServer id=2] Starting controlled shutdown (kafka.server.KafkaServer:66)
      [2020-03-21 00:06:01,250] INFO [Controller id=2] Shutting down broker 2 (kafka.controller.KafkaController:66)
      [2020-03-21 00:06:01,253] ERROR [Controller id=2 epoch=2] Controller 2 epoch 2 failed to change state for partition input-0 from OnlinePartition to OnlinePartition (state.change.logger:76)
      kafka.common.StateChangeFailedException: Failed to elect leader for partition input-0 under strategy ControlledShutdownPartitionLeaderElectionStrategy
      	at kafka.controller.ZkPartitionStateMachine.$anonfun$doElectLeaderForPartitions$7(PartitionStateMachine.scala:427)
      	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
      	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
      	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
      	at kafka.controller.ZkPartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:424)
      	at kafka.controller.ZkPartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:335)
      	at kafka.controller.ZkPartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:236)
      	at kafka.controller.ZkPartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:157)
      	at kafka.controller.KafkaController.doControlledShutdown(KafkaController.scala:1141)
      	at kafka.controller.KafkaController.$anonfun$processControlledShutdown$1(KafkaController.scala:1103)
      	at kafka.controller.KafkaController.processControlledShutdown(KafkaController.scala:1103)
      	at kafka.controller.KafkaController.process(KafkaController.scala:1826)
      	at kafka.controller.QueuedEvent.process(ControllerEventManager.scala:52)
      	at kafka.controller.ControllerEventManager$ControllerEventThread.process$1(ControllerEventManager.scala:128)
      	at kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:131)
      	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
      	at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
      	at kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:131)
      	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
      [2020-03-21 00:06:01,258] INFO [KafkaServer id=2] Remaining partitions to move: [RemainingPartition(topicName='input', partitionIndex=0)] (kafka.server.KafkaServer:66)
      [2020-03-21 00:06:01,259] INFO [KafkaServer id=2] Error from controller: NONE (kafka.server.KafkaServer:66)
      [2020-03-21 00:06:05,504] INFO [Log partition=input-0, dir=/tmp/junit8106553923042971486/junit8676271596052198179] Found deletable segments with base offsets [0] due to retention time 604800000ms breach (kafka.log.Log:66)
      [2020-03-21 00:06:05,508] INFO [ProducerStateManager partition=input-0] Writing producer snapshot at offset 1 (kafka.log.ProducerStateManager:66)
      [2020-03-21 00:06:05,512] INFO [Log partition=input-0, dir=/tmp/junit8106553923042971486/junit8676271596052198179] Rolled new log segment at offset 1 in 0 ms. (kafka.log.Log:66)
      [2020-03-21 00:06:05,513] INFO [Log partition=input-0, dir=/tmp/junit8106553923042971486/junit8676271596052198179] Scheduling segments for deletion List(LogSegment(baseOffset=0, size=76, lastModifiedTime=1584749137000, largestTime=10)) (kafka.log.Log:66)
      [2020-03-21 00:06:05,515] INFO [Log partition=input-0, dir=/tmp/junit8106553923042971486/junit8676271596052198179] Incrementing log start offset to 1 (kafka.log.Log:66)
      [2020-03-21 00:06:06,261] WARN [KafkaServer id=2] Retrying controlled shutdown after the previous attempt failed... (kafka.server.KafkaServer:70)
      [2020-03-21 00:06:06,270] INFO [Controller id=2] Shutting down broker 2 (kafka.controller.KafkaController:66)
      [2020-03-21 00:06:06,274] ERROR [Controller id=2 epoch=2] Controller 2 epoch 2 failed to change state for partition input-0 from OnlinePartition to OnlinePartition (state.change.logger:76)
      kafka.common.StateChangeFailedException: Failed to elect leader for partition input-0 under strategy ControlledShutdownPartitionLeaderElectionStrategy (stack trace identical to the first occurrence above)
      [2020-03-21 00:06:06,278] INFO [KafkaServer id=2] Remaining partitions to move: [RemainingPartition(topicName='input', partitionIndex=0)] (kafka.server.KafkaServer:66)
      [2020-03-21 00:06:06,278] INFO [KafkaServer id=2] Error from controller: NONE (kafka.server.KafkaServer:66)
      [2020-03-21 00:06:08,113] INFO [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] Group coordinator localhost:42191 (id: 2147483645 rack: null) is unavailable or invalid, will attempt rediscovery (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:847)
      [2020-03-21 00:06:08,123] INFO [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] Discovered group coordinator localhost:42191 (id: 2147483645 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:795)
      [2020-03-21 00:06:08,123] INFO [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] Group coordinator localhost:42191 (id: 2147483645 rack: null) is unavailable or invalid, will attempt rediscovery (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:847)
      [2020-03-21 00:06:08,137] INFO [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] Discovered group coordinator localhost:42191 (id: 2147483645 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:795)
      [... the same "unavailable or invalid, will attempt rediscovery" / "Discovered group coordinator" cycle repeats roughly every 2 ms from 00:06:08,138 through 00:06:08,167 ...]
      [2020-03-21 00:06:08,167] INFO [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] Group coordinator localhost:42191 (id: 2147483645 rack: null) is unavailable or invalid, will attempt rediscovery (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:847)
      [2020-03-21 00:06:08,169] INFO [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] Discovered group coordinator localhost:42191 (id: 2147483645 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:795)
      [2020-03-21 00:06:08,170] INFO [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:551)
      [2020-03-21 00:06:11,279] WARN [KafkaServer id=2] Retrying controlled shutdown after the previous attempt failed... (kafka.server.KafkaServer:70)
      [2020-03-21 00:06:11,287] INFO [Controller id=2] Shutting down broker 2 (kafka.controller.KafkaController:66)
      [2020-03-21 00:06:11,292] ERROR [Controller id=2 epoch=2] Controller 2 epoch 2 failed to change state for partition input-0 from OnlinePartition to OnlinePartition (state.change.logger:76)
      kafka.common.StateChangeFailedException: Failed to elect leader for partition input-0 under strategy ControlledShutdownPartitionLeaderElectionStrategy (stack trace identical to the first occurrence above)
      [2020-03-21 00:06:11,296] INFO [KafkaServer id=2] Remaining partitions to move: [RemainingPartition(topicName='input', partitionIndex=0)] (kafka.server.KafkaServer:66)
      [2020-03-21 00:06:11,296] INFO [KafkaServer id=2] Error from controller: NONE (kafka.server.KafkaServer:66)
      [2020-03-21 00:06:16,297] WARN [KafkaServer id=2] Retrying controlled shutdown after the previous attempt failed... (kafka.server.KafkaServer:70)
      [2020-03-21 00:06:16,307] WARN [KafkaServer id=2] Proceeding to do an unclean shutdown as all the controlled shutdown attempts failed (kafka.server.KafkaServer:70)
      [2020-03-21 00:06:16,308] INFO [/config/changes-event-process-thread]: Shutting down (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread:66)
      [2020-03-21 00:06:16,309] INFO [/config/changes-event-process-thread]: Stopped (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread:66)
      [2020-03-21 00:06:16,309] INFO [/config/changes-event-process-thread]: Shutdown completed (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread:66)
      [2020-03-21 00:06:16,309] INFO [SocketServer brokerId=2] Stopping socket server request processors (kafka.network.SocketServer:66)
      [2020-03-21 00:06:16,312] WARN [AdminClient clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,318] INFO [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] Group coordinator localhost:42191 (id: 2147483645 rack: null) is unavailable or invalid, will attempt rediscovery (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:847)
      [2020-03-21 00:06:16,319] INFO [SocketServer brokerId=2] Stopped socket server request processors (kafka.network.SocketServer:66)
      [2020-03-21 00:06:16,319] INFO [data-plane Kafka Request Handler on Broker 2], shutting down (kafka.server.KafkaRequestHandlerPool:66)
      [2020-03-21 00:06:16,319] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,320] INFO [data-plane Kafka Request Handler on Broker 2], shut down completely (kafka.server.KafkaRequestHandlerPool:66)
      [2020-03-21 00:06:16,321] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 0 (localhost/127.0.0.1:36215) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,323] INFO [ExpirationReaper-2-AlterAcls]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,413] WARN [AdminClient clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,414] WARN [AdminClient clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-admin] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,417] WARN [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,423] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,424] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,461] INFO [ExpirationReaper-2-AlterAcls]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,461] INFO [ExpirationReaper-2-AlterAcls]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,462] INFO [KafkaApi-2] Shutdown complete. (kafka.server.KafkaApis:66)
      [2020-03-21 00:06:16,462] INFO [ExpirationReaper-2-topic]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,515] WARN [AdminClient clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-admin] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,525] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 0 (localhost/127.0.0.1:36215) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,526] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,564] WARN [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,617] WARN [AdminClient clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,627] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,661] INFO [ExpirationReaper-2-topic]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,661] INFO [ExpirationReaper-2-topic]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,662] INFO [TransactionCoordinator id=2] Shutting down. (kafka.coordinator.transaction.TransactionCoordinator:66)
      [2020-03-21 00:06:16,662] INFO [ProducerId Manager 2]: Shutdown complete: last producerId assigned 2000 (kafka.coordinator.transaction.ProducerIdManager:66)
      [2020-03-21 00:06:16,663] INFO [Transaction State Manager 2]: Shutdown complete (kafka.coordinator.transaction.TransactionStateManager:66)
      [2020-03-21 00:06:16,663] INFO [Transaction Marker Channel Manager 2]: Shutting down (kafka.coordinator.transaction.TransactionMarkerChannelManager:66)
      [2020-03-21 00:06:16,664] INFO [Transaction Marker Channel Manager 2]: Stopped (kafka.coordinator.transaction.TransactionMarkerChannelManager:66)
      [2020-03-21 00:06:16,664] INFO [Transaction Marker Channel Manager 2]: Shutdown completed (kafka.coordinator.transaction.TransactionMarkerChannelManager:66)
      [2020-03-21 00:06:16,664] INFO [TransactionCoordinator id=2] Shutdown complete. (kafka.coordinator.transaction.TransactionCoordinator:66)
      [2020-03-21 00:06:16,665] INFO [GroupCoordinator 2]: Shutting down. (kafka.coordinator.group.GroupCoordinator:66)
      [2020-03-21 00:06:16,665] INFO [ExpirationReaper-2-Heartbeat]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,718] WARN [AdminClient clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-admin] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,728] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,815] WARN [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,830] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 0 (localhost/127.0.0.1:36215) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,838] INFO [ExpirationReaper-2-Heartbeat]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,838] INFO [ExpirationReaper-2-Heartbeat]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,838] INFO [ExpirationReaper-2-Rebalance]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,917] INFO [ExpirationReaper-2-Rebalance]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,917] INFO [ExpirationReaper-2-Rebalance]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,918] INFO [GroupCoordinator 2]: Shutdown complete. (kafka.coordinator.group.GroupCoordinator:66)
      [2020-03-21 00:06:16,918] INFO [ReplicaManager broker=2] Shutting down (kafka.server.ReplicaManager:66)
      [2020-03-21 00:06:16,919] INFO [LogDirFailureHandler]: Shutting down (kafka.server.ReplicaManager$LogDirFailureHandler:66)
      [2020-03-21 00:06:16,919] INFO [LogDirFailureHandler]: Stopped (kafka.server.ReplicaManager$LogDirFailureHandler:66)
      [2020-03-21 00:06:16,919] INFO [LogDirFailureHandler]: Shutdown completed (kafka.server.ReplicaManager$LogDirFailureHandler:66)
      [2020-03-21 00:06:16,920] INFO [ReplicaFetcherManager on broker 2] shutting down (kafka.server.ReplicaFetcherManager:66)
      [2020-03-21 00:06:16,920] INFO [ReplicaFetcherManager on broker 2] shutdown completed (kafka.server.ReplicaFetcherManager:66)
      [2020-03-21 00:06:16,921] INFO [ReplicaAlterLogDirsManager on broker 2] shutting down (kafka.server.ReplicaAlterLogDirsManager:66)
      [2020-03-21 00:06:16,921] INFO [ReplicaAlterLogDirsManager on broker 2] shutdown completed (kafka.server.ReplicaAlterLogDirsManager:66)
      [2020-03-21 00:06:16,921] INFO [ExpirationReaper-2-Fetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,931] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:16,968] INFO [ExpirationReaper-2-Fetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,968] INFO [ExpirationReaper-2-Fetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:16,968] INFO [ExpirationReaper-2-Produce]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:17,061] INFO [ExpirationReaper-2-Produce]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:17,061] INFO [ExpirationReaper-2-Produce]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:17,062] INFO [ExpirationReaper-2-DeleteRecords]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:17,120] WARN [AdminClient clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:17,121] WARN [AdminClient clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-admin] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:17,232] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:17,261] INFO [ExpirationReaper-2-DeleteRecords]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:17,262] INFO [ExpirationReaper-2-DeleteRecords]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:17,262] INFO [ExpirationReaper-2-ElectLeader]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:17,268] WARN [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:17,334] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 0 (localhost/127.0.0.1:36215) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:17,334] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:17,462] INFO [ExpirationReaper-2-ElectLeader]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:17,462] INFO [ExpirationReaper-2-ElectLeader]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
      [2020-03-21 00:06:17,538] INFO [ReplicaManager broker=2] Shut down completely (kafka.server.ReplicaManager:66)
      [2020-03-21 00:06:17,538] INFO Shutting down. (kafka.log.LogManager:66)
      [2020-03-21 00:06:17,539] INFO Shutting down the log cleaner. (kafka.log.LogCleaner:66)
      [2020-03-21 00:06:17,540] INFO [kafka-log-cleaner-thread-0]: Shutting down (kafka.log.LogCleaner:66)
      [2020-03-21 00:06:17,540] INFO [kafka-log-cleaner-thread-0]: Stopped (kafka.log.LogCleaner:66)
      [2020-03-21 00:06:17,540] INFO [kafka-log-cleaner-thread-0]: Shutdown completed (kafka.log.LogCleaner:66)
      [2020-03-21 00:06:17,673] INFO Shutdown complete. (kafka.log.LogManager:66)
      [2020-03-21 00:06:17,673] INFO [ControllerEventThread controllerId=2] Shutting down (kafka.controller.ControllerEventManager$ControllerEventThread:66)
      [2020-03-21 00:06:17,674] INFO [ControllerEventThread controllerId=2] Stopped (kafka.controller.ControllerEventManager$ControllerEventThread:66)
      [2020-03-21 00:06:17,674] INFO [ControllerEventThread controllerId=2] Shutdown completed (kafka.controller.ControllerEventManager$ControllerEventThread:66)
      [2020-03-21 00:06:17,675] INFO [PartitionStateMachine controllerId=2] Stopped partition state machine (kafka.controller.ZkPartitionStateMachine:66)
      [2020-03-21 00:06:17,676] INFO [ReplicaStateMachine controllerId=2] Stopped replica state machine (kafka.controller.ZkReplicaStateMachine:66)
      [2020-03-21 00:06:17,676] INFO [RequestSendThread controllerId=2] Shutting down (kafka.controller.RequestSendThread:66)
      [2020-03-21 00:06:17,677] INFO [RequestSendThread controllerId=2] Stopped (kafka.controller.RequestSendThread:66)
      [2020-03-21 00:06:17,677] INFO [RequestSendThread controllerId=2] Shutdown completed (kafka.controller.RequestSendThread:66)
      [2020-03-21 00:06:17,680] INFO [Controller id=2] Resigned (kafka.controller.KafkaController:66)
      [2020-03-21 00:06:17,681] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient:66)
      [2020-03-21 00:06:17,786] INFO Session: 0x100fb9d3b070002 closed (org.apache.zookeeper.ZooKeeper:1422)
      [2020-03-21 00:06:17,786] INFO EventThread shut down for session: 0x100fb9d3b070002 (org.apache.zookeeper.ClientCnxn:524)
      [2020-03-21 00:06:17,786] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient:66)
      [2020-03-21 00:06:17,787] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:06:17,924] WARN [AdminClient clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:17,925] WARN [AdminClient clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-admin] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:17,937] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:17,972] WARN [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:18,038] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 0 (localhost/127.0.0.1:36215) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:18,239] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:18,638] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:06:18,638] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:06:18,639] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:06:18,942] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 0 (localhost/127.0.0.1:36215) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:18,943] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:19,025] WARN [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:19,128] WARN [AdminClient clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-admin] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:19,129] WARN [AdminClient clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:19,344] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:19,638] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:06:19,638] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:06:19,639] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:06:19,847] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 0 (localhost/127.0.0.1:36215) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:20,048] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:20,186] WARN [Consumer clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-StreamThread-1-consumer, groupId=eos-test-app] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:20,233] WARN [AdminClient clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-admin] Connection to node 1 (localhost/127.0.0.1:45421) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:20,334] WARN [AdminClient clientId=eos-test-app-e1abba99-766c-451f-81f7-f047906b6445-admin] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:20,550] WARN [AdminClient clientId=eos-test-app-c008d814-f6fd-40d4-a541-5373b56cfd7b-admin] Connection to node 2 (localhost/127.0.0.1:42191) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:762)
      [2020-03-21 00:06:20,638] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:06:20,638] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66)
      [2020-03-21 00:06:20,639] INFO [SocketServer brokerId=2] Shutting down socket server (kafka.network.SocketServer:66)
      [2020-03-21 00:06:20,674] INFO [SocketServer brokerId=2] Shutdown completed (kafka.network.SocketServer:66)
      [2020-03-21 00:06:20,675] INFO [KafkaServer id=2] shut down completed (kafka.server.KafkaServer:66)
      [2020-03-21 00:06:20,679] INFO ConnnectionExpirerThread interrupted (org.apache.zookeeper.server.NIOServerCnxnFactory:583)
      [2020-03-21 00:06:20,681] INFO selector thread exitted run method (org.apache.zookeeper.server.NIOServerCnxnFactory:420)
      [2020-03-21 00:06:20,682] INFO selector thread exitted run method (org.apache.zookeeper.server.NIOServerCnxnFactory:420)
      [2020-03-21 00:06:20,684] INFO selector thread exitted run method (org.apache.zookeeper.server.NIOServerCnxnFactory:420)
      [2020-03-21 00:06:20,689] INFO accept thread exitted run method (org.apache.zookeeper.server.NIOServerCnxnFactory:219)
      [2020-03-21 00:06:20,692] INFO shutting down (org.apache.zookeeper.server.ZooKeeperServer:558)
      [2020-03-21 00:06:20,692] INFO Shutting down (org.apache.zookeeper.server.SessionTrackerImpl:237)
      [2020-03-21 00:06:20,692] INFO Shutting down (org.apache.zookeeper.server.PrepRequestProcessor:1007)
      [2020-03-21 00:06:20,693] INFO Shutting down (org.apache.zookeeper.server.SyncRequestProcessor:191)
      [2020-03-21 00:06:20,693] INFO PrepRequestProcessor exited loop! (org.apache.zookeeper.server.PrepRequestProcessor:155)
      [2020-03-21 00:06:20,693] INFO SyncRequestProcessor exited! (org.apache.zookeeper.server.SyncRequestProcessor:169)
      [2020-03-21 00:06:20,693] INFO shutdown of request processor complete (org.apache.zookeeper.server.FinalRequestProcessor:514)
      


              People

              • Assignee:
                John Roesler (vvcephei)
              • Reporter:
                John Roesler (vvcephei)
              • Votes:
                0
              • Watchers:
                3
