I am new to docker-compose.
I am trying to run https://github.com/wurstmeister/kafka-docker with an adapted docker-compose.yml file: https://github.com/geoHeil/sparkplay
This is the output of docker-compose ps.
Name Command State Ports
-----------------------------------------------------------------------------------------------------------------
docker_kafka_1 /bin/sh -c start-kafka.sh Up 0.0.0.0:32780->9092/tcp
docker_kafka_2 /bin/sh -c start-kafka.sh Up 0.0.0.0:32782->9092/tcp
docker_kafka_3 /bin/sh -c start-kafka.sh Up 0.0.0.0:32781->9092/tcp
docker_zookeeper_1 /bin/sh -c /usr/sbin/sshd ... Up 0.0.0.0:32779->2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
docker_zookeeper_2 /bin/sh -c /usr/sbin/sshd ... Up 0.0.0.0:32783->2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
docker_zookeeper_3 /bin/sh -c /usr/sbin/sshd ... Up 0.0.0.0:32784->2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
So apparently some ports should be available. I am using OS X with Kitematic / the Docker Toolbox. However, if I hit any of these addresses and ports with my browser, no connection can be established.
Edit:
This is the docker-compose.yml file: https://github.com/geoHeil/sparkplay/blob/master/docker-compose.yml
These are the logs of the Docker containers after connecting with the browser:
kafka_1 | [2015-11-14 16:15:34,000] INFO Closing socket connection to /192.168.99.1 due to invalid request: Request of length 1195725856 is not valid, it is larger than the maximum size of 104857600 bytes. (kafka.network.Processor)
kafka_1 | [2015-11-14 16:15:34,002] INFO Closing socket connection to /192.168.99.1 due to invalid request: Request of length 1195725856 is not valid, it is larger than the maximum size of 104857600 bytes. (kafka.network.Processor)
kafka_1 | [2015-11-14 16:15:34,004] INFO Closing socket connection to /192.168.99.1 due to invalid request: Request of length 1195725856 is not valid, it is larger than the maximum size of 104857600 bytes. (kafka.network.Processor)
kafka_1 | [2015-11-14 16:15:34,092] INFO Closing socket connection to /192.168.99.1 due to invalid request: Request of length 1195725856 is not valid, it is larger than the maximum size of 104857600 bytes. (kafka.network.Processor)
zookeeper_1 | 2015-11-14 16:15:51,375 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.99.1:52315
zookeeper_1 | 2015-11-14 16:15:51,375 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.99.1:52316
zookeeper_1 | 2015-11-14 16:15:51,549 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#362] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1195725856
zookeeper_1 | 2015-11-14 16:15:51,549 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1007] - Closed socket connection for client /192.168.99.1:52315 (no session established for client)
zookeeper_1 | 2015-11-14 16:15:51,550 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#362] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1195725856
zookeeper_1 | 2015-11-14 16:15:51,550 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1007] - Closed socket connection for client /192.168.99.1:52316 (no session established for client)
zookeeper_1 | 2015-11-14 16:15:51,552 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.99.1:52317
zookeeper_1 | 2015-11-14 16:15:51,552 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#362] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1195725856
zookeeper_1 | 2015-11-14 16:15:51,553 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1007] - Closed socket connection for client /192.168.99.1:52317 (no session established for client)
zookeeper_1 | 2015-11-14 16:15:51,651 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.99.1:52318
zookeeper_1 | 2015-11-14 16:15:51,651 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#362] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1195725856
zookeeper_1 | 2015-11-14 16:15:51,652 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1007] - Closed socket connection for client /192.168.99.1:52318 (no session established for client)
And:
zookeeper_1 | 2015-11-14 16:25:09,810 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor#645] - Got user-level KeeperException when processing sessionid:0x15106d08b9c0000 type:create cxid:0x4 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
zookeeper_1 | 2015-11-14 16:25:09,820 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor#645] - Got user-level KeeperException when processing sessionid:0x15106d08b9c0000 type:create cxid:0xa zxid:0x7 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config
zookeeper_1 | 2015-11-14 16:25:09,825 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor#645] - Got user-level KeeperException when processing sessionid:0x15106d08b9c0000 type:create cxid:0x10 zxid:0xb txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin
zookeeper_1 | 2015-11-14 16:25:09,991 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor#645] - Got user-level KeeperException when processing sessionid:0x15106d08b9c0000 type:setData cxid:0x1a zxid:0xf txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch
zookeeper_1 | 2015-11-14 16:25:10,030 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor#645] - Got user-level KeeperException when processing sessionid:0x15106d08b9c0000 type:delete cxid:0x27 zxid:0x11 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
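Side note: a request length of 1195725856 is the four ASCII bytes "GET " read as a 32-bit integer, i.e. the browser speaking HTTP to Kafka's and ZooKeeper's binary ports. The connections are being accepted; these ports just are not HTTP endpoints, so a browser is not a useful test. A sketch of a more meaningful smoke test, assuming the wurstmeister image sets KAFKA_HOME and KAFKA_ZOOKEEPER_CONNECT (the smoke-test topic name is made up):
docker exec -it docker_kafka_1 bash -c '
  $KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper $KAFKA_ZOOKEEPER_CONNECT \
    --replication-factor 1 --partitions 1 --topic smoke-test &&
  echo hello | $KAFKA_HOME/bin/kafka-console-producer.sh \
    --broker-list localhost:9092 --topic smoke-test'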
Following the instructions at http://sookocheff.com/post/kafka/kafka-quick-start/ helped me a lot to get Kafka up and running in Docker.
Edit:
I re-cloned the wurstmeister/kafka-docker repo and started from scratch. This seemed to work.
Related
I have a setup that had been working fine since last December but suddenly stopped working.
This is my docker-compose.yml file:
version: "3.8"
services:
zookeeper1:
image: debezium/zookeeper:1.8
container_name: zookeeper1
ports:
- 2181:2181
networks:
- internalnet
kafka1:
image: debezium/kafka:1.8
container_name: kafka1
ports:
- 9092:9092
depends_on:
- zookeeper1
environment:
- KAFKA_BROKER_ID=100
- KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181
- KAFKA_ADVERTISED_HOST_NAME=kafka1
- KAFKA_LISTENERS=LISTENER_BOB://kafka1:29092,LISTENER_FRED://localhost:9092
- KAFKA_ADVERTISED_LISTENERS=LISTENER_BOB://kafka1:29092,LISTENER_FRED://localhost:9092
- KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=LISTENER_BOB:PLAINTEXT,LISTENER_FRED:PLAINTEXT
- KAFKA_INTER_BROKER_LISTENER_NAME=LISTENER_BOB
- KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS=60000
networks:
- internalnet
volumes:
- ./kafka/kafka1/kafka_data:/kafka/data
- ./kafka/kafka1/kafka_logs:/kafka/logs
networks:
internalnet:
driver: bridge
ZooKeeper is running OK, but Kafka fails to start with the following log:
WARNING: Using default NODE_ID=1, which is valid only for non-clustered installations.
Starting in ZooKeeper mode using NODE_ID=1.
Using ZOOKEEPER_CONNECT=0.0.0.0:2181
Using configuration config/server.properties.
Using KAFKA_LISTENERS=LISTENER_BOB://kafka1:29092,LISTENER_FRED://localhost:9092 and KAFKA_ADVERTISED_LISTENERS=LISTENER_BOB://kafka1:29092,LISTENER_FRED://localhost:9092
2022-09-16 16:26:57,844 - INFO [main:Log4jControllerRegistration$#31] - Registered kafka:type=kafka.Log4jController MBean
2022-09-16 16:26:58,521 - INFO [main:X509Util#77] - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2022-09-16 16:26:58,667 - INFO [main:LoggingSignalHandler#72] - Registered signal handlers for TERM, INT, HUP
2022-09-16 16:26:58,674 - INFO [main:Logging#66] - starting
2022-09-16 16:26:58,678 - INFO [main:Logging#66] - Connecting to zookeeper on 0.0.0.0:2181
2022-09-16 16:26:58,719 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Initializing a new session to 0.0.0.0:2181.
2022-09-16 16:26:58,733 - INFO [main:Environment#98] - Client environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT
2022-09-16 16:26:58,734 - INFO [main:Environment#98] - Client environment:host.name=44841d8b6caa
2022-09-16 16:26:58,734 - INFO [main:Environment#98] - Client environment:java.version=11.0.14.1
2022-09-16 16:26:58,734 - INFO [main:Environment#98] - Client environment:java.vendor=Red Hat, Inc.
2022-09-16 16:26:58,734 - INFO [main:Environment#98] - Client environment:java.home=/usr/lib/jvm/java-11-openjdk-11.0.14.1.1-5.fc34.x86_64
2022-09-16 16:26:58,735 - INFO [main:Environment#98] - Client environment:java.class.path=/kafka/libs/activation-1.1.1.jar:/kafka/libs/aopalliance-repackaged-2.6.1.jar:/kafka/libs/argparse4j-0.7.0.jar:/kafka/libs/audience-annotations-0.5.0.jar:/kafka/libs/commons-cli-1.4.jar:/kafka/libs/commons-lang3-3.8.1.jar:/kafka/libs/connect-api-3.0.0.jar:/kafka/libs/connect-basic-auth-extension-3.0.0.jar:/kafka/libs/connect-file-3.0.0.jar:/kafka/libs/connect-json-3.0.0.jar:/kafka/libs/connect-mirror-3.0.0.jar:/kafka/libs/connect-mirror-client-3.0.0.jar:/kafka/libs/connect-runtime-3.0.0.jar:/kafka/libs/connect-transforms-3.0.0.jar:/kafka/libs/hk2-api-2.6.1.jar:/kafka/libs/hk2-locator-2.6.1.jar:/kafka/libs/hk2-utils-2.6.1.jar:/kafka/libs/jackson-annotations-2.12.3.jar:/kafka/libs/jackson-core-2.12.3.jar:/kafka/libs/jackson-databind-2.12.3.jar:/kafka/libs/jackson-dataformat-csv-2.12.3.jar:/kafka/libs/jackson-datatype-jdk8-2.12.3.jar:/kafka/libs/jackson-jaxrs-base-2.12.3.jar:/kafka/libs/jackson-jaxrs-json-provider-2.12.3.jar:/kafka/libs/jackson-module-jaxb-annotations-2.12.3.jar:/kafka/libs/jackson-module-scala_2.12-2.12.3.jar:/kafka/libs/jakarta.activation-api-1.2.1.jar:/kafka/libs/jakarta.annotation-api-1.3.5.jar:/kafka/libs/jakarta.inject-2.6.1.jar:/kafka/libs/jakarta.validation-api-2.0.2.jar:/kafka/libs/jakarta.ws.rs-api-2.1.6.jar:/kafka/libs/jakarta.xml.bind-api-2.3.2.jar:/kafka/libs/javassist-3.27.0-GA.jar:/kafka/libs/javax.servlet-api-3.1.0.jar:/kafka/libs/javax.ws.rs-api-2.1.1.jar:/kafka/libs/jaxb-api-2.3.0.jar:/kafka/libs/jersey-client-2.34.jar:/kafka/libs/jersey-common-2.34.jar:/kafka/libs/jersey-container-servlet-2.34.jar:/kafka/libs/jersey-container-servlet-core-2.34.jar:/kafka/libs/jersey-hk2-2.34.jar:/kafka/libs/jersey-server-2.34.jar:/kafka/libs/jetty-client-9.4.43.v20210629.jar:/kafka/libs/jetty-continuation-9.4.43.v20210629.jar:/kafka/libs/jetty-http-9.4.43.v20210629.jar:/kafka/libs/jetty-io-9.4.43.v20210629.jar:/kafka/libs/jetty-security-9.4.43.v20210629.jar:/kafka/libs/jetty-server-9.4.43.v20210629.jar:/kafka/libs/jetty-servlet-9.4.43.v20210629.jar:/kafka/libs/jetty-servlets-9.4.43.v20210629.jar:/kafka/libs/jetty-util-9.4.43.v20210629.jar:/kafka/libs/jetty-util-ajax-9.4.43.v20210629.jar:/kafka/libs/jline-3.12.1.jar:/kafka/libs/jopt-simple-5.0.4.jar:/kafka/libs/kafka-clients-3.0.0.jar:/kafka/libs/kafka-log4j-appender-3.0.0.jar:/kafka/libs/kafka-metadata-3.0.0.jar:/kafka/libs/kafka-raft-3.0.0.jar:/kafka/libs/kafka-server-common-3.0.0.jar:/kafka/libs/kafka-shell-3.0.0.jar:/kafka/libs/kafka-storage-3.0.0.jar:/kafka/libs/kafka-storage-api-3.0.0.jar:/kafka/libs/kafka-streams-3.0.0.jar:/kafka/libs/kafka-streams-examples-3.0.0.jar:/kafka/libs/kafka-streams-scala_2.12-3.0.0.jar:/kafka/libs/kafka-streams-test-utils-3.0.0.jar:/kafka/libs/kafka-tools-3.0.0.jar:/kafka/libs/kafka_2.12-3.0.0.jar:/kafka/libs/log4j-1.2.17.jar:/kafka/libs/lz4-java-1.7.1.jar:/kafka/libs/maven-artifact-3.8.1.jar:/kafka/libs/metrics-core-2.2.0.jar:/kafka/libs/metrics-core-4.1.12.1.jar:/kafka/libs/netty-buffer-4.1.62.Final.jar:/kafka/libs/netty-codec-4.1.62.Final.jar:/kafka/libs/netty-common-4.1.62.Final.jar:/kafka/libs/netty-handler-4.1.62.Final.jar:/kafka/libs/netty-resolver-4.1.62.Final.jar:/kafka/libs/netty-transport-4.1.62.Final.jar:/kafka/libs/netty-transport-native-epoll-4.1.62.Final.jar:/kafka/libs/netty-transport-native-unix-common-4.1.62.Final.jar:/kafka/libs/osgi-resource-locator-1.0.3.jar:/kafka/libs/paranamer-2.8.jar:/kafka/libs/plexus-utils-3.2.1.jar:/kafka/libs/reflections-0.9.12.jar:/kafka/libs/rocksdbjni-6.1
9.3.jar:/kafka/libs/scala-collection-compat_2.12-2.4.4.jar:/kafka/libs/scala-java8-compat_2.12-1.0.0.jar:/kafka/libs/scala-library-2.12.14.jar:/kafka/libs/scala-logging_2.12-3.9.3.jar:/kafka/libs/scala-reflect-2.12.14.jar:/kafka/libs/slf4j-api-1.7.30.jar:/kafka/libs/slf4j-log4j12-1.7.30.jar:/kafka/libs/snappy-java-1.1.8.1.jar:/kafka/libs/trogdor-3.0.0.jar:/kafka/libs/zookeeper-3.6.3.jar:/kafka/libs/zookeeper-jute-3.6.3.jar:/kafka/libs/zstd-jni-1.5.0-2.jar
2022-09-16 16:26:58,740 - INFO [main:Environment#98] - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
2022-09-16 16:26:58,745 - INFO [main:Environment#98] - Client environment:java.io.tmpdir=/tmp
2022-09-16 16:26:58,745 - INFO [main:Environment#98] - Client environment:java.compiler=<NA>
2022-09-16 16:26:58,745 - INFO [main:Environment#98] - Client environment:os.name=Linux
2022-09-16 16:26:58,748 - INFO [main:Environment#98] - Client environment:os.arch=amd64
2022-09-16 16:26:58,748 - INFO [main:Environment#98] - Client environment:os.version=5.10.16.3-microsoft-standard-WSL2
2022-09-16 16:26:58,748 - INFO [main:Environment#98] - Client environment:user.name=kafka
2022-09-16 16:26:58,748 - INFO [main:Environment#98] - Client environment:user.home=/kafka
2022-09-16 16:26:58,749 - INFO [main:Environment#98] - Client environment:user.dir=/kafka
2022-09-16 16:26:58,749 - INFO [main:Environment#98] - Client environment:os.memory.free=975MB
2022-09-16 16:26:58,749 - INFO [main:Environment#98] - Client environment:os.memory.max=1024MB
2022-09-16 16:26:58,749 - INFO [main:Environment#98] - Client environment:os.memory.total=1024MB
2022-09-16 16:26:58,754 - INFO [main:ZooKeeper#1006] - Initiating client connection, connectString=0.0.0.0:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$#3fc79729
2022-09-16 16:26:58,782 - INFO [main:ClientCnxnSocket#239] - jute.maxbuffer value is 4194304 Bytes
2022-09-16 16:26:58,797 - INFO [main:ClientCnxn#1736] - zookeeper.request.timeout value is 0. feature enabled=false
2022-09-16 16:26:58,807 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Waiting until connected.
2022-09-16 16:26:58,840 - INFO [main-SendThread(0.0.0.0:2181):ClientCnxn$SendThread#1181] - Opening socket connection to server 0.0.0.0/0.0.0.0:2181.
2022-09-16 16:26:58,842 - INFO [main-SendThread(0.0.0.0:2181):ClientCnxn$SendThread#1183] - SASL config status: Will not attempt to authenticate using SASL (unknown error)
2022-09-16 16:26:58,861 - WARN [main-SendThread(0.0.0.0:2181):ClientCnxn$SendThread#1300] - Session 0x0 for sever 0.0.0.0/0.0.0.0:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException.
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290)
I also got the following log from the Kafka server.log:
2022-09-16 14:26:52,915 - INFO [main:Log4jControllerRegistration$#31] - Registered kafka:type=kafka.Log4jController MBean
2022-09-16 14:26:54,942 - INFO [main:Logging#66] - starting
2022-09-16 14:26:54,965 - INFO [main:Logging#66] - Connecting to zookeeper on 0.0.0.0:2181
2022-09-16 14:26:55,082 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Initializing a new session to 0.0.0.0:2181.
2022-09-16 14:26:55,341 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Waiting until connected.
2022-09-16 14:27:55,367 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Closing.
2022-09-16 14:27:55,758 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Closed.
2022-09-16 14:27:55,797 - ERROR [main:MarkerIgnoringBase#159] - Fatal error during KafkaServer startup. Prepare to shutdown
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:254)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:250)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:108)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1981)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:457)
at kafka.server.KafkaServer.startup(KafkaServer.scala:196)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
2022-09-16 14:27:55,813 - INFO [main:Logging#66] - shutting down
2022-09-16 14:27:55,858 - INFO [main:Logging#66] - shut down completed
2022-09-16 14:27:55,861 - ERROR [main:MarkerIgnoringBase#143] - Exiting Kafka.
2022-09-16 14:27:55,864 - INFO [kafka-shutdown-hook:Logging#66] - shutting down
2022-09-16 14:42:16,757 - INFO [main:Log4jControllerRegistration$#31] - Registered kafka:type=kafka.Log4jController MBean
2022-09-16 14:42:18,622 - INFO [main:Logging#66] - starting
2022-09-16 14:42:18,624 - INFO [main:Logging#66] - Connecting to zookeeper on 0.0.0.0:2181
2022-09-16 14:42:18,656 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Initializing a new session to 0.0.0.0:2181.
2022-09-16 14:42:18,749 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Waiting until connected.
2022-09-16 14:43:18,769 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Closing.
2022-09-16 14:43:19,784 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Closed.
2022-09-16 14:43:19,796 - ERROR [main:MarkerIgnoringBase#159] - Fatal error during KafkaServer startup. Prepare to shutdown
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:254)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:250)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:108)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1981)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:457)
at kafka.server.KafkaServer.startup(KafkaServer.scala:196)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
2022-09-16 14:43:19,809 - INFO [main:Logging#66] - shutting down
2022-09-16 14:43:19,858 - INFO [main:Logging#66] - shut down completed
2022-09-16 14:43:19,870 - ERROR [main:MarkerIgnoringBase#143] - Exiting Kafka.
2022-09-16 14:43:19,876 - INFO [kafka-shutdown-hook:Logging#66] - shutting down
2022-09-16 14:53:57,029 - INFO [main:Log4jControllerRegistration$#31] - Registered kafka:type=kafka.Log4jController MBean
2022-09-16 14:53:59,011 - INFO [main:Logging#66] - starting
2022-09-16 14:53:59,017 - INFO [main:Logging#66] - Connecting to zookeeper on 0.0.0.0:2181
2022-09-16 14:53:59,115 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Initializing a new session to 0.0.0.0:2181.
2022-09-16 14:53:59,247 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Waiting until connected.
2022-09-16 14:54:59,256 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Closing.
2022-09-16 14:55:00,389 - INFO [main:Logging#66] - [ZooKeeperClient Kafka server] Closed.
2022-09-16 14:55:00,397 - ERROR [main:MarkerIgnoringBase#159] - Fatal error during KafkaServer startup. Prepare to shutdown
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:254)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:250)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:108)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1981)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:457)
at kafka.server.KafkaServer.startup(KafkaServer.scala:196)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
2022-09-16 14:55:00,400 - INFO [main:Logging#66] - shutting down
2022-09-16 14:55:00,491 - INFO [main:Logging#66] - shut down completed
2022-09-16 14:55:00,525 - ERROR [main:MarkerIgnoringBase#143] - Exiting Kafka.
2022-09-16 14:55:00,529 - INFO [kafka-shutdown-hook:Logging#66] - shutting down
I even went as far as uninstalling Docker Desktop and re-installing it, but the issue remains.
Extra info: I am running Docker Desktop (with WSL 2 and an Ubuntu distro) on Windows 11.
It's possible Debezium pushed an image update that broke your setup, so I suggest you grab the latest compose file; many examples exist at
https://github.com/debezium/debezium-examples
Look at the logs:
Using ZOOKEEPER_CONNECT=0.0.0.0:2181
It's not using KAFKA_ZOOKEEPER_CONNECT ... Remove the KAFKA_ prefix to set the appropriate value; then your logs should say something like ZOOKEEPER_CONNECT=zookeeper1:2181.
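For example, the kafka1 environment block would become something like the sketch below. ZOOKEEPER_CONNECT follows directly from the startup log above; NODE_ID in place of KAFKA_BROKER_ID is an assumption based on the "Using default NODE_ID=1" warning, so check the debezium/kafka documentation for your tag:
    environment:
      # un-prefixed names are what the image's startup script reads
      - NODE_ID=100
      - ZOOKEEPER_CONNECT=zookeeper1:2181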
I have a SASL/SCRAM configuration working with the confluentinc Kafka/ZooKeeper images:
docker-compose.yml
# Based on: https://github.com/iwpnd/tile38-kafka-sasl
version: "2"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.0.1
    hostname: zookeeper
    container_name: zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://zookeeper:2181
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 3
      KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/secrets/sasl/zookeeper_jaas.conf \
        -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider \
        -Dzookeeper.authProvider.2=org.apache.zookeeper.server.auth.DigestAuthenticationProvider \
        -Dquorum.auth.enableSasl=true \
        -Dquorum.auth.learnerRequireSasl=true \
        -Dquorum.auth.serverRequireSasl=true \
        -Dquorum.auth.learner.saslLoginContext=QuorumLearner \
        -Dquorum.auth.server.saslLoginContext=QuorumServer \
        -Dquorum.cnxn.threads.size=20 \
        -DrequireClientAuthScheme=sasl"
    volumes:
      - ./secrets:/etc/kafka/secrets/sasl
  zookeeper-add-kafka-users:
    image: confluentinc/cp-kafka:6.0.1
    container_name: "zookeeper-add-kafka-users"
    depends_on:
      - zookeeper
    command: "bash -c 'echo Waiting for Zookeeper to be ready... && \
      cub zk-ready zookeeper:2181 120 && \
      kafka-configs --zookeeper zookeeper:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name admin && \
      kafka-configs --zookeeper zookeeper:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name client '"
    environment:
      KAFKA_BROKER_ID: ignored
      KAFKA_ZOOKEEPER_CONNECT: ignored
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf
    volumes:
      - ./secrets:/etc/kafka/secrets/sasl
  broker:
    image: confluentinc/cp-kafka:6.0.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9091:9091"
      - "9101:9101"
      - "9092:9092"
    expose:
      - "29090"
    environment:
      KAFKA_OPTS: "-Dzookeeper.sasl.client=true -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf"
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,SASL_PLAINHOST:SASL_PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:29090,OUTSIDE://:9092,SASL_PLAINHOST://:9091
      KAFKA_ADVERTISED_LISTENERS: INSIDE://broker:29090,OUTSIDE://localhost:9092,SASL_PLAINHOST://broker:9091
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_SECURITY_INTER_BROKER_PROTOCAL: SASL_PLAINTEXT
      KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAINTEXT
    volumes:
      - ./secrets:/etc/kafka/secrets/sasl
secrets/kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="password"
user_admin="password"
user_client="password";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="password";
};
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="client"
password="password";
};
secrets/zk_server_jaas.conf
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
secrets/zookeeper_jaas.conf
Server {
org.apache.kafka.common.security.plain.PlainLoginModule required
user_admin="password";
};
QuorumServer {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_admin="password";
};
QuorumLearner {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="admin"
password="password";
};
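One way to exercise the SCRAM credentials once the stack is up is sketched below; the /tmp/client.properties path and the smoke-test topic are made up, and the username/password match the zookeeper-add-kafka-users service above:
docker exec -it broker bash -c '
cat > /tmp/client.properties <<EOF
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="client" password="password";
EOF
kafka-console-producer --broker-list broker:9091 --topic smoke-test --producer.config /tmp/client.properties'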
The above config works as I expect with the confluentinc/cp-zookeeper:6.0.1 image, but when I change the images to wurstmeister/zookeeper and wurstmeister/kafka:2.13-2.7.1 I get the errors below:
[36mbroker |[0m [Configuring] 'security.inter.broker.protocal' in '/opt/kafka/config/server.properties'
[36mbroker |[0m [Configuring] 'jmx.port' in '/opt/kafka/config/server.properties'
[36mbroker |[0m [Configuring] 'advertised.listeners' in '/opt/kafka/config/server.properties'
[36mbroker |[0m [Configuring] 'port' in '/opt/kafka/config/server.properties'
[36mbroker |[0m [Configuring] 'inter.broker.listener.name' in '/opt/kafka/config/server.properties'
[36mbroker |[0m Excluding KAFKA_OPTS from broker config
[36mbroker |[0m Excluding KAFKA_HOME from broker config
[36mbroker |[0m [Configuring] 'log.dirs' in '/opt/kafka/config/server.properties'
[36mbroker |[0m [Configuring] 'listeners' in '/opt/kafka/config/server.properties'
[36mbroker |[0m Excluding KAFKA_VERSION from broker config
[33mzookeeper |[0m ZooKeeper JMX enabled by default
[33mzookeeper |[0m Using config: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
[33mzookeeper |[0m 2021-12-04 13:17:55,364 [myid:] - INFO [main:QuorumPeerConfig#136] - Reading configuration from: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
[33mzookeeper |[0m 2021-12-04 13:17:55,370 [myid:] - INFO [main:DatadirCleanupManager#78] - autopurge.snapRetainCount set to 3
[33mzookeeper |[0m 2021-12-04 13:17:55,370 [myid:] - INFO [main:DatadirCleanupManager#79] - autopurge.purgeInterval set to 1
[33mzookeeper |[0m 2021-12-04 13:17:55,371 [myid:] - WARN [main:QuorumPeerMain#116] - Either no config or no quorum defined in config, running in standalone mode
[33mzookeeper |[0m 2021-12-04 13:17:55,376 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask#138] - Purge task started.
[33mzookeeper |[0m 2021-12-04 13:17:55,396 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask#144] - Purge task completed.
[33mzookeeper |[0m 2021-12-04 13:17:55,396 [myid:] - INFO [main:QuorumPeerConfig#136] - Reading configuration from: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
[33mzookeeper |[0m 2021-12-04 13:17:55,397 [myid:] - INFO [main:ZooKeeperServerMain#98] - Starting server
[33mzookeeper |[0m 2021-12-04 13:17:55,409 [myid:] - INFO [main:Environment#100] - Server environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
[33mzookeeper |[0m 2021-12-04 13:17:55,409 [myid:] - INFO [main:Environment#100] - Server environment:host.name=zookeeper
[33mzookeeper |[0m 2021-12-04 13:17:55,409 [myid:] - INFO [main:Environment#100] - Server environment:java.version=1.7.0_65
[33mzookeeper |[0m 2021-12-04 13:17:55,410 [myid:] - INFO [main:Environment#100] - Server environment:java.vendor=Oracle Corporation
[33mzookeeper |[0m 2021-12-04 13:17:55,410 [myid:] - INFO [main:Environment#100] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
[33mzookeeper |[0m 2021-12-04 13:17:55,410 [myid:] - INFO [main:Environment#100] - Server environment:java.class.path=/opt/zookeeper-3.4.13/bin/../build/classes:/opt/zookeeper-3.4.13/bin/../build/lib/*.jar:/opt/zookeeper-3.4.13/bin/../lib/slf4j-log4j12-1.7.25.jar:/opt/zookeeper-3.4.13/bin/../lib/slf4j-api-1.7.25.jar:/opt/zookeeper-3.4.13/bin/../lib/netty-3.10.6.Final.jar:/opt/zookeeper-3.4.13/bin/../lib/log4j-1.2.17.jar:/opt/zookeeper-3.4.13/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.13/bin/../lib/audience-annotations-0.5.0.jar:/opt/zookeeper-3.4.13/bin/../zookeeper-3.4.13.jar:/opt/zookeeper-3.4.13/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.13/bin/../conf:
[33mzookeeper |[0m 2021-12-04 13:17:55,410 [myid:] - INFO [main:Environment#100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
[33mzookeeper |[0m 2021-12-04 13:17:55,410 [myid:] - INFO [main:Environment#100] - Server environment:java.io.tmpdir=/tmp
[33mzookeeper |[0m 2021-12-04 13:17:55,413 [myid:] - INFO [main:Environment#100] - Server environment:java.compiler=<NA>
[33mzookeeper |[0m 2021-12-04 13:17:55,413 [myid:] - INFO [main:Environment#100] - Server environment:os.name=Linux
[33mzookeeper |[0m 2021-12-04 13:17:55,414 [myid:] - INFO [main:Environment#100] - Server environment:os.arch=amd64
[33mzookeeper |[0m 2021-12-04 13:17:55,414 [myid:] - INFO [main:Environment#100] - Server environment:os.version=5.11.0-40-generic
[33mzookeeper |[0m 2021-12-04 13:17:55,414 [myid:] - INFO [main:Environment#100] - Server environment:user.name=root
[33mzookeeper |[0m 2021-12-04 13:17:55,414 [myid:] - INFO [main:Environment#100] - Server environment:user.home=/root
[33mzookeeper |[0m 2021-12-04 13:17:55,415 [myid:] - INFO [main:Environment#100] - Server environment:user.dir=/opt/zookeeper-3.4.13
[33mzookeeper |[0m 2021-12-04 13:17:55,422 [myid:] - INFO [main:ZooKeeperServer#836] - tickTime set to 2000
[33mzookeeper |[0m 2021-12-04 13:17:55,425 [myid:] - INFO [main:ZooKeeperServer#845] - minSessionTimeout set to -1
[33mzookeeper |[0m 2021-12-04 13:17:55,426 [myid:] - INFO [main:ZooKeeperServer#854] - maxSessionTimeout set to -1
[33mzookeeper |[0m 2021-12-04 13:17:55,443 [myid:] - INFO [main:ServerCnxnFactory#117] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
[33mzookeeper |[0m 2021-12-04 13:17:55,453 [myid:] - INFO [main:NIOServerCnxnFactory#89] - binding to port 0.0.0.0/0.0.0.0:2181
[32mzookeeper-add-kafka-users |[0m Waiting for Zookeeper to be ready...
[32mzookeeper-add-kafka-users |[0m bash: line 1: cub: command not found
[36mbroker |[0m [Configuring] 'zookeeper.connect' in '/opt/kafka/config/server.properties'
[36mbroker |[0m [Configuring] 'sasl.mechanism.inter.broker.protocol' in '/opt/kafka/config/server.properties'
[36mbroker |[0m [Configuring] 'offsets.topic.replication.factor' in '/opt/kafka/config/server.properties'
[36mbroker |[0m [Configuring] 'listener.security.protocol.map' in '/opt/kafka/config/server.properties'
[36mbroker |[0m [Configuring] 'jmx.hostname' in '/opt/kafka/config/server.properties'
[36mbroker |[0m [Configuring] 'sasl.enabled.mechanisms' in '/opt/kafka/config/server.properties'
[36mbroker |[0m [Configuring] 'broker.id' in '/opt/kafka/config/server.properties'
[32mzookeeper-add-kafka-users exited with code 127
[0m[36mbroker |[0m [2021-12-04 13:17:58,599] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[36mbroker |[0m [2021-12-04 13:17:59,195] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[36mbroker |[0m [2021-12-04 13:17:59,343] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[36mbroker |[0m [2021-12-04 13:17:59,357] INFO starting (kafka.server.KafkaServer)
[36mbroker |[0m [2021-12-04 13:17:59,360] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
[36mbroker |[0m [2021-12-04 13:17:59,398] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
[36mbroker |[0m [2021-12-04 13:17:59,429] INFO Client environment:zookeeper.version=3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 20:03 GMT (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,434] INFO Client environment:host.name=broker (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,435] INFO Client environment:java.version=1.8.0_292 (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,435] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,435] INFO Client environment:java.home=/usr/lib/jvm/zulu8-ca/jre (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,435] INFO Client environment:java.class.path=/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.6.1.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/commons-cli-1.4.jar:/opt/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka/bin/../libs/connect-api-2.7.1.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.7.1.jar:/opt/kafka/bin/../libs/connect-file-2.7.1.jar:/opt/kafka/bin/../libs/connect-json-2.7.1.jar:/opt/kafka/bin/../libs/connect-mirror-2.7.1.jar:/opt/kafka/bin/../libs/connect-mirror-client-2.7.1.jar:/opt/kafka/bin/../libs/connect-runtime-2.7.1.jar:/opt/kafka/bin/../libs/connect-transforms-2.7.1.jar:/opt/kafka/bin/../libs/hk2-api-2.6.1.jar:/opt/kafka/bin/../libs/hk2-locator-2.6.1.jar:/opt/kafka/bin/../libs/hk2-utils-2.6.1.jar:/opt/kafka/bin/../libs/jackson-annotations-2.10.5.jar:/opt/kafka/bin/../libs/jackson-core-2.10.5.jar:/opt/kafka/bin/../libs/jackson-databind-2.10.5.1.jar:/opt/kafka/bin/../libs/jackson-dataformat-csv-2.10.5.jar:/opt/kafka/bin/../libs/jackson-datatype-jdk8-2.10.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.10.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.5.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.5.jar:/opt/kafka/bin/../libs/jackson-module-paranamer-2.10.5.jar:/opt/kafka/bin/../libs/jackson-module-scala_2.13-2.10.5.jar:/opt/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/kafka/bin/../libs/jakarta.annotation-api-1.3.5.jar:/opt/kafka/bin/../libs/jakarta.inject-2.6.1.jar:/opt/kafka/bin/../libs/jakarta.validation-api-2.0.2.jar:/opt/kafka/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/opt/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/kafka/bin/../libs/javassist-3.25.0-GA.jar:/opt/kafka/bin/../libs/javassist-3.26.0-GA.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka/bin/../libs/jersey-client-2.31.jar:/opt/kafka/bin/../libs/jersey-common-2.31.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.31.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.31.jar:/opt/kafka/bin/../libs/jersey-hk2-2.31.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.31.jar:/opt/kafka/bin/../libs/jersey-server-2.31.jar:/opt/kafka/bin/../libs/jetty-client-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-continuation-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-http-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-io-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-security-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-server-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-servlet-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-servlets-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-util-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-util-ajax-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-2.7.1.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-2.7.1.jar:/opt/kafka/bin/../libs/kafka-raft-2.7.1.jar:/opt/kafka/bin/../libs/kafka-streams-2.7.1.jar:/opt/kafka/bin/../libs/kafka-streams-examples-2.7.1.jar:/opt/kafka/bin/../libs/kafka-streams-scala_2.13-2.7.1.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-2.7.1.jar:/opt/kafka/bin/../libs/kafka-tools-2.7.1.jar:/opt/kafka/bin/../libs/kafka_2.13-2.7.1-sources.jar:/opt/kafka/bin/../libs/kafka_2.13-2.7.1.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bi
n/../libs/lz4-java-1.7.1.jar:/opt/kafka/bin/../libs/maven-artifact-3.6.3.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/netty-buffer-4.1.59.Final.jar:/opt/kafka/bin/../libs/netty-codec-4.1.59.Final.jar:/opt/kafka/bin/../libs/netty-common-4.1.59.Final.jar:/opt/kafka/bin/../libs/netty-handler-4.1.59.Final.jar:/opt/kafka/bin/../libs/netty-resolver-4.1.59.Final.jar:/opt/kafka/bin/../libs/netty-transport-4.1.59.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-epoll-4.1.59.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-unix-common-4.1.59.Final.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.3.jar:/opt/kafka/bin/../libs/paranamer-2.8.jar:/opt/kafka/bin/../libs/plexus-utils-3.2.1.jar:/opt/kafka/bin/../libs/reflections-0.9.12.jar:/opt/kafka/bin/../libs/rocksdbjni-5.18.4.jar:/opt/kafka/bin/../libs/scala-collection-compat_2.13-2.2.0.jar:/opt/kafka/bin/../libs/scala-java8-compat_2.13-0.9.1.jar:/opt/kafka/bin/../libs/scala-library-2.13.3.jar:/opt/kafka/bin/../libs/scala-logging_2.13-3.9.2.jar:/opt/kafka/bin/../libs/scala-reflect-2.13.3.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.30.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.30.jar:/opt/kafka/bin/../libs/snappy-java-1.1.7.7.jar:/opt/kafka/bin/../libs/zookeeper-3.5.9.jar:/opt/kafka/bin/../libs/zookeeper-jute-3.5.9.jar:/opt/kafka/bin/../libs/zstd-jni-1.4.5-6.jar (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,437] INFO Client environment:java.library.path=/usr/lib/jvm/zulu8-ca/jre/lib/amd64/server:/usr/lib/jvm/zulu8-ca/jre/lib/amd64:/usr/lib/jvm/zulu8-ca/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,437] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,440] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,441] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,441] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,441] INFO Client environment:os.version=5.11.0-40-generic (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,442] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,442] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,442] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,443] INFO Client environment:os.memory.free=1014MB (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,443] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,443] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,447] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$#4cc451f2 (org.apache.zookeeper.ZooKeeper)
[36mbroker |[0m [2021-12-04 13:17:59,459] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[36mbroker |[0m [2021-12-04 13:17:59,469] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[36mbroker |[0m [2021-12-04 13:17:59,481] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[36mbroker |[0m [2021-12-04 13:17:59,580] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[36mbroker |[0m [2021-12-04 13:17:59,582] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[36mbroker |[0m [2021-12-04 13:17:59,595] INFO Opening socket connection to server zookeeper/172.20.0.2:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[33mzookeeper |[0m 2021-12-04 13:17:59,605 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#215] - Accepted socket connection from /172.20.0.3:57480
[36mbroker |[0m [2021-12-04 13:17:59,609] INFO Socket connection established, initiating session, client: /172.20.0.3:57480, server: zookeeper/172.20.0.2:2181 (org.apache.zookeeper.ClientCnxn)
[33mzookeeper |[0m 2021-12-04 13:17:59,621 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /172.20.0.3:57480
[33mzookeeper |[0m 2021-12-04 13:17:59,624 [myid:] - INFO [SyncThread:0:FileTxnLog#213] - Creating new log file: log.1
[36mbroker |[0m [2021-12-04 13:17:59,642] INFO Session establishment complete on server zookeeper/172.20.0.2:2181, sessionid = 0x100474bc7f70000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[33mzookeeper |[0m 2021-12-04 13:17:59,642 [myid:] - INFO [SyncThread:0:ZooKeeperServer#694] - Established session 0x100474bc7f70000 with negotiated timeout 18000 for client /172.20.0.3:57480
[36mbroker |[0m [2021-12-04 13:17:59,646] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[33mzookeeper |[0m 2021-12-04 13:17:59,657 [myid:] - ERROR [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#1063] - cnxn.saslServer is null: cnxn object did not initialize its saslServer properly.
[36mbroker |[0m [2021-12-04 13:17:59,660] ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
[36mbroker |[0m javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null.
[36mbroker |[0m at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:312)
[36mbroker |[0m at org.apache.zookeeper.client.ZooKeeperSaslClient.respondToServer(ZooKeeperSaslClient.java:275)
[36mbroker |[0m at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:882)
[36mbroker |[0m at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:103)
[36mbroker |[0m at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:365)
[36mbroker |[0m at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223)
[36mbroker |[0m [2021-12-04 13:17:59,669] ERROR [ZooKeeperClient Kafka server] Auth failed. (kafka.zookeeper.ZooKeeperClient)
[36mbroker |[0m [2021-12-04 13:17:59,672] INFO EventThread shut down for session: 0x100474bc7f70000 (org.apache.zookeeper.ClientCnxn)
[33mzookeeper |[0m 2021-12-04 13:17:59,794 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#376] - Unable to read additional data from client sessionid 0x100474bc7f70000, likely client has closed socket
[33mzookeeper |[0m 2021-12-04 13:17:59,795 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1056] - Closed socket connection for client /172.20.0.3:57480 which had sessionid 0x100474bc7f70000
[36mbroker |[0m [2021-12-04 13:17:59,823] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
[36mbroker |[0m org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /consumers
[36mbroker |[0m at org.apache.zookeeper.KeeperException.create(KeeperException.java:130)
[36mbroker |[0m at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
[36mbroker |[0m at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:564)
[36mbroker |[0m at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1662)
[36mbroker |[0m at kafka.zk.KafkaZkClient.makeSurePersistentPathExists(KafkaZkClient.scala:1560)
[36mbroker |[0m at kafka.zk.KafkaZkClient.$anonfun$createTopLevelPaths$1(KafkaZkClient.scala:1552)
[36mbroker |[0m at kafka.zk.KafkaZkClient.$anonfun$createTopLevelPaths$1$adapted(KafkaZkClient.scala:1552)
[36mbroker |[0m at scala.collection.immutable.List.foreach(List.scala:333)
[36mbroker |[0m at kafka.zk.KafkaZkClient.createTopLevelPaths(KafkaZkClient.scala:1552)
[36mbroker |[0m at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:467)
[36mbroker |[0m at kafka.server.KafkaServer.startup(KafkaServer.scala:233)
[36mbroker |[0m at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
[36mbroker |[0m at kafka.Kafka$.main(Kafka.scala:82)
[36mbroker |[0m at kafka.Kafka.main(Kafka.scala)
[36mbroker |[0m [2021-12-04 13:17:59,825] INFO shutting down (kafka.server.KafkaServer)
[36mbroker |[0m [2021-12-04 13:17:59,836] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[36mbroker |[0m [2021-12-04 13:17:59,845] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
[36mbroker |[0m [2021-12-04 13:17:59,849] INFO App info kafka.server for -1 unregistered (org.apache.kafka.common.utils.AppInfoParser)
[36mbroker |[0m [2021-12-04 13:17:59,854] INFO shut down completed (kafka.server.KafkaServer)
[36mbroker |[0m [2021-12-04 13:17:59,855] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[36mbroker |[0m [2021-12-04 13:17:59,859] INFO shutting down (kafka.server.KafkaServer)
[36mbroker exited with code 1
[0m
Any tips on how to get this working with the wurstmeister images?
I am trying to write data to a Kafka topic but am stuck with some errors. Below are my configuration and error details.
Kubernetes Service:
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kafka-service NodePort 10.105.214.246 <none> 9092:30998/TCP 17m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d
zoo1 ClusterIP 10.101.3.128 <none> 2181/TCP,2888/TCP,3888/TCP 20m
Kubernetes Pods:
kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka-broker0-69c97b67f-4pmw9 1/1 Running 1 1m
zookeeper-deployment-1-796f9d9bcc-cr756 1/1 Running 0 20m
Kafka Docker Process:
docker ps | grep kafka
f79cd0196083 wurstmeister/kafka#sha256:d04dafd2b308f26dbeed8454f67c321579c2818c1eff5e8f695e14a19b1d599b "start-kafka.sh" About a minute ago Up About a minute k8s_kafka_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_1
75393e9e25c1 k8s.gcr.io/pause-amd64:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_0
Topic test is created successfully in Kafka as shown below:
docker exec k8s_kafka_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_1 /opt/kafka_2.12-2.1.0/bin/kafka-topics.sh --list --zookeeper zoo1:2181
OR
docker exec k8s_kafka_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_1 /opt/kafka_2.12-2.1.0/bin/kafka-topics.sh --list --zookeeper 10.101.3.128:2181
Output of above command:
test
As the topic is available to write data to, I executed the command below with the host machine IP 10.225.36.98 or with the service IP 10.105.214.246:
kubectl exec kafka-broker0-69c97b67f-4pmw9 -c kafka -i -t -- \
  /opt/kafka_2.12-2.1.0/bin/kafka-console-producer.sh [ --broker-list 10.225.36.98:30998 --topic test ]
>{"k":"v"}
But none of them works for me; they throw the exception below:
[2019-01-01 09:26:52,215] ERROR Error when sending message to topic test with key: null, value: 9 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
>[2019-01-01 09:27:59,513] WARN [Producer clientId=console-producer]
Connection to node -1 (/10.225.36.98:30998) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
When I tried to write to the broker with the hostname kafka:
kubectl exec kafka-broker0-69c97b67f-4pmw9 -c kafka -i -t -- /opt/kafka_2.12-2.1.0/bin/kafka-console-producer.sh [ --broker-list kafka:9092 --topic test ]
[2019-01-01 09:34:41,293] WARN Couldn't resolve server kafka:9092 from bootstrap.servers as DNS resolution failed for kafka
(org.apache.kafka.clients.ClientUtils)
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
As the host and service IPs were not working, I tried the pod IP, but I get a test=LEADER_NOT_AVAILABLE error.
kubectl exec kafka-broker0-69c97b67f-4pmw9 -c kafka -i -t -- /opt/kafka_2.12-2.1.0/bin/kafka-console-producer.sh [ --broker-list 172.17.0.7:9092 --topic test ]
>{"k":"v"}
[2019-01-01 09:52:30,733] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 1 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
After searching Google I found a command to get the list of available brokers from ZooKeeper. I tried to run it from the container and got stuck on the error below:
bash-4.4# ./opt/zookeeper/bin/zkCli.sh -server zoo1:2181 ls /brokers/ids
Connecting to zoo1:2181
Exception from Zookeeper:
2019-01-01 09:18:05,215 [myid:] - INFO [main:Environment#100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2019-01-01 09:18:05,219 [myid:] - INFO [main:Environment#100] - Client environment:host.name=zookeeper-deployment-1-796f9d9bcc-cr756
2019-01-01 09:18:05,220 [myid:] - INFO [main:Environment#100] - Client environment:java.version=1.8.0_151
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.vendor=Oracle Corporation
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.class.path=/opt/zookeeper/bin/../build/classes:/opt/zookeeper/b
in/../build/lib/*.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper/bin/../lib/slf4j-api-
1.6.1.jar:/opt/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper/bin/../lib/log4j-
1.2.16.jar:/opt/zookeeper/bin/../lib/jline-0.9.94.jar:/opt/zookeeper/bin/../zookeeper-3.4.10.jar:/opt/zookeeper/bin/../src/java/lib/*.jar:/opt/zookeeper/bin/../conf:
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.library.path=/usr/lib/jvm/java-1.8-
openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.io.tmpdir=/tmp
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:java.compiler=<NA>
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:os.name=Linux
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:os.arch=amd64
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:os.version=3.10.0-693.11.6.el7.x86_64
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:user.name=root
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:user.home=/root
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:user.dir=/
2019-01-01 09:18:05,225 [myid:] - INFO [main:ZooKeeper#438] - Initiating client connection, connectString=zoo1:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher#25f38edc
2019-01-01 09:18:05,259 [myid:] - INFO [main-SendThread(zoo1.default.svc.cluster.local:2181):ClientCnxn$SendThread#1032] - Opening socket connection to server zoo1.default.svc.cluster.local/10.101.3.128:2181. Will not attempt to authenticate using SASL (unknown error)
2019-01-01 09:18:35,280 [myid:] - WARN [main-SendThread(zoo1.default.svc.cluster.local:2181):ClientCnxn$SendThread#1108] - Client session timed out, have not heard from server in 30027ms for sessionid 0x0
2019-01-01 09:18:35,282 [myid:] - INFO [main-SendThread(zoo1.default.svc.cluster.local:2181):ClientCnxn$SendThread#1156] - Client session timed out, have not heard from server in 30027ms for sessionid 0x0, closing socket connection and attempting reconnect
Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /brokers/ids
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1532)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1560)
at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:731)
at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:599)
at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:362)
at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:290)
I also tried to create a Kafka service of type LoadBalancer, but no LoadBalancer IP is assigned to the service.
References I consulted while trying to resolve this issue:
https://rmoff.net/2018/08/02/kafka-listeners-explained/
https://github.com/wurstmeister/kafka-docker/wiki/Connectivity#additional-listener-information
https://github.com/kubernetes/contrib/issues/2891
https://dzone.com/articles/ultimate-guide-to-installing-kafka-docker-on-kuber
https://github.com/wurstmeister/kafka-docker/issues/85
Any help would be appreciated.
Try the following command to send data to the topic:
docker exec k8s_kafka_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_1 \
  /opt/kafka_2.12-2.1.0/bin/kafka-console-producer.sh \
  --broker-list kafka-service:30998 --topic test
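A note on addressing (an aside; the poster's deployment manifest isn't shown): whichever bootstrap address the producer uses, the broker's advertised host and port are what clients connect to after the metadata request, so they must be resolvable and reachable from wherever the producer runs; the listeners article linked above walks through this. A hypothetical wurstmeister-style env block for in-cluster clients:
      env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka-service
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181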
Good day,
I wanted to test a config store built using Spring Boot. The instruction given to me was to run the project using the docker-compose.yml files. I'm new to this; I tried to execute it, but while running those commands in the iMac terminal I'm facing the following exception.
platform-config-store | 2018-03-05 11:55:12.167 INFO 1 --- [ main] org.apache.zookeeper.ZooKeeper : Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState#22bbbe6
platform-config-store | 2018-03-05 11:55:12.286 INFO 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
platform-config-store | 2018-03-05 11:55:12.314 WARN 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
platform-config-store | java.net.ConnectException: Connection refused
platform-config-store | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_144]
platform-config-store | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_144]
platform-config-store | at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[zookeeper-3.4.6.jar!/:3.4.6-1569965]
platform-config-store | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[zookeeper-3.4.6.jar!/:3.4.6-1569965]
platform-config-store |
platform-config-store | 2018-03-05 11:55:13.422 INFO 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
platform-config-store | 2018-03-05 11:55:13.424 WARN 1 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
I've googled this problem, and some posts mentioned that the ZooKeeper server is not reachable, which is why this error occurs. So I configured a local ZooKeeper instance on my machine and changed the docker-compose.yml file: instead of pulling the image from the registry, I tried to use the local instance. It didn't work; I faced the same issue.
Some also posted that this is related to the firewall. I've verified that the firewall is turned off.
The following is the docker-compose file I'm executing.
docker-compose.yml
version: "3.0"
services:
zookeeper:
container_name: zookeeper
image: docker.*****.net/zookeeper
#image: zookeeper // tired to connect with local zookeeper instance
ports:
- 2181:2181
postgres:
container_name: postgres
image: postgres
ports:
- 5432:5432
environment:
- POSTGRES_PASSWORD=p3rmission
redis:
container_name: redis
image: redis
ports:
- 6379:6379
Could anyone please guide me on what I'm missing here? Help will be appreciated. Thanks.
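Based on the log above, the config store looks for ZooKeeper at localhost:2181 from inside its own container, where nothing is listening. One way around that is to run the config store as a service in the same compose file and point it at the zookeeper service name. A rough sketch in which the image name and the environment variable are assumptions (adjust them to however platform-config-store is actually built and configured):
  platform-config-store:
    container_name: platform-config-store
    image: docker.*****.net/platform-config-store   # assumed image name
    depends_on:
      - zookeeper
    environment:
      # inside the compose network the hostname "zookeeper" resolves to the zookeeper container;
      # the variable name is hypothetical; use whatever property the application actually reads
      - ZOOKEEPER_CONNECT=zookeeper:2181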
I'm trying to use Spring Boot with Kafka and ZooKeeper with Docker:
docker-compose.yml:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    restart: always
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    restart: always
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
980e6b09f4e3 wurstmeister/kafka "start-kafka.sh" 29 minutes ago Up 29 minutes 0.0.0.0:9092->9092/tcp samplespringkafkaproducerconsumermaster_kafka_1
64519d4808aa wurstmeister/zookeeper "/bin/sh -c '/usr/sb…" 2 hours ago Up 29 minutes 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp samplespringkafkaproducerconsumermaster_zookeeper_1
docker-compose up output log:
kafka_1 | [2018-01-12 13:14:49,545] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,546] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,546] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,547] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,547] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,548] INFO Client environment:os.version=4.9.60-linuxkit-aufs (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,548] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,549] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,549] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,552] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient#1534f01b (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,574] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
kafka_1 | [2018-01-12 13:14:49,578] INFO Opening socket connection to server samplespringkafkaproducerconsumermaster_zookeeper_1.samplespringkafkaproducerconsumermaster_default/192.168.32.2:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | 2018-01-12 13:14:49,591 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#192] - Accepted socket connection from /192.168.32.3:51466
kafka_1 | [2018-01-12 13:14:49,593] INFO Socket connection established to samplespringkafkaproducerconsumermaster_zookeeper_1.samplespringkafkaproducerconsumermaster_default/192.168.32.2:2181, initiating session (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | 2018-01-12 13:14:49,600 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#928] - Client attempting to establish new session at /192.168.32.3:51466
zookeeper_1 | 2018-01-12 13:14:49,603 [myid:] - INFO [SyncThread:0:FileTxnLog#203] - Creating new log file: log.fd
zookeeper_1 | 2018-01-12 13:14:49,613 [myid:] - INFO [SyncThread:0:ZooKeeperServer#673] - Established session 0x160ea8232b00000 with negotiated timeout 6000 for client /192.168.32.3:51466
kafka_1 | [2018-01-12 13:14:49,616] INFO Session establishment complete on server samplespringkafkaproducerconsumermaster_zookeeper_1.samplespringkafkaproducerconsumermaster_default/192.168.32.2:2181, sessionid = 0x160ea8232b00000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
kafka_1 | [2018-01-12 13:14:49,619] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
kafka_1 | [2018-01-12 13:14:49,992] INFO Cluster ID = Fgy9ybPPQQ-QdLINzHpmVA (kafka.server.KafkaServer)
kafka_1 | [2018-01-12 13:14:50,003] WARN No meta.properties file under dir /kafka/kafka-logs-980e6b09f4e3/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka_1 | [2018-01-12 13:14:50,065] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1 | [2018-01-12 13:14:50,065] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1 | [2018-01-12 13:14:50,067] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1 | [2018-01-12 13:14:50,167] INFO Log directory '/kafka/kafka-logs-980e6b09f4e3' not found, creating it. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,183] INFO Loading logs. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,199] INFO Logs loading complete in 15 ms. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,283] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,291] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,633] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
kafka_1 | [2018-01-12 13:14:50,639] INFO [SocketServer brokerId=1005] Started 1 acceptor threads (kafka.network.SocketServer)
kafka_1 | [2018-01-12 13:14:50,673] INFO [ExpirationReaper-1005-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,674] INFO [ExpirationReaper-1005-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,675] INFO [ExpirationReaper-1005-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,691] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka_1 | [2018-01-12 13:14:50,753] INFO [ExpirationReaper-1005-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,757] INFO [ExpirationReaper-1005-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,762] INFO [ExpirationReaper-1005-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,777] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
kafka_1 | [2018-01-12 13:14:50,791] INFO [GroupCoordinator 1005]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-01-12 13:14:50,791] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
kafka_1 | [2018-01-12 13:14:50,793] INFO [GroupCoordinator 1005]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-01-12 13:14:50,798] INFO [GroupMetadataManager brokerId=1005] Removed 0 expired offsets in 5 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-01-12 13:14:50,811] INFO [ProducerId Manager 1005]: Acquired new producerId block (brokerId:1005,blockStartProducerId:5000,blockEndProducerId:5999) by writing to Zk with path version 6 (kafka.coordinator.transaction.ProducerIdManager)
kafka_1 | [2018-01-12 13:14:50,848] INFO [TransactionCoordinator id=1005] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_1 | [2018-01-12 13:14:50,850] INFO [Transaction Marker Channel Manager 1005]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka_1 | [2018-01-12 13:14:50,850] INFO [TransactionCoordinator id=1005] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_1 | [2018-01-12 13:14:50,949] INFO Creating /brokers/ids/1005 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
zookeeper_1 | 2018-01-12 13:14:50,952 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor#649] - Got user-level KeeperException when processing sessionid:0x160ea8232b00000 type:create cxid:0x70 zxid:0x102 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
zookeeper_1 | 2018-01-12 13:14:50,952 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor#649] - Got user-level KeeperException when processing sessionid:0x160ea8232b00000 type:create cxid:0x71 zxid:0x103 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
kafka_1 | [2018-01-12 13:14:50,957] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
kafka_1 | [2018-01-12 13:14:50,959] INFO Registered broker 1005 at path /brokers/ids/1005 with addresses: EndPoint(localhost,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
kafka_1 | [2018-01-12 13:14:50,961] WARN No meta.properties file under dir /kafka/kafka-logs-980e6b09f4e3/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka_1 | [2018-01-12 13:14:50,992] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2018-01-12 13:14:50,993] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2018-01-12 13:14:51,004] INFO [KafkaServer id=1005] started (kafka.server.KafkaServer)
zookeeper_1 | 2018-01-12 13:14:51,263 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor#649] - Got user-level KeeperException when processing sessionid:0x160ea8232b00000 type:delete cxid:0xe3 zxid:0x105 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
kafka_1 | [2018-01-12 13:24:50,793] INFO [GroupMetadataManager brokerId=1005] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-01-12 13:34:50,795] INFO [GroupMetadataManager brokerId=1005] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
Kafka Maven dependency in the Producer and Consumer:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.9.RELEASE</version>
    <relativePath/>
</parent>

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
application.properties in Producer:
spring.kafka.producer.bootstrap-servers=0.0.0.0:9092
spring.kafka.consumer.topic=kafka_topic
server.port=8080
application.properties in Consumer:
spring.kafka.consumer.bootstrap-servers=0.0.0.0:9092
spring.kafka.consumer.group-id=WorkUnitApp
spring.kafka.consumer.topic=kafka_topic
server.port=8081
Consumer:
@Component
public class Consumer {
    private static final Logger LOGGER = LoggerFactory.getLogger(Consumer.class);

    @KafkaListener(topics = "${spring.kafka.consumer.topic}")
    public void receive(ConsumerRecord<?, ?> consumerRecord) {
        LOGGER.info("received payload='{}'", consumerRecord.toString());
    }
}
Producer:
@Component
public class Producer {
    private static final Logger LOGGER = LoggerFactory.getLogger(Producer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void send(String topic, String payload) {
        LOGGER.info("sending payload='{}' to topic='{}'", payload, topic);
        kafkaTemplate.send(topic, payload);
    }
}
ConsumerConfig log:
2018-01-12 15:25:48.220 INFO 20919 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [0.0.0.0:9092]
check.crcs = true
client.id = consumer-1
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = WorkUnitApp
heartbeat.interval.ms = 3000
interceptor.classes = null
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ProducerConfig log:
2018-01-12 15:26:27.956 INFO 20924 --- [nio-8080-exec-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [0.0.0.0:9092]
buffer.memory = 33554432
client.id = producer-1
compression.type = none
connections.max.idle.ms = 540000
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
timeout.ms = 30000
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
When I try to send a message I get an exception:
producer.send("kafka_topic", "test")
exception log:
2018-01-12 15:26:27.975 INFO 20924 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.10.1.1
2018-01-12 15:26:27.975 INFO 20924 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : f10ef2720b03b247
2018-01-12 15:26:58.152 ERROR 20924 --- [ad | producer-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='test' to topic kafka_topic:
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for kafka_topic-0 due to 30033 ms has passed since batch creation plus linger time
How can I fix it?
The problem is not with sending the key as null; the connection to a broker may not be established.
Try using a local Kafka installation.
If you are on a Mac, note that Docker for Mac has some networking limitations:
https://docs.docker.com/docker-for-mac/networking/#known-limitations-use-cases-and-workarounds
I ran into the same issue. The issue was with my docker-compose file. I'm not 100% sure, but I think KAFKA_ADVERTISED_HOST_NAME and KAFKA_ADVERTISED_LISTENERS both need to reference localhost. My working compose file:
version: '2'
networks:
  sb_net:
    driver: bridge
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    hostname: zookeeper
    networks:
      - sb_net
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    networks:
      - sb_net
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
My mistake was carelessness: I needed to add a link:
    links:
      - zookeeper:zookeeper
full docker-compose.yml:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    restart: always
    ports:
      - 2181:2181
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    restart: always
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    links:
      - zookeeper:zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
I had the same problem: by default my Kafka only accepted connections from 127.0.0.1/localhost.
My solution:
Add this line to Kafka's server.properties, then restart the Kafka service:
listeners=PLAINTEXT://192.168.31.72:9092
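When Kafka runs in docker-compose, the same setting is usually supplied through environment variables instead of editing server.properties inside the container. A sketch assuming the wurstmeister/kafka image, which maps KAFKA_* variables onto server.properties; inside a container the broker typically binds to all interfaces and advertises the externally reachable address (here the host IP from the answer above):
  kafka:
    image: wurstmeister/kafka
    ports:
      - 9092:9092
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # binds the listener inside the container
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      # address that is registered in ZooKeeper and returned to clients
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.31.72:9092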