Kafka broker container fails with invalid argument - docker

I added Kafka as Docker containers. This is the docker-compose file:
kafka:
  image: "confluentinc/cp-kafka"
  container_name: kafka
  restart: always
  depends_on:
    - zookeeper
  ports:
    - '9092:9092'
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
    KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
    KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
    CONFLUENT_METRICS_ENABLE: 'false'
  networks:
    - ddn
  deploy:
    resources:
      limits:
        memory: 2G
init-kafka:
  image: confluentinc/cp-kafka
  depends_on:
    - kafka
  entrypoint: [ '/bin/sh', '-c' ]
  command: |
    "
    # blocks until kafka is reachable
    kafka-topics --bootstrap-server kafka:29092 --list
    echo -e 'Creating kafka topics'
    kafka-topics --bootstrap-server kafka:29092 --create --if-not-exists --topic EVENT_TYPE1_NOTIFY --replication-factor 1 --partitions 1
    echo -e 'Successfully created the following topics:'
    kafka-topics --bootstrap-server kafka:29092 --list
    "
  networks:
    - ddn
kafka-rest:
  container_name: kafka-rest
  image: confluentinc/cp-kafka-rest
  hostname: kafka-rest
  restart: always
  depends_on:
    - init-kafka
  ports:
    - "8082:8082"
  environment:
    KAFKA_REST_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_REST_HOST_NAME: kafka-rest
    KAFKA_REST_LISTENERS: http://kafka-rest:8082
    KAFKA_REST_BOOTSTRAP_SERVERS: kafka:29092
  healthcheck:
    test: ["CMD", "curl", "-f", "http://kafka-rest:8082/topics/EVENT_TYPE1_NOTIFY"]
    interval: 5s
    timeout: 1s
    retries: 12
  networks:
    - ddn
networks:
  ddn:
However, when running docker-compose on the integration-test agent, which is based on Windows Server, I'm encountering the following error:
[2023-01-03 13:11:26,453] ERROR [KafkaApi-1] Error when handling request: clientId=1, correlationId=5, api=LEADER_AND_ISR, version=5, body=LeaderAndIsrRequestData(controllerId=1,
controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='EVENT_TYPE1_NOTIFY',
topicId=MU_GDxwKTkywLg9wPNCrsg, partitionStates=[LeaderAndIsrPartitionState(topicName='EVENT_TYPE1_NOTIFY', partitionIndex=0, controllerEpoch=1,
leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1,
hostName='kafka', port=9092)]) (kafka.server.RequestHandlerHelper)
java.io.IOException: Invalid argument
at java.base/sun.nio.ch.FileChannelImpl.map0(Native Method)
at java.base/sun.nio.ch.FileChannelImpl.map(Unknown Source)
at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:124)
at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:54)
at kafka.log.LazyIndex$.$anonfun$forOffset$1(LazyIndex.scala:106)
at kafka.log.LazyIndex.$anonfun$get$1(LazyIndex.scala:63)
at kafka.log.LazyIndex.get(LazyIndex.scala:60)
at kafka.log.LogSegment.offsetIndex(LogSegment.scala:64)
at kafka.log.LogSegment.readNextOffset(LogSegment.scala:456)
at kafka.log.Log.$anonfun$recoverLog$6(Log.scala:944)
at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.scala:17)
at scala.Option.getOrElse(Option.scala:201)
at kafka.log.Log.recoverLog(Log.scala:944)
at kafka.log.Log.$anonfun$loadSegments$3(Log.scala:824)
at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.scala:17)
at kafka.log.Log.retryOnOffsetOverflow(Log.scala:2494)
at kafka.log.Log.loadSegments(Log.scala:824)
at kafka.log.Log.<init>(Log.scala:328)
at kafka.log.Log$.apply(Log.scala:2630)
at kafka.log.LogManager.$anonfun$getOrCreateLog$1(LogManager.scala:830)
at scala.Option.getOrElse(Option.scala:201)
at kafka.log.LogManager.getOrCreateLog(LogManager.scala:783)
at kafka.cluster.Partition.createLog(Partition.scala:344)
at kafka.cluster.Partition.createLogIfNotExists(Partition.scala:324)
at kafka.cluster.Partition.$anonfun$makeLeader$1(Partition.scala:563)
at kafka.cluster.Partition.makeLeader(Partition.scala:547)
at kafka.server.ReplicaManager.$anonfun$makeLeaders$5(ReplicaManager.scala:1568)
at kafka.utils.Implicits$MapExtensionMethods$.$anonfun$forKeyValue$1(Implicits.scala:62)
at scala.collection.mutable.HashMap$Node.foreachEntry(HashMap.scala:633)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:499)
at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:1566)
at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:1411)
at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:258)
at kafka.server.KafkaApis.handle(KafkaApis.scala:171)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:74)
at java.base/java.lang.Thread.run(Unknown Source)
Is there any configuration I'm missing?

Related

Kafka broker connection fails when creating a new topic with a cluster of 3 brokers

I'm trying to set up a Kafka cluster with 3 brokers on Docker.
The problem: whenever I do an operation (i.e. create/list/delete topics), there is always one broker that fails to connect and its Docker container restarts. This problem doesn't happen with a cluster of two brokers or a single broker.
My steps to reproduce are:
- Run docker-compose up
- Open a shell in one of the Kafka containers and create a topic: kafka-topics --bootstrap-server ":9092" --create --topic topic-name --partitions 3 --replication-factor 3
After this, one random broker is disconnected and removed from the cluster. Sometimes the response of the above command is an error saying that the replication factor cannot be larger than 2 (since one broker has been removed from the cluster).
I'm new to Kafka. I think I'm just making some silly mistake, but I have no clue what it is. I've searched through the docs but haven't found anything yet.
Here is my docker-compose file:
version: "3.9"
networks:
kafka-cluster:
driver: bridge
services:
zookeeper:
image: confluentinc/cp-zookeeper:latest
environment:
# ZOOKEEPER_SERVER_ID: 1
ZOOKEEPER_CLIENT_PORT: 2181
# ZOOKEEPER_TICK_TIME: 2000
# ZOOKEEPER_SERVERS: "zookeeper:22888:23888"
KAFKA_OPTS: "-Dzookeeper.4lw.commands.whitelist=*"
ports:
- 2181:2181
restart: unless-stopped
networks:
- kafka-cluster
kafka1:
image: confluentinc/cp-kafka:latest
container_name: kafka1
depends_on:
- zookeeper
ports:
- "9093:9093"
environment:
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENERS: CLIENT://:9092,EXTERNAL://:9093
KAFKA_ADVERTISED_LISTENERS: CLIENT://kafka1:9092,EXTERNAL://localhost:9093
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: CLIENT
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
restart: unless-stopped
networks:
- kafka-cluster
kafka2:
image: confluentinc/cp-kafka:latest
container_name: kafka2
depends_on:
- zookeeper
ports:
- "9094:9094"
environment:
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENERS: CLIENT://:9092,EXTERNAL://:9094
KAFKA_ADVERTISED_LISTENERS: CLIENT://kafka2:9092,EXTERNAL://localhost:9094
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: CLIENT
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
restart: unless-stopped
networks:
- kafka-cluster
kafka3:
image: confluentinc/cp-kafka:latest
container_name: kafka3
depends_on:
- zookeeper
ports:
- "9095:9095"
environment:
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENERS: CLIENT://:9092,EXTERNAL://:9095
KAFKA_ADVERTISED_LISTENERS: CLIENT://kafka3:9092,EXTERNAL://localhost:9095
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: CLIENT
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
restart: unless-stopped
networks:
- kafka-cluster
kafdrop:
image: obsidiandynamics/kafdrop:latest
container_name: kafdrop
ports:
- "9000:9000"
environment:
- KAFKA_BROKERCONNECT=kafka1:9092,kafka2:9092,kafka3:9092
- JVM_OPTS="-Xms32M -Xmx64M"
- SERVER_SERVLET_CONTEXTPATH="/"
depends_on:
- kafka1
networks:
- kafka-cluster
Here is the error log on the other 2 brokers:
[2022-01-17 04:32:40,078] WARN [ReplicaFetcher replicaId=1002, leaderId=1001, fetcherId=0] Error in response for fetch request (type=FetchRequest, replicaId=1002, maxWait=500, minBytes=1, maxBytes=10485760, fetchData={test-topic-3-1=PartitionData(fetchOffset=0, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[0], lastFetchedEpoch=Optional.empty), test-topic-2-1=PartitionData(fetchOffset=0, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[0], lastFetchedEpoch=Optional.empty)}, isolationLevel=READ_UNCOMMITTED, toForget=, metadata=(sessionId=28449961, epoch=INITIAL), rackId=) (kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to kafka1:9092 (id: 1001 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:104)
at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:218)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:321)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:137)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3$adapted(AbstractFetcherThread.scala:136)
at scala.Option.foreach(Option.scala:437)
at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:136)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:119)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
[2022-01-17 04:32:42,088] WARN [ReplicaFetcher replicaId=1002, leaderId=1001, fetcherId=0] Connection to node 1001 (kafka1/192.168.48.3:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2022-01-17 04:32:42,088] INFO [ReplicaFetcher replicaId=1002, leaderId=1001, fetcherId=0] Error sending fetch request (sessionId=28449961, epoch=INITIAL) to node 1001: (org.apache.kafka.clients.FetchSessionHandler)
java.io.IOException: Connection to kafka1:9092 (id: 1001 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:104)
at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:218)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:321)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:137)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3$adapted(AbstractFetcherThread.scala:136)
at scala.Option.foreach(Option.scala:437)
at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:136)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:119)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
Assuming you don't need host connections (since you're running the Kafka CLI commands directly in the containers), you could greatly simplify your Compose file:
- Remove the host ports.
- Remove the non-CLIENT listeners and stick to the defaults.
- Remove the Compose network (only useful for debugging) since one is automatically created.
All in all, you'd end up with something like this:
x-kafka-setup: &kafka-setup
  KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
  ALLOW_PLAINTEXT_LISTENER: 'yes'
version: "3.8"
services:
  zookeeper:
    image: docker.io/bitnami/zookeeper:3.7
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka1:
    image: &broker-image docker.io/bitnami/kafka:3
    environment:
      KAFKA_BROKER_ID: 1
      <<: *kafka-setup
    depends_on:
      - zookeeper
  kafka2:
    image: *broker-image
    environment:
      KAFKA_BROKER_ID: 2
      <<: *kafka-setup
    depends_on:
      - zookeeper
  kafka3:
    image: *broker-image
    environment:
      KAFKA_BROKER_ID: 3
      <<: *kafka-setup
    depends_on:
      - zookeeper
  kafdrop:
    image: obsidiandynamics/kafdrop:latest
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: kafka1:9092,kafka2:9092,kafka3:9092
      JVM_OPTS: "-Xms32M -Xmx64M"
      SERVER_SERVLET_CONTEXTPATH: /
    depends_on:
      - kafka1
      - kafka2
      - kafka3
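To sanity-check the original reproduction step against this simplified cluster, you could run the topic creation from inside one of the broker containers. This is just a sketch assuming the Bitnami image, where the Kafka scripts carry a .sh suffix and live under /opt/bitnami/kafka/bin:
# create the topic with replication factor 3 from inside the kafka1 container
docker-compose exec kafka1 /opt/bitnami/kafka/bin/kafka-topics.sh \
  --bootstrap-server localhost:9092 \
  --create --topic topic-name --partitions 3 --replication-factor 3
With all three brokers on the default listener and no per-broker host ports, a replication factor of 3 should now be satisfiable.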

Cannot run Kafka + Sonar in the same docker-compose

I'm trying to run a docker-compose with Kafka and Sonar to run my tests and get their coverage, but they cannot run in parallel and I could not find out why.
If I move them to different docker-compose files, run the Sonar one, wait for it to be up, and right after run the Kafka one, I see this in the Sonar log:
sonarqube_1 | 2021.03.07 12:06:43 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 137
sonarqube_1 | 2021.03.07 12:06:43 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
sonarqube_1 | 2021.03.07 12:06:43 INFO ce[][o.s.p.ProcessEntryPoint] Hard stopping process
Here is my docker-compose:
version: '2.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.1
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-server:5.5.1
    hostname: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_LOG_RETENTION_MS: 1000
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  sonarqube:
    image: sonarqube
    expose:
      - 9000
    ports:
      - "127.0.0.1:9000:9000"
    networks:
      - sonarnet
    environment:
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
  db:
    image: postgres
    networks:
      - sonarnet
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
networks:
  sonarnet:
    driver: bridge
volumes:
  sonarqube_conf:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_bundled-plugins:
  postgresql:
  postgresql_data:

kafka-avro-console-consumer command is not available in `confluentinc/cp-enterprise-kafka`

I have the below docker-compose.yml file:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.0.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-enterprise-kafka:6.0.0
    hostname: kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
When I run docker-compose exec kafka bash I get the bash prompt.
At the bash prompt I have kafka-console-consumer, but I don't have access to kafka-avro-console-consumer.
How can I get access to kafka-avro-console-consumer? Is it not in $PATH but in some other directory?
I tried to use the find and which commands, but those are not present in the running Docker container.
Use the confluentinc/cp-schema-registry image instead:
docker run confluentinc/cp-schema-registry \
  kafka-avro-console-consumer --bootstrap-server localhost:29092 --topic quickstart-jdbc-test --from-beginning --max-messages 10
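If you prefer to keep everything in the same docker-compose.yml rather than a separate docker run, a schema-registry service can sit next to the existing kafka service. This is only a sketch; the service name and port are illustrative, and it points at the internal listener kafka:9092 from the compose file above:
  # add under the existing services: section
  schema-registry:
    image: confluentinc/cp-schema-registry:6.0.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:9092
You could then run, for example, docker-compose exec schema-registry kafka-avro-console-consumer --bootstrap-server kafka:9092 --topic quickstart-jdbc-test --from-beginning --max-messages 10.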

Apache Kafka cluster not connecting to Zookeeper on Docker

I am creating a 3-node Zookeeper and 3-node Kafka cluster on Docker. I need to link the Kafka cluster to the Zookeeper cluster.
Below is the docker-compose.yml code:
version: '2'
services:
  zookeeper-1:
    image: zookeeper:latest
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: localhost:22888:23888;localhost:32888:33888;localhost:42888:43888
  zookeeper-2:
    image: zookeeper:latest
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: localhost:22888:23888;localhost:32888:33888;localhost:42888:43888
  zookeeper-3:
    image: zookeeper:latest
    environment:
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_CLIENT_PORT: 42181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: localhost:22888:23888;localhost:32888:33888;localhost:42888:43888
  kafka1:
    image: abc/kafka:latest
    hostname: kafka1
    network_mode: host
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181,localhost:32181,localhost:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:19092
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
  kafka2:
    image: abc/kafka:latest
    hostname: kafka2
    network_mode: host
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181,localhost:32181,localhost:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:29092
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
  kafka3:
    image: abc/kafka:latest
    hostname: kafka3
    network_mode: host
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181,localhost:32181,localhost:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:39092
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
When I run the docker-compose up command, the 3-node Zookeeper cluster gets created successfully, but the Kafka cluster doesn't. It only displays the following on the command line:
kafka_kafka2_1 exited with code 0
kafka_kafka1_1 exited with code 0
kafka_kafka3_1 exited with code 0
I have also added the Dockerfile used to create the image below:
FROM centos:7
MAINTAINER abc#xyz.com
ENV KAFKA_BIN=http://www-eu.apache.org/dist/kafka/0.11.0.2/kafka_2.11-0.11.0.2.tgz
RUN yum install -y wget java-1.8.0-openjdk \
&& cd /tmp && wget -q $KAFKA_BIN \
&& export K_TAR=/tmp/$(ls kafka* | head -1) \
&& mkdir -p /opt/apache/kafka/ && tar -zxf $K_TAR -C /opt/apache/kafka/ \
&& cd /opt/apache/kafka && ln -s $(ls) current \
&& rm -rf $K_TAR
ENV KAFKA_HOME /opt/apache/kafka/current
ENV PATH $PATH:$KAFKA_HOME/bin
ADD resources /home/kafka
RUN groupadd -r kafka \
&& useradd -r -g kafka kafka \
&& mkdir -p /home/kafka \
&& chown -R kafka:kafka /home/kafka \
&& chmod -R +x /home/kafka/scripts \
&& mkdir -p /var/log/kafka \
&& chown -R kafka:kafka /var/log/kafka \
&& mkdir -p /etc/kafka \
&& chown -R kafka:kafka /etc/kafka
USER kafka
There is no error message of any sort. Can someone please help me figure out why the Kafka cluster is not starting? I am unable to debug this, especially because the containers only exit with code 0 and provide no error message.
First, you forgot to expose the ZooKeeper ports. Second, localhost means localhost, even for the Docker containers, so your Kafka containers would try to connect to ZooKeeper within the same container, which does not work.
Have a look here how it is done correctly: https://github.com/confluentinc/cp-docker-images/blob/4.0.x/examples/cp-all-in-one/docker-compose.yml
If you don't like that approach, you can do it without exposing any port. For that, set the zookeeper client port in the zookeeper environment section of your compose file:
...
services:
  zookeeper:
    ...
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
Then connect your Kafka container to it:
...
  kafka:
    ...
    environment:
      ZOOKEEPER_SERVERS: zookeeper:2181
With this approach, everything will happen inside the docker compose network.
localhost inside Docker refers to the container instance, not the host. This is a major source of confusion for Kafka installations. You need to configure Kafka's listeners to advertise a different host:port combination — one that is reachable from within the container.
An example of how this is done for multiple brokers is shown below. Credit: The example is taken from a repo accompanying Effective Kafka.
version: "3.2"
services:
zookeeper:
image: bitnami/zookeeper:3
ports:
- 2181:2181
environment:
ALLOW_ANONYMOUS_LOGIN: "yes"
kafka-0:
image: bitnami/kafka:2
ports:
- 9092:9092
environment:
KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
ALLOW_PLAINTEXT_LISTENER: "yes"
KAFKA_LISTENERS: >-
INTERNAL://:29092,EXTERNAL://:9092
KAFKA_ADVERTISED_LISTENERS: >-
INTERNAL://kafka-0:29092,EXTERNAL://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: >-
INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
depends_on:
- zookeeper
kafka-1:
image: bitnami/kafka:2
ports:
- 9093:9093
environment:
KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
ALLOW_PLAINTEXT_LISTENER: "yes"
KAFKA_LISTENERS: >-
INTERNAL://:29092,EXTERNAL://:9093
KAFKA_ADVERTISED_LISTENERS: >-
INTERNAL://kafka-1:29092,EXTERNAL://localhost:9093
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: >-
INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
depends_on:
- zookeeper
kafka-2:
image: bitnami/kafka:2
ports:
- 9094:9094
environment:
KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
ALLOW_PLAINTEXT_LISTENER: "yes"
KAFKA_LISTENERS: >-
INTERNAL://:29092,EXTERNAL://:9094
KAFKA_ADVERTISED_LISTENERS: >-
INTERNAL://kafka-2:29092,EXTERNAL://localhost:9094
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: >-
INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
depends_on:
- zookeeper
kafdrop:
image: obsidiandynamics/kafdrop:latest
ports:
- 9000:9000
environment:
KAFKA_BROKERCONNECT: >-
kafka-0:29092,kafka-1:29092,kafka-2:29092
depends_on:
- kafka-0
- kafka-1
- kafka-2
Here, the application that connects to the brokers is Kafdrop, but you would replace it with your own image. Also, the example only uses one ZK node with 3 brokers for brevity; this should be a straightforward change.
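As a concrete illustration of that swap, a hypothetical application service could take Kafdrop's place and point at the same INTERNAL listeners. The image name and environment variable below are placeholders, not part of the original example:
  my-app:
    # hypothetical image for your own application
    image: my-org/my-kafka-app:latest
    environment:
      # placeholder variable; use whatever setting your app reads its bootstrap servers from
      BOOTSTRAP_SERVERS: kafka-0:29092,kafka-1:29092,kafka-2:29092
    depends_on:
      - kafka-0
      - kafka-1
      - kafka-2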
You need to set the proper hostnames in /etc/hosts to be able to connect to the Kafka containers from outside Docker:
cat /etc/hosts
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
127.0.0.1 kafka1
127.0.0.1 kafka2
127.0.0.1 kafka3
Here is my working config:
version: '2'
services:
  zookeeper1:
    image: confluentinc/cp-zookeeper:$CP_VERSION
    container_name: zookeeper1
    hostname: zookeeper1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: "zookeeper1:22888:23888;zookeeper2:22888:23888;zookeeper3:22888:23888"
    ports:
      - 2181:2181
    volumes:
      - ./data/zoo-1/data:/var/lib/zookeeper/data
      - ./data/zoo-1/log:/var/lib/zookeeper/log
  zookeeper2:
    image: confluentinc/cp-zookeeper:$CP_VERSION
    container_name: zookeeper2
    hostname: zookeeper2
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_SERVERS: "zookeeper1:22888:23888;zookeeper2:22888:23888;zookeeper3:22888:23888"
    ports:
      - 2182:2181
    volumes:
      - ./data/zoo-2/data:/var/lib/zookeeper/data
      - ./data/zoo-2/log:/var/lib/zookeeper/log
  zookeeper3:
    image: confluentinc/cp-zookeeper:$CP_VERSION
    container_name: zookeeper3
    hostname: zookeeper3
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_SERVERS: "zookeeper1:22888:23888;zookeeper2:22888:23888;zookeeper3:22888:23888"
    ports:
      - 2183:2181
    volumes:
      - ./data/zoo-3/data:/var/lib/zookeeper/data
      - ./data/zoo-3/log:/var/lib/zookeeper/log
  kafka1:
    image: confluentinc/cp-kafka:$CP_VERSION
    container_name: kafka1
    hostname: kafka1
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - 9091:9091
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9091
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:29092,OUTSIDE://kafka1:9091
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/logs
  kafka2:
    image: confluentinc/cp-kafka:$CP_VERSION
    container_name: kafka2
    hostname: kafka2
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka2:29092,OUTSIDE://kafka2:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/logs
  kafka3:
    image: confluentinc/cp-kafka:$CP_VERSION
    container_name: kafka3
    hostname: kafka3
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - 9093:9093
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka3:29092,OUTSIDE://kafka3:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/logs

Have Docker wait for Kafka to start up before running tests

I have the following docker-compose.yml file
version: '3'
services:
  zookeeper:
    ports:
      - "2181:2181"
    hostname: zookeeper
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-enterprise-kafka
    hostname: broker
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker:9092'
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  test:
    image: java:8
    volumes:
      - .:/src
      - ${GRADLE_USER_HOME}:/gradle_mount
    environment:
      - GRADLE_USER_HOME=/gradle_mount
      - SPRING_KAFKA_BOOTSTRAP-SERVERS=broker:9092
    depends_on:
      - broker
      - zookeeper
    working_dir: /src
    privileged: true
    command: "./gradlew --no-daemon clean test --info"
The tests pass locally but fail in Jenkins and I suspect it is because the tests are firing off before the Kafka service is running.
Is there any way to make the test container wait for the service on the Kafka container (broker) to start?
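One approach, in the spirit of the "blocks until kafka is reachable" trick in the init-kafka service earlier on this page, is to have the test container itself poll the broker port before launching Gradle. A rough sketch, assuming the java:8 image's bash (and its /dev/tcp redirection) is available:
  test:
    # ...same image, volumes and environment as above; only the command changes...
    # the folded scalar joins the lines into a single bash -c invocation
    command: >-
      bash -c "
      until timeout 1 bash -c 'cat < /dev/null > /dev/tcp/broker/9092'; do
      echo 'Waiting for Kafka...'; sleep 2;
      done;
      ./gradlew --no-daemon clean test --info
      "
Note that an open port only shows the broker accepted a TCP connection; if the tests need topics to already exist, a check like the kafka-topics --list call in the init-kafka service above is a stronger signal.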
