I have the following docker-compose.yml file:
version: '3'
services:
  zookeeper:
    ports:
      - "2181:2181"
    hostname: zookeeper
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-enterprise-kafka
    hostname: broker
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker:9092'
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  test:
    image: java:8
    volumes:
      - .:/src
      - ${GRADLE_USER_HOME}:/gradle_mount
    environment:
      - GRADLE_USER_HOME=/gradle_mount
      - SPRING_KAFKA_BOOTSTRAP-SERVERS=broker:9092
    depends_on:
      - broker
      - zookeeper
    working_dir: /src
    privileged: true
    command: "./gradlew --no-daemon clean test --info"
The tests pass locally but fail in Jenkins, and I suspect it's because the tests fire off before the Kafka service is up.
Is there any way to make the test container wait for the service on the Kafka container (broker) to start?
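One pattern that usually works (a sketch, not taken from the original file): give the broker a healthcheck and gate the test container on it. Two caveats: depends_on with a condition needs Compose file format 2.1 or a recent Compose release that implements the Compose Specification, and the image's kafka-topics must support --bootstrap-server (older versions only accept --zookeeper):
services:
  broker:
    # ...broker config as above...
    healthcheck:
      # assumes kafka-topics in this image supports --bootstrap-server
      test: ["CMD-SHELL", "kafka-topics --bootstrap-server localhost:9092 --list"]
      interval: 10s
      timeout: 10s
      retries: 12
  test:
    # ...test config as above...
    depends_on:
      broker:
        condition: service_healthy
Even with this gate, keep some retry logic in the tests themselves: "port open" and "broker fully registered" are not quite the same thing.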
Related
I'm trying to setup a kafka cluster with 3 brokers on Docker.
The problem is: whenever I do an operation (i.e. create/list/delete topics), one broker always fails to connect and its Docker container restarts. This problem doesn't happen with a cluster of 2 brokers or a single broker.
My steps to reproduce are:
Run docker-compose up
Open a shell in one of the Kafka containers and create a topic: kafka-topics --bootstrap-server ":9092" --create --topic topic-name --partitions 3 --replication-factor 3
After this, one random broker disconnects and is removed from the cluster. Sometimes the response to the above command is an error saying the replication factor cannot be larger than 2 (since one broker has been removed from the cluster).
I'm new to Kafka. I think I'm just making some silly mistake, but I don't have any clue what it is. I've searched through the docs but haven't found anything yet.
Here is my docker-compose file:
version: "3.9"
networks:
kafka-cluster:
driver: bridge
services:
zookeeper:
image: confluentinc/cp-zookeeper:latest
environment:
# ZOOKEEPER_SERVER_ID: 1
ZOOKEEPER_CLIENT_PORT: 2181
# ZOOKEEPER_TICK_TIME: 2000
# ZOOKEEPER_SERVERS: "zookeeper:22888:23888"
KAFKA_OPTS: "-Dzookeeper.4lw.commands.whitelist=*"
ports:
- 2181:2181
restart: unless-stopped
networks:
- kafka-cluster
kafka1:
image: confluentinc/cp-kafka:latest
container_name: kafka1
depends_on:
- zookeeper
ports:
- "9093:9093"
environment:
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENERS: CLIENT://:9092,EXTERNAL://:9093
KAFKA_ADVERTISED_LISTENERS: CLIENT://kafka1:9092,EXTERNAL://localhost:9093
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: CLIENT
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
restart: unless-stopped
networks:
- kafka-cluster
kafka2:
image: confluentinc/cp-kafka:latest
container_name: kafka2
depends_on:
- zookeeper
ports:
- "9094:9094"
environment:
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENERS: CLIENT://:9092,EXTERNAL://:9094
KAFKA_ADVERTISED_LISTENERS: CLIENT://kafka2:9092,EXTERNAL://localhost:9094
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: CLIENT
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
restart: unless-stopped
networks:
- kafka-cluster
kafka3:
image: confluentinc/cp-kafka:latest
container_name: kafka3
depends_on:
- zookeeper
ports:
- "9095:9095"
environment:
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENERS: CLIENT://:9092,EXTERNAL://:9095
KAFKA_ADVERTISED_LISTENERS: CLIENT://kafka3:9092,EXTERNAL://localhost:9095
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: CLIENT
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
restart: unless-stopped
networks:
- kafka-cluster
kafdrop:
image: obsidiandynamics/kafdrop:latest
container_name: kafdrop
ports:
- "9000:9000"
environment:
- KAFKA_BROKERCONNECT=kafka1:9092,kafka2:9092,kafka3:9092
- JVM_OPTS="-Xms32M -Xmx64M"
- SERVER_SERVLET_CONTEXTPATH="/"
depends_on:
- kafka1
networks:
- kafka-cluster
Here is the error log on the other 2 brokers:
[2022-01-17 04:32:40,078] WARN [ReplicaFetcher replicaId=1002, leaderId=1001, fetcherId=0] Error in response for fetch request (type=FetchRequest, replicaId=1002, maxWait=500, minBytes=1, maxBytes=10485760, fetchData={test-topic-3-1=PartitionData(fetchOffset=0, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[0], lastFetchedEpoch=Optional.empty), test-topic-2-1=PartitionData(fetchOffset=0, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[0], lastFetchedEpoch=Optional.empty)}, isolationLevel=READ_UNCOMMITTED, toForget=, metadata=(sessionId=28449961, epoch=INITIAL), rackId=) (kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to kafka1:9092 (id: 1001 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:104)
at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:218)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:321)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:137)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3$adapted(AbstractFetcherThread.scala:136)
at scala.Option.foreach(Option.scala:437)
at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:136)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:119)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
[2022-01-17 04:32:42,088] WARN [ReplicaFetcher replicaId=1002, leaderId=1001, fetcherId=0] Connection to node 1001 (kafka1/192.168.48.3:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2022-01-17 04:32:42,088] INFO [ReplicaFetcher replicaId=1002, leaderId=1001, fetcherId=0] Error sending fetch request (sessionId=28449961, epoch=INITIAL) to node 1001: (org.apache.kafka.clients.FetchSessionHandler)
java.io.IOException: Connection to kafka1:9092 (id: 1001 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:104)
at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:218)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:321)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:137)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3$adapted(AbstractFetcherThread.scala:136)
at scala.Option.foreach(Option.scala:437)
at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:136)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:119)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
Assuming you don't need host connections (since you're running the Kafka CLI commands directly in the containers), you could greatly simplify your Compose file:
Remove the host ports.
Remove the non-CLIENT listeners, and stick to the defaults.
Remove the custom Compose network, since a default one is created automatically.
All in all, you'd end up with something like this:
x-kafka-setup: &kafka-setup
KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
ALLOW_PLAINTEXT_LISTENER: 'yes'
version: "3.8"
services:
zookeeper:
image: docker.io/bitnami/zookeeper:3.7
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka1:
image: &broker-image docker.io/bitnami/kafka:3
environment:
KAFKA_BROKER_ID: 1
<<: *kafka-setup
depends_on:
- zookeeper
kafka2:
image: *broker-image
environment:
KAFKA_BROKER_ID: 2
<<: *kafka-setup
depends_on:
- zookeeper
kafka3:
image: *broker-image
environment:
KAFKA_BROKER_ID: 3
<<: *kafka-setup
depends_on:
- zookeeper
kafdrop:
image: obsidiandynamics/kafdrop:latest
ports:
- "9000:9000"
environment:
KAFKA_BROKERCONNECT: kafka1:9092,kafka2:9092,kafka3:9092
JVM_OPTS: "-Xms32M -Xmx64M"
SERVER_SERVLET_CONTEXTPATH: /
depends_on:
- kafka1
- kafka2
- kafka3
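To sanity-check the cluster afterwards, creating a replication-factor-3 topic from inside one of the brokers should now succeed. A sketch, assuming the Bitnami image's kafka-topics.sh is on the PATH:
docker-compose exec kafka1 kafka-topics.sh --bootstrap-server localhost:9092 --create --topic topic-name --partitions 3 --replication-factor 3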
I'm learning docker and trying to setup a Kafka Cluster with 3 Zookeepers instances and 3 brokers.
Sometimes the brokers run fine, but sometimes they exit with status 1, and the logs show the following message:
kafka.common.InconsistentClusterIdException: The Cluster ID
p2ouSB1rTKyUddeL26jpxg doesn't match stored clusterId
Some(4QuK9rOmTleiRryAOA5zwA) in meta.properties. The broker is trying
to join the wrong cluster. Configured zookeeper.connect may be wrong.
I searched a lot before posting this question, and nothing I found resolved the issue.
At first I thought it was the volumes I was using to persist Zookeeper data and logs on the host, but now that I've gotten rid of them I'm still getting the same error, and I have no idea how to set up a cluster ID through docker-compose and share it among all brokers. Is there a fix for this?
I'm running Docker version 20.10.10, build b485636 and Docker Compose version v2.1.1 on Windows.
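For background, the stored cluster ID the error complains about lives in meta.properties under each broker's log directory, while the expected ID comes from Zookeeper; the mismatch appears whenever one side keeps old state and the other starts fresh. A minimal full reset, assuming none of the data needs to be preserved, would be:
docker-compose down -v   # removes containers and named volumes, and with them any stale meta.properties
docker-compose up -d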
This is my docker-compose.yml:
version: '3.9'
services:
  zookeeper-1:
    hostname: zookeeper-1
    container_name: zookeeper-1
    extends:
      file: ./docker/services.yml
      service: zookeeper
    ports:
      - '2181:2181'
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
  zookeeper-2:
    hostname: zookeeper-2
    container_name: zookeeper-2
    extends:
      file: ./docker/services.yml
      service: zookeeper
    ports:
      - '2182:2182'
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 2182
  zookeeper-3:
    hostname: zookeeper-3
    container_name: zookeeper-3
    extends:
      file: ./docker/services.yml
      service: zookeeper
    ports:
      - '2183:2183'
    environment:
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_CLIENT_PORT: 2183
  broker-1:
    container_name: broker-1
    extends:
      file: ./docker/services.yml
      service: broker
    ports:
      - '19092:9092'
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper-1:2181,zookeeper-2:2182,zookeeper-3:2183'
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-1:9092,PLAINTEXT_HOST://localhost:19092
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker-1:9092
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
  broker-2:
    container_name: broker-2
    extends:
      file: ./docker/services.yml
      service: broker
    ports:
      - '29092:9092'
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper-1:2181,zookeeper-2:2182,zookeeper-3:2183'
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-2:9092,PLAINTEXT_HOST://localhost:29092
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker-2:9092
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
  broker-3:
    container_name: broker-3
    extends:
      file: ./docker/services.yml
      service: broker
    ports:
      - '39092:9092'
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper-1:2181,zookeeper-2:2182,zookeeper-3:2183'
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-3:9092,PLAINTEXT_HOST://localhost:39092
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker-3:9092
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
  connect:
    image: confluentinc/cp-kafka-connect:latest
    hostname: connect
    container_name: connect
    depends_on:
      - broker-1
      - broker-2
      - broker-3
    ports:
      - '8083:8083'
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker-1:9092,broker-2:9092,broker-3:9092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      # CLASSPATH required due to CC-2422
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-6.2.1.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: 'io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor'
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: 'io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor'
      CONNECT_PLUGIN_PATH: '/usr/share/java,/usr/share/confluent-hub-components,/connect-plugins'
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
  control-center:
    image: confluentinc/cp-enterprise-control-center:6.2.1
    hostname: control-center
    container_name: control-center
    depends_on:
      - broker-1
      - broker-2
      - broker-3
      - connect
    ports:
      - '9021:9021'
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker-1:9092,broker-2:9092,broker-3:9092'
      CONTROL_CENTER_CONNECT_CONNECT-DEFAULT_CLUSTER: 'connect:8083'
      #CONTROL_CENTER_KSQL_KSQLDB1_URL: 'http://ksqldb-server:8088'
      #CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: 'http://localhost:8088'
      #CONTROL_CENTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021
This is my services.yml file:
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.1
    environment:
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_PEER_PORT: 2888
      ZOOKEEPER_LEADER_PORT: 3888
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
  broker:
    image: confluentinc/cp-server:6.2.1
    environment:
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      # KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
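For reference, extends merges the environment maps key by key, so each broker gets the shared settings plus its own overrides; broker-1 effectively runs with (abridged):
broker-1:
  image: confluentinc/cp-server:6.2.1
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper-1:2181,zookeeper-2:2182,zookeeper-3:2183'
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-1:9092,PLAINTEXT_HOST://localhost:19092
    # ...plus every shared broker environment key from services.yml above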
I'm trying to run a docker-compose with Kafka and Sonar to run my tests and get their coverage, but the two cannot run in parallel, and I could not find out why.
If I move them to different docker-compose files, start the Sonar one, wait for it to be up, and then start the Kafka one right after, I can see this in the Sonar log:
sonarqube_1 | 2021.03.07 12:06:43 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 137
sonarqube_1 | 2021.03.07 12:06:43 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
sonarqube_1 | 2021.03.07 12:06:43 INFO ce[][o.s.p.ProcessEntryPoint] Hard stopping process
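Exit value 137 is 128 + 9, i.e. the process was killed with SIGKILL, which in Docker almost always means the kernel OOM killer fired when both stacks ran at once; SonarQube's embedded Elasticsearch is the usual victim (it also requires vm.max_map_count=262144 on the host). One way to make the two fit in memory, sketched against the broker service below (KAFKA_HEAP_OPTS is honored by the Confluent images; the sizes are assumptions to tune):
broker:
  environment:
    # assumption: a trimmed broker heap leaves enough RAM for Elasticsearch
    KAFKA_HEAP_OPTS: "-Xms256M -Xmx512M"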
Here is my docker-compose:
version: '2.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.1
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-server:5.5.1
    hostname: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_LOG_RETENTION_MS: 1000
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  sonarqube:
    image: sonarqube
    expose:
      - 9000
    ports:
      - "127.0.0.1:9000:9000"
    networks:
      - sonarnet
    environment:
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
  db:
    image: postgres
    networks:
      - sonarnet
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
networks:
  sonarnet:
    driver: bridge
volumes:
  sonarqube_conf:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_bundled-plugins:
  postgresql:
  postgresql_data:
I'm trying to configure Kafka with Docker and Spring.
The Spring application is responsible for producing messages; a consumer running in Docker should consume them. I have prepared the container and am trying to connect.
Unfortunately, I get an error:
[Producer clientId=1-1] 1 partitions have leader brokers without a matching listener, including [myTopic-0]
My docker-compose:
version: '3'
services:
  zookeeper:
    container_name: zookeeper
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    container_name: kafka
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
My Spring configuration:
Please help.
Using the Bitnami stack:
version: '3'
services:
  zookeeper:
    container_name: zookeeper
    image: 'bitnami/zookeeper:3.4.14'
    ports:
      - '2181:2181'
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    restart: unless-stopped
  kafka:
    container_name: kafka
    image: 'bitnami/kafka:2.3.0'
    ports:
      - '9092:9092'
    volumes:
      - 'kafka_data:/bitnami'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
    depends_on:
      - zookeeper
    restart: unless-stopped
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
Check their guide; there are a lot of insights there into how it works, Kafka cluster setups, and more:
https://hub.docker.com/r/bitnami/kafka
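If you have kafkacat installed on the host, a quick way to confirm the advertised localhost:9092 listener is reachable (a sketch, not part of the Bitnami guide):
kafkacat -b localhost:9092 -L   # prints broker and topic metadata if the listener works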
I think your listener address is not configured well. It should be localhost or the actual Docker hostname.
This is a working example from my project. Note that I use the official Confluent image, but it should work with wurstmeister's image too:
version: '3.7'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    hostname: zookeeper
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: "2181"
      ZOOKEEPER_TICK_TIME: "2000"
      ZOOKEEPER_SERVERS: "zookeeper:22888:23888"
    ports:
      - 2181:2181
  kafka:
    image: confluentinc/cp-kafka
    hostname: kafka
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,PLAINTEXT_UI:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://kafka:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_BROKER_ID: 1
      KAFKA_BROKER_RACK: "r1"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
    ports:
      - 9092:9092
      - 29092:29092
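With that split, clients inside the Compose network bootstrap against kafka:29092, while clients on the host use localhost:9092, and the broker hands back whichever advertised address matches the listener the client connected to. For the Spring app in the question, that means something like spring.kafka.bootstrap-servers=localhost:9092 when it runs outside Docker.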
My docker-compose file:
version: "3"
services:
zookeeper:
container_name: zookeeper
image: confluentinc/cp-zookeeper:5.3.1
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
volumes:
- zookeeper-data:/var/lib/zookeeper/data
- zookeeper-log:/var/lib/zookeeper/log
kafka:
container_name: kafka
image: confluentinc/cp-kafka:5.3.1
depends_on:
- zookeeper
ports:
- "9092:9092"
- "29092:29092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://${DOCKER_HOST_IP:-127.0.0.1}:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
volumes:
- kafka-data:/var/lib/kafka/data
volumes:
zookeeper-data:
zookeeper-log:
kafka-data:
I'm trying to connect from my job, which runs inside a container, to a Kafka cluster running in another container.
No matter what I've tried, I can't send the event to the Kafka topic.
I've tried telnet and kafkacat against my Kafka's listener port, and everything works just fine:
Telnet output
Kafkacat output
This is my job compose file; "172.16.33.91" is my local IP address:
version: '3'
services:
  events-processor:
    build:
      context: ./events-processor
    extra_hosts:
      - "host:172.16.33.91"
    restart: unless-stopped
This is my job code, which sends the numbers 0 through 999 to an existing topic, numtest:
from time import sleep
from json import dumps
from kafka import KafkaProducer

if __name__ == "__main__":
    # serialize each dict as JSON before sending
    producer = KafkaProducer(bootstrap_servers=['host:9093'],
                             value_serializer=lambda x: dumps(x).encode('utf-8'))

    for e in range(1000):
        data = {'number': e}
        producer.send('numtest', value=data)
        print(data)
        sleep(5)
This is my kafka-zookeeper compose file:
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    volumes:
      - zk-data:/var/lib/zookeeper/data
      - zk-logs:/var/lib/zookeeper/log
      - secrets:/etc/zookeeper/secrets
    restart: unless-stopped
  kafka:
    image: confluentinc/cp-kafka
    depends_on:
      - zookeeper
    ports:
      - "9093:9093"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL://:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:9092,EXTERNAL://:9093
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
    volumes:
      - kafka-data:/var/lib/kafka/data
      - secrets:/etc/kafka/secrets
    restart: unless-stopped
volumes:
  zk-logs: {}
  zk-data: {}
  kafka-data: {}
  secrets: {}
Does anyone have any idea what I've done wrong? Any help appreciated!
I remember encountering a similar issue with connectivity. It turns out that for your container to be able to access ports on your host machine, you may need to add rules to your firewall to open them up.
For example, with iptables you can add a rule like the one below to let your host machine accept requests from your Docker containers:
-A INPUT -i docker0 -j ACCEPT
Alternatively, you can put your job container on the same Docker network as your Kafka/Zookeeper, either by putting your application in the same docker-compose file or by using a common external Docker network, as sketched below.
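A sketch of the external-network variant, assuming a network created beforehand with docker network create kafka-net and wired into both Compose files; the job can then bootstrap against kafka:9092 (the INTERNAL advertised listener) instead of the extra_hosts entry:
# added to both docker-compose files
services:
  events-processor:   # likewise kafka and zookeeper in the other file
    networks:
      - kafka-net
networks:
  kafka-net:
    external: true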
If there is a better place for this information, or keywords that would have helped you find the solution, let the community know.
Multiple blogs have been written on this already:
KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,HOST://localhost:9093
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,HOST:PLAINTEXT
KAFKA_LISTENERS: INTERNAL://0.0.0.0:9092,HOST://0.0.0.0:9093
Using a ://:port address will only listen on the local hostname, container or not.
You must accept external connections by binding to all interfaces: 0.0.0.0.
You must advertise the internal and/or any external hostnames.
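With the listeners fixed like this, the job code from the question can keep bootstrap_servers=['host:9093'] through its extra_hosts entry: the broker now binds 0.0.0.0:9093, so the connection from the other network is accepted, and it advertises a hostname the producer can actually resolve.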
Here is another working sample:
version: '2'
services:
  zookeeper1:
    image: confluentinc/cp-zookeeper:$CP_VERSION
    container_name: zookeeper1
    hostname: zookeeper1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: "zookeeper1:22888:23888;zookeeper2:22888:23888;zookeeper3:22888:23888"
    ports:
      - 2181:2181
    volumes:
      - ./data/zoo-1/data:/var/lib/zookeeper/data
      - ./data/zoo-1/log:/var/lib/zookeeper/log
  zookeeper2:
    image: confluentinc/cp-zookeeper:$CP_VERSION
    container_name: zookeeper2
    hostname: zookeeper2
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_SERVERS: "zookeeper1:22888:23888;zookeeper2:22888:23888;zookeeper3:22888:23888"
    ports:
      - 2182:2181
    volumes:
      - ./data/zoo-2/data:/var/lib/zookeeper/data
      - ./data/zoo-2/log:/var/lib/zookeeper/log
  zookeeper3:
    image: confluentinc/cp-zookeeper:$CP_VERSION
    container_name: zookeeper3
    hostname: zookeeper3
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_SERVERS: "zookeeper1:22888:23888;zookeeper2:22888:23888;zookeeper3:22888:23888"
    ports:
      - 2183:2181
    volumes:
      - ./data/zoo-3/data:/var/lib/zookeeper/data
      - ./data/zoo-3/log:/var/lib/zookeeper/log
  kafka1:
    image: confluentinc/cp-kafka:$CP_VERSION
    container_name: kafka1
    hostname: kafka1
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - 9091:9091
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9091
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:29092,OUTSIDE://kafka1:9091
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/logs
  kafka2:
    image: confluentinc/cp-kafka:$CP_VERSION
    container_name: kafka2
    hostname: kafka2
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka2:29092,OUTSIDE://kafka2:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/logs
  kafka3:
    image: confluentinc/cp-kafka:$CP_VERSION
    container_name: kafka3
    hostname: kafka3
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - 9093:9093
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka3:29092,OUTSIDE://kafka3:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/logs
Don't forget to edit /etc/hosts with the proper hostnames to be able to work with the above Kafka cluster from outside Docker.
My /etc/hosts file is shown below:
127.0.0.1 localhost
127.0.0.1 kafka1
127.0.0.1 kafka2
127.0.0.1 kafka3
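With those entries in place, a client on the host (assuming the Kafka CLI tools are installed locally) can reach the cluster through the OUTSIDE listeners:
kafka-topics --bootstrap-server kafka1:9091,kafka2:9092,kafka3:9093 --list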