I am creating a 3-node ZooKeeper cluster and a 3-node Kafka cluster on Docker. I need to link the Kafka cluster to the ZooKeeper cluster.
Below is my docker-compose.yml:
version: '2'
services:
  zookeeper-1:
    image: zookeeper:latest
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: localhost:22888:23888;localhost:32888:33888;localhost:42888:43888
  zookeeper-2:
    image: zookeeper:latest
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: localhost:22888:23888;localhost:32888:33888;localhost:42888:43888
  zookeeper-3:
    image: zookeeper:latest
    environment:
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_CLIENT_PORT: 42181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: localhost:22888:23888;localhost:32888:33888;localhost:42888:43888
  kafka1:
    image: abc/kafka:latest
    hostname: kafka1
    network_mode: host
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181,localhost:32181,localhost:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:19092
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
  kafka2:
    image: abc/kafka:latest
    hostname: kafka2
    network_mode: host
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181,localhost:32181,localhost:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:29092
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
  kafka3:
    image: abc/kafka:latest
    hostname: kafka3
    network_mode: host
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181,localhost:32181,localhost:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:39092
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
When I run the docker-compose up command, the 3-node ZooKeeper cluster gets created successfully, but the Kafka cluster doesn't. It only displays the following on the command line:
kafka_kafka2_1 exited with code 0
kafka_kafka1_1 exited with code 0
kafka_kafka3_1 exited with code 0
I have also added the Dockerfile used to create the image, below:
FROM centos:7
MAINTAINER abc@xyz.com
ENV KAFKA_BIN=http://www-eu.apache.org/dist/kafka/0.11.0.2/kafka_2.11-0.11.0.2.tgz
RUN yum install -y wget java-1.8.0-openjdk \
    && cd /tmp && wget -q $KAFKA_BIN \
    && export K_TAR=/tmp/$(ls kafka* | head -1) \
    && mkdir -p /opt/apache/kafka/ && tar -zxf $K_TAR -C /opt/apache/kafka/ \
    && cd /opt/apache/kafka && ln -s $(ls) current \
    && rm -rf $K_TAR
ENV KAFKA_HOME /opt/apache/kafka/current
ENV PATH $PATH:$KAFKA_HOME/bin
ADD resources /home/kafka
RUN groupadd -r kafka \
    && useradd -r -g kafka kafka \
    && mkdir -p /home/kafka \
    && chown -R kafka:kafka /home/kafka \
    && chmod -R +x /home/kafka/scripts \
    && mkdir -p /var/log/kafka \
    && chown -R kafka:kafka /var/log/kafka \
    && mkdir -p /etc/kafka \
    && chown -R kafka:kafka /etc/kafka
USER kafka
There is no error message of any sort. Can someone please help me figure out why the Kafka cluster is not starting? I am unable to debug this, especially because the containers exit with code 0 and provide no error message.
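The only extra information I can get is from the standard Docker commands for inspecting stopped containers, which just confirm the clean exit:

docker-compose logs kafka1                                     # dump stdout/stderr of the exited service
docker inspect kafka_kafka1_1 --format '{{.State.ExitCode}}'   # prints 0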
First, you forgot to expose the ZooKeeper ports. Second, localhost means localhost, even for Docker containers, so your Kafka containers would try to connect to ZooKeeper within their own container, which does not work.
Have a look at how it is done correctly here: https://github.com/confluentinc/cp-docker-images/blob/4.0.x/examples/cp-all-in-one/docker-compose.yml
If you don't like that approach, you can do it without exposing any ports. For that, set the ZooKeeper client port in the zookeeper service's environment section of your compose file:
...
services:
  zookeeper:
    ...
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
Then connect your Kafka container to it:
...
  kafka:
    ...
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
With this approach, everything happens inside the Docker Compose network.
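Putting the two snippets together, a minimal single-node sketch might look like this (the image names are placeholders from the question, and a Confluent-style image that reads these variables is assumed):

version: '2'
services:
  zookeeper:
    image: zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: abc/kafka:latest     # your custom image
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181     # the service name resolves on the compose network
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
    depends_on:
      - zookeeper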
localhost inside Docker refers to the container instance, not the host. This is a major source of confusion for Kafka installations. You need to configure Kafka's listeners to advertise a host:port combination that is reachable from wherever the client runs, whether inside or outside the Docker network.
An example of how this is done for multiple brokers is shown below. Credit: The example is taken from a repo accompanying Effective Kafka.
version: "3.2"
services:
zookeeper:
image: bitnami/zookeeper:3
ports:
- 2181:2181
environment:
ALLOW_ANONYMOUS_LOGIN: "yes"
kafka-0:
image: bitnami/kafka:2
ports:
- 9092:9092
environment:
KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
ALLOW_PLAINTEXT_LISTENER: "yes"
KAFKA_LISTENERS: >-
INTERNAL://:29092,EXTERNAL://:9092
KAFKA_ADVERTISED_LISTENERS: >-
INTERNAL://kafka-0:29092,EXTERNAL://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: >-
INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
depends_on:
- zookeeper
kafka-1:
image: bitnami/kafka:2
ports:
- 9093:9093
environment:
KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
ALLOW_PLAINTEXT_LISTENER: "yes"
KAFKA_LISTENERS: >-
INTERNAL://:29092,EXTERNAL://:9093
KAFKA_ADVERTISED_LISTENERS: >-
INTERNAL://kafka-1:29092,EXTERNAL://localhost:9093
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: >-
INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
depends_on:
- zookeeper
kafka-2:
image: bitnami/kafka:2
ports:
- 9094:9094
environment:
KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
ALLOW_PLAINTEXT_LISTENER: "yes"
KAFKA_LISTENERS: >-
INTERNAL://:29092,EXTERNAL://:9094
KAFKA_ADVERTISED_LISTENERS: >-
INTERNAL://kafka-2:29092,EXTERNAL://localhost:9094
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: >-
INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
depends_on:
- zookeeper
kafdrop:
image: obsidiandynamics/kafdrop:latest
ports:
- 9000:9000
environment:
KAFKA_BROKERCONNECT: >-
kafka-0:29092,kafka-1:29092,kafka-2:29092
depends_on:
- kafka-0
- kafka-1
- kafka-2
Here, the application that connects to the brokers is Kafdrop, but you would replace it with your own image. Also, the example uses one ZooKeeper node with three brokers for brevity; extending it to three ZooKeeper nodes should be a straightforward change, as sketched below.
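For example, a three-node ensemble with the Bitnami ZooKeeper image might look roughly like this; the ZOO_* variable names are the Bitnami image's own, but the exact server-list syntax should be verified against its documentation:

  zookeeper-0:
    image: bitnami/zookeeper:3
    environment:
      ALLOW_ANONYMOUS_LOGIN: "yes"
      ZOO_SERVER_ID: 1
      ZOO_SERVERS: zookeeper-0:2888:3888,zookeeper-1:2888:3888,zookeeper-2:2888:3888
  # zookeeper-1 and zookeeper-2 would be identical apart from ZOO_SERVER_ID,
  # and each broker would then point at all three nodes:
  # KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper-0:2181,zookeeper-1:2181,zookeeper-2:2181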
You need to set the proper hostnames in /etc/hosts to be able to connect to the Kafka containers from outside Docker:
cat /etc/hosts
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
127.0.0.1 kafka1
127.0.0.1 kafka2
127.0.0.1 kafka3
Here is my working config:
version: '2'
services:
  zookeeper1:
    image: confluentinc/cp-zookeeper:$CP_VERSION
    container_name: zookeeper1
    hostname: zookeeper1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: "zookeeper1:22888:23888;zookeeper2:22888:23888;zookeeper3:22888:23888"
    ports:
      - 2181:2181
    volumes:
      - ./data/zoo-1/data:/var/lib/zookeeper/data
      - ./data/zoo-1/log:/var/lib/zookeeper/log
  zookeeper2:
    image: confluentinc/cp-zookeeper:$CP_VERSION
    container_name: zookeeper2
    hostname: zookeeper2
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_SERVERS: "zookeeper1:22888:23888;zookeeper2:22888:23888;zookeeper3:22888:23888"
    ports:
      - 2182:2181
    volumes:
      - ./data/zoo-2/data:/var/lib/zookeeper/data
      - ./data/zoo-2/log:/var/lib/zookeeper/log
  zookeeper3:
    image: confluentinc/cp-zookeeper:$CP_VERSION
    container_name: zookeeper3
    hostname: zookeeper3
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_SERVERS: "zookeeper1:22888:23888;zookeeper2:22888:23888;zookeeper3:22888:23888"
    ports:
      - 2183:2181
    volumes:
      - ./data/zoo-3/data:/var/lib/zookeeper/data
      - ./data/zoo-3/log:/var/lib/zookeeper/log
  kafka1:
    image: confluentinc/cp-kafka:$CP_VERSION
    container_name: kafka1
    hostname: kafka1
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - 9091:9091
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9091
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:29092,OUTSIDE://kafka1:9091
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/logs
  kafka2:
    image: confluentinc/cp-kafka:$CP_VERSION
    container_name: kafka2
    hostname: kafka2
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka2:29092,OUTSIDE://kafka2:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/logs
  kafka3:
    image: confluentinc/cp-kafka:$CP_VERSION
    container_name: kafka3
    hostname: kafka3
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - 9093:9093
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka3:29092,OUTSIDE://kafka3:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/logs
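To sanity-check the /etc/hosts entries and the OUTSIDE listeners from the host, you can point kcat (formerly kafkacat) at each broker, assuming it is installed:

kcat -b kafka1:9091 -L     # prints cluster metadata; all three brokers should be listed
kcat -b kafka2:9092 -L
kcat -b kafka3:9093 -L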
Related
I'm trying to setup a kafka cluster with 3 brokers on Docker.
The problem: whenever I do an operation (e.g. create/list/delete topics), one broker always fails to connect, and its Docker container restarts. This problem doesn't happen with a cluster of two brokers or a single broker.
My steps to reproduce is:
Run docker-compose up
Open a shell in one of the Kafka containers and create a topic: kafka-topics --bootstrap-server ":9092" --create --topic topic-name --partitions 3 --replication-factor 3
After this, one random broker disconnects and is removed from the cluster. Sometimes the response to the above command is an error saying the replication factor cannot be larger than 2 (since one broker has been removed from the cluster).
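For reference, the dropped broker can be seen by listing the broker IDs registered in ZooKeeper (zookeeper-shell ships in the Confluent images):

zookeeper-shell zookeeper:2181 ls /brokers/ids     # e.g. [1001, 1002] after one broker drops
kafka-topics --bootstrap-server :9092 --describe --topic topic-name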
I'm new to Kafka. I think I'm just making some silly mistake, but I don't have any clue what it is. I've searched through the docs but haven't found anything yet.
Here is my docker-compose file:
version: "3.9"
networks:
kafka-cluster:
driver: bridge
services:
zookeeper:
image: confluentinc/cp-zookeeper:latest
environment:
# ZOOKEEPER_SERVER_ID: 1
ZOOKEEPER_CLIENT_PORT: 2181
# ZOOKEEPER_TICK_TIME: 2000
# ZOOKEEPER_SERVERS: "zookeeper:22888:23888"
KAFKA_OPTS: "-Dzookeeper.4lw.commands.whitelist=*"
ports:
- 2181:2181
restart: unless-stopped
networks:
- kafka-cluster
kafka1:
image: confluentinc/cp-kafka:latest
container_name: kafka1
depends_on:
- zookeeper
ports:
- "9093:9093"
environment:
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENERS: CLIENT://:9092,EXTERNAL://:9093
KAFKA_ADVERTISED_LISTENERS: CLIENT://kafka1:9092,EXTERNAL://localhost:9093
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: CLIENT
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
restart: unless-stopped
networks:
- kafka-cluster
kafka2:
image: confluentinc/cp-kafka:latest
container_name: kafka2
depends_on:
- zookeeper
ports:
- "9094:9094"
environment:
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENERS: CLIENT://:9092,EXTERNAL://:9094
KAFKA_ADVERTISED_LISTENERS: CLIENT://kafka2:9092,EXTERNAL://localhost:9094
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: CLIENT
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
restart: unless-stopped
networks:
- kafka-cluster
kafka3:
image: confluentinc/cp-kafka:latest
container_name: kafka3
depends_on:
- zookeeper
ports:
- "9095:9095"
environment:
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENERS: CLIENT://:9092,EXTERNAL://:9095
KAFKA_ADVERTISED_LISTENERS: CLIENT://kafka3:9092,EXTERNAL://localhost:9095
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: CLIENT
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
restart: unless-stopped
networks:
- kafka-cluster
kafdrop:
image: obsidiandynamics/kafdrop:latest
container_name: kafdrop
ports:
- "9000:9000"
environment:
- KAFKA_BROKERCONNECT=kafka1:9092,kafka2:9092,kafka3:9092
- JVM_OPTS="-Xms32M -Xmx64M"
- SERVER_SERVLET_CONTEXTPATH="/"
depends_on:
- kafka1
networks:
- kafka-cluster
Here is the error log on the other 2 brokers:
[2022-01-17 04:32:40,078] WARN [ReplicaFetcher replicaId=1002, leaderId=1001, fetcherId=0] Error in response for fetch request (type=FetchRequest, replicaId=1002, maxWait=500, minBytes=1, maxBytes=10485760, fetchData={test-topic-3-1=PartitionData(fetchOffset=0, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[0], lastFetchedEpoch=Optional.empty), test-topic-2-1=PartitionData(fetchOffset=0, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[0], lastFetchedEpoch=Optional.empty)}, isolationLevel=READ_UNCOMMITTED, toForget=, metadata=(sessionId=28449961, epoch=INITIAL), rackId=) (kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to kafka1:9092 (id: 1001 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:104)
at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:218)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:321)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:137)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3$adapted(AbstractFetcherThread.scala:136)
at scala.Option.foreach(Option.scala:437)
at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:136)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:119)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
[2022-01-17 04:32:42,088] WARN [ReplicaFetcher replicaId=1002, leaderId=1001, fetcherId=0] Connection to node 1001 (kafka1/192.168.48.3:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2022-01-17 04:32:42,088] INFO [ReplicaFetcher replicaId=1002, leaderId=1001, fetcherId=0] Error sending fetch request (sessionId=28449961, epoch=INITIAL) to node 1001: (org.apache.kafka.clients.FetchSessionHandler)
java.io.IOException: Connection to kafka1:9092 (id: 1001 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:104)
at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:218)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:321)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:137)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3$adapted(AbstractFetcherThread.scala:136)
at scala.Option.foreach(Option.scala:437)
at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:136)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:119)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
Assuming you don't need host connections (since you're running the Kafka CLI commands directly in the containers), you could greatly simplify your Compose file:
- Remove the host ports.
- Remove the non-CLIENT listeners and stick to the defaults.
- Remove the custom Compose network, since a default one is created automatically.
All in all, you'd end up with something like this:
x-kafka-setup: &kafka-setup
  KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
  ALLOW_PLAINTEXT_LISTENER: 'yes'

version: "3.8"
services:
  zookeeper:
    image: docker.io/bitnami/zookeeper:3.7
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka1:
    image: &broker-image docker.io/bitnami/kafka:3
    environment:
      KAFKA_BROKER_ID: 1
      <<: *kafka-setup
    depends_on:
      - zookeeper
  kafka2:
    image: *broker-image
    environment:
      KAFKA_BROKER_ID: 2
      <<: *kafka-setup
    depends_on:
      - zookeeper
  kafka3:
    image: *broker-image
    environment:
      KAFKA_BROKER_ID: 3
      <<: *kafka-setup
    depends_on:
      - zookeeper
  kafdrop:
    image: obsidiandynamics/kafdrop:latest
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: kafka1:9092,kafka2:9092,kafka3:9092
      JVM_OPTS: "-Xms32M -Xmx64M"
      SERVER_SERVLET_CONTEXTPATH: /
    depends_on:
      - kafka1
      - kafka2
      - kafka3
I have a Dockerized deployment of Neo4j on the Google Cloud Platform. I am able to bring up the Neo4j UI on 127.0.0.1:7474, and I am forwarding the port to my local PC, where I can access the UI but cannot authenticate to Neo4j.
Here's my docker-compose file:
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.0
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-kafka:6.1.0
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
  schema-registry:
    image: confluentinc/cp-schema-registry:6.1.0
    container_name: schema-registry
    ports:
      - 8081:8081
    depends_on:
      - zookeeper
      - broker
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:29092
  kafka-connect:
    image: confluentinc/cp-kafka-connect-base:6.1.0
    container_name: kafka-connect
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - 8083:8083
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "broker:29092"
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
      CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
      CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN: "[%d] %p %X{connector.context}%m (%c:%L)%n"
      CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_PLUGIN_PATH: '/usr/share/java,/usr/share/confluent-hub-components/'
    command:
      - bash
      - -c
      - |
        echo "Installing connector plugins"
        confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.0.2
        confluent-hub install --no-prompt jcustenborder/kafka-connect-spooldir:2.0.60
        confluent-hub install --no-prompt streamthoughts/kafka-connect-file-pulse:1.5.0
        confluent-hub install --no-prompt neo4j/kafka-connect-neo4j:1.0.0
        #
        # -----------
        # Launch the Kafka Connect worker
        /etc/confluent/docker/run &
        #
        # Don't exit
        sleep infinity
    volumes:
      - $PWD/data:/data
  ksqldb:
    image: confluentinc/ksqldb-server:0.15.0
    container_name: ksqldb
    depends_on:
      - broker
      - kafka-connect
    ports:
      - "8088:8088"
    environment:
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_BOOTSTRAP_SERVERS: broker:29092
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
      KSQL_KSQL_CONNECT_URL: http://kafka-connect:8083
      KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      KSQL_KSQL_SERVICE_ID: confluent_rmoff_01
      KSQL_KSQL_HIDDEN_TOPICS: '^_.*'
  postgres:
    # *-----------------------------*
    # To connect to the DB:
    #   docker exec -it postgres bash -c 'psql -U $POSTGRES_USER $POSTGRES_DB'
    # *-----------------------------*
    image: postgres:12
    container_name: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  kafkacat:
    image: edenhill/kafkacat:1.6.0
    container_name: kafkacat
    links:
      - broker
      - schema-registry
    entrypoint:
      - /bin/sh
      - -c
      - |
        apk add jq;
        while [ 1 -eq 1 ];do sleep 60;done
  neo4j:
    image: neo4j:3.5
    hostname: neo4j
    container_name: neo4j
    depends_on:
      - zookeeper
      - broker
    ports:
      - "7474:7474"
      - "7687:7687"
    environment:
      NEO4J_AUTH: neo4j/connect
      NEO4J_dbms_memory_heap_max__size: 8G
      NEO4J_dbms_security_procedures_unrestricted: "apoc.*,algo.*"
      NEO4J_apoc_export_file_enabled: "true"
      NEO4J_kafka_zookeeper_connect: zookeeper:2181
      NEO4J_kafka_bootstrap_servers: broker:9093
    volumes:
      - $PWD/nodes_2k19/neo4j/plugins:/plugins
      - $PWD/nodes_2k19/neo4j/scripts:/scripts
      - $PWD/nodes_2k19/neo4j/docker-entrypoint.sh:/docker-entrypoint.sh
I have made the changes to the neo4j.conf file that are suggested in other Stack Overflow answers.
I have the below docker-compose.yml file:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.0.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-enterprise-kafka:6.0.0
    hostname: kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
When I run docker-compose exec kafka bash I get the bash prompt.
At the bash prompt I have kafka-console-consumer, but I don't have access to kafka-avro-console-consumer.
How can I get access to kafka-avro-console-consumer? Is it not in $PATH but in some other directory?
I tried to use the find and which commands, but those are not present in the running Docker container.
Use the confluentinc/cp-schema-registry image, which includes it:
docker run confluentinc/cp-schema-registry \
kafka-avro-console-consumer --bootstrap-server localhost:29092 --topic quickstart-jdbc-test --from-beginning --max-messages 10
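Note that localhost:29092 is only reachable if the new container shares the host's network; otherwise you can attach it to the compose network and use the broker's service name instead (the network name below is an assumption; check docker network ls):

docker run --network <project>_default confluentinc/cp-schema-registry \
  kafka-avro-console-consumer --bootstrap-server kafka:9092 \
  --topic quickstart-jdbc-test --from-beginning --max-messages 10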
I'm trying to connect a job running in one container so it can send events to a Kafka cluster running in another container.
No matter what I've tried, I can't send events to the Kafka topic.
I've tried telnet and kafkacat against my Kafka listener address and port, and everything works just fine:
(screenshot: telnet output)
(screenshot: kafkacat output)
This is my job compose file; "172.16.33.91" is my local IP address:
version: '3'
services:
  events-processor:
    build:
      context: ./events-processor
    extra_hosts:
      - "host:172.16.33.91"
    restart: unless-stopped
This is my job code, which sends the numbers 0 through 999 to an existing topic, numtest:
from time import sleep
from json import dumps
from kafka import KafkaProducer

if __name__ == "__main__":
    producer = KafkaProducer(bootstrap_servers=['host:9093'],
                             value_serializer=lambda x: dumps(x).encode('utf-8'))
    for e in range(1000):
        data = {'number': e}
        producer.send('numtest', value=data)
        print(data)
        sleep(5)
This is my kafka-zookeeper compose file:
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    volumes:
      - zk-data:/var/lib/zookeeper/data
      - zk-logs:/var/lib/zookeeper/log
      - secrets:/etc/zookeeper/secrets
    restart: unless-stopped
  kafka:
    image: confluentinc/cp-kafka
    depends_on:
      - zookeeper
    ports:
      - "9093:9093"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL://:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:9092,EXTERNAL://:9093
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
    volumes:
      - kafka-data:/var/lib/kafka/data
      - secrets:/etc/kafka/secrets
    restart: unless-stopped
volumes:
  zk-logs: {}
  zk-data: {}
  kafka-data: {}
  secrets: {}
Does anyone have any idea what I've done wrong? Any help is appreciated!
I remember encountering a similar connectivity issue. It turns out that for your container to be able to access ports on your host machine, you may need to add additional firewall rules to open them up.
For example with iptables, you can add a rule like the below to allow your host machine to accept requests from your docker containers.
-A INPUT -i docker0 -j ACCEPT
Alternatively, you can put your job container on the same Docker network as your Kafka/ZooKeeper containers, either by putting your application in the same docker-compose file or by using a common external Docker network, as sketched below.
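A minimal sketch of the external-network approach (the network name kafka-net is an assumption): create it once with docker network create kafka-net, then reference it from both compose files and attach the relevant services to it:

networks:
  kafka-net:
    external: true
services:
  events-processor:     # and likewise kafka/zookeeper in the other compose file
    networks:
      - kafka-net

The job would then bootstrap against the broker's internal listener (kafka:9092) rather than host:9093.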
If there is a better place to put this information, or keywords that would help others find the solution, let the community know.
Multiple blog posts have been written on this already:
KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,HOST://localhost:9093
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,HOST:PLAINTEXT
KAFKA_LISTENERS: INTERNAL://0.0.0.0:9092,HOST://0.0.0.0:9093
Using a ://:port address will only listen on the local hostname, container or not.
You must accept external connections by binding to all interfaces: 0.0.0.0.
You must advertise the internal and/or any external hostnames.
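Once the listeners bind to 0.0.0.0 and the HOST listener advertises localhost:9093, a quick check from the host machine might look like this (a sketch using kafka-python, as in the question's job code):

from kafka import KafkaProducer

# connects via the advertised HOST listener; assumes port 9093 is published
producer = KafkaProducer(bootstrap_servers=['localhost:9093'])
producer.send('numtest', b'hello')
producer.flush()   # block until the broker acknowledges the send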
Here is another working sample:
version: '2'
services:
  zookeeper1:
    image: confluentinc/cp-zookeeper:$CP_VERSION
    container_name: zookeeper1
    hostname: zookeeper1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: "zookeeper1:22888:23888;zookeeper2:22888:23888;zookeeper3:22888:23888"
    ports:
      - 2181:2181
    volumes:
      - ./data/zoo-1/data:/var/lib/zookeeper/data
      - ./data/zoo-1/log:/var/lib/zookeeper/log
  zookeeper2:
    image: confluentinc/cp-zookeeper:$CP_VERSION
    container_name: zookeeper2
    hostname: zookeeper2
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_SERVERS: "zookeeper1:22888:23888;zookeeper2:22888:23888;zookeeper3:22888:23888"
    ports:
      - 2182:2181
    volumes:
      - ./data/zoo-2/data:/var/lib/zookeeper/data
      - ./data/zoo-2/log:/var/lib/zookeeper/log
  zookeeper3:
    image: confluentinc/cp-zookeeper:$CP_VERSION
    container_name: zookeeper3
    hostname: zookeeper3
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_SERVERS: "zookeeper1:22888:23888;zookeeper2:22888:23888;zookeeper3:22888:23888"
    ports:
      - 2183:2181
    volumes:
      - ./data/zoo-3/data:/var/lib/zookeeper/data
      - ./data/zoo-3/log:/var/lib/zookeeper/log
  kafka1:
    image: confluentinc/cp-kafka:$CP_VERSION
    container_name: kafka1
    hostname: kafka1
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - 9091:9091
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9091
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:29092,OUTSIDE://kafka1:9091
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/logs
  kafka2:
    image: confluentinc/cp-kafka:$CP_VERSION
    container_name: kafka2
    hostname: kafka2
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka2:29092,OUTSIDE://kafka2:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/logs
  kafka3:
    image: confluentinc/cp-kafka:$CP_VERSION
    container_name: kafka3
    hostname: kafka3
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - 9093:9093
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka3:29092,OUTSIDE://kafka3:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/logs
Don't forget to edit /etc/hosts and add the proper hostnames to be able to work with the above Kafka cluster from outside Docker.
My /etc/hosts file is shown below:
127.0.0.1 localhost
127.0.0.1 kafka1
127.0.0.1 kafka2
127.0.0.1 kafka3
I have the following docker-compose.yml file
version: '3'
services:
  zookeeper:
    ports:
      - "2181:2181"
    hostname: zookeeper
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-enterprise-kafka
    hostname: broker
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker:9092'
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  test:
    image: java:8
    volumes:
      - .:/src
      - ${GRADLE_USER_HOME}:/gradle_mount
    environment:
      - GRADLE_USER_HOME=/gradle_mount
      - SPRING_KAFKA_BOOTSTRAP-SERVERS=broker:9092
    depends_on:
      - broker
      - zookeeper
    working_dir: /src
    privileged: true
    command: "./gradlew --no-daemon clean test --info"
The tests pass locally but fail in Jenkins, and I suspect it is because the tests fire off before the Kafka service is running.
Is there any way to make the test container wait for the service on the Kafka container (broker) to start?
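One common pattern (a sketch, not from this thread) is to give the broker a healthcheck and gate the test service on it; note that depends_on conditions are honored in Compose file format 2.1 and the newer Compose Spec, but are ignored by classic version-3 files:

  broker:
    # ...as above, plus:
    healthcheck:
      # succeeds once the broker answers on its client port
      test: ["CMD-SHELL", "kafka-topics --bootstrap-server broker:9092 --list"]
      interval: 10s
      timeout: 10s
      retries: 12
  test:
    # ...as above, plus:
    depends_on:
      broker:
        condition: service_healthy

Where conditions are unavailable, a wrapper script such as the well-known wait-for-it.sh in the test container's command is the usual fallback.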