Failing to dockerize the second Kafka instance with Zookeeper - docker

I want to dockerize a kafka cluster with two kafka instances.
running zookeeper
docker run -it --name zookeeper -p 2181:2181 -p 2888:2888 -p 3888:3888 debezium/zookeeper
running kafka
docker run -it --name kafka -p 9092:9092 -e ADVERTISED_HOST_NAME=$(hostname -f) --link zookeeper:zookeeper debezium/kafka
Running one Zookeeper and one Kafka instance is successful, but adding the second Kafka container as shown below
docker run -it --name kafka2 -p 9096:9092 -e ADVERTISED_HOST_NAME=$(hostname -f) --link zookeeper:zookeeper debezium/kafka
gives me the following error:
2019-09-24 04:11:48,728 - ERROR [main:Logging#74] - Error while creating ephemeral at /brokers/ids/1, node already exists and owner '72057611679825940' does not match current session '72057611679825967'
2019-09-24 04:11:48,748 - ERROR [main:MarkerIgnoringBase#159] - [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown
org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists
at org.apache.zookeeper.KeeperException.create(KeeperException.java:122)
at kafka.zk.KafkaZkClient$CheckedEphemeral.getAfterNodeExists(KafkaZkClient.scala:1784)
at kafka.zk.KafkaZkClient$CheckedEphemeral.create(KafkaZkClient.scala:1722)
at kafka.zk.KafkaZkClient.checkedEphemeralCreate(KafkaZkClient.scala:1689)
at kafka.zk.KafkaZkClient.registerBroker(KafkaZkClient.scala:97)
at kafka.server.KafkaServer.startup(KafkaServer.scala:260)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)

In the debezium Kafka container, you need to pass -e BROKER_ID=2. The default is 1.
https://github.com/debezium/docker-images/blob/master/kafka/0.10/docker-entrypoint.sh#L6-L9
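For example, the second broker from the question could be started like this (a sketch based on the original command; only the BROKER_ID is new):
docker run -it --name kafka2 -p 9096:9092 -e BROKER_ID=2 -e ADVERTISED_HOST_NAME=$(hostname -f) --link zookeeper:zookeeper debezium/kafka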
In other Kafka images, there's a KAFKA_BROKER_ID variable you can use; it needs to be different for each container. The compose file below also sets the replication factor for the internal topics to 2 and correctly sets up the advertised listeners.
Here's an example compose file
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka-1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 19092:19092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT_HOST://localhost:19092,PLAINTEXT://kafka-1:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 2
  kafka-2:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT_HOST://localhost:29092,PLAINTEXT://kafka-2:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 2
Ex.
$ kafkacat -L -b localhost:19092
Metadata for all topics (from broker 1: localhost:19092/1):
2 brokers:
broker 2 at localhost:29092
broker 1 at localhost:19092 (controller)
1 topics:
topic "__confluent.support.metrics" with 1 partitions:
partition 0, leader 1, replicas: 1,2, isrs: 1,2
Note: running multiple brokers on one machine won't improve any performance or reliability of Kafka

Related

500 Internal Server Error Error while creating AdminClient for Cluster Default

I get an error when I try to view topics and consumers using UI for apache kafka
The docker command I use:
docker run -p 8080:8080 -e KAFKA_CLUSTERS_0_ZOOKEEPER=2181:2181 -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=127.0.0.1:9092 -d provectuslabs/kafka-ui:latest
or docker-compose.yml file
services:
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    depends_on:
      - kafka
    environment:
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
      KAFKA_CLUSTERS_0_JMXPORT: 9997
  kafka:
    image: johnnypark/kafka-zookeeper
    ports:
      - "2181:2181"
      - "9092:9092"
    network_mode: bridge
    environment:
      ADVERTISED_HOST: 127.0.0.1
      NUM_PARTITIONS: 1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
I tried both ways; neither worked.
Where did I go wrong?
2181:2181 is two ports, not a Zookeeper hostname/IP and port.
Also, the Kafka address 127.0.0.1 refers to the container you're running, not the actual Kafka server. If Kafka is running on the host and you need to connect to it from Docker, you'll need to modify your Kafka server properties. Related - Connect to Kafka on host from Docker (ksqlDB)
Beyond that, remove the -d or use docker logs to see the actual error.
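As a sketch, a corrected command could look like this, assuming the kafka-ui container is attached to the same Docker network as containers named zookeeper and kafka (my-kafka-net is a placeholder for that network's name):
docker run -p 8080:8080 --network my-kafka-net -e KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181 -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092 provectuslabs/kafka-ui:latest
Run it without -d first so any remaining error is printed straight to the terminal.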
I was using the johnnypark/kafka-zookeeper image for both Kafka and Zookeeper. I was able to solve this problem by using two separate images, as in the example below:
zookeeper1:
  image: confluentinc/cp-zookeeper:5.2.4
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
kafka1:
  image: confluentinc/cp-kafka:5.3.1
  depends_on:
    - zookeeper1
  ports:
    - 9093:9093
    - 9998:9998
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:29092,PLAINTEXT_HOST://localhost:9093
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    JMX_PORT: 9998
    KAFKA_JMX_OPTS: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka1 -Dcom.sun.management.jmxremote.rmi.port=9998
https://github.com/provectus/kafka-ui/blob/master/docker/kafka-ui.yaml

Increasing number of brokers in kafka using KAFKA_ADVERTISED_HOST_NAME

I'm new to Kafka and I'm trying to run a Kafka service on my local machine and use it to transfer some data from one .NET project to another.
I'm using docker-compose.yml file to create two docker containers for zookeeper and Kafka from wurstmeister.
In the Kafka service's environment variables there is KAFKA_ADVERTISED_HOST_NAME, which I set to 127.0.0.1.
The Docker Hub page for wurstmeister/kafka says, and I quote:
"modify the KAFKA_ADVERTISED_HOST_NAME in docker-compose.yml to match your docker host IP (Note: Do not use localhost or 127.0.0.1 as the host IP if you want to run multiple brokers.)".
When I set my topic to have more than 1 replica and more than 1 partition I get this message
"Error while executing topic command : Replication factor: 2 larger than available brokers: 1.".
What is the right IP address to define in KAFKA_ADVERTISED_HOST_NAME which will allow me to get more than 1 broker?
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_CREATE_TOPICS: "simpletalk_topic:2:2"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Note - Running multiple brokers on one machine does not provide true fault tolerance.
use it to transfer some data from one .NET project to another
You only need one broker for this.
First, suggest reading https://github.com/wurstmeister/kafka-docker/wiki/Connectivity
KAFKA_ADVERTISED_HOST_NAME is deprecated. Don't use it.
It has been replaced by KAFKA_ADVERTISED_LISTENERS, which for both brokers can contain ${DOCKER_HOST_IP:-127.0.0.1} (or your host IP from ifconfig), since this is the address your client will use from the host.
Since brokers are clients of themselves, you also need to advertise the container names. And, if you want to containerize your client, those addresses would also be used by your app.
Beyond the Kafka networking configs, KAFKA_BROKER_ID needs to be different for each broker, and your error suggests you also need to override the other replication-factor broker configs.
All in all:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
  kafka-1:
    image: wurstmeister/kafka
    depends_on: [zookeeper]
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://${DOCKER_HOST_IP:-127.0.0.1}:9092,PLAINTEXT_INTERNAL://kafka-1:29092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,PLAINTEXT_INTERNAL://:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT_INTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_DEFAULT_REPLICATION_FACTOR: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 2
  kafka-2:
    image: wurstmeister/kafka
    ports:
      - "9093:9093"
    environment:
      KAFKA_BROKER_ID: 2 # unique
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://${DOCKER_HOST_IP:-127.0.0.1}:9093,PLAINTEXT_INTERNAL://kafka-2:29093
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9093,PLAINTEXT_INTERNAL://:29093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT_INTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_DEFAULT_REPLICATION_FACTOR: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 2
      KAFKA_CREATE_TOPICS: "simpletalk_topic:2:2"
    depends_on: # ensure this joins the other
      - zookeeper
      - kafka-1
I'm trying to run a Kafka service on my local machine
Set bootstrap.servers="localhost:9092,localhost:9093"
Further reading - Connect to Kafka running in Docker
Example usage
Note that automatic topic creation doesn't seem to work here. Ideally, your application itself should create topics or check that they exist.
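If you need to create the topic by hand instead, a hedged sketch using the internal listener from the compose file above (this assumes the wurstmeister image keeps kafka-topics.sh on the PATH):
docker-compose exec kafka-1 kafka-topics.sh --create --topic simpletalk_topic --partitions 2 --replication-factor 2 --bootstrap-server kafka-1:29092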
$ kcat -b localhost:9093 -L
Metadata for all topics (from broker -1: localhost:9093/bootstrap):
2 brokers:
broker 2 at 127.0.0.1:9093
broker 1 at 127.0.0.1:9092 (controller)
0 topics:
producing some data... and consuming it
$ kcat -b localhost:9093 -t test -o earliest
% Auto-selecting Consumer mode (use -P or -C to override)
hello
world
sample
data
of
no
importance
It should be the IP address of your machine.
On Linux, use the ifconfig command; your IP will be shown as inet <IP_ADDRESS>.
On Windows, you can get it with ipconfig.
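On most Linux machines a quick way to print just that address (a sketch; assumes hostname -I is available, as it is on most distributions):
hostname -I | awk '{print $1}'
Use the printed address in KAFKA_ADVERTISED_HOST_NAME (or better, in KAFKA_ADVERTISED_LISTENERS as shown above).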

Docker compose create kafka topics

Problem: Cannot create topics from docker-compose. I need to create kafka topics before I run a system under test. Planning to use it as a part of the pipeline, hence using UI is not an option.
Note: it takes ~15 seconds for kafka to be ready so I would need to put a sleep for 15 seconds prior to adding the topics.
Possible solution:
create a shell.sh file with commands to wait for 15 sec then add a bunch of topics
create a dockerfile for it
include that docker image in the docker-compose.yml just before starting the system under test
Current flow:
create zookeeper - OK
create kafka1 - OK
rest-proxy - OK
create topics <- PROBLEM
create SUT - OK
Current docker-compose.yml:
version: '2'
services:
  zookeeper:
    image: docker.io/confluentinc/cp-zookeeper:5.4.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  Kafka1:
    image: docker.io/confluentinc/cp-enterprise-kafka:5.4.1
    hostname: Kafka1
    container_name: Kafka1
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_HOST_NAME: Kafka1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://Kafka1:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: Kafka1:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  rest-proxy:
    image: docker.io/confluentinc/cp-kafka-rest:5.4.1
    depends_on:
      - zookeeper
      - Kafka1
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'Kafka1:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
  topics:
    image: topics:latest
    hostname: topics
    container_name: topics
    depends_on:
      - zookeeper
      - Kafka1
      - rest-proxy
  sut:
    image: sut:latest
    hostname: sut
    container_name: sut
    depends_on:
      - zookeeper
      - Kafka1
      - rest-proxy
    ports:
      - 5000:80
Current Dockerfile for topics container:
FROM ubuntu:14.04
ADD topics.sh /usr/local/bin/topics.sh
RUN chmod +x /usr/local/bin/topics.sh
CMD /usr/local/bin/topics.sh
Current topics.sh file:
#!/bin/sh
echo "Start: Sleep 15 seconds"
sleep 30;
wait;
echo "Begin creating topics"
docker exec Kafka1 kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_ONE
docker exec Kafka1 kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_TWO
echo "Done creating topics"
Current output:
/usr/local/bin/topics.sh: 1: /usr/local/bin/topics.sh: #!/bin/sh: not found
Start: Sleep 15 seconds
Begin creating topics
/usr/local/bin/topics.sh: 8: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 9: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 10: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 11: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 12: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 13: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 14: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 15: /usr/local/bin/topics.sh: docker: not found
Done creating topics
Topics are not created. I'm stuck. Please help.
The simplest way is to start a separate container inside the docker-compose file (called init-kafka in the example below) that launches the various kafka-topics --create ... commands, after first waiting for Kafka to be reachable by simply running kafka-topics --list ....
Like this:
version: '2.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.1
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  # reachable on 9092 from the host and on 29092 from inside docker compose
  kafka:
    image: confluentinc/cp-kafka:6.1.1
    depends_on:
      - zookeeper
    ports:
      - '9092:9092'
    expose:
      - '29092'
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: '1'
      KAFKA_MIN_INSYNC_REPLICAS: '1'
  init-kafka:
    image: confluentinc/cp-kafka:6.1.1
    depends_on:
      - kafka
    entrypoint: [ '/bin/sh', '-c' ]
    command: |
      "
      # blocks until kafka is reachable
      kafka-topics --bootstrap-server kafka:29092 --list
      echo -e 'Creating kafka topics'
      kafka-topics --bootstrap-server kafka:29092 --create --if-not-exists --topic my-topic-1 --replication-factor 1 --partitions 1
      kafka-topics --bootstrap-server kafka:29092 --create --if-not-exists --topic my-topic-2 --replication-factor 1 --partitions 1
      echo -e 'Successfully created the following topics:'
      kafka-topics --bootstrap-server kafka:29092 --list
      "
When running it, the init-kafka container should log something like:
docker logs docker_init-kafka_1
[2021-10-12 02:00:28,728] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:28,832] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:29,033] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:29,335] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Creating kafka topics
Created topic my-topic-1.
Created topic my-topic-2.
Successfully created the following topics:
my-topic-1
my-topic-2
This solution allows us to create a topic from the docker-compose.yml.
Refer to the Dockerfile of your respective Kafka image.
Take note of the last command in that Dockerfile (see the image's Docker Hub repo / Image Layers).
In my case, for the image confluentinc/cp-kafka:latest, the last command that starts the Kafka service is "/etc/confluent/docker/run".
Hence, in your docker-compose.yml, include the command below:
command: sh -c "((sleep 15 && kafka-topics --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 3 --topic topicName)&) && /etc/confluent/docker/run "
This will start the kafka service, delay for 15 seconds then create a topic.
Please note that we are assuming it takes 15 seconds for the kafka service to be fully operational
kafka:
  image: confluentinc/cp-kafka:latest
  depends_on:
    - zookeeper
  ports:
    - "29092:29092"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  command: sh -c "((sleep 15 && kafka-topics --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 3 --topic quick-starter)&) && /etc/confluent/docker/run "
Variant 1: Run topic.sh (just the kafka-topics --create in another docker container)
Sorry for providing no full example but let me share the idea:
Create a separate docker container 'kafka-setup' that exists only to provide the Kafka command-line tools. In it, replace the startup command with one that performs some (good enough) wait operations and then runs /kafka/topic_creator.sh (with host:port parameters for Zookeeper and Kafka), which is injected via a volume. After that script has executed, a file K_OUTPUT_FILE is created and exposed, also via a volume (prerequisite: before calling docker-compose up, the file needs to be deleted).
Snippet from docker-compose.yml (./kafka folder contains the topic creator script, ./output gets the topics created file 'kafka-done.txt'):
# This "container" is a helper to pre-create topics
kafka-setup:
  image: confluentinc/cp-kafka:5.4.3
  depends_on:
    - kafka
  volumes:
    - ./kafka:/kafka
    - ./output:/output
  command: "bash -c 'chmod +x /kafka/topic_creator.sh && \
            /kafka/topic_creator.sh /kafka/topics.txt $$K_ZK $$K_KAFKA && \
            touch \"$${K_OUTPUT_FILE}\" && chmod a+rw \"$${K_OUTPUT_FILE}\"'"
  environment:
    K_ZK: localhost:22181
    K_KAFKA: localhost:19092
    K_OUTPUT_FILE: "/output/kafka-done.txt"
    # dummy values
    KAFKA_BROKER_ID: ignored
    KAFKA_ZOOKEEPER_CONNECT: ignored
  network_mode: host
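The topic_creator.sh itself isn't included here; a minimal, hypothetical sketch of what it could look like, assuming /kafka/topics.txt holds one topic name per line and the arguments are the topics file, the Zookeeper address, and the Kafka address:
#!/bin/bash
# hypothetical topic_creator.sh <topics_file> <zookeeper_host:port> <kafka_host:port>
TOPICS_FILE=$1
ZK=$2
KAFKA=$3
# wait until the broker answers a metadata request
until kafka-topics --bootstrap-server "$KAFKA" --list >/dev/null 2>&1; do
  echo "waiting for kafka at $KAFKA ..."
  sleep 2
done
# create each topic listed in the file
while read -r topic; do
  [ -z "$topic" ] && continue
  kafka-topics --zookeeper "$ZK" --create --if-not-exists --topic "$topic" --partitions 1 --replication-factor 1
done < "$TOPICS_FILE"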
To run everything in the right order
bash script snippet:
K_SETUP_OUTPUT="./output"
mkdir -p "$K_SETUP_OUTPUT"
rm -f "$K_SETUP_OUTPUT/kafka-done.txt"
# Start stuff
docker-compose up -d --force-recreate --build --remove-orphans
wait_for_file "$K_SETUP_OUTPUT/kafka-done.txt"
sleep 5
# do your stuff here (e.g. read -r -p "Press any key to continue..." key )
do_something
with wait_for_file function
function wait_for_file {
  local name=$1
  # expects $timeout (maximum seconds to wait) to be set by the calling script
  echo "File waiting ${name}."
  seconds=0
  while [[ "$seconds" -lt "$timeout" && ! -f "$name" ]];
  do
    echo -n .
    seconds=$((seconds+1))
    sleep 1
  done
  if [ "$seconds" -lt "$timeout" ]; then
    echo "${name} created (${seconds}s)!"
  else
    echo " ERROR: not found ${name}" >&2
    exit 1
  fi
}
How it works in sequence:
run the test script
create folders and delete ./output/kafka-done.txt
execute docker-compose up
in kafka-setup first wait for availability of zookeeper and kafka ports
run ./kafka/topic_creator.sh with parameters for zookeeper and kafka ports
create ./output/kafka-done.txt
wait_for_file ./output/kafka-done.txt succeeds
do_something .. that's your tests or whatever
Variant 2: Just allow docker to run without root
See Manage Docker as a non-root user
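On most Linux distributions that comes down to adding yourself to the docker group and logging out and back in:
sudo groupadd docker
sudo usermod -aG docker $USER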
I needed to solve topic creation myself, and as I had the liberty of choosing a Kafka image of my preference, I chose the wurstmeister Kafka Docker image, which allows specifying topics via the env variable KAFKA_CREATE_TOPICS, like so:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Here is a link to a docker-compose example.
Another advantage for me was that an ARM64 version is available; I just had to switch Zookeeper to the official image.
The ubuntu container doesn't have Docker installed.
It also doesn't have the kafka-topics command, so instead you should re-use the cp-enterprise-kafka image that you've already pulled and change the ENTRYPOINT or CMD to your script, running the kafka-topics command directly.
Or replace your Kafka container with wurstmeister/kafka and add an environment variable for creating the topics
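Going with the first suggestion, a hedged sketch of a reworked topics.sh that runs inside a cp-enterprise-kafka based container (so kafka-topics is on the PATH and the broker is reachable on its internal listener Kafka1:29092 from the compose file above):
#!/bin/bash
echo "Waiting for Kafka to be reachable"
until kafka-topics --bootstrap-server Kafka1:29092 --list >/dev/null 2>&1; do
  sleep 2
done
echo "Begin creating topics"
kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_ONE
kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_TWO
echo "Done creating topics"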
Alternatively, include the docker binary as a volume of the Kafka container: in the compose file, under the kafka service, add -v $(which docker):/usr/bin/docker.

docker-compose kafka - local machine client cannot produce message to kafka

I've read a lot of similar questions, but they don't answer my problem here.
Trying to run some short integration tests, I'm using docker-compose 3 with a single-node Kafka. On the client side I'm using the Go shopify/sarama library to consume / produce.
zookeeper:
  image: confluentinc/cp-zookeeper:5.2.2
  hostname: zookeeper
  container_name: zookeeper
  ports:
    - "2181:2181"
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
kafka:
  image: confluentinc/cp-enterprise-kafka:5.2.2
  hostname: kafka
  container_name: kafka
  depends_on:
    - zookeeper
  ports:
    - "29092:29092"
  expose:
    - 9092
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
I have another container from the docker-compose that will listen to
- "BROKERS_URL=kafka:9092"
the consumer is working just fine:
Sarama consumer up and running. {"brokers": ["kafka:9092"], "topics": ["validated"], "group": "event-service"}
But on the producer part, running directly from my machine:
kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
producer, err := sarama.NewSyncProducer([]string{"http://localhost:29092"}, nil)
...
msg := &sarama.ProducerMessage{
    Topic: "validated",
    Key:   sarama.StringEncoder(""),
    Value: sarama.ByteEncoder(payload),
}
partition, offset, err := producer.SendMessage(msg)
...
Nothing weird / extravagant here, but it's not working and I'm confused.
also:
nc -vz localhost 29092
Connection to localhost port 29092 [tcp/*] succeeded!
Instead of
KAFKA_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
you need
KAFKA_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://0.0.0.0:29092
Testing connectivity from my host machine using kafkacat shows that this works:
➜ kafkacat -b localhost:29092 -L
Metadata for all topics (from broker 1: localhost:29092/1):
1 brokers:
broker 1 at localhost:29092 (controller)
0 topics:
The difference is that the listener now binds to all available interfaces (0.0.0.0). With your original configuration it binds to the loopback interface (lo) for localhost, so it only accepts traffic on that interface and not external traffic.
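As a further end-to-end check from the host (assuming kafkacat is installed and the validated topic exists), producing and then reading back a message should now work as well:
echo 'hello from host' | kafkacat -P -b localhost:29092 -t validated
kafkacat -C -b localhost:29092 -t validated -o beginning -e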

How to expose Kafka from Docker to the outside world?

Having to communicate with Kafka from a Dockerized Spring-Boot application,
the only option I was able to get working was Dockerizing Kafka too.
Here is my docker-compose.yml:
version: '3.5'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      - kafka-network
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    networks:
      - kafka-network
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
networks:
  kafka-network:
    name: kafka-network
This way I'm able to connect to the Kafka broker from another container on the kafka-network using the URL kafka:9092.
How do I make it also available from the localhost and from other machines?
UPDATE
I updated my docker-compose as follows:
version: '3.5'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      - kafka-network
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    networks:
      - kafka-network
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: ${KAFKA_LISTENERS:-PLAINTEXT://:9092}
      KAFKA_ADVERTISED_LISTENERS: ${KAFKA_ADVERTISED_LISTENERS:-PLAINTEXT://127.0.0.1:9092}
    depends_on:
      - zookeeper
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
networks:
  kafka-network:
    name: kafka-network
and created a .env file with the following content:
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://the.ip.of.machine:9092
I tested it on my PC (without the .env file) and I'm able to communicate with the broker using kafkacat from localhost:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6844a16fa14f wurstmeister/kafka "start-kafka.sh" 5 seconds ago Up 3 seconds 0.0.0.0:9092->9092/tcp kafka-compose_kafka_1_9573f71109c7
15d62557f3bd wurstmeister/zookeeper "/bin/sh -c '/usr/sb…" 6 seconds ago Up 4 seconds 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp kafka-compose_zookeeper_1_61a19213cde7
$ kafkacat -P -b localhost:9092 -t topic1
New test
^C
$ kafkacat -C -b localhost:9092 -t topic1
New test
% Reached end of topic topic1 [0] at offset 1
$ kafkacat -b localhost:9092 -L
Metadata for all topics (from broker -1: localhost:9092/bootstrap):
1 brokers:
broker 1001 at 127.0.0.1:9092
4 topics:
...
However, I'm not able to do the same on the server; the only difference is the IP of the host machine in KAFKA_ADVERTISED_LISTENERS.
What I can see is that it keeps saying leader not available
# kafkacat -b localhost:9092 -L
Metadata for all topics (from broker -1: localhost:9092/bootstrap):
1 brokers:
broker 1002 at the.ip.of.machine:9092
topic "__consumer_offsets" with 50 partitions:
partition 0, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 1, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 2, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 3, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 4, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 5, leader -1, replicas: 1001, isrs: , Broker: Leader not available
Shouldn't I set the IP of the server for the KAFKA_ADVERTISED_LISTENERS ?
You have to set the listeners in the environment in order to expose the Kafka brokers to an external network, like below:
KAFKA_LISTENERS: PLAINTEXT://:9092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://one.prod.com:9092
Here is an example:
https://github.com/wurstmeister/kafka-docker/blob/85821409d4d49a4edc7c5be83b68b71eceeab1bc/docker-compose-swarm.yml
You can refer here for more details:
https://github.com/wurstmeister/kafka-docker/wiki/Connectivity
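Once the advertised listener carries the server's real address, a quick hedged check from another machine (substitute the actual hostname or IP) is:
kafkacat -b the.ip.of.machine:9092 -L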
For newer Kafka versions, look here:
https://www.kaaproject.org/kafka-docker
https://github.com/bitnami/bitnami-docker-kafka/issues/29#issuecomment-435216430
For older Kafka, if you have only one computer, these Kafka variables might help:
environment:
  ADVERTISED_HOST: ${COMPUTERNAME}
  ADVERTISED_PORT: "9092"
