Starting Kafka topics using Docker Compose with spotify/kafka? - docker

I am attempting to connect Kafka topics to my front-end Java Spring application. I am utilizing Docker Compose and have tried to connect using two different Kafka images.
With wurstmeister/kafka I have been able to get the Kafka topics up and running with the following service in my docker-compose.yml file, but I have not been able to connect the created topics to my front-end Java Spring application.
kafka:
  image: wurstmeister/kafka:0.10.2.0
  ports:
    - "9092:9092"
  expose:
    - "9092"
    - "2181"
  environment:
    KAFKA_ADVERTISED_HOST_NAME: localhost
    KAFKA_CREATE_TOPICS: "test-topic1:1:1, test-topic2:1:1"
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  depends_on:
    - zookeeper
Secondly, with spotify/kafka, I am having difficulty actually creating the topics. According to the documentation, the topics are passed as an environment variable, but the following docker-compose.yml service does not create a topic. I have also tried putting quotes around test-topic, but that did not work either.
kafka:
  image: spotify/kafka
  ports:
    - "9092:9092"
    - "2181:2181"
  hostname: kafka
  expose:
    - "9092"
    - "2181"
  environment:
    TOPICS: test-topic
I do not know if this is necessary, but my entire docker-compose.yml file is as follows. Note that the zookeeper service is only required when using wurstmeister/kafka.
docker-compose.yml
version: '2'
services:
  # zookeeper:
  #   image: wurstmeister/zookeeper
  #   ports:
  #     - "2181:2181"
  kafka:
    image: spotify/kafka
    ports:
      - "9092:9092"
      - "2181:2181"
    hostname: kafka
    expose:
      - "9092"
      - "2181"
    environment:
      TOPICS: test-topic
  redis:
    image: redis
    ports:
      - "6379"
    restart: always
  kafka-websocket-connector:
    build: ./kafka-websocket-connector
    image: andrewterra/kafka-websocket-connector
    ports:
      - "8077:8077"
      # - "9092:9092"
    depends_on:
      - kafka
      - redis
      # - zookeeper
    links:
      - kafka
      - redis

Rather late, but you could use something like the following compose command to run a shell snippet that creates your topic:
command: >
  bash -c
  "(sleep 15s &&
  /opt/kafka_2.11-0.10.1.0/bin/kafka-topics.sh
  --create
  --zookeeper
  localhost:2181 --replication-factor 1 --partitions 1
  --topic my_topic &) && (supervisord -n)"

Container run command to create a topic
Use docker run --net=host --rm to run the commands in a throwaway container. In the following examples, ZooKeeper is running on port 22181; substitute your own topic name and port.
Create
docker run --net=host --rm confluentinc/cp-kafka:4.0.0 kafka-topics --create --topic customer --partitions 1 --replication-factor 1 --if-not-exists --zookeeper localhost:22181
Describe
docker run --net=host --rm confluentinc/cp-kafka:4.0.0 kafka-topics --zookeeper localhost:22181 --topic customer --describe
List
docker run --net=host --rm confluentinc/cp-kafka:4.0.0 kafka-topics --list --zookeeper localhost:22181
Delete
docker run --net=host --rm confluentinc/cp-kafka:4.0.0 kafka-topics --delete --topic customer --zookeeper localhost:22181

The TOPICS environment variable is only used for the kafkaproxy image. https://github.com/spotify/docker-kafka#running-the-proxy
For the kafka image, you will need to create the topics with a client.
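For example, a minimal sketch using the CLI bundled in the spotify/kafka image (this assumes the compose service is named kafka and that the Kafka install lives under /opt/kafka_2.11-0.10.1.0, as in the answer above; the image runs ZooKeeper on 2181 inside the same container):
# create the topic from the host via the running container
docker-compose exec kafka \
  /opt/kafka_2.11-0.10.1.0/bin/kafka-topics.sh --create \
  --zookeeper localhost:2181 --replication-factor 1 --partitions 1 \
  --topic test-topic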

Related

500 Internal Server Error: Error while creating AdminClient for Cluster Default

I get an error when I try to view topics and consumers using UI for Apache Kafka.
The docker command I use:
docker run -p 8080:8080 -e KAFKA_CLUSTERS_0_ZOOKEEPER=2181:2181 -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=127.0.0.1:9092 -d provectuslabs/kafka-ui:latest
or the docker-compose.yml file:
services:
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    depends_on:
      - kafka
    environment:
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
      KAFKA_CLUSTERS_0_JMXPORT: 9997
  kafka:
    image: johnnypark/kafka-zookeeper
    ports:
      - "2181:2181"
      - "9092:9092"
    network_mode: bridge
    environment:
      ADVERTISED_HOST: 127.0.0.1
      NUM_PARTITIONS: 1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
I tried both ways; neither worked.
Where did I go wrong?
2181:2181 is two ports, not a Zookeeper hostname/ip and port.
Also, the Kafka address refers to the container you're running, not the actual Kafka server. If the broker is running on the host and you need to connect to it from inside Docker, you'll have to modify your Kafka server properties. Related: Connect to Kafka on host from Docker (ksqlDB)
Beyond that, remove the -d or use docker logs to see the actual error.
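As a hedged sketch of a corrected command (assuming Docker Desktop, where host.docker.internal resolves to the host, and that the broker on the host advertises a listener reachable under that name):
# run in the foreground (no -d) so errors are visible
docker run -p 8080:8080 \
  -e KAFKA_CLUSTERS_0_ZOOKEEPER=host.docker.internal:2181 \
  -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=host.docker.internal:9092 \
  provectuslabs/kafka-ui:latest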
I was using the johnnypark/kafka-zookeeper image for both Kafka and ZooKeeper. I was able to solve this problem by using two separate images, as in the example below:
zookeeper1:
  image: confluentinc/cp-zookeeper:5.2.4
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
kafka1:
  image: confluentinc/cp-kafka:5.3.1
  depends_on:
    - zookeeper1
  ports:
    - 9093:9093
    - 9998:9998
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:29092,PLAINTEXT_HOST://localhost:9093
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    JMX_PORT: 9998
    KAFKA_JMX_OPTS: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka1 -Dcom.sun.management.jmxremote.rmi.port=9998
https://github.com/provectus/kafka-ui/blob/master/docker/kafka-ui.yaml

Creating Kafka Topic with docker-compose

I am starting my first Docker project, trying to set up ZooKeeper and Kafka.
I have the following docker-compose.yml:
version: "3.8"
networks:
  mynet:
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      - mynet
  kafka:
    image: wurstmeister/kafka:2.12-2.4.0
    ports:
      - "9092:9092"
    expose:
      - "9093"
    environment:
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Create a topic NAME:PARTITION:REPLICAS
      KAFKA_CREATE_TOPICS: "example-topic:1:1"
    networks:
      - mynet
  kafka-manager:
    image: sheepkiller/kafka-manager:latest
    environment:
      ZK_HOSTS: "zookeeper:2181"
    ports:
      - 9000:9000
    networks:
      - mynet
I bring up the docker-compose.yml with:
sudo docker-compose up -d
To check whether the topic was created, I open a shell in the Kafka container:
sudo docker exec -it <<CONTAINER ID>> sh
In this shell I go to:
cd opt/kafka
And execute:
bin/kafka-topics.sh --list --zookeeper localhost:2181
And the output is a timeout error:
Exception in thread "main" kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:259)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:255)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:113)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1858)
at kafka.admin.TopicCommand$ZookeeperTopicService$.apply(TopicCommand.scala:321)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:54)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Zookeeper isn't running in the Kafka container.
You should use --bootstrap-server localhost:9092 anyway since the Zookeeper argument is deprecated
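For example, from /opt/kafka inside the container (the wurstmeister/kafka:2.12-2.4.0 image ships Kafka 2.4, whose topic tool accepts --bootstrap-server):
bin/kafka-topics.sh --list --bootstrap-server localhost:9092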

Docker compose create kafka topics

Problem: Cannot create topics from docker-compose. I need to create kafka topics before I run a system under test. Planning to use it as a part of the pipeline, hence using UI is not an option.
Note: it takes ~15 seconds for kafka to be ready so I would need to put a sleep for 15 seconds prior to adding the topics.
Possible solution:
create a shell.sh file with commands to wait for 15 sec then add a bunch of topics
create a dockerfile for it
include that docker image in the docker-compose.yml just before starting the system under test
Current flow:
create zookeeper - OK
create kafka1 - OK
rest-proxy - OK
create topics <- PROBLEM
create SUT - OK
Current docker-compose.yml:
version: '2'
services:
  zookeeper:
    image: docker.io/confluentinc/cp-zookeeper:5.4.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  Kafka1:
    image: docker.io/confluentinc/cp-enterprise-kafka:5.4.1
    hostname: Kafka1
    container_name: Kafka1
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_HOST_NAME: Kafka1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://Kafka1:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: Kafka1:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  rest-proxy:
    image: docker.io/confluentinc/cp-kafka-rest:5.4.1
    depends_on:
      - zookeeper
      - Kafka1
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'Kafka1:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
  topics:
    image: topics:latest
    hostname: topics
    container_name: topics
    depends_on:
      - zookeeper
      - Kafka1
      - rest-proxy
  sut:
    image: sut:latest
    hostname: sut
    container_name: sut
    depends_on:
      - zookeeper
      - Kafka1
      - rest-proxy
    ports:
      - 5000:80
Current Dockerfile for topics container:
FROM ubuntu:14.04
ADD topics.sh /usr/local/bin/topics.sh
RUN chmod +x /usr/local/bin/topics.sh
CMD /usr/local/bin/topics.sh
Current topics.sh file:
#!/bin/sh
echo "Start: Sleep 15 seconds"
sleep 30;
wait;
echo "Begin creating topics"
docker exec Kafka1 kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_ONE
docker exec Kafka1 kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_TWO
echo "Done creating topics"
Current output:
/usr/local/bin/topics.sh: 1: /usr/local/bin/topics.sh: #!/bin/sh: not found
Start: Sleep 15 seconds
Begin creating topics
/usr/local/bin/topics.sh: 8: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 9: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 10: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 11: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 12: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 13: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 14: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 15: /usr/local/bin/topics.sh: docker: not found
Done creating topics
Topics are not created. I'm stuck. Please help.
The simplest way is to start a separate container inside the docker-compose file (called init-kafka in the example below) to launch the various kafka-topics --create ... commands, while first making it wait for Kafka to be reachable by simply running kafka-topics --list ....
Like this:
version: '2.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.1
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  # reachable on 9092 from the host and on 29092 from inside docker compose
  kafka:
    image: confluentinc/cp-kafka:6.1.1
    depends_on:
      - zookeeper
    ports:
      - '9092:9092'
    expose:
      - '29092'
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: '1'
      KAFKA_MIN_INSYNC_REPLICAS: '1'
  init-kafka:
    image: confluentinc/cp-kafka:6.1.1
    depends_on:
      - kafka
    entrypoint: [ '/bin/sh', '-c' ]
    command: |
      "
      # blocks until kafka is reachable
      kafka-topics --bootstrap-server kafka:29092 --list
      echo -e 'Creating kafka topics'
      kafka-topics --bootstrap-server kafka:29092 --create --if-not-exists --topic my-topic-1 --replication-factor 1 --partitions 1
      kafka-topics --bootstrap-server kafka:29092 --create --if-not-exists --topic my-topic-2 --replication-factor 1 --partitions 1
      echo -e 'Successfully created the following topics:'
      kafka-topics --bootstrap-server kafka:29092 --list
      "
When running it, the init-kafka container should log something like:
docker logs docker_init-kafka_1
[2021-10-12 02:00:28,728] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:28,832] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:29,033] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:29,335] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Creating kafka topics
Created topic my-topic-1.
Created topic my-topic-2.
Successfully created the following topics:
my-topic-1
my-topic-2
This solution allows us to create a topic from the docker-compose.yml.
Refer to the Dockerfile of your respective Kafka image.
Take note of the last command in that Dockerfile (see the image layers in the image's Docker Hub repo).
In my case, for the image confluentinc/cp-kafka:latest, the last command that starts the Kafka service was "/etc/confluent/docker/run".
Hence, include the command below in your docker-compose.yml:
command: sh -c "((sleep 15 && kafka-topics --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 3 --topic topicName)&) && /etc/confluent/docker/run "
This will start the Kafka service, wait 15 seconds, and then create the topic.
Please note that we are assuming it takes at most 15 seconds for the Kafka service to become fully operational.
kafka:
  image: confluentinc/cp-kafka:latest
  depends_on:
    - zookeeper
  ports:
    - "29092:29092"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  command: sh -c "((sleep 15 && kafka-topics --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 3 --topic quick-starter)&) && /etc/confluent/docker/run"
Variant 1: Run topic_creator.sh (just the kafka-topics --create commands in another docker container)
Sorry for providing no full example but let me share the idea:
Create a separate docker container 'kafka-setup' whose only purpose is to provide the Kafka command-line tools. In it, replace the startup command so that it performs some (good enough) wait operations and then runs /kafka/topic_creator.sh (with host:port parameters for ZooKeeper and Kafka), which is injected via a volume. After that script has executed, a file K_OUTPUT_FILE is created and exposed, also via a volume (as a prerequisite, this file needs to be deleted before calling docker-compose up).
Snippet from docker-compose.yml (the ./kafka folder contains the topic creator script; ./output receives the 'kafka-done.txt' file once the topics are created):
# This "container" is a helper to pre-create topics
kafka-setup:
  image: confluentinc/cp-kafka:5.4.3
  depends_on:
    - kafka
  volumes:
    - ./kafka:/kafka
    - ./output:/output
  command: "bash -c 'chmod +x /kafka/topic_creator.sh && \
    /kafka/topic_creator.sh /kafka/topics.txt $$K_ZK $$K_KAFKA && \
    touch \"$${K_OUTPUT_FILE}\" && chmod a+rw \"$${K_OUTPUT_FILE}\"'"
  environment:
    K_ZK: localhost:22181
    K_KAFKA: localhost:19092
    K_OUTPUT_FILE: "/output/kafka-done.txt"
    # dummy values
    KAFKA_BROKER_ID: ignored
    KAFKA_ZOOKEEPER_CONNECT: ignored
  network_mode: host
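The answer does not show topic_creator.sh itself; here is a hypothetical sketch, assuming topics.txt lists one topic name per line and the image puts kafka-topics on the PATH:
#!/bin/bash
# Hypothetical /kafka/topic_creator.sh <topics-file> <zookeeper-host:port> <kafka-host:port>
TOPICS_FILE="$1"
ZK="$2"
KAFKA="$3"   # passed in by the compose command above, unused in this sketch

# wait until ZooKeeper/Kafka answer a topic listing
until kafka-topics --list --zookeeper "$ZK" > /dev/null 2>&1; do
  echo "Waiting for ZooKeeper at $ZK ..."
  sleep 2
done

# create each topic listed in the file
while read -r topic; do
  [ -z "$topic" ] && continue
  kafka-topics --create --if-not-exists --zookeeper "$ZK" \
    --replication-factor 1 --partitions 1 --topic "$topic"
done < "$TOPICS_FILE"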
To run everything in the right order, here is a bash script snippet:
K_SETUP_OUTPUT="./output"
mkdir -p "$K_SETUP_OUTPUT"
rm -f "$K_SETUP_OUTPUT/kafka-done.txt"
# Start stuff
docker-compose up -d --force-recreate --build --remove-orphans
wait_for_file "$K_SETUP_OUTPUT/kafka-done.txt"
sleep 5
# do your stuff here (e.g. read -r -p "Press any key to continue..." key )
do_something
with the wait_for_file function:
function wait_for_file {
  local name=$1
  local timeout=${timeout:-60}  # maximum seconds to wait for the file
  echo "File waiting ${name}."
  seconds=0
  while [[ "$seconds" -lt "$timeout" && ! -f "$name" ]]; do
    echo -n .
    seconds=$((seconds+1))
    sleep 1
  done
  if [ "$seconds" -lt "$timeout" ]; then
    echo "${name} created (${seconds}s)!"
  else
    echo " ERROR: not found ${name}" >&2
    exit 1
  fi
}
How it works in sequence:
run the test script
create folders and delete ./output/kafka-done.txt
execute docker-compose up
in kafka-setup first wait for availability of zookeeper and kafka ports
run ./kafka/topic_creator.sh with parameters for zookeeper and kafka ports
create ./output/kafka-done.txt
wait_for_file ./output/kafka-done.txt succeeds
do_something .. that's your tests or whatever
Variant 2: Just allow docker to run without root
See Manage Docker as a non-root user
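In short, the usual steps from that page (then log out and back in, or run newgrp docker):
sudo groupadd docker           # the group may already exist
sudo usermod -aG docker $USER
newgrp docker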
I needed to solve topic creation myself, and since I had the liberty of choosing the Kafka image of my preference, I chose the wurstmeister Kafka Docker image, which allows topics to be specified via the KAFKA_CREATE_TOPICS environment variable, like so:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Here is a link to a docker-compose example.
Another advantage for me was that an ARM64 version is available; I just had to switch ZooKeeper to the official image.
The ubuntu container doesn't have Docker installed.
It also doesn't have the kafka-topics command, so instead you should re-use the cp-enterprise-kafka image that you've already pulled and change the ENTRYPOINT or CMD to be your script, running the kafka-topics command directly.
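For example, a sketch of topics.sh rewritten to call kafka-topics directly, so the topics container can be based on the confluentinc/cp-enterprise-kafka image instead of ubuntu and needs no docker CLI inside it:
#!/bin/bash
echo "Waiting for Kafka to start"
sleep 15
echo "Begin creating topics"
kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 \
  --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_ONE
kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 \
  --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_TWO
echo "Done creating topics"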
Or replace your Kafka container with wurstmeister/kafka and add an environment variable for creating the topics
Please include the docker binary as a volume of the kafka container: in the compose file, under the kafka service, add the equivalent of -v $(which docker):/usr/bin/docker.

I need to create a kafka image with topics already created

I have a requirement to set up Kafka locally with the topics already there in the container. I am using landoop/fast-data-dev for that.
This is how I am doing it manually:
docker run -d --name landoopkafka -p 2181:2181 -p 3030:3030 -p 8081:8081 -p 8082:8082 -p 8083:8083 -p 9092:9092 -e ADV_HOST=localhost landoop/fast-data-dev
After running this command my container is up and running.
Now I open a bash shell inside this container with docker exec -it landoopkafka bash
and create the topic using this command:
kafka-topics --zookeeper localhost:2181 --create --topic hello_topic --partitions 1 --replication-factor 1
My topic is created.
But my requirement is to have a Dockerfile which will have the topic created, and I just need to run it.
OR
A docker-compose file which I just need to run.
I need help on this, as I am absolutely new to Docker and Kafka.
I had to do it too! What if I did not want to use wurstmeister images? I decided to make a custom script which will do the job, and run this script in a separate container.
Repository
https://github.com/yan-khonski-it/kafka-compose
Note, it will work with kafka versions that use zookeeper.
Is Zookeeper a must for Kafka?
To start Kafka with all your topics and ZooKeeper, run docker-compose up -d.
Implementation details.
docker-compose.yml
# These services are kafka related. This docker-compose allows to start kafka locally quickly.
version: '2.1'
networks:
  demo-network:
    name: demo-network
    driver: bridge
services:
  zookeeper:
    image: "confluentinc/cp-zookeeper:${CONFLUENT_PLATFORM_VERSION}"
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 32181:32181
    hostname: zookeeper
    networks:
      - demo-network
  kafka:
    image: "confluentinc/cp-kafka:${CONFLUENT_PLATFORM_VERSION}"
    container_name: kafka
    hostname: kafka
    ports:
      - 9092:9092
      - 29092:29092
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:32181
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_HOST://kafka:29092
      LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - "zookeeper"
    networks:
      - demo-network
  # Automatically creates required kafka topics if they were not created.
  kafka-topics-creator:
    build:
      context: kafka-topic-creator
      dockerfile: Dockerfile
    container_name: kafka-topics-creator
    depends_on:
      - zookeeper
      - kafka
    environment:
      ZOOKEEPER_HOSTS: "zookeeper:32181"
      KAFKA_TOPICS: "topic_v1 topic_v2"
    networks:
      - demo-network
Then I have a directory kafka-topics-creator.
Here, I have three files: create-kafka-topics.sh, Dockerfile, and README.md.
Dockerfile
# It is recommended to use the same version as the kafka broker uses.
# So no additional images are pulled.
FROM confluentinc/cp-kafka:4.1.2
WORKDIR usr/bin
# Once it is executed, this container is not needed.
COPY create-kafka-topics.sh create-kafka-topics.sh
ENTRYPOINT ["./create-kafka-topics.sh"]
create-kafka-topics.sh
#!/bin/bash
# Simply wait until original kafka container and zookeeper are started.
sleep 15.0s
# Parse string of kafka topics into an array
# https://stackoverflow.com/a/10586169/4587961
kafkatopicsArrayString="$KAFKA_TOPICS"
IFS=' ' read -r -a kafkaTopicsArray <<< "$kafkatopicsArrayString"
# A separate variable for zookeeper hosts.
zookeeperHostsValue=$ZOOKEEPER_HOSTS
# Create kafka topic for each topic item from split array of topics.
for newTopic in "${kafkaTopicsArray[@]}"; do
# https://kafka.apache.org/quickstart
kafka-topics --create --topic "$newTopic" --partitions 1 --replication-factor 1 --if-not-exists --zookeeper "$zookeeperHostsValue"
done
README.md - so other people know how to use it. Always document your stuff - good advice.
# Creates kafka topics automatically.
## Parameters
`ZOOKEEPER_HOSTS` - zookeeper hosts, I used value `"zookeeper:32181"` to run it locally.
`KAFKA_TOPICS` - space separated list of kafka topics. Example: `topic_1 topic_2 topic_3`.
Note, this container should run only **after** your original kafka broker and zookeeper are running.
After this container creates topics, it is not needed anymore.
How to check that the topics were created:
One solution is to check the logs of the kafka-topics-creator container.
docker logs kafka-topics-creator should print
$ docker logs kafka-topics-creator
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "topic_v1".
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "topic_v2".
You can create a docker-compose file like this...
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:latest
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:0.10.2.1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_CREATE_TOPICS: "MY_TOPIC_ONE:1:1,MY_TOPIC_TWO:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Put your topics there and run docker-compose up
You should instead try to use the wurstmeister/kafka image which supports an environment variable to create topics during container startup.
Sure, the Landoop container has a bunch of other useful things, but sounds like you only want Kafka and don't want to mess with editing any Dockerfiles
The other solution is to start up a second container after Kafka which runs the topic-creation scripts, then stops itself.
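A minimal sketch of that second-container idea for the landoop setup above, assuming a Linux host where --net=host can reach the published 2181 port; the container exits as soon as the topic exists:
docker run --net=host --rm confluentinc/cp-kafka:4.0.0 \
  kafka-topics --create --if-not-exists --zookeeper localhost:2181 \
  --partitions 1 --replication-factor 1 --topic hello_topic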

Kafka producer says "unknown_topic_or_partition"

I've been trying to make kafka-docker work for a few days now and I don't know what I'm doing wrong. Right now, I can't access any topic with my ruby-kafka client because the node "doesn't exist". This is my docker-compose.yml file:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:0.9.0.1
    ports:
      - "9092:9092"
    links:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka2:
    image: wurstmeister/kafka:0.9.0.1
    ports:
      - "9093:9092"
    links:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka3:
    image: wurstmeister/kafka:0.9.0.1
    ports:
      - "9094:9092"
    links:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_ADVERTISED_PORT: 9094
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
I specify "KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'" because I want to create topics by hand, so I entered into my first broker container and typed this:
./kafka-topics.sh --create --zookeeper 172.19.0.2:2181 --topic test1 --partitions 4 --replication-factor 3
And everything seems fine:
./kafka-topics.sh --list --zookeeper 172.19.0.2:2181 -> test1
But, when I try to do this:
./kafka-console-producer.sh --broker-list localhost:9092 --topic test1
It says:
WARN Error while fetching metadata with correlation id 24 : {test1=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
If I create the topic again, it says it already exists, so I don't know what is happening anymore.
You need to get your networking configuration right, as Kafka works across hosts and needs to be able to access them all.
This post explains it in detail.
You might also want to reference https://github.com/confluentinc/cp-docker-images/blob/5.0.0-post/examples/cp-all-in-one/docker-compose.yml for an example of a working Docker Compose.
We got this issue when we were working with Kafka Connect. There are multiple solutions to this: either prune all the Docker images, or change the group id in the configuration for Connect in the connect image, as below:
image: debezium/connect:1.1
ports:
  - 8083:8083
links:
  - schema-registry
environment:
  - BOOTSTRAP_SERVERS=kafkaanalytics-mgmt.fptsinternal.com:9092
  - GROUP_ID=1
  - CONFIG_STORAGE_TOPIC=my_connect_configs
  - OFFSET_STORAGE_TOPIC=my_connect_offsets
  - STATUS_STORAGE_TOPIC=my_connect_statuses
  - INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
  - INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
