Docker compose create kafka topics - docker

Problem: I cannot create topics from docker-compose. I need to create Kafka topics before I run a system under test. This will be part of a pipeline, so using a UI is not an option.
Note: it takes ~15 seconds for kafka to be ready so I would need to put a sleep for 15 seconds prior to adding the topics.
Possible solution:
- create a shell.sh file with commands to wait for 15 sec and then add a bunch of topics
- create a Dockerfile for it
- include that docker image in the docker-compose.yml just before starting the system under test
Current flow:
create zookeeper - OK
create kafka1 - OK
rest-proxy - OK
create topics <- PROBLEM
create SUT - OK
Current docker-compose.yml:
version: '2'
services:
  zookeeper:
    image: docker.io/confluentinc/cp-zookeeper:5.4.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  Kafka1:
    image: docker.io/confluentinc/cp-enterprise-kafka:5.4.1
    hostname: Kafka1
    container_name: Kafka1
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_HOST_NAME: Kafka1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://Kafka1:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: Kafka1:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  rest-proxy:
    image: docker.io/confluentinc/cp-kafka-rest:5.4.1
    depends_on:
      - zookeeper
      - Kafka1
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'Kafka1:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
  topics:
    image: topics:latest
    hostname: topics
    container_name: topics
    depends_on:
      - zookeeper
      - Kafka1
      - rest-proxy
  sut:
    image: sut:latest
    hostname: sut
    container_name: sut
    depends_on:
      - zookeeper
      - Kafka1
      - rest-proxy
    ports:
      - 5000:80
Current Dockerfile for topics container:
FROM ubuntu:14.04
ADD topics.sh /usr/local/bin/topics.sh
RUN chmod +x /usr/local/bin/topics.sh
CMD /usr/local/bin/topics.sh
Current topics.sh file:
#!/bin/sh
echo "Start: Sleep 15 seconds"
sleep 30;
wait;
echo "Begin creating topics"
docker exec Kafka1 kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_ONE
docker exec Kafka1 kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_TWO
echo "Done creating topics"
Current output:
/usr/local/bin/topics.sh: 1: /usr/local/bin/topics.sh: #!/bin/sh: not found
Start: Sleep 15 seconds
Begin creating topics
/usr/local/bin/topics.sh: 8: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 9: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 10: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 11: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 12: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 13: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 14: /usr/local/bin/topics.sh: docker: not found
/usr/local/bin/topics.sh: 15: /usr/local/bin/topics.sh: docker: not found
Done creating topics
Topics are not created. I'm stuck. Please help.

The simplest way is to start a separate container inside the docker-compose file (called init-kafka in the example below) that launches the various kafka-topics --create ... commands, after first waiting for Kafka to be reachable by simply running kafka-topics --list ....
Like this:
version: '2.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.1
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  # reachable on 9092 from the host and on 29092 from inside docker compose
  kafka:
    image: confluentinc/cp-kafka:6.1.1
    depends_on:
      - zookeeper
    ports:
      - '9092:9092'
    expose:
      - '29092'
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: '1'
      KAFKA_MIN_INSYNC_REPLICAS: '1'
  init-kafka:
    image: confluentinc/cp-kafka:6.1.1
    depends_on:
      - kafka
    entrypoint: [ '/bin/sh', '-c' ]
    command: |
      "
      # blocks until kafka is reachable
      kafka-topics --bootstrap-server kafka:29092 --list
      echo -e 'Creating kafka topics'
      kafka-topics --bootstrap-server kafka:29092 --create --if-not-exists --topic my-topic-1 --replication-factor 1 --partitions 1
      kafka-topics --bootstrap-server kafka:29092 --create --if-not-exists --topic my-topic-2 --replication-factor 1 --partitions 1
      echo -e 'Successfully created the following topics:'
      kafka-topics --bootstrap-server kafka:29092 --list
      "
When running it, the init-kafka container should log something like:
docker logs docker_init-kafka_1
[2021-10-12 02:00:28,728] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:28,832] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:29,033] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2021-10-12 02:00:29,335] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.24.0.3:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Creating kafka topics
Created topic my-topic-1.
Created topic my-topic-2.
Successfully created the following topics:
my-topic-1
my-topic-2

This solution allows us to create a topic from the docker-compose.yml.
Refer to the Dockerfile of your respective Kafka image and take note of the last command in it (visible on the image's Docker Hub page under Image Layers).
In my case, for the image confluentinc/cp-kafka:latest, the last command that starts the Kafka service is "/etc/confluent/docker/run".
Hence, include the command below in your docker-compose.yml:
command: sh -c "((sleep 15 && kafka-topics --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 3 --topic topicName)&) && /etc/confluent/docker/run"
This will start the Kafka service, delay for 15 seconds, then create the topic.
Please note that we are assuming it takes 15 seconds for the Kafka service to become fully operational.
kafka:
  image: confluentinc/cp-kafka:latest
  depends_on:
    - zookeeper
  ports:
    - "29092:29092"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  command: sh -c "((sleep 15 && kafka-topics --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 3 --topic quick-starter)&) && /etc/confluent/docker/run"
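Once the stack is up, you can check that the topic really was created after the delay. A quick way to do that (assuming the service is named kafka as in the snippet above) is:
docker-compose exec kafka kafka-topics --bootstrap-server kafka:9092 --list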

Variant 1: Run topic_creator.sh (just the kafka-topics --create commands) in another docker container
Sorry for providing no full example, but let me share the idea:
Create a separate docker container 'kafka-setup' whose only purpose is to provide the Kafka command-line tools. Replace its startup command so that it performs some (good enough) wait operations and then runs /kafka/topic_creator.sh (passing the host:port of zookeeper and kafka), which is injected via a volume; a sketch of such a script follows the compose snippet below. After that script has executed, a file K_OUTPUT_FILE is created and exposed via another volume (as a prerequisite, the file needs to be deleted before calling docker-compose up).
Snippet from docker-compose.yml (the ./kafka folder contains the topic creator script; ./output receives the 'kafka-done.txt' marker file):
# This "container" is a helper to pre-create topics
kafka-setup:
image: confluentinc/cp-kafka:5.4.3
depends_on:
- kafka
volumes:
- ./kafka:/kafka
- ./output:/output
command: "bash -c 'chmod +x /kafka/topic_creator.sh && \
/kafka/topic_creator.sh /kafka/topics.txt $$K_ZK $$K_KAFKA && \
touch \"$${K_OUTPUT_FILE}\" && chmod a+rw \"$${K_OUTPUT_FILE}\"'"
environment:
K_ZK: localhost:22181
K_KAFKA: localhost:19092
K_OUTPUT_FILE: "/output/kafka-done.txt"
# dummy values
KAFKA_BROKER_ID: ignored
KAFKA_ZOOKEEPER_CONNECT: ignored
network_mode: host
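The topic_creator.sh itself isn't shown in the answer. Here is a minimal sketch of what it might look like, assuming topics.txt holds one name:partitions:replication entry per line (that format is an assumption, not something the answer specifies):
#!/usr/bin/env bash
# Hypothetical topic_creator.sh (a sketch, not the original script).
# Usage: topic_creator.sh <topics-file> <zookeeper host:port> <kafka host:port>
set -e
TOPICS_FILE=$1
ZK=$2
BROKER=$3

# Wait until the broker answers admin requests.
until kafka-topics --bootstrap-server "$BROKER" --list >/dev/null 2>&1; do
  echo "Waiting for Kafka at $BROKER ..."
  sleep 2
done

# Create each topic listed in the file.
while IFS=: read -r name partitions replicas; do
  [ -z "$name" ] && continue
  kafka-topics --create --if-not-exists --zookeeper "$ZK" \
    --topic "$name" --partitions "$partitions" --replication-factor "$replicas"
done < "$TOPICS_FILE"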
To run everything in the right order, here is a bash script snippet:
K_SETUP_OUTPUT="./output"
mkdir -p "$K_SETUP_OUTPUT"
rm -f "$K_SETUP_OUTPUT/kafka-done.txt"
# Start stuff
docker-compose up -d --force-recreate --build --remove-orphans
wait_for_file "$K_SETUP_OUTPUT/kafka-done.txt"
sleep 5
# do your stuff here (e.g. read -r -p "Press any key to continue..." key )
do_something
with the wait_for_file function:
function wait_for_file {
  local name=$1
  local timeout=${timeout:-60}  # seconds to wait; the original snippet assumes $timeout is set elsewhere
  echo "File waiting ${name}."
  seconds=0
  while [[ "$seconds" -lt "$timeout" && ! -f "$name" ]];
  do
    echo -n .
    seconds=$((seconds+1))
    sleep 1
  done
  if [ "$seconds" -lt "$timeout" ]; then
    echo "${name} created (${seconds}s)!"
  else
    echo " ERROR: not found ${name}" >&2
    exit 1
  fi
}
How it works in sequence:
1. Run the test script.
2. Create the folders and delete ./output/kafka-done.txt.
3. Execute docker-compose up.
4. In kafka-setup, first wait for the zookeeper and kafka ports to become available.
5. Run ./kafka/topic_creator.sh with parameters for the zookeeper and kafka ports.
6. ./output/kafka-done.txt is created.
7. wait_for_file ./output/kafka-done.txt succeeds.
8. do_something runs: that's your tests or whatever.
Variant 2: Just allow docker to run without root
See Manage Docker as a non-root user
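For reference, the linked guide boils down to roughly the following commands (taken from the official Docker documentation; details vary by distribution):
sudo groupadd docker            # create the docker group if it does not already exist
sudo usermod -aG docker $USER   # add your user to the docker group
newgrp docker                   # pick up the new group membership without logging out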

I needed to solve topic creation myself, and as I had the liberty of choosing the Kafka image, I chose the wurstmeister/kafka docker image, which allows topics to be specified via the KAFKA_CREATE_TOPICS environment variable, like so:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Here is a link to a docker-compose example.
Another advantage for me was that an ARM64 version is available; I just had to switch the zookeeper image to the official one.
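To verify that KAFKA_CREATE_TOPICS took effect, you can list the topics from inside the broker container; the path below assumes the wurstmeister image layout, where Kafka is installed under /opt/kafka:
docker-compose exec kafka /opt/kafka/bin/kafka-topics.sh --list --zookeeper zookeeper:2181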

The ubuntu container doesn't have docker installed.
It also doesn't have the kafka-topics command, so instead you should re-use the cp-enterprise-kafka image that you've already pulled and change the ENTRYPOINT or CMD to your script, running the kafka-topics command directly (see the sketch below).
Or replace your Kafka container with wurstmeister/kafka and add an environment variable for creating the topics.
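For the first option (re-using the cp-enterprise-kafka image as the base of the topics container), the script could be reduced to something like this sketch, with topic names and addresses taken from the question and the fixed 15-second sleep replaced by a retry loop:
#!/bin/sh
# topics.sh, rewritten to run *inside* the cp-enterprise-kafka image,
# so kafka-topics is already on the PATH and no docker CLI is needed.
echo "Waiting for Kafka to be reachable"
until kafka-topics --bootstrap-server Kafka1:29092 --list >/dev/null 2>&1; do
  sleep 2
done

echo "Begin creating topics"
kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_ONE
kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic MY_AWESOME_TOPIC_TWO
echo "Done creating topics"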

Alternatively, mount the docker binary into the kafka container by adding a volume to the kafka service in the compose file: -v $(which docker):/usr/bin/docker (the Docker socket, /var/run/docker.sock, also needs to be mounted so that the CLI can reach the daemon).

Related

How do I create a Kafka topic using a docker-compose file?

I'm new to Kafka and I am trying to create a demo Kafka server and work with it.
I have already created Zookeeper and Kafka containers using a docker-compose file and they are started and running fine.
If I docker exec into the Kafka container and run:
kafka-topics --create --zookeeper zookeeper-1 --replication-factor 1 --partitions 1 --topic demo-topic
the topic is created successfully... But I would like to spin up the Kafka broker and then programmatically create the topics without user interaction (this will ultimately be for a pipeline).
I've tried two different Kafka images (the other was confluentinc/cp-kafka)... I have also tried changing bash to sh in the command of kafka-1.
I could really do with some help here. Below are my docker-compose file and the error responses I'm getting in my terminal for the bitnami image and for the confluent image.
ERROR (BITNAMI)
/opt/bitnami/scripts/kafka/entrypoint.sh: line 27: exec: bash -c "kafka-topics --create --zookeeper zookeeper-1 --replication-factor 1 --partitions 1 --topic demo-topic": not found
ERROR (CONFLUENT)
Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "bash -c \"kafka-topics --create --zookeeper zookeeper-1 --replication-factor 1 --partitions 1 --topic demo-topic\"": executable file not found in $PATH: unknown
DOCKER-COMPOSE.YAML
services:
  zookeeper-1:
    container_name: zookeeper-1
    image: zookeeper
    restart: always
    ports:
      - 2181:2181
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
    volumes:
      - ./config/zookeeper-1/zookeeper.properties:/kafka/config/zookeeper.properties
  kafka-1:
    container_name: kafka-1
    image: bitnami/kafka
    depends_on:
      - zookeeper-1
    ports:
      - 29092:29092
      - 9092:9092
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper-1:2181
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092
      - ALLOW_PLAINTEXT_LISTENER=yes
    command:
      - bash -c "kafka-topics --create --zookeeper zookeeper-1 --replication-factor 1 --partitions 1 --topic demo-topic"
For both errors, the command you are providing is being appended to the container's entrypoints, which do not accept bash commands... You would need to override both the entrypoint and the command, as well as give the full path to the kafka-topics script since it is not on the $PATH.
However, you cannot do this with the command on the same container as the broker because that overrides the command that actually starts the server.
You will need a secondary "init container" that creates the topics, but it would be easier for you to write this topic-creation logic into your own producer/consumer applications with an AdminClient.createTopics call (assuming Java).
Otherwise, you can use the wurstmeister/kafka image, which has a KAFKA_CREATE_TOPICS environment variable for this purpose, or just use docker-compose exec kafka-1 "..." after the containers are up (see the example below).
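For the docker-compose exec route, something along these lines should work once the broker is up. This is a sketch: it assumes the bitnami image, where the scripts carry a .sh suffix and live under /opt/bitnami/kafka/bin, and that the broker listens on 9092 inside the container:
docker-compose exec kafka-1 /opt/bitnami/kafka/bin/kafka-topics.sh \
  --create --bootstrap-server localhost:9092 \
  --replication-factor 1 --partitions 1 --topic demo-topic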

kafka-avro-console-consumer command is not available in `confluentinc/cp-enterprise-kafka`

I have the below docker-compose.yml file:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.0.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-enterprise-kafka:6.0.0
    hostname: kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
When I run docker-compose exec kafka bash I get the bash prompt.
At the bash prompt I have kafka-console-consumer, but I don't have access to kafka-avro-console-consumer.
How can I get access to kafka-avro-console-consumer? Is it not on $PATH but in some other directory?
I tried to use the find and which commands, but those are not present in the running container.
Use confluentinc/cp-schema-registry:
docker run --net=host confluentinc/cp-schema-registry \
  kafka-avro-console-consumer --bootstrap-server localhost:29092 --topic quickstart-jdbc-test --from-beginning --max-messages 10

Kafka Server issue in Docker

I am using docker for my sample Spark + Kafka project on a Windows machine.
I am facing:
WARN ClientUtils: Couldn't resolve server kafka:9092 from bootstrap.servers as DNS resolution failed for kafka
[error] (run-main-0) org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
[error] org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
------
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
Below is my docker-compose.yml
version: '2'
services:
  test1:
    build: test1service/.
    depends_on:
      - kafka
  test2:
    build: test2/.
    depends_on:
      - kafka
      - test1
  zookeeper:
    image: confluentinc/cp-zookeeper:5.1.0
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    extra_hosts:
      - "localhost: 127.0.0.1"
  kafka:
    image: confluentinc/cp-kafka:5.1.0
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      #KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka:9092
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    extra_hosts:
      - "localhost: 127.0.0.1"
Below is my sample code in the test2 service:
val inputStreamDF = spark.readStream.format("kafka")
  .option("kafka.bootstrap.servers", "kafka:9092")
  .option("subscribe", "test1")
  .option("startingOffsets", "earliest")
  .load()
docker ps command output is
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4c8feb49e12b test1 "/usr/bin/start.sh" About an hour ago Up 50 minutes test1
4535ce246541 test2 "/usr/bin/myservice-…" About an hour ago Up 50 minutes test2
733766f72adb confluentinc/cp-kafka:5.1.0 "/etc/confluent/dock…" About an hour ago Up 51 minutes 0.0.0.0:9092->9092/tcp kafka_1
d915e25cb226 confluentinc/cp-zookeeper:5.1.0 "/etc/confluent/dock…" About an hour ago Up 51 minutes 2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp zookeeper_1
Has anyone faced a similar issue? How did you resolve it?
You are trying to reach kafka:9092, but docker-compose has generated the container kafka_1, which is why there is no name resolution.
Docker gives internal IPs to your containers and uses their container names to create an internal DNS on this network (with the embedded DNS server).
Your environment variables are not changed by Docker Compose to fit the name it gives to your container.
You should use "container_name: kafka" in your container description to get a static container name.
The only thing you need is a network. After that, all services will be connected and you can use kafka:9092.
version: '2'
services:
  test1:
    build: test1service/.
    depends_on:
      - kafka
    networks:
      - app-network
  test2:
    build: test2/.
    depends_on:
      - kafka
      - test1
    networks:
      - app-network
  zookeeper:
    image: confluentinc/cp-zookeeper:5.1.0
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    extra_hosts:
      - "localhost: 127.0.0.1"
    networks:
      - app-network
  kafka:
    image: confluentinc/cp-kafka:5.1.0
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      #KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka:9092
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    extra_hosts:
      - "localhost: 127.0.0.1"
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
Try with localhost and then the container port:
localhost:9092
Here is an example command for creating a topic named "orders"
docker exec kafka kafka-topics --bootstrap-server localhost:9092 --create --topic orders --partitions 1 --replication-factor 1

fail to dockerize the second kafka instance to zookeeper

I want to dockerize a kafka cluster with two kafka instances.
running zookeeper
docker run -it --name zookeeper -p 2181:2181 -p 2888:2888 -p 3888:3888 debezium/zookeeper
running kafka
docker run -it --name kafka -p 9092:9092 -e ADVERTISED_HOST_NAME=$(hostname -f) --link zookeeper:zookeeper debezium/kafka
Running one zookeeper and one kafka is successful, but adding the second kafka container as shown below
docker run -it --name kafka2 -p 9096:9092 -e ADVERTISED_HOST_NAME=$(hostname -f) --link zookeeper:zookeeper debezium/kafka
gives me the following error:
2019-09-24 04:11:48,728 - ERROR [main:Logging#74] - Error while creating ephemeral at /brokers/ids/1, node already exists and owner '72057611679825940' does not match current session '72057611679825967'
2019-09-24 04:11:48,748 - ERROR [main:MarkerIgnoringBase#159] - [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown
org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists
at org.apache.zookeeper.KeeperException.create(KeeperException.java:122)
at kafka.zk.KafkaZkClient$CheckedEphemeral.getAfterNodeExists(KafkaZkClient.scala:1784)
at kafka.zk.KafkaZkClient$CheckedEphemeral.create(KafkaZkClient.scala:1722)
at kafka.zk.KafkaZkClient.checkedEphemeralCreate(KafkaZkClient.scala:1689)
at kafka.zk.KafkaZkClient.registerBroker(KafkaZkClient.scala:97)
at kafka.server.KafkaServer.startup(KafkaServer.scala:260)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
In the debezium Kafka container, you need to pass -e BROKER_ID=2. The default is 1.
https://github.com/debezium/docker-images/blob/master/kafka/0.10/docker-entrypoint.sh#L6-L9
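Applied to the docker run command from the question, the second broker would be started with a different broker id, for example:
docker run -it --name kafka2 -p 9096:9092 \
  -e ADVERTISED_HOST_NAME=$(hostname -f) \
  -e BROKER_ID=2 \
  --link zookeeper:zookeeper debezium/kafka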
In other Kafka containers, there's a KAFKA_BROKER_ID variable you can use; it needs to be different for each container. The example below also sets the replication factor of the internal topics to 2 and correctly sets up the advertised listeners.
Here's an example compose file
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka-1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 19092:19092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT_HOST://localhost:19092,PLAINTEXT://kafka-1:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 2
  kafka-2:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT_HOST://localhost:29092,PLAINTEXT://kafka-2:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 2
Ex.
$ kafkacat -L -b localhost:19092
Metadata for all topics (from broker 1: localhost:19092/1):
 2 brokers:
  broker 2 at localhost:29092
  broker 1 at localhost:19092 (controller)
 1 topics:
  topic "__confluent.support.metrics" with 1 partitions:
    partition 0, leader 1, replicas: 1,2, isrs: 1,2
Note: running multiple brokers on one machine won't improve any performance or reliability of Kafka

Starting a Kafka topics using Docker Compose with spotify/kafka?

I am attempting to connect Kafka topics to my front-end Java Spring application. I am utilizing Docker Compose and have tried to connect using two different Kafka images.
With wurstmeister/kafka I have been able to get the Kafka topics up and running via this service in my docker-compose.yml file, but I have not been able to connect the created topics to my front-end Java Spring application.
kafka:
  image: wurstmeister/kafka:0.10.2.0
  ports:
    - "9092:9092"
  expose:
    - "9092"
    - "2181"
  environment:
    KAFKA_ADVERTISED_HOST_NAME: localhost
    KAFKA_CREATE_TOPICS: "test-topic1:1:1, test-topic2:1:1"
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  depends_on:
    - zookeeper
Secondly, with spotify/kafka, I am having difficulty actually creating the topics. The documentation expects the topics as an environment variable, but the following docker-compose.yml service does not create a topic. I have also tried putting quotes around test-topic, but that did not work either.
kafka:
  image: spotify/kafka
  ports:
    - "9092:9092"
    - "2181:2181"
  hostname: kafka
  expose:
    - "9092"
    - "2181"
  environment:
    TOPICS: test-topic
I do not know if this is necessary, but my entire docker-compose.yml file is as follows; note that the zookeeper service is only required if you use wurstmeister/kafka.
docker-compose.yml
version: '2'
services:
  # zookeeper:
  #   image: wurstmeister/zookeeper
  #   ports:
  #     - "2181:2181"
  kafka:
    image: spotify/kafka
    ports:
      - "9092:9092"
      - "2181:2181"
    hostname: kafka
    expose:
      - "9092"
      - "2181"
    environment:
      TOPICS: test-topic
  redis:
    image: redis
    ports:
      - "6379"
    restart: always
  kafka-websocket-connector:
    build: ./kafka-websocket-connector
    image: andrewterra/kafka-websocket-connector
    ports:
      - "8077:8077"
      # - "9092:9092"
    depends_on:
      - kafka
      - redis
      # - zookeeper
    links:
      - kafka
      - redis
Rather late, but you could use something like the following shell command to create your topic:
command: >
  bash -c
  "(sleep 15s &&
  /opt/kafka_2.11-0.10.1.0/bin/kafka-topics.sh
  --create
  --zookeeper
  localhost:2181 --replication-factor 1 --partitions 1
  --topic my_topic &) && (supervisord -n)"
Container run command to create a topic
Use the container command docker run --net=host --rm. In the following examples, zookeeper is running on port 22181; substitute your own topic name and port as appropriate.
Create
docker run --net=host --rm confluentinc/cp-kafka:4.0.0 kafka-topics --create --topic customer --partitions 1 --replication-factor 1 --if-not-exists --zookeeper localhost:22181
Describe
docker run --net=host --rm confluentinc/cp-kafka:4.0.0 kafka-topics --zookeeper localhost:22181 --topic customer --describe
List
docker run --net=host --rm confluentinc/cp-kafka:4.0.0 kafka-topics --list --zookeeper localhost:22181
Delete
docker run --net=host --rm confluentinc/cp-kafka:4.0.0 kafka-topics --delete --topic customer --zookeeper localhost:22181
The TOPICS environment variable is only used for the kafkaproxy image. https://github.com/spotify/docker-kafka#running-the-proxy
For the kafka image, you will need to create the topics with a client.
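For example, you can exec into the running container and use the bundled scripts. This is a sketch: the install path is taken from the earlier answer for the same image, the service name kafka and the topic name come from the question, and zookeeper is reachable on localhost:2181 because spotify/kafka runs it in the same container:
docker-compose exec kafka /opt/kafka_2.11-0.10.1.0/bin/kafka-topics.sh \
  --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic test-topic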
