Kafka in a Docker Container - External and Internal connections - docker

I have a situation where Kafka is running in a Docker container, using a specific IP address within a network. The network is created using the following command:
sudo docker network create --subnet=172.19.0.0/16 --gateway 172.19.0.1 --ip-range=172.19.0.1/24 my_net
The Kafka container is started using the following:
docker run -d --name kafkanode --net my_net --hostname=kafkahost01 -p 2181:2181 -p 9092:9092 kafka_zook:212-358 tail -f /dev/null
I have producers on the same host, running in a different container.
In Kafka's server.properties, a simple configuration like the one below works for a producer on the same host in a different container:
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://kafkahost01:9092
However, in our case, we will also have producers sending messages from outside of that machine.
Unfortunately, I am not able to connect from outside the Docker host machine. Can someone please help me with the configuration?
We are using Kafka 2.12-2.6.0
Zookeeper -- 3.5.8
server.properties has been edited with the following values:
listeners=INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=INTERNAL://kafkahost01:29092,EXTERNAL://10.20.30.40:9092
inter.broker.listener.name=INTERNAL
Thanks
Balaji

Here is a docker-compose example with inside and outside listeners configured. Try it out.
(Replace localhost with your desired IP or DNS.)
version: '3.7'
services:
  zookeeper:
    image: zookeeper:3.5.8
    hostname: zookeeper
    volumes:
      - zookeeper-data:/data
      - zookeeper-datalog:/datalog
  kafka:
    image: wurstmeister/kafka:2.13-2.6.0
    hostname: kafka
    depends_on:
      - zookeeper
    ports:
      - 9093:9093
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://localhost:9093
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - kafka:/kafka
volumes:
  zookeeper-data:
  zookeeper-datalog:
  kafka:
Running a producer within the same network:
# note: I just placed my docker-compose.yml in an "example" dir, that's the reason for the example_default network
$ docker run -it --rm \
--name producer \
--network example_default \
wurstmeister/kafka:2.13-2.6.0 bash
bash-4.4# /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server kafka:9092 --topic example
>some
>test
And consuming from outside docker using kaf:
$ cat ~/.kaf/config
current-cluster: single
clusteroverride: ""
clusters:
- name: single
  version: 2.7.0
  brokers:
  - localhost:9093
  SASL: null
  TLS: null
  security-protocol: PLAINTEXT
  schema-registry-url: ""
$ kaf nodes
ID ADDRESS
1 localhost:9093
$ kaf consume example -f --raw
some
test
Hope this example can help you define your own setup.
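If you keep the standalone container from the question instead of docker-compose, the same idea applies: publish the external port with -p 9092:9092 when starting the container and advertise the Docker host's address on the EXTERNAL listener. A quick way to verify from a machine outside the Docker host (a rough sketch; kafkacat and the 10.20.30.40 address are just assumptions taken from the question):
$ kafkacat -b 10.20.30.40:9092 -L
If the listeners are set up correctly, this prints the cluster metadata with 10.20.30.40:9092 as the advertised broker address.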

Related

Kafka is not accessible from outside of the docker container [duplicate]

This question already has answers here:
Connect to Kafka running in Docker
(5 answers)
Closed 6 months ago.
I'm trying to use Debezium with Kafka Connect. I followed this tutorial, and everything connected just fine. However, the problem is that I cannot access Kafka from outside of the docker containers anymore.
I use these commands to start containers:
docker run -it --rm --name zookeeper -p 2181:2181 -p 2888:2888 -p 3888:3888 debezium/zookeeper:2.0.0.Beta1
docker run -it --rm --name kafka -p 9092:9092 --link zookeeper:zookeeper debezium/kafka:2.0.0.Beta1
docker run -it --rm --name connect -p 8083:8083 -e GROUP_ID=1 -e CONFIG_STORAGE_TOPIC=my_connect_configs -e OFFSET_STORAGE_TOPIC=my_connect_offsets --link kafka:kafka debezium/connect:2.0.0.Beta1
I tried to set KAFKA_ADVERTISED_LISTENERS to PLAINTEXT://127.0.0.1:9092, which allowed me to connect to Kafka from outside the container, but then I could not connect from the connect container to the kafka container anymore. How can I achieve both?
With this you can access the Kafka container from your host on port 9092:
zookeeper:
  image: confluentinc/cp-zookeeper:7.2.0
  ports:
    - "2181:2181"
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
kafka-broker:
  image: confluentinc/cp-kafka:7.2.0
  depends_on:
    - zookeeper
  ports:
    - "9092:9092"
  environment:
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-broker:29092,OUTSIDE://localhost:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,OUTSIDE:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
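A quick way to check both paths once this is up (a sketch; it assumes the Kafka command line tools are installed on the host and that the other container is attached to the same compose network):
# from the host (outside Docker), through the OUTSIDE listener:
kafka-console-producer --bootstrap-server localhost:9092 --topic test
# from another container on the compose network (e.g. the connect container),
# use the internal listener instead: --bootstrap-server kafka-broker:29092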
I think it's not a Kafka issue but a Docker networking one. The broker is probably reachable via the Docker network, or you need to expose its port. https://docs.docker.com/network/network-tutorial-standalone/
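For the plain docker run commands from the question, the networking part can also be handled with a user-defined network instead of the legacy --link flags, roughly like this (a sketch; the network name is arbitrary):
docker network create debezium-net
# then add --network debezium-net to each docker run command from the question;
# containers on a user-defined network can resolve each other by name, so
# zookeeper, kafka and connect can reach one another (the --link flags can stay
# if the images rely on them for their configuration)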

Cannot connect to docker container (redis) in host mode

This is probably just related to WSL in general, but Redis is my use case.
This works fine and I can connect like this:
docker exec -it redis-1 redis-cli -c -p 7001 -a Password123
But I cannot make any connections from my local Windows PC to the container. I get:
Could not connect: Error 10061 connecting to host.docker.internal:7001. No connection could be made because the target machine actively refused it.
This is the same error as when the container isn't running, so I'm not sure if it's a Docker issue or WSL.
version: '3.9'
services:
  redis-cluster:
    image: redis:latest
    container_name: redis-cluster
    command: redis-cli -a Password123 -p 7001 --cluster create 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006 --cluster-replicas 1 --cluster-yes
    depends_on:
      - redis-1
      - redis-2
      - redis-3
      - redis-4
      - redis-5
      - redis-6
    network_mode: host
  redis-1:
    image: "redis:latest"
    container_name: redis-1
    network_mode: host
    entrypoint: >
      redis-server
      --port 7001
      --appendonly yes
      --cluster-enabled yes
      --cluster-config-file nodes.conf
      --cluster-node-timeout 5000
      --masterauth Password123
      --requirepass Password123
      --bind 0.0.0.0
      --protected-mode no
  # Five more the same as the above
According to the provided docker-compose.yml file, the container ports are not exposed, so they are unreachable from the outside (your Windows/WSL host). Check here for the official reference. More about Docker and ports here.
As an example, for the redis-1 service you should add the following to its definition:
...
redis-1:
  ports:
    - 7001:7001
  ...
...
The docker exec ... is working because the port is reachable from inside the container.
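After adding the port mapping, a quick connectivity check from the Windows side could look like this (a sketch; it assumes redis-cli is installed on the Windows host):
redis-cli -h localhost -p 7001 -a Password123 ping
# expected reply: PONG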

Kafka docker compose external connection [duplicate]

This question already has answers here:
Connect to Kafka running in Docker
(5 answers)
Closed last month.
I want to expose 9093 to the outside of the docker container. When I set the kafka-0 ports to expose 9093 and KAFKA_ADVERTISED_LISTENERS as follows, I am unable to connect to localhost:9093, as shown in the following docker-compose file.
version: '3'
services:
  kafka-0:
    image: confluentinc/cp-kafka:5.2.1
    container_name: kafka-0
    hostname: kafka-0
    ports:
      - "9093:9092"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=wise-nlp-zookeeper:2181
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-0:29094,PLAINTEXT_HOST://localhost:9093
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
    depends_on:
      - zookeeper
  zookeeper:
    image: confluentinc/cp-zookeeper:5.3.1
    container_name: zookeeper
    ports:
      - "2182:2181"
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
However, when I change it to
ports:
  - "9092:9092"
and
  - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-0:29094,PLAINTEXT_HOST://localhost:9092
I am able to connect to the Kafka broker at localhost:9092.
How can I change the external port to 9093 for applications to connect? I want to set up multiple brokers.
Why's it not working currently?
Advertised listener(s) (as defined in KAFKA_ADVERTISED_LISTENERS) are the host and port that the broker returns to the client in its initial connection for it to use in subsequent connections.
If you want external clients to use 9093 then KAFKA_ADVERTISED_LISTENERS=…PLAINTEXT_HOST://localhost:9093 is correct. However, you've not configured your KAFKA_LISTENERS, which if you check the broker log when it starts up will default to the value set by KAFKA_ADVERTISED_LISTENERS:
kafka-0 | listeners = PLAINTEXT://0.0.0.0:29094,PLAINTEXT_HOST://0.0.0.0:9093
So in this state the broker is listening on port 9093, but with this Docker Compose instruction you've redirected external connections arriving on port 9093 to port 9092 within the container:
ports:
- "9093:9092"
➜ docker ps
CONTAINER ID IMAGE … PORTS NAMES
8b934ef4145c confluentinc/cp-kafka:5.4.1 … 0.0.0.0:9093->9092/tcp kafka-0
So your external connections will go to port 9092 in the container—and the broker is not listening on this port. You can verify this with nc:
-- Port 9093 is open on the host machine
➜ nc -vz localhost 9093
Connection to localhost port 9093 [tcp/*] succeeded!
-- Port 9092 is _not_ open on the Kafka container
➜ docker exec -it kafka-0 nc -vz localhost 9092
localhost [127.0.0.1] 9092 (?) : Connection refused
❌ You'll see that a client connection fails
➜ kafkacat -b localhost:9093 -L
% ERROR: Failed to acquire metadata: Local: Broker transport failure
How can you fix it?
You can either:
Change the listener to be on the port that you target with the Docker port redirect. This will work but personally I think is more confusing.
Change the Docker port redirect to target the port on which the listener is on. This is the option I would use as it is clearer (e.g. port 9093 is used throughout, rather than mixing 9092 and 9093 together)
Option 1: Change the listener to be on the port that you target with the Docker port redirect
version: '3'
services:
  kafka-0:
    image: confluentinc/cp-kafka:5.4.1
    container_name: kafka-0
    ports:
      - "9093:9092"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-0:29094,PLAINTEXT_HOST://localhost:9093
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:29094,PLAINTEXT_HOST://0.0.0.0:9092
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
    depends_on:
      - zookeeper
  zookeeper:
    image: confluentinc/cp-zookeeper:5.4.1
    container_name: zookeeper
    ports:
      - "2182:2181"
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
✅Test:
➜ kafkacat -b localhost:9093 -L
Metadata for all topics (from broker 1: localhost:9093/1):
1 brokers:
broker 1 at localhost:9093 (controller)
Option 2: Change the Docker port redirect to target the port on which the listener is on
version: '3'
services:
  kafka-0:
    image: confluentinc/cp-kafka:5.4.1
    container_name: kafka-0
    ports:
      - "9093:9093"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-0:29094,PLAINTEXT_HOST://localhost:9093
      # If you don't specify KAFKA_LISTENERS it will default to the ports used in
      # KAFKA_ADVERTISED_LISTENERS, but IMO it's better to be explicit about these settings
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:29094,PLAINTEXT_HOST://0.0.0.0:9093
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
    depends_on:
      - zookeeper
  zookeeper:
    image: confluentinc/cp-zookeeper:5.4.1
    container_name: zookeeper
    ports:
      - "2182:2181"
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
✅Test
➜ kafkacat -b localhost:9093 -L
Metadata for all topics (from broker 1: localhost:9093/1):
1 brokers:
broker 1 at localhost:9093 (controller)
Connecting to Kafka from within the Docker network
The examples above are about connecting to Kafka from the Docker host. If you want to connect to it from within the Docker network (e.g. another container) you need to use kafka-0:29094 as the broker host and IP. If you try to use localhost:9093 then the client container will resolve localhost to its own container, and thus fail.
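For example, you could check the internal listener from a throwaway container attached to the same network (a sketch; the network name depends on your Compose project, and the kafkacat image is just one convenient option):
docker run -it --rm --network=<your_compose_network> edenhill/kafkacat:1.6.0 -b kafka-0:29094 -L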
Multiple brokers
See here for an example Docker Compose with multiple Kafka brokers.
References
https://rmoff.net/2018/08/02/kafka-listeners-explained/

I need to create a kafka image with topics already created

I have a requirement where I need to set up Kafka locally with the topics already there in the container. I am using landoop/fast-data-dev for doing that.
How I am doing it manually:
docker run -d --name landoopkafka -p 2181:2181 -p 3030:3030 -p 8081:8081 -p 8082:8082 -p 8083:8083 -p 9092:9092 -e ADV_HOST=localhost landoop/fast-data-dev
After running this command my container is up and running.
Now I go to a bash shell inside this container with docker exec -it landoopkafka bash
and create a topic using this command:
kafka-topics --zookeeper localhost:2181 --create --topic hello_topic --partitions 1 --replication-factor 1
My topic is created.
But my requirement is that I need a Dockerfile which will have the topics created, and I just need to run it.
OR
A docker-compose file which I just need to run.
Guys, I need help on this, as I am absolutely new to Docker and Kafka.
I had to do it too! What if I did not want to use wurstmeister images? I decided to make a custom script which does the job, and run this script in a separate container.
Repository
https://github.com/yan-khonski-it/kafka-compose
Note, it will work with kafka versions that use zookeeper.
Is Zookeeper a must for Kafka?
To start kafka with all your topics and zookeeper, run docker-compose up -d.
Implementation details.
docker-compose.yml
# These services are kafka related. This docker-compose allows to start kafka locally quickly.
version: '2.1'
networks:
  demo-network:
    name: demo-network
    driver: bridge
services:
  zookeeper:
    image: "confluentinc/cp-zookeeper:${CONFLUENT_PLATFORM_VERSION}"
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 32181:32181
    hostname: zookeeper
    networks:
      - demo-network
  kafka:
    image: "confluentinc/cp-kafka:${CONFLUENT_PLATFORM_VERSION}"
    container_name: kafka
    hostname: kafka
    ports:
      - 9092:9092
      - 29092:29092
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:32181
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_HOST://kafka:29092
      LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - "zookeeper"
    networks:
      - demo-network
  # Automatically creates required kafka topics if they were not created.
  kafka-topics-creator:
    build:
      context: kafka-topic-creator
      dockerfile: Dockerfile
    container_name: kafka-topics-creator
    depends_on:
      - zookeeper
      - kafka
    environment:
      ZOOKEEPER_HOSTS: "zookeeper:32181"
      KAFKA_TOPICS: "topic_v1 topic_v2"
    networks:
      - demo-network
Then I have a directory kafka-topics-creator.
Here, I have three files
create-kafka-topics.sh, Dockerfile, README.md.
Dockerfile
# It is recommended to use the same version as the kafka broker uses,
# so no additional images are pulled.
FROM confluentinc/cp-kafka:4.1.2
WORKDIR /usr/bin
# Once it is executed, this container is not needed.
COPY create-kafka-topics.sh create-kafka-topics.sh
ENTRYPOINT ["./create-kafka-topics.sh"]
create-kafka-topics.sh
#!/bin/bash
# Simply wait until original kafka container and zookeeper are started.
sleep 15.0s
# Parse string of kafka topics into an array
# https://stackoverflow.com/a/10586169/4587961
kafkatopicsArrayString="$KAFKA_TOPICS"
IFS=' ' read -r -a kafkaTopicsArray <<< "$kafkatopicsArrayString"
# A separate variable for zookeeper hosts.
zookeeperHostsValue=$ZOOKEEPER_HOSTS
# Create kafka topic for each topic item from split array of topics.
for newTopic in "${kafkaTopicsArray[@]}"; do
# https://kafka.apache.org/quickstart
kafka-topics --create --topic "$newTopic" --partitions 1 --replication-factor 1 --if-not-exists --zookeeper "$zookeeperHostsValue"
done
README.md - so other people know how to use it. Always document your stuff - good advice.
# Creates kafka topics automatically.
## Parameters
`ZOOKEEPER_HOSTS` - zookeeper hosts, I used value `"zookeeper:32181"` to run it locally.
`KAFKA_TOPICS` - space-separated list of kafka topics. Example: `topic_1 topic_2 topic_3`.
Note, this container should run only **after** your original kafka broker and zookeeper are running.
After this container creates topics, it is not needed anymore.
How to check that the topics were created:
One solution is to check the logs of the kafka-topics-creator container.
docker logs kafka-topics-creator should print:
$ docker logs kafka-topics-creator
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "topic_v1".
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "topic_v2".
You can create a docker-compose file like this...
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:latest
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:0.10.2.1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_CREATE_TOPICS: "MY_TOPIC_ONE:1:1,MY_TOPIC_TWO:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Put your topics there and run docker-compose up
You should instead try to use the wurstmeister/kafka image, which supports an environment variable to create topics during container startup.
Sure, the Landoop container has a bunch of other useful things, but it sounds like you only want Kafka and don't want to mess with editing any Dockerfiles.
The other solution is to start up a second container after Kafka which runs the create scripts, then stops itself.
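For example, such a one-shot container can be added to the compose file roughly like this (a sketch; the kafka service name, port, topic name and image tag are assumptions you would adapt to your setup):
  init-kafka:
    image: confluentinc/cp-kafka:7.2.0
    depends_on:
      - kafka
    # override the image entrypoint so the container only creates the topic and exits;
    # the sleep is a crude wait for the broker to come up
    entrypoint: bash -c "sleep 10 && kafka-topics --bootstrap-server kafka:9092 --create --if-not-exists --topic my_topic --partitions 1 --replication-factor 1"
Once the topic exists, the container simply exits, which is exactly the "runs the create scripts, then stops itself" idea.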

My kafka docker container cannot connect to my zookeeper docker container

I want to use both confluent/kafka and confluent/zookeeper and run them on a single Ubuntu server.
I'm using the following configurations:
docker run -e ZOOKEEPER_CLIENT_PORT=2181 --name zookeeper confluent/zookeeper
docker run --name kafka -e KAFKA_ADVERTISED_HOST_NAME=kafka -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_CREATE_TOPICS=testtopic:1:1 confluent/kafka
However this results in: Unable to connect to zookeeper:2181
I have other containers that I'd like to connect as well. How can I access zookeeper via zookeeper:2181 and kafka via kafka:9092?
There are multiple ways to do this. But before we look into it, there are two problems in your approach that you need to understand:
The zookeeper host is not reachable when you use docker run like this, because each of the containers runs in its own network isolation.
Kafka may start and try to connect to zookeeper while zookeeper is not ready yet.
Solving the network issue
You can do a few things to fix this:
use --net=host to run both on the host network
use docker network create <name> and then use --net=<name> while launching both containers
Or you can run your kafka container on the zookeeper container's network:
use --net=container:zookeeper when launching the kafka container. This will make sure the zookeeper host is accessible. It is not recommended as such, unless you have a strong reason to do so, because as soon as the zookeeper container goes down, so will the network of your kafka container. But for the sake of understanding, I have put this option here.
Solving the startup race issue
Either you can keep a gap between starting zookeeper and kafka, to make sure that zookeeper is up and running by the time kafka starts.
Another option is to use the --restart=on-failure flag with docker run. This will make sure the container is restarted on failure and tries to reconnect to zookeeper; hopefully by that time zookeeper will be up.
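Either way, a rough sketch of waiting for zookeeper before starting kafka (this assumes port 2181 was published on the host with -p 2181:2181):
until nc -z localhost 2181; do sleep 1; done
# ...now start the kafka container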
Instead of using docker run, I would always prefer docker-compose to run such linked containers. You can do that by creating a simple docker-compose.yml file and then running it with docker-compose up.
version: "3.4"
services:
zookeeper:
image: confluent/zookeeper
environment:
- ZOOKEEPER_CLIENT_PORT=2181
kafka:
image: confluent/kafka
environment:
- KAFKA_ADVERTISED_HOST_NAME=kafka
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_CREATE_TOPICS=testtopic:1:1
depends_on:
- zookeeper
restart: on-failure
I'm running on Mac, and this is working fine. Since 'host' networking does not work on Mac, I just create a network called kafka_net and put the containers there.
version: "3.4"
services:
zookeeper:
image: confluent/zookeeper
environment:
- ZOOKEEPER_CLIENT_PORT=2181
networks:
- kafka_net
kafka:
image: confluent/kafka
environment:
- KAFKA_ADVERTISED_HOST_NAME=kafka
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
depends_on:
- zookeeper
networks:
- kafka_net
restart: on-failure
networks:
kafka_net:
driver: "bridge"
To make sure everything is working:
Log into the zookeeper container, then:
zookeeper-shell localhost:2181  # you should see something like 'Welcome to ZooKeeper!' after a big chunk of text
Log into the kafka container, then:
kafka-topics --zookeeper zookeeper:2181 --list  # empty list
kafka-topics --zookeeper zookeeper:2181 --create --topic first_topic --replication-factor 1 --partitions 1
kafka-topics --zookeeper zookeeper:2181 --list  # you will see first_topic
kafka-console-producer --broker-list localhost:9092 --topic first_topic  # type some text and Ctrl+C
kafka-console-consumer --bootstrap-server localhost:9092 --topic first_topic --from-beginning  # you will see the text you typed into first_topic
If it still gives you problems, have a look at the official examples: https://github.com/confluentinc/cp-docker-images/tree/5.2.2-post/examples. If it's still not working, post it and I'll give a hand.
Docker starts containers in an isolated network, called the default bridge, unless you specify a network explicitly.
You can succeed in different ways; here are the 2 easiest:
Put the containers into the same user-defined bridge network
# create net
docker network create foo
docker run --network=foo -e ZOOKEEPER_CLIENT_PORT=2181 --name zookeeper confluent/zookeeper
docker run --network=foo --name kafka -e KAFKA_ADVERTISED_HOST_NAME=kafka -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_CREATE_TOPICS=testtopic:1:1 confluent/kafka
Expose ports and connect through localhost
docker run -p 2181:2181 -e ZOOKEEPER_CLIENT_PORT=2181 --name zookeeper confluent/zookeeper
docker run --name kafka -e KAFKA_ADVERTISED_HOST_NAME=kafka -e KAFKA_ZOOKEEPER_CONNECT=host.docker.internal:2181 -e KAFKA_CREATE_TOPICS=testtopic:1:1 confluent/kafka
Note: in the second approach you should use host.docker.internal as the host name and expose (publish) port 2181 on the first container to make it available on localhost.
docker network create kafka-zookeeper
Wait until the network is created.
docker run -it -d --network=kafka-zookeeper --name zookeeper zookeeper
Wait until ZooKeeper is up and running.
docker run -it -d --network=kafka-zookeeper --name kafka -e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181 --restart=on-failure -e ALLOW_PLAINTEXT_LISTENER=yes -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092 -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 bitnami/kafka
Kafka should connect fine.
These containers are running in -d (detached) mode, so you need to go to Docker Desktop to view the logs for each container.
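You can also follow the logs from a terminal instead of Docker Desktop:
docker logs -f zookeeper
docker logs -f kafka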
