I want to use both confluent/kafka and confluent/zookeeper and run them on a single Ubuntu server.
I'm using the following configurations:
docker run -e ZOOKEEPER_CLIENT_PORT=2181 --name zookeeper confluent/zookeeper
docker run --name kafka -e KAFKA_ADVERTISED_HOST_NAME=kafka -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_CREATE_TOPICS=testtopic:1:1 confluent/kafka
However, this results in: Unable to connect to zookeeper:2181
I have other containers that I'd like to connect as well. How can I access zookeeper via zookeeper:2181 and kafka via kafka:9092?
There are multiple ways to do this, but before we look into them there are two problems in your approach that you need to understand:
the zookeeper host is not reachable because each container started with a plain docker run lands on the default bridge network, where container names do not resolve
kafka may start and try to connect to zookeeper before zookeeper is ready
Solving the network issue
You can fix the networking in several ways:
use --net=host to run both on the host network
use docker network create <name> and then use --net=<name> while launching both the containers
Or you can run your kafka container on the zookeeper container's network:
use --net=container:zookeeper when launching the kafka container. This makes the zookeeper host accessible. It is not recommended unless you have a strong reason to do so, because as soon as the zookeeper container goes down, so does the network of your kafka container. I have included this option, with a sketch below, only for the sake of understanding.
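For illustration, a minimal sketch of that last option (container names as in the question; with a shared network stack, zookeeper is reached via localhost):
docker run -d -e ZOOKEEPER_CLIENT_PORT=2181 --name zookeeper confluent/zookeeper
docker run -d --net=container:zookeeper --name kafka -e KAFKA_ZOOKEEPER_CONNECT=localhost:2181 confluent/kafka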
Solving the startup race issue
Either keep a gap between starting zookeeper and kafka, so that zookeeper is up and running by the time kafka starts.
Another option is to use the --restart=on-failure flag with docker run. This restarts the container on failure, so kafka will keep retrying its connection to zookeeper, which will hopefully be up by then; see the sketch below.
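A sketch combining the network fix and the retry flag (the network name kafka_net is my own placeholder; create it first with docker network create kafka_net):
docker run -d --net=kafka_net --name zookeeper -e ZOOKEEPER_CLIENT_PORT=2181 confluent/zookeeper
docker run -d --net=kafka_net --restart=on-failure --name kafka -e KAFKA_ADVERTISED_HOST_NAME=kafka -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 confluent/kafka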
Instead of using docker run I would always prefer docker-compose to run such linked containers. You can do that by creating a simple docker-compose.yml file and then running it with docker-compose up
version: "3.4"
services:
zookeeper:
image: confluent/zookeeper
environment:
- ZOOKEEPER_CLIENT_PORT=2181
kafka:
image: confluent/kafka
environment:
- KAFKA_ADVERTISED_HOST_NAME=kafka
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_CREATE_TOPICS=testtopic:1:1
depends_on:
- zookeeper
restart: on-failure
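To bring the stack up and watch kafka retry until zookeeper is ready (standard Compose commands, nothing project-specific assumed):
docker-compose up -d
docker-compose logs -f kafka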
I'm running on Mac, and this works fine for me. Since 'host' networking does not work on Mac, I just create a network called kafka_net and put both containers in it.
version: "3.4"
services:
zookeeper:
image: confluent/zookeeper
environment:
- ZOOKEEPER_CLIENT_PORT=2181
networks:
- kafka_net
kafka:
image: confluent/kafka
environment:
- KAFKA_ADVERTISED_HOST_NAME=kafka
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
depends_on:
- zookeeper
networks:
- kafka_net
restart: on-failure
networks:
kafka_net:
driver: "bridge"
To make sure everything is working:
Log into the zookeeper container, then:
zookeeper-shell localhost:2181 => after a big chunk of startup text you should see something like 'Welcome to ZooKeeper!'
Log into the kafka container, then:
kafka-topics --zookeeper zookeeper:2181 --list # empty list
kafka-topics --zookeeper zookeeper:2181 --create --topic first_topic --replication-factor 1 --partitions 1
kafka-topics --zookeeper zookeeper:2181 --list # you will see the first_topic
kafka-console-producer --broker-list localhost:9092 --topic first_topic # type some text, then ctrl + c
kafka-console-consumer --bootstrap-server localhost:9092 --topic first_topic --from-beginning # you will see the text you typed into first_topic
If it still gives you problems, have a look at the official examples: https://github.com/confluentinc/cp-docker-images/tree/5.2.2-post/examples. If the issue persists, post it and I'll give a hand.
Docker starts containers on an isolated network, the default bridge, unless you specify a network explicitly, and on the default bridge container names do not resolve.
You can fix this in different ways; here are the 2 easiest:
Put containers into same user-defined bridge network
# create net
docker network create foo
docker run --network=foo -e ZOOKEEPER_CLIENT_PORT=2181 --name zookeeper confluent/zookeeper
docker run --network=foo --name kafka -e KAFKA_ADVERTISED_HOST_NAME=kafka -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_CREATE_TOPICS=testtopic:1:1 confluent/kafka
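To confirm that the name zookeeper resolves inside that network (assuming getent is available in the image, which is a guess):
docker exec kafka getent hosts zookeeper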
Expose ports and connect through localhost
docker run -p 2181:2181 -e ZOOKEEPER_CLIENT_PORT=2181 --name zookeeper confluent/zookeeper
docker run --name kafka -e KAFKA_ADVERTISED_HOST_NAME=kafka -e KAFKA_ZOOKEEPER_CONNECT=host.docker.internal:2181 -e KAFKA_CREATE_TOPICS=testtopic:1:1 confluent/kafka
Note: in the second approach you should use host.docker.internal as the host name and expose (publish) port 2181 on the first container to make it available on localhost. host.docker.internal resolves out of the box on Docker Desktop (Mac/Windows); on a Linux host it must be added explicitly, as sketched below.
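A sketch for Linux hosts, assuming Docker 20.10+, whose host-gateway value maps the name to the host:
docker run --add-host=host.docker.internal:host-gateway --name kafka -e KAFKA_ADVERTISED_HOST_NAME=kafka -e KAFKA_ZOOKEEPER_CONNECT=host.docker.internal:2181 -e KAFKA_CREATE_TOPICS=testtopic:1:1 confluent/kafka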
docker network create kafka-zookeeper
Wait until the network is created.
docker run -it -d --network=kafka-zookeeper --name zookeeper zookeeper
Wait until ZooKeeper is up and running.
docker run -it -d --network=kafka-zookeeper --name kafka -e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181 --restart=on-failure -e ALLOW_PLAINTEXT_LISTENER=yes -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092 -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 bitnami/kafka
Kafka should now connect fine.
These containers run in -d detached mode, so check the logs with docker logs -f zookeeper and docker logs -f kafka (or in Docker Desktop).
I'm trying to use Debezium with Kafka Connect. I followed this tutorial, and everything connected just fine. However, the problem is that I cannot access Kafka from outside of the docker containers anymore.
I use these commands to start containers:
docker run -it --rm --name zookeeper -p 2181:2181 -p 2888:2888 -p 3888:3888 debezium/zookeeper:2.0.0.Beta1
docker run -it --rm --name kafka -p 9092:9092 --link zookeeper:zookeeper debezium/kafka:2.0.0.Beta1
docker run -it --rm --name connect -p 8083:8083 -e GROUP_ID=1 -e CONFIG_STORAGE_TOPIC=my_connect_configs -e OFFSET_STORAGE_TOPIC=my_connect_offsets --link kafka:kafka debezium/connect:2.0.0.Beta1
I tried to set KAFKA_ADVERTISED_LISTENERS to PLAINTEXT://127.0.0.1:9092, which allowed me to connect to Kafka from outside of the container, but then I could not connect from the connect container to the kafka container anymore. How can I achieve both?
With this you can access the kafka container from your host on port 9092:
zookeeper:
  image: confluentinc/cp-zookeeper:7.2.0
  ports:
    - "2181:2181"
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
kafka-broker:
  image: confluentinc/cp-kafka:7.2.0
  depends_on:
    - zookeeper
  ports:
    - "9092:9092"
  environment:
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-broker:29092,OUTSIDE://localhost:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,OUTSIDE:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
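To check both listeners (kcat is just my tool choice, any Kafka client works; inside the compose network the broker is kafka-broker:29092, from the host it is localhost:9092):
kcat -b localhost:9092 -L # from the host, via the OUTSIDE listener
docker compose exec kafka-broker kafka-topics --bootstrap-server kafka-broker:29092 --list # from inside the network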
I think it's not a Kafka issue but a docker networking one. The broker is probably accessible via the docker network, or you need to expose the port: https://docs.docker.com/network/network-tutorial-standalone/
I am a bit confused. I was trying to convert a docker-compose file for Elasticsearch and Kibana to a Dockerfile, but the networking and connectivity part is confusing for me. Can anyone help me with the conversion and a bit of explanation?
Thanks a lot!
version: "3.0"
services:
elasticsearch:
container_name: es-container
image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
environment:
- xpack.security.enabled=true
- "discovery.type=single-node"
networks:
- es-net
ports:
- 9200:9200
kibana:
container_name: kb-container
image: docker.elastic.co/kibana/kibana:6.5.4
environment:
- ELASTICSEARCH_HOSTS=http://es-container:9200
networks:
- es-net
depends_on:
- elasticsearch
ports:
- 5601:5601
networks:
es-net:
driver: bridge
Docker Compose and Dockerfiles are completely different things. The Dockerfile is a configuration file used to create Docker images. The docker-compose.yml file is a configuration file used by Docker Compose to launch Docker containers using Docker images.
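For reference, a Dockerfile only describes how to build one image; a hypothetical sketch deriving from the Elasticsearch image above (illustration only, runtime configuration is usually preferable):
FROM docker.elastic.co/elasticsearch/elasticsearch:6.5.4
# ENV can bake in what the compose file passes at runtime
ENV discovery.type=single-node
# EXPOSE merely documents the port; publishing (-p 9200:9200) still happens at run time
EXPOSE 9200
# container_name, networks and depends_on have no Dockerfile equivalent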
To launch the above containers without using Docker Compose you could run:
docker network create es-net
docker run -d -e xpack.security.enabled=true -e "discovery.type=single-node" -p 9200:9200 --network es-net --name es-container docker.elastic.co/elasticsearch/elasticsearch:6.5.4
docker run -d -e ELASTICSEARCH_HOSTS=http://es-container:9200 -p 5601:5601 --network es-net --name kb-container docker.elastic.co/kibana/kibana:6.5.4
Alternatively, you could run the containers on the host's network stack (rather than the es-net network). Kibana would then be able to talk to ElasticSearch on localhost:
docker run -d -e xpack.security.enabled=true -e "discovery.type=single-node" --network host --name es-container docker.elastic.co/elasticsearch/elasticsearch:6.5.4
docker run -d -e ELASTICSEARCH_HOSTS=http://localhost:9200 --network host --name kb-container docker.elastic.co/kibana/kibana:6.5.4
(I haven't actually run these so the commands might need some tweaking).
In that docker-compose.yml file, the only thing that could be built into an image at all is the environment variables, and there's not much benefit to hard-coding your deployment configuration like this. In particular you cannot force the eventual container name or manually specify the eventual networking configuration in an image.
If you're looking for a compact self-contained description of what to run that you can redistribute, the docker-compose.yml is it. Don't try to send around images, or focus on trying to have a single container; instead, distribute the docker-compose.yml file as the way to run your application. I'd consider Compose a standard enough tool that anyone who has Docker already has it and knows how to run docker-compose up -d.
# How to run this application on a different system
# (with Docker and Compose preinstalled):
here$ scp docker-compose.yml there:
here$ ssh there
there$ sudo docker-compose up -d
I have a situation where Kafka is running in a docker container using a specific IP address within a network. The network is created using the following command:
sudo docker network create --subnet=172.19.0.0/16 --gateway 172.19.0.1 --ip-range=172.19.0.1/24 my_net
Kafka container is started using the following
docker run -d --name kafkanode --net my_net --hostname=kafkahost01 -p 2181:2181 -p 9092:9092 kafka_zook:212-358 tail -f /dev/null
I have producers within the same host from a different container.
In Kafka's server.properties, a simple configuration like the one below works for a producer within the same host and from a different container:
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://kafkahost01:9092
However, in our case, we will have producers who will also be sending messages from outside of that machine.
Unfortunately, I am not able to connect from outside the docker host machine. Can someone please help me with the configuration?
We are using Kafka 2.12-2.6.0
Zookeeper -- 3.5.8
server.properties was edited with the following values:
listeners=INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=INTERNAL://kafkahost01:29092,EXTERNAL://10.20.30.40:9092
inter.broker.listener.name=INTERNAL
Thanks
Balaji
Here you have a docker-compose example with inside and outside listeners configured. Try it out.
(Replace localhost with your desired IP or DNS name.)
version: '3.7'
services:
  zookeeper:
    image: zookeeper:3.5.8
    hostname: zookeeper
    volumes:
      - zookeeper-data:/data
      - zookeeper-datalog:/datalog
  kafka:
    image: wurstmeister/kafka:2.13-2.6.0
    hostname: kafka
    depends_on:
      - zookeeper
    ports:
      - 9093:9093
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://localhost:9093
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - kafka:/kafka
volumes:
  zookeeper-data:
  zookeeper-datalog:
  kafka:
Running a producer within the same network:
# note: I just placed my docker-compose.yml in the example dir, that's the reason for the example_default network
$ docker run -it --rm \
--name producer \
--network example_default \
wurstmeister/kafka:2.13-2.6.0 bash
bash-4.4# /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server kafka:9092 --topic example
>some
>test
And consuming from outside docker using kaf:
$ cat ~/.kaf/config
current-cluster: single
clusteroverride: ""
clusters:
- name: single
  version: 2.7.0
  brokers:
  - localhost:9093
  SASL: null
  TLS: null
  security-protocol: PLAINTEXT
  schema-registry-url: ""
$ kaf nodes
ID ADDRESS
1 localhost:9093
$ kaf consume example -f --raw
some
test
Hope this example can help you define your own setup.
I have a requirement that I need to set up kafka locally with the topics already there in the container. I am using landoop/fast-data-dev for that.
How I am doing it manually:
docker run -d --name landoopkafka -p 2181:2181 -p 3030:3030 -p 8081:8081 -p 8082:8082 -p 8083:8083 -p 9092:9092 -e ADV_HOST=localhost landoop/fast-data-dev
After running this command my container is up and running.
Now I go to bash inside this container with docker exec -it landoopkafka bash
and create the topic using this command:
kafka-topics --zookeeper localhost:2181 --create --topic hello_topic --partitions 1 --replication-factor 1
My topic is created.
But my requirement is to have a Dockerfile which already has the topic created, so I just need to run it.
OR
A docker-compose file which I just need to run.
I need help on this, as I am absolutely new to docker and kafka.
I had to do it too! What if I did not want to use wurstmeister images? I decided to make a custom script which does the job, and to run this script in a separate container.
Repository
https://github.com/yan-khonski-it/kafka-compose
Note: it will work with kafka versions that use zookeeper.
Is Zookeeper a must for Kafka?
To start kafka with all your topics and zookeeper: docker-compose up -d.
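The compose file below reads ${CONFLUENT_PLATFORM_VERSION}, which you supply via the shell or an .env file; for example (the value matches the Dockerfile further down):
CONFLUENT_PLATFORM_VERSION=4.1.2 docker-compose up -d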
Implementation details.
docker-compose.yml
# These services are kafka related. This docker-compose allows to start kafka locally quickly.
version: '2.1'
networks:
demo-network:
name: demo-network
driver: bridge
services:
zookeeper:
image: "confluentinc/cp-zookeeper:${CONFLUENT_PLATFORM_VERSION}"
container_name: zookeeper
environment:
ZOOKEEPER_CLIENT_PORT: 32181
ZOOKEEPER_TICK_TIME: 2000
ports:
- 32181:32181
hostname: zookeeper
networks:
- demo-network
kafka:
image: "confluentinc/cp-kafka:${CONFLUENT_PLATFORM_VERSION}"
container_name: kafka
hostname: kafka
ports:
- 9092:9092
- 29092:29092
environment:
KAFKA_ZOOKEEPER_CONNECT: zookeeper:32181
KAFKA_BROKER_ID: 1
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_HOST://kafka:29092
LISTENERS: PLAINTEXT://0.0.0.0:9092
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
volumes:
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
- "zookeeper"
networks:
- demo-network
# Automatically creates required kafka topics if they were not created.
kafka-topics-creator:
build:
context: kafka-topic-creator
dockerfile: Dockerfile
container_name: kafka-topics-creator
depends_on:
- zookeeper
- kafka
environment:
ZOOKEEPER_HOSTS: "zookeeper:32181"
KAFKA_TOPICS: "topic_v1 topic_v2"
networks:
- demo-network
Then I have a directory kafka-topic-creator.
Here, I have three files:
create-kafka-topics.sh, Dockerfile, README.md.
Dockerfile
# It is recommended to use the same version as the kafka broker uses,
# so no additional images are pulled.
FROM confluentinc/cp-kafka:4.1.2
WORKDIR /usr/bin
# Once it is executed, this container is not needed.
COPY create-kafka-topics.sh create-kafka-topics.sh
ENTRYPOINT ["./create-kafka-topics.sh"]
create-kafka-topics.sh
#!/bin/bash
# Simply wait until original kafka container and zookeeper are started.
sleep 15.0s
# Parse string of kafka topics into an array
# https://stackoverflow.com/a/10586169/4587961
kafkatopicsArrayString="$KAFKA_TOPICS"
IFS=' ' read -r -a kafkaTopicsArray <<< "$kafkatopicsArrayString"
# A separate variable for zookeeper hosts.
zookeeperHostsValue=$ZOOKEEPER_HOSTS
# Create kafka topic for each topic item from split array of topics.
for newTopic in "${kafkaTopicsArray[@]}"; do
# https://kafka.apache.org/quickstart
kafka-topics --create --topic "$newTopic" --partitions 1 --replication-factor 1 --if-not-exists --zookeeper "$zookeeperHostsValue"
done
README.md - so other people know how to use it. Always document your stuff - good advice.
# Creates kafka topics automatically.
## Parameters
`ZOOKEEPER_HOSTS` - zookeeper hosts, I used value `"zookeeper:32181"` to run it locally.
`KAFKA_TOPICS` - space separated list of kafka topics. Example: `topic_1 topic_2 topic_3`.
Note, this container should run only **after** your original kafka broker and zookeeper are running.
After this container creates topics, it is not needed anymore.
How to check that the topics were created?
One solution is to check the logs of the kafka-topics-creator container.
docker logs kafka-topics-creator should print:
$ docker logs kafka-topics-creator
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "topic_v1".
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "topic_v2".
You can create a docker-compose file like this...
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:latest
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:0.10.2.1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_CREATE_TOPICS: "MY_TOPIC_ONE:1:1,MY_TOPIC_TWO:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Put your topics there and run docker-compose up.
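To verify the topics afterwards (the Kafka CLI lives under /opt/kafka/bin in the wurstmeister image):
docker-compose exec kafka /opt/kafka/bin/kafka-topics.sh --zookeeper zookeeper:2181 --list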
You should instead try to use the wurstmeister/kafka image which supports an environment variable to create topics during container startup.
Sure, the Landoop container has a bunch of other useful things, but sounds like you only want Kafka and don't want to mess with editing any Dockerfiles
The other solution is to start up a second container after Kafka which runs the create scripts, then stops itself; something along the lines of the sketch below.
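A rough sketch of such a one-shot container (the network name is a guess at the Compose default, <project>_default; --rm removes the container when the command exits):
docker run --rm --network myproject_default wurstmeister/kafka:0.10.2.1 /opt/kafka/bin/kafka-topics.sh --zookeeper zookeeper:2181 --create --topic MY_TOPIC_ONE --partitions 1 --replication-factor 1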
I have downloaded the Docker Consul image and it is running, but I am not able to access its web UI. Does anyone have an idea how to get started? I am running this on my local machine in developer mode.
I am running:
docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul
See documentation:
The Web UI can be enabled by adding the -ui-dir flag:
$ docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 progrium/consul -server -bootstrap -ui-dir /ui
We publish 8400 (RPC), 8500 (HTTP), and 8600 (DNS) so you can try all three interfaces. We also give it a hostname of node1. Setting the container hostname is the intended way to name the Consul Agent node.
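To poke at the published interfaces from the host (standard Consul endpoints; dig is assumed to be installed):
$ curl http://localhost:8500/v1/status/leader # HTTP API
$ dig @127.0.0.1 -p 8600 node1.node.consul # DNS; node1 comes from the -h flag above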
You can try to activate the UI by setting the -ui-dir flag.
First, set experimental to true in Docker Desktop if you're using Windows containers.
The command below will work because it publishes port 8500:
docker run -d -e CONSUL_BIND_INTERFACE=eth0 -p 8500:8500 consul
You will be able to access consul at http://localhost:8500/
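A quick check that the agent answers before opening the browser (standard Consul HTTP API):
curl http://localhost:8500/v1/agent/members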
You could also use a compose file like this:
version: "3.7"
services:
consul:
image: consul
ports:
- "8500:8500"
environment:
- CONSUL_BIND_INTERFACE=eth0
networks:
nat:
aliases:
- consul