How to connect two Kafka clusters on different VMs?
On my Windows machine with the IP address 192.168.2.22,
I downloaded docker-compose.yml from https://github.com/conduktor/kafka-stack-docker-compose/blob/master/zk-single-kafka-single.yml and started it in a cmd console with docker-compose up -d.
Create Topic:
docker exec kafka1 kafka-topics --bootstrap-server localhost:9092 --create --topic my-Windows-Topic-1
Check access to the topic in different ways (all in the same cmd console):
docker exec kafka1 kafka-topics --bootstrap-server localhost:9092 --list
docker exec kafka1 kafka-topics --bootstrap-server localhost:19092 --list
docker exec kafka1 kafka-topics --bootstrap-server localhost:29092 --list
docker exec kafka1 kafka-topics --bootstrap-server 192.168.2.22:9092 --list
docker exec kafka1 kafka-topics --bootstrap-server 192.168.2.22:29092 --list
docker run --rm confluentinc/cp-kafka bash -c "kafka-topics --bootstrap-server 192.168.2.22:9092 --list"
docker run --rm confluentinc/cp-kafka bash -c "kafka-topics --bootstrap-server 192.168.2.22:29092 --list"
All of the above commands show the same output so far:
my-Windows-Topic-1
Now I switch to a Linux virtual machine with the IP 192.168.94.130
and again check access to the Windows topic:
docker run --rm confluentinc/cp-kafka bash -c "kafka-topics --bootstrap-server 192.168.2.22:9092 --list"
my-Windows-Topic-1
So far everything is fine. I can access the topic both internally and externally!
(Still from the Linux console)
Now I'm creating a second, independent cluster in the same way as above:
I downloaded https://github.com/conduktor/kafka-stack-docker-compose/blob/master/zk-single-kafka-single.yml and started it with docker-compose up -d.
Create Linux Topic:
docker exec kafka1 kafka-topics --bootstrap-server localhost:9092 --create --topic my-Linux-Topic
Check Topic:
$ docker exec kafka1 kafka-topics --bootstrap-server localhost:9092 --list
my-Linux-Topic
How can I configure both clusters so that listing topics shows both outputs,
my-Windows-Topic-1
and
my-Linux-Topic,
at the same time?
I'm having a hard time following the question, but if you want multiple brokers, then the repo you've linked to already has an example of that (in one Kafka cluster).
If you have multiple compose files, then they will not share a Docker network, by default, and so will be unable to reach each other.
If you want to connect them, then use a bridge network.
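For example, a minimal sketch of that idea when both compose projects run on the same Docker host (the network name and the second container name are placeholders; both projects would need distinct container names):

docker network create kafka-shared
docker network connect kafka-shared kafka1
docker network connect kafka-shared kafka1-second-cluster

Once attached, containers on kafka-shared can reach each other by container name.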
configure both clusters to display both outputs ... at the same time?
You can't; you can only list topics of one cluster at a time.
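What you can do is query each cluster one after the other, e.g. from the Linux VM (IPs as in your setup; this assumes the Linux cluster is reachable on its host IP the same way the Windows one is):

docker run --rm confluentinc/cp-kafka bash -c "kafka-topics --bootstrap-server 192.168.2.22:9092 --list"
docker run --rm confluentinc/cp-kafka bash -c "kafka-topics --bootstrap-server 192.168.94.130:9092 --list"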
You could use Conduktor Platform to run a UI where you can link both clusters to it, then display topics there, without using any terminal commands.
Related
When running on the host, I can get all the Kafka topics with:
docker exec broker kafka-topics --bootstrap-server broker:29092 --list
I can't run this from within a container because I'd get docker: not found, and even if I installed Docker in the container, I don't think it would work anyway. Also, apparently it's hard and insecure to be able to run an arbitrary command in another Docker container. How else can I get all the Kafka topics from within another Docker container? E.g., can I interface with Kafka over HTTP?
I get docker: not found
That seems to imply the docker CLI is not installed, and it has nothing to do with Kafka.
docker is not (typically) installed in "another container", so that explains that... You'll need to install Java and download the Kafka CLI tools to run kafka-topics.sh in any other environment, and then not use docker exec.
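For example, a rough sketch of that approach (the Kafka version here is just an example; point --bootstrap-server at whatever listener is reachable from that environment):

curl -sL https://archive.apache.org/dist/kafka/3.6.1/kafka_2.13-3.6.1.tgz | tar xz
kafka_2.13-3.6.1/bin/kafka-topics.sh --bootstrap-server broker:29092 --list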
Otherwise, your command is "correct", but if you are using Docker Compose, you should run it like this from your host (change the port accordingly):
docker-compose exec broker bash -c \
"kafka-topics --list --bootstrap-server localhost:9092"
https://docs.confluent.io/4.0.0/installation/docker/docs/quickstart.html
I followed the steps given in this document and tried to create a topic, but I am getting an exception, as shown below.
I already checked whether both Kafka and ZooKeeper are up, and they are.
I also tried the following:
docker-compose exec kafka kafka-topics --list --zookeeper localhost:2181
I get the same error.
This is fixed now by using the correct ZooKeeper address in the command that lists the Kafka topics:
docker-compose exec kafka kafka-topics --list --zookeeper zookeeper:2181
I got the zookeeper connect information from the docker-compose.yml file.
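For reference, the relevant part of such a compose file looks roughly like this (a sketch; the service names may differ in your file):

kafka:
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181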
I need to make a Docker container for a project involving streaming data using Kafka and Zookeeper. Looking around, I found this Docker image from Spotify, which includes both Kafka and Zookeeper.
How should I include it in my project? Should I include the suggested commands, listed below, in the Dockerfile?
docker run -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=`docker-machine ip \`docker-machine active\`` --env ADVERTISED_PORT=9092 spotify/kafka
export KAFKA=`docker-machine ip \`docker-machine active\``:9092
kafka-console-producer.sh --broker-list $KAFKA --topic test
export ZOOKEEPER=`docker-machine ip \`docker-machine active\``:2181
kafka-console-consumer.sh --zookeeper $ZOOKEEPER --topic test
How about using a docker-compose file?
In your *.yaml you can set up the services to pull the Kafka and Zookeeper images from Spotify's Docker Hub, map the ports (e.g. "2181:2181" and "9092:9092" for ZooKeeper and Kafka, respectively), set ENV variables, and persist data to a volume so you don't lose your topics and offsets.
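A rough sketch of what that could look like, using the single spotify/kafka image from the question (which bundles both Kafka and Zookeeper); the advertised host value and the data path are placeholders to adjust:

version: '2'
services:
  kafka:
    image: spotify/kafka
    ports:
      - "2181:2181"
      - "9092:9092"
    environment:
      ADVERTISED_HOST: 192.168.99.100   # e.g. your docker-machine IP
      ADVERTISED_PORT: 9092
    volumes:
      - kafka-data:/kafka               # data path depends on the image; adjust as needed
volumes:
  kafka-data: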
I installed the incubator Kafka chart (version kafka-0.8.5 as of this writing):
helm install --name kafka \
--set replicas=1 \
--set persistence.enabled=false \
--set zookeeper.replicaCount=1 \
incubator/kafka
To try this out, I run a separate pod as just a bash shell with the Kafka CLI tools. I'm using the exact same Docker image, confluentinc/cp-kafka:4.1.1-2, that the kafka-0 pod is using, so that there is a perfect version match between the client and server:
kubectl run shell --rm -i --tty --image confluentinc/cp-kafka:4.1.1-2 -- /bin/bash
Listing topics, publishing messages, and getting topic offsets all work perfectly, as shown below. However, when I run kafka-console-consumer to see the test record in the topic, it hangs indefinitely. Why?
root@shell-5c6ddf5d99-tbsvm:/# /usr/bin/kafka-topics --zookeeper kafka-zookeeper:2181 --list
__confluent.support.metrics
root@shell-5c6ddf5d99-tbsvm:/# echo "abcxyz" | /usr/bin/kafka-console-producer --broker-list kafka:9092 --topic test-topic
>[2018-08-07 16:43:26,110] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 1 : {test-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
root@shell-5c6ddf5d99-tbsvm:/# /usr/bin/kafka-topics --zookeeper kafka-zookeeper:2181 --list
__confluent.support.metrics
test-topic
root@shell-5c6ddf5d99-tbsvm:/# /usr/bin/kafka-run-class kafka.tools.GetOffsetShell --broker-list kafka:9092 --topic test-topic --time -1
test-topic:0:1
root@shell-5c6ddf5d99-tbsvm:/# /usr/bin/kafka-console-consumer --bootstrap-server kafka:9092 --from-beginning --topic test-topic
<hangs indefinitely>
FYI, this is a local minikube development cluster using the latest minikube, with Kubernetes 1.10.x server-side and 1.10.x kubectl client tools. It is a clean, new minikube with nothing else running besides kafka, kafka-zookeeper, and my shell pod.
Also, a small Java client test app I wrote to consume the topic gets a similar result: it polls indefinitely with no messages. When my Java client subscribes to test-topic, it never gets notification callbacks of being assigned the one topic partition.
This cost me hours as well; there's a bug in minikube which prevents Kafka from working.
I'm not familiar with the Helm deployment, but you have to make sure of two things. First, the Kafka advertised host has to be the same as your Kubernetes service IP (or DNS name in kube dns), and second, you have to put minikube's network interface in promiscuous mode:
minikube ssh sudo ip link set docker0 promisc on
If you don't do this workaround, Kafka can't contact itself via the Kubernetes service, and its leader election fails. I've found it to be very fragile inside a container deployment environment.
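A quick way to check that the flag actually took effect (assuming the interface is docker0, as above):

minikube ssh "ip link show docker0 | grep -i promisc"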
Setup details:
I am setting up OpenWhisk on my local Ubuntu (16.04) VM. In this setup, Kafka runs in one Docker container and ZooKeeper in another.
I connect to the Kafka container using the command
sudo docker exec -it <container id> sh
Once connected, I execute the following command to get the list of topics:
bin/kafka-topics.sh --list --zookeeper localhost:2181
which gives me an exception
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 7203; nested exception is:
java.net.BindException: Address already in use
I am unable to understand why it is trying to use port 7203.
docker ps output
83eba3961247   ches/kafka:0.10.0.1   "/start.sh"              11 days ago   Up 23 hours   7203/tcp, 0.0.0.0:9092->9092/tcp             kafka
947fa689a7ef   zookeeper:3.4         "/docker-entrypoin..."   11 days ago   Up 23 hours   2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp   zookeeper
The Kafka container OpenWhisk is using sets a JMX_PORT by default. That's the port 7203 you're seeing. To get your script to work, you need to unset that environment variable:
unset JMX_PORT; bin/kafka-topics.sh --list --zookeeper localhost:2181
Note, though, that localhost is not a valid address for your ZooKeeper instance, as it refers to the localhost of the current container, which is not ZooKeeper. If you exchange localhost for the external IP of your VM or the IP of the ZooKeeper container (get it via docker inspect zookeeper --format {{.NetworkSettings.Networks.bridge.IPAddress}}), your topics should be listed fine.
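Putting the two together, a sketch run from the VM (assuming the containers are named kafka and zookeeper, as in your docker ps output):

ZK_IP=$(sudo docker inspect zookeeper --format '{{.NetworkSettings.Networks.bridge.IPAddress}}')
sudo docker exec kafka sh -c "unset JMX_PORT; bin/kafka-topics.sh --list --zookeeper ${ZK_IP}:2181"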