Needing to communicate with Kafka from a Dockerized Spring Boot application, the only option I was able to get working was to Dockerize Kafka too.
Here is my docker-compose.yml:
version: '3.5'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      - kafka-network
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    networks:
      - kafka-network
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
networks:
  kafka-network:
    name: kafka-network
This way I'm able to connect to the Kafka broker from another container on the kafka-network using the URL kafka:9092.
How do I make it also available from localhost and from other machines?
UPDATE
I updated my docker-compose as follows:
version: '3.5'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      - kafka-network
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    networks:
      - kafka-network
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: ${KAFKA_LISTENERS:-PLAINTEXT://:9092}
      KAFKA_ADVERTISED_LISTENERS: ${KAFKA_ADVERTISED_LISTENERS:-PLAINTEXT://127.0.0.1:9092}
    depends_on:
      - zookeeper
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
networks:
  kafka-network:
    name: kafka-network
and created a .env file with the following content:
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://the.ip.of.machine:9092
I tested it on my PC (without the .env file) and I'm able to communicate with the broker using kafkacat from localhost:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6844a16fa14f wurstmeister/kafka "start-kafka.sh" 5 seconds ago Up 3 seconds 0.0.0.0:9092->9092/tcp kafka-compose_kafka_1_9573f71109c7
15d62557f3bd wurstmeister/zookeeper "/bin/sh -c '/usr/sb…" 6 seconds ago Up 4 seconds 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp kafka-compose_zookeeper_1_61a19213cde7
$ kafkacat -P -b localhost:9092 -t topic1
New test
^C
$ kafkacat -C -b localhost:9092 -t topic1
New test
% Reached end of topic topic1 [0] at offset 1
$ kafkacat -b localhost:9092 -L
Metadata for all topics (from broker -1: localhost:9092/bootstrap):
1 brokers:
broker 1001 at 127.0.0.1:9092
4 topics:
...
However, I'm not able to do the same on the server; the only difference is the IP of the host machine in KAFKA_ADVERTISED_LISTENERS.
What I can see is that it keeps saying "Leader not available":
# kafkacat -b localhost:9092 -L
Metadata for all topics (from broker -1: localhost:9092/bootstrap):
1 brokers:
broker 1002 at the.ip.of.machine:9092
topic "__consumer_offsets" with 50 partitions:
partition 0, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 1, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 2, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 3, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 4, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 5, leader -1, replicas: 1001, isrs: , Broker: Leader not available
Shouldn't I set the IP of the server in KAFKA_ADVERTISED_LISTENERS?
You have to set the listeners in the environment section in order to expose the Kafka broker to the external network, like below:
KAFKA_LISTENERS: PLAINTEXT://:9092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://one.prod.com:9092
Here is an example:
https://github.com/wurstmeister/kafka-docker/blob/85821409d4d49a4edc7c5be83b68b71eceeab1bc/docker-compose-swarm.yml
You can refer here for more details:
https://github.com/wurstmeister/kafka-docker/wiki/Connectivity
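For reference, here is a minimal sketch of such a two-listener setup in the spirit of the linked examples (the INSIDE/OUTSIDE listener names, the external port 9094, and the the.ip.of.machine placeholder are assumptions, not taken verbatim from the links). Other containers keep using kafka:9092, while the host and other machines connect to the.ip.of.machine:9094:
kafka:
  ports:
    - "9094:9094"   # publish only the external listener to the host
  environment:
    KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9094
    KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://the.ip.of.machine:9094
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE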
For newer Kafka versions, take a look here:
https://www.kaaproject.org/kafka-docker
https://github.com/bitnami/bitnami-docker-kafka/issues/29#issuecomment-435216430
For older Kafka versions, if you only have one computer, these variables might help:
environment:
  ADVERTISED_HOST: ${COMPUTERNAME}
  ADVERTISED_PORT: "9092"
Related
I am using Docker for my sample Spark + Kafka project on a Windows machine.
I am facing the following error:
WARN ClientUtils: Couldn't resolve server kafka:9092 from bootstrap.servers as DNS resolution failed for kafka
[error] (run-main-0) org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
[error] org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
------
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
Below is my docker-compose.yml
version: '2'
services:
  test1:
    build: test1service/.
    depends_on:
      - kafka
  test2:
    build: test2/.
    depends_on:
      - kafka
      - test1
  zookeeper:
    image: confluentinc/cp-zookeeper:5.1.0
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    extra_hosts:
      - "localhost: 127.0.0.1"
  kafka:
    image: confluentinc/cp-kafka:5.1.0
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      #KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka:9092
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    extra_hosts:
      - "localhost: 127.0.0.1"
Below is my sample code in the test2 service:
val inputStreamDF = spark.readStream.format("kafka")
  .option("kafka.bootstrap.servers", "kafka:9092")
  .option("subscribe", "test1")
  .option("startingOffsets", "earliest")
  .load()
The docker ps command output is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4c8feb49e12b test1 "/usr/bin/start.sh" About an hour ago Up 50 minutes test1
4535ce246541 test2 "/usr/bin/myservice-…" About an hour ago Up 50 minutes test2
733766f72adb confluentinc/cp-kafka:5.1.0 "/etc/confluent/dock…" About an hour ago Up 51 minutes 0.0.0.0:9092->9092/tcp kafka_1
d915e25cb226 confluentinc/cp-zookeeper:5.1.0 "/etc/confluent/dock…" About an hour ago Up 51 minutes 2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp zookeeper_1
Has anyone faced a similar issue? How did you resolve it?
You are trying to reach kafka:9092, but Docker Compose has generated the container kafka_1, which is why there is no name resolution.
Docker gives an internal IP to your containers and uses their container names to create an internal DNS on this network (with the embedded DNS server).
Your environment variables are not changed by Docker Compose to fit the name it gives to your container.
You should use container_name: kafka in your service definition to get a static container name.
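For example, a sketch of the kafka service with an explicit container name (everything else stays as in the question's file):
kafka:
  image: confluentinc/cp-kafka:5.1.0
  container_name: kafka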
The only thing you need is a network. After that, all services will be connected and you can use kafka:9092.
version: '2'
services:
  test1:
    build: test1service/.
    depends_on:
      - kafka
    networks:
      - app-network
  test2:
    build: test2/.
    depends_on:
      - kafka
      - test1
    networks:
      - app-network
  zookeeper:
    image: confluentinc/cp-zookeeper:5.1.0
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    extra_hosts:
      - "localhost: 127.0.0.1"
    networks:
      - app-network
  kafka:
    image: confluentinc/cp-kafka:5.1.0
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      #KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka:9092
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    extra_hosts:
      - "localhost: 127.0.0.1"
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
Try with localhost and the container port:
localhost:9092
Here is an example command for creating a topic named "orders":
docker exec kafka kafka-topics --bootstrap-server localhost:9092 --create --topic orders --partitions 1 --replication-factor 1
My docker-compose file looks like this:
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    expose:
      - "2181"
  kafka:
    build: .
    depends_on:
      - zookeeper
    expose:
      - "8778"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:8778
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_OPTS: '-javaagent:/usr/jolokia/agents/jolokia-jvm.jar'
  telegraf:
    image: telegraf:latest
    links:
      - "kafka"
      - "zookeeper"
    environment:
      JOLOKIA_AGENT_URL: http://kafka:8778/jolokia/
      ZOOKEEPER_CONNECTION_STRING: http://zookeeper:2181
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf
Example: I can ping kafka successfully from telegraf. I can successfully hit the endpoint I want from the kafka container when I'm exec'd into that container (curl against localhost from inside it). I cannot, however, reach the endpoint /jolokia/read exposed in the kafka container on port 8778 from the telegraf container.
What am I missing?
I suggest you remove the links section; it has been deprecated by Compose for years.
Compose starts its own bridge network, so if you exec into the telegraf or zookeeper containers, ping kafka should work. That means DNS is working, so curl should work as well.
Note: Jolokia should also be added to Zookeeper.
I'll also point out that the Confluent Helm Charts already provide Prometheus and Jolokia integration
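To verify connectivity quickly from the telegraf container, something like the following should return the agent's version info (a sketch: it assumes curl is available in the telegraf image; /jolokia/version is the Jolokia agent's standard version endpoint):
docker-compose exec telegraf curl -s http://kafka:8778/jolokia/version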
I've read a lot of similar questions, but they don't answer my problem here.
To run some short integration tests, I'm using docker-compose 3 with a single-node Kafka. On the client side I'm using the Go shopify/sarama library to consume and produce.
zookeeper:
  image: confluentinc/cp-zookeeper:5.2.2
  hostname: zookeeper
  container_name: zookeeper
  ports:
    - "2181:2181"
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
kafka:
  image: confluentinc/cp-enterprise-kafka:5.2.2
  hostname: kafka
  container_name: kafka
  depends_on:
    - zookeeper
  ports:
    - "29092:29092"
  expose:
    - 9092
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
I have another container in the docker-compose file that uses
- "BROKERS_URL=kafka:9092"
and the consumer is working just fine:
Sarama consumer up and running. {"brokers": ["kafka:9092"], "topics": ["validated"], "group": "event-service"}
But on the producer side, running directly from my machine, I get:
kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
producer, err := sarama.NewSyncProducer([]string{"http://localhost:29092"}, nil)
...
msg := &sarama.ProducerMessage{
    Topic: "validated",
    Key:   sarama.StringEncoder(""),
    Value: sarama.ByteEncoder(payload),
}
partition, offset, err := producer.SendMessage(msg)
...
Nothing weird or extravagant here, but it's not working and I'm confused.
Also:
nc -vz localhost 29092
Connection to localhost port 29092 [tcp/*] succeeded!
Instead of
KAFKA_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
you need
KAFKA_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://0.0.0.0:29092
Testing connectivity from my host machine using kafkacat shows that this works:
➜ kafkacat -b localhost:29092 -L
Metadata for all topics (from broker 1: localhost:29092/1):
1 brokers:
broker 1 at localhost:29092 (controller)
0 topics:
The difference is that the listener now binds to all available interfaces (0.0.0.0). With your original configuration it binds to the loopback interface (lo) for localhost, and so it only accepts traffic on that interface and not from outside the container.
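As an additional check from the host, a quick produce/consume round trip should now work (a sketch, assuming the corrected KAFKA_LISTENERS and that the validated topic exists or topic auto-creation is enabled):
echo 'hello' | kafkacat -P -b localhost:29092 -t validated
kafkacat -C -b localhost:29092 -t validated -e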
I want to Dockerize a Kafka cluster with two Kafka instances.
Running Zookeeper:
docker run -it --name zookeeper -p 2181:2181 -p 2888:2888 -p 3888:3888 debezium/zookeeper
Running Kafka:
docker run -it --name kafka -p 9092:9092 -e ADVERTISED_HOST_NAME=$(hostname -f) --link zookeeper:zookeeper debezium/kafka
Running one Zookeeper and one Kafka container is successful, but adding the second Kafka container as shown below
docker run -it --name kafka2 -p 9096:9092 -e ADVERTISED_HOST_NAME=$(hostname -f) --link zookeeper:zookeeper debezium/kafka
gives me the following error:
2019-09-24 04:11:48,728 - ERROR [main:Logging#74] - Error while creating ephemeral at /brokers/ids/1, node already exists and owner '72057611679825940' does not match current session '72057611679825967'
2019-09-24 04:11:48,748 - ERROR [main:MarkerIgnoringBase#159] - [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown
org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists
at org.apache.zookeeper.KeeperException.create(KeeperException.java:122)
at kafka.zk.KafkaZkClient$CheckedEphemeral.getAfterNodeExists(KafkaZkClient.scala:1784)
at kafka.zk.KafkaZkClient$CheckedEphemeral.create(KafkaZkClient.scala:1722)
at kafka.zk.KafkaZkClient.checkedEphemeralCreate(KafkaZkClient.scala:1689)
at kafka.zk.KafkaZkClient.registerBroker(KafkaZkClient.scala:97)
at kafka.server.KafkaServer.startup(KafkaServer.scala:260)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
In the debezium Kafka container, you need to pass -e BROKER_ID=2. The default is 1.
https://github.com/debezium/docker-images/blob/master/kafka/0.10/docker-entrypoint.sh#L6-L9
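For example, the failing command from the question would become (a sketch; the only change is adding BROKER_ID):
docker run -it --name kafka2 -p 9096:9092 -e BROKER_ID=2 -e ADVERTISED_HOST_NAME=$(hostname -f) --link zookeeper:zookeeper debezium/kafka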
In other Kafka images there is a KAFKA_BROKER_ID variable you can use; it needs to be different for each container.
Here's an example compose file that also sets the replication factor of the internal topics to 2 and correctly sets up the advertised listeners:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka-1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 19092:19092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT_HOST://localhost:19092,PLAINTEXT://kafka-1:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 2
  kafka-2:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT_HOST://localhost:29092,PLAINTEXT://kafka-2:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 2
Example:
$ kafkacat -L -b localhost:19092
Metadata for all topics (from broker 1: localhost:19092/1):
2 brokers:
broker 2 at localhost:29092
broker 1 at localhost:19092 (controller)
1 topics:
topic "__confluent.support.metrics" with 1 partitions:
partition 0, leader 1, replicas: 1,2, isrs: 1,2
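To exercise both brokers, you can create a replicated topic (a sketch, using the kafka-topics CLI bundled in the cp-kafka image; the topic name is arbitrary):
docker-compose exec kafka-1 kafka-topics --bootstrap-server kafka-1:9092 --create --topic replicated-test --partitions 3 --replication-factor 2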
Note: running multiple brokers on one machine won't improve Kafka's performance or reliability.
I am creating two docker-compose files that need to reside on the same Docker network so they can connect using alias names (mainly because I don't want to keep restarting my infrastructure while developing my application).
The files look similar to the following:
APP:
version: '3.5'
networks:
  default:
    name: kafka_network
    driver: bridge
services:
  client:
    build:
      context: .
      dockerfile: ./Dockerfile
    working_dir: /app/
    command: ./client
    environment:
      BADDR: kafka:9092
      CGROUP: test_group
      TOPICS: my-topic
INFRASTRUCTURE:
version: '3.5'
networks:
  default:
    name: kafka_network
    driver: bridge
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
My issue is that the client doesn't resolve kafka:9092 correctly... it always resolves to 127.0.0.1:9092.
ERROR:
Broker: kafka:9092
Consumer_Group: my_group
Topics: [my-topic]
Created Consumer rdkafka#consumer-1
% Error: GroupCoordinator: Connect to ipv4#127.0.0.1:9092 failed: Connection refused (after 0ms in state CONNECT)
When run locally, it appears to run fine, so I am really confused as to what the issue might be. If anyone knows anything about this I would be very grateful!
LOCAL:
[procyclinsur#P-428 client]$ ./client
Broker: localhost:9092
Consumer_Group: my-group
Topics: [my-topic]
Created Consumer rdkafka#consumer-1
% AssignedPartitions: [my-topic[0]#unset]
% Message on my-topic[0]#0:
hello mate
That's a problem related to your Kafka config, not to Docker at all.
Look at:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
It means that you set up two listeners for your Kafka broker, which your clients receive via Kafka's protocol when connecting.
So when you connect on port 9092, your clients will try to reach Kafka at "localhost", and when you connect on port 29092, your clients will try to reach Kafka at the "kafka" DNS name.
It's working locally for you because your Kafka container is exposed on localhost:9092 via the Docker ports section.
Here is an article that describes this topic well: https://rmoff.net/2018/08/02/kafka-listeners-explained/
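In practice, if the client stays on kafka_network, one possible fix (a sketch, assuming the INFRASTRUCTURE file stays as posted) is to point the client at the listener advertised as kafka, i.e. change only this line in the APP file:
services:
  client:
    environment:
      BADDR: kafka:29092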