Can't connect Kafka to Zookeeper - docker

This is my docker-compose.yml:
version: '2'
services:
  zookeeper:
    container_name: zookeeper
    image: confluentinc/cp-zookeeper:3.1.1
    ports:
      - "2080:2080"
    environment:
      - ZOOKEEPER_CLIENT_PORT=2080
      - ZOOKEEPER_TICK_TIME=2000
  kafka:
    container_name: kafka
    image: confluentinc/cp-kafka:3.1.1
    ports:
      - "9092:9092"
    environment:
      - KAFKA_CREATE_TOPICS=Topic1:1
      - KAFKA_ZOOKEEPER_CONNECT=192.168.99.100:2080
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.99.100:9092
    depends_on:
      - zookeeper
  schema-registry:
    container_name: schema-registry
    image: confluentinc/cp-schema-registry:3.1.1
    ports:
      - "8081:8081"
    environment:
      - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=192.168.99.100:2080
      - SCHEMA_REGISTRY_HOST_NAME=localhost
    depends_on:
      - zookeeper
      - kafka
When I bring this stack up, the console output ends with:
schema-registry | Error while running kafka-ready.
schema-registry | org.apache.kafka.common.errors.TimeoutException: Timed out waiting for Kafka to create /brokers/ids in Zookeeper. timeout (ms) = 40000
schema-registry exited with code 1
It seems like Kafka never connects to Zookeeper, or something like that. Does anyone know why this is happening?

Does changing
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=192.168.99.100:2080
into
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:2080
help?
Additionally, KAFKA_ZOOKEEPER_CONNECT=192.168.99.100:2080 should reference zookeeper as well, instead of an IP address. Otherwise, how can you be sure of that IP address?
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.99.100:9092 also mentions an IP address you might not be able to guarantee. That IP address could be changed to kafka.
I also had trouble getting Kafka and Zookeeper to work in Docker (via Docker Compose). In the end, https://github.com/confluentinc/cp-docker-images/blob/5.0.0-post/examples/kafka-single-node/docker-compose.yml worked for me. You could use it as a source of inspiration.
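Putting the suggestions above together, a minimal sketch of the relevant fragment (assuming the default Compose network, where each service name resolves to its container):

kafka:
  environment:
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2080        # service name instead of hard-coded IP
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
schema-registry:
  environment:
    - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:2080

Note that clients running on the host still need an address they can resolve, so if you connect from outside Docker the advertised listener may need to stay host-reachable.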

Related

How to fix Kafka Docker container from throwing 0.0.0.0/0.0.0.0:2181: Connection refused?

I am trying to set up a Docker compose file environment for Kafka change data capture and I am encountering this error:
Opening socket connection to server 0.0.0.0/0.0.0.0:2181. Will not attempt to authenticate using SASL (unknown error)
Socket error occurred: 0.0.0.0/0.0.0.0:2181: Connection refused
I have been following this tutorial https://hevodata.com/learn/kafka-cdc-postgres/, but it runs the docker commands directly using the --link option rather than a docker-compose.yml file.
I attempted to convert this command:
docker run -it --name kafka -p 9092:9092 --link zookeeper:zookeeper debezium/kafka
into the docker-compose.yml file below. However, it appears to completely ignore the KAFKA_ZOOKEEPER_CONNECT environment variable, as this is what I see in the log:
Using ZOOKEEPER_CONNECT=0.0.0.0:2181
This is even though the documentation https://github.com/debezium/docker-images/tree/master/kafka/1.5 indicates it should work.
When I follow the tutorial using docker run, without a docker-compose file, it works completely fine: it shows my local computer's IP address with port 2181 instead of 0.0.0.0:2181.
docker-compose.yml:
version: "3.7"
services:
  postgres:
    image: debezium/postgres:10
    container_name: postgres
    ports:
      - "5000:5432"
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
      # POSTGRES_USER: db_user
      # POSTGRES_PASSWORD: db_password
  zookeeper:
    image: debezium/zookeeper:1.5
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
  kafka:
    image: debezium/kafka:1.5
    container_name: kafka
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  connect:
    image: debezium/connect:1.5
    container_name: connect
    ports:
      - "8083:8083"
    environment:
      GROUP_ID: 1
      CONFIG_STORAGE_TOPIC: my-connect-configs
      OFFSET_STORAGE_TOPIC: my-connect-offsets
    depends_on:
      - postgres
      - kafka
      - zookeeper
networks:
  default:
    name: kafkaCDC
zoo.cfg on the Zookeeper container:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/zookeeper/data
dataLogDir=/zookeeper/txns
clientPort=2181
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
I have been looking at this issue for a while now; however, I am getting totally lost, especially since so many examples use links.
This is the GitHub post that made me think to use KAFKA_ZOOKEEPER_CONNECT.
https://github.com/wurstmeister/kafka-docker/issues/512#issuecomment-505905161
Part of me feels like something is wrong with https://github.com/debezium/docker-images/blob/master/kafka/1.5/docker-entrypoint.sh and it is ignoring the environment variable, but it is probably just me not understanding something and having a configuration error.
For the debezium/kafka:1.5 image to work in Docker Compose, you can try passing the following environment variable:
ZOOKEEPER_CONNECT: "zookeeper:2181"
It addressed my problem; a sample docker-compose yaml is below:
version: "3.9"
services:
  zookeeper:
    image: debezium/zookeeper:1.5
    ports:
      - "2181:2181"
      - "2888:2888"
  kafka:
    image: debezium/kafka:1.5
    ports:
      - "9092:9092"
    environment:
      ZOOKEEPER_CONNECT: "zookeeper:2181"
    depends_on:
      - zookeeper
Don't prefix your environment variables with KAFKA_.
Here is my working cluster:
version: '2'
services:
  postgres:
    image: debezium/postgres:13-alpine
    container_name: postgres
    hostname: postgres
    environment:
      POSTGRES_USER: nikamooz
      POSTGRES_PASSWORD: nikamooz
    ports:
      - 5432:5432
  zookeeper:
    image: debezium/zookeeper
    container_name: zookeeper
    hostname: zookeeper
    environment:
      ZOOKEEPER_SERVER_ID: 1
    ports:
      - 2182:2181
      - 2888:2888
      - 3888:3888
    volumes:
      - ./data/zoo/data:/zookeeper/data
      - ./data/zoo/log:/zookeeper/txns
  kafka:
    image: debezium/kafka
    container_name: kafka
    hostname: kafka
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      ZOOKEEPER_CONNECT: zookeeper:2181
      BOOTSTRAP_SERVERS: kafka:9092
    volumes:
      - ./data/kafka/data:/kafka/data
      - ./data/kafka/logs:/kafka/logs
  connect:
    image: debezium/connect
    container_name: connect
    hostname: connect
    depends_on:
      - kafka
      - postgres
    ports:
      - 8083:8083
    environment:
      GROUP_ID: holding_group
      CONFIG_STORAGE_TOPIC: holding_storage_topic
      OFFSET_STORAGE_TOPIC: holding_offset_topic
      BOOTSTRAP_SERVERS: kafka:9092
I was able to fix it by setting the Zookeeper connect address to the Docker container's IP address.
To get the IP address, run:
docker inspect <container-name> --format='{{ .NetworkSettings.IPAddress }}'
and start Kafka as follows:
docker run --name some-kafka -p 9092:9092 -e KAFKA_ZOOKEEPER_CONNECT=<zookeeper-ip>:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka
This GitHub comment helped me find out what I was missing.

How to change target of the Spring Cloud Stream Kafka binder?

Using Spring Cloud Stream 2.1.4 with Spring Boot 2.1.10, I'm trying to target a local instance of Kafka.
This is an extract of my project configuration so far:
spring.kafka.bootstrap-servers=PLAINTEXT://localhost:9092
spring.kafka.streams.bootstrap-servers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.binder.brokers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.binder.zkNodes=localhost:2181
spring.cloud.stream.kafka.streams.binder.brokers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.streams.binder.zkNodes=localhost:2181
But the binder keeps calling the wrong target:
java.io.IOException: Can't resolve address: kafka.example.com:9092
How can I specify the target if those properties won't do the trick?
Moreover, I deploy the Kafka instance through a Bitnami Docker image and I'd prefer not to use an SSL configuration (hence the PLAINTEXT protocol), but I can't find properties for a basic credentials login. Does anyone know if this is hopeless?
This is my docker-compose.yml
version: '3'
services:
  zookeeper:
    image: bitnami/zookeeper:latest
    container_name: zookeeper
    environment:
      - ZOO_ENABLE_AUTH=yes
      - ZOO_SERVER_USERS=kafka
      - ZOO_SERVER_PASSWORDS=kafka_password
    networks:
      - kafka-net
  kafka:
    image: bitnami/kafka:latest
    container_name: kafka
    hostname: kafka.example.com
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ZOOKEEPER_USER=kafka
      - KAFKA_ZOOKEEPER_PASSWORD=kafka_password
    networks:
      - kafka-net
networks:
  kafka-net:
    driver: bridge
Thanks in advance
The hostname itself isn't the issue; rather, the advertised listener's PLAINTEXT://:9092 mapping (with an empty host) causes the container hostname to be advertised by default. You should change that, rather than the hostname.
kafka:
  image: bitnami/kafka:latest
  container_name: kafka
  hostname: kafka.example.com # <--- Here's what you are getting in the request
  ...
  environment:
    - ALLOW_PLAINTEXT_LISTENER=yes
    - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
    - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092 # <--- This returns the hostname to the clients
If you plan on running your code outside of another container, you should advertise localhost in addition to, or instead of, the container hostname.
One year later, my comment still has not been merged into the Bitnami README, but I was able to get it working with the following vars (changed to match your deployment):
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_CFG_LISTENERS=PLAINTEXT://:29092,PLAINTEXT_HOST://:9092
KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka.example.com:29092,PLAINTEXT_HOST://localhost:9092
All right: I got this to work by looking twice at the compose file (thanks to cricket_007):
kafka:
  ...
  hostname: localhost
For the record: I could then get rid of all the properties above, since the Kafka default is localhost:9092.

Understanding Hostname of Containers in docker-compose.yml

I am trying to understand a docker-compose.yml (also shown at the bottom of this post).
Question: confluent:2181 is used in the following line:
KAFKA_ZOOKEEPER_CONNECT: "confluent:2181"
How was this confluent hostname defined? If I understand hostnames in Docker correctly, the only container hostnames are zookeeper, kafka, rest-proxy and schema-registry.
docker-compose.yml
version: "2"
services:
  zookeeper:
    image: confluent/zookeeper
    ports:
      - "2181:2181"
    environment:
      zk_id: "1"
    network_mode: "host"
  kafka:
    image: confluent/kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: "confluent:2181"
    network_mode: "host"
  rest-proxy:
    image: confluent/rest-proxy
    depends_on:
      - zookeeper
      - kafka
      - schema-registry
    ports:
      - "8082:8082"
    environment:
      RP_ZOOKEEPER_CONNECT: "confluent:2181"
      RP_SCHEMA_REGISTRY_URL: "http://confluent:8081"
    network_mode: "host"
  schema-registry:
    image: confluent/schema-registry
    depends_on:
      - kafka
      - zookeeper
    ports:
      - "8081:8081"
    environment:
      SR_KAFKASTORE_CONNECTION_URL: "confluent:2181"
    network_mode: "host"
You're right. According to the docker-compose.yml, the services being defined provide DNS resolution only for zookeeper, kafka, rest-proxy and schema-registry, not for confluent.
However, if you take a look at the documentation from confluentinc, it requires you to modify the hosts file on your host machine:
Edit your hosts file and add a host entry for the docker machine.
192.168.99.100 confluent
That's why you can use confluent and get name resolution. It doesn't have anything to do with Docker or Compose.
P.S.: Beware that, according to their documentation, you're using deprecated images.
Yes, you are right.
According to the README, you were supposed to run confluent first.

Unable to access topic from scaled Kafka cluster in Docker

I want to create a scalable Kafka cluster on Docker. I am using the image https://hub.docker.com/r/wurstmeister/kafka/ for the Kafka server. For a single-node Kafka and Zookeeper, I have used the following docker-compose:
version: '3.1'
services:
  kafka:
    image: "wurstmeister/kafka"
    ports:
      - "9095:9092"
    hostname: kafka
    depends_on:
      - zookeeper
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_CREATE_TOPICS=check:3:1
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2185:2181"
With these settings I am able to access the Kafka broker and Zookeeper from my application code as kafka:9092 and zookeeper:2181 respectively in the Kafka consumer settings.
But now I want a scalable cluster, so I have modified the docker-compose as:
version: '3.1'
services:
  kafka:
    image: "wurstmeister/kafka"
    ports:
      - "9095:9092"
    hostname: kafka
    depends_on:
      - zookeeper
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_CREATE_TOPICS=check:3:1
    deploy:
      replicas: 3
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2185:2181"
In this case 3 Kafka brokers are created. But now I am unable to access my topic "check" using kafka:9092 as the broker and zookeeper:2181 as the Zookeeper address in my application code's Kafka consumer settings. How should I modify my docker-compose.yml or my application code's consumer settings so that I can read from these multiple brokers?
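No answer is recorded for this question here, but one common pattern is to declare each broker as its own service instead of using deploy.replicas, since the deploy key has historically been honored only by docker stack deploy (Swarm), not by plain docker-compose. A sketch only, not verified against this exact setup; the kafka1/kafka2 service names and port choices are hypothetical:

kafka1:
  image: wurstmeister/kafka
  hostname: kafka1
  ports:
    - "9095:9092"
  depends_on:
    - zookeeper
  environment:
    - KAFKA_BROKER_ID=1               # each broker needs a unique ID
    - KAFKA_ADVERTISED_HOST_NAME=kafka1
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    - KAFKA_ADVERTISED_PORT=9092
kafka2:
  image: wurstmeister/kafka
  hostname: kafka2
  ports:
    - "9096:9092"                     # distinct host port per broker
  depends_on:
    - zookeeper
  environment:
    - KAFKA_BROKER_ID=2
    - KAFKA_ADVERTISED_HOST_NAME=kafka2
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    - KAFKA_ADVERTISED_PORT=9092

The consumer would then list all brokers in its bootstrap servers, e.g. kafka1:9092,kafka2:9092.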

Schema Registry container: Server died unexpectedly when launching using docker-compose

I have written a docker-compose.yml file to create the following containers:
Confluent-Zookeeper
Confluent-Kafka
Confluent-Schema Registry
I want a single docker-compose file to spin up the necessary containers, expose the required ports and interconnect the dependent containers.
I am using the official Confluent images from Docker Hub.
My docker-compose file looks like this:
zookeeper:
  image: confluent/zookeeper
  container_name: confluent-zookeeper
  hostname: zookeeper
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
  ports:
    - "2181:2181"
kafka:
  environment:
    KAFKA_ZOOKEEPER_CONNECTION_STRING: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
  image: confluent/kafka
  container_name: confluent-kafka
  hostname: kafka
  links:
    - zookeeper
  ports:
    - "9092:9092"
schema-registry:
  image: confluent/schema-registry
  container_name: confluent-schema_registry
  environment:
    SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
    SCHEMA_REGISTRY_HOSTNAME: schema-registry
    SCHEMA_REGISTRY_LISTENERS: http://schema-registry:8081
    SCHEMA_REGISTRY_DEBUG: 'true'
    SCHEMA_REGISTRY_KAFKASTORE_TOPIC_REPLICATION_FACTOR: '1'
  links:
    - kafka
    - zookeeper
  ports:
    - "8081:8081"
Now when I run docker-compose up, all these containers are created and launched, but the Schema Registry container exits immediately. docker logs gives the following output:
(io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)
[2017-05-17 06:06:33,415] ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
org.apache.kafka.common.config.ConfigException: Only plaintext and SSL Kafka endpoints are supported and none are configured.
at io.confluent.kafka.schemaregistry.storage.KafkaStore.getBrokerEndpoints(KafkaStore.java:254)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.<init>(KafkaStore.java:111)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:136)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:53)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
at io.confluent.rest.Application.createServer(Application.java:117)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
I searched for this issue but nothing helped. I tried various other configurations, like providing KAFKA_ADVERTISED_HOSTNAME, changing the SCHEMA_REGISTRY_LISTENERS value, etc., but no luck.
Can anybody point out the exact configuration issue that makes the Schema Registry container fail?
Those are old and deprecated Docker images. Use the latest supported images from confluentinc: https://hub.docker.com/u/confluentinc/
You can find a full compose file here: confluentinc/cp-docker-images
You're missing the hostname entry (hostname: schema-registry) in the failing container. By default, Docker will populate a container's /etc/hosts with the linked containers' aliases and names, plus the container's own hostname.
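Based on that explanation, a hedged sketch of the fix: add only a hostname line to the schema-registry service from the question, leaving everything else unchanged:

schema-registry:
  image: confluent/schema-registry
  container_name: confluent-schema_registry
  hostname: schema-registry   # added: matches the host used in SCHEMA_REGISTRY_LISTENERS
  ...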
The question is old, but it might be helpful to leave a solution that worked for me. I am using docker-compose:
version: '3.3'
services:
  zookeeper:
    image: confluent/zookeeper:3.4.6-cp1
    hostname: "zookeeper"
    networks:
      - test-net
    ports:
      - 2181:2181
    environment:
      zk_id: "1"
  kafka:
    image: confluent/kafka:0.10.0.0-cp1
    hostname: "kafka"
    depends_on:
      - zookeeper
    networks:
      - test-net
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: "kafka"
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_BROKER_ID: "0"
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
  schema-registry:
    image: confluent/schema-registry:3.0.0
    hostname: "schema-registry"
    depends_on:
      - kafka
      - zookeeper
    networks:
      - test-net
    ports:
      - 8081:8081
    environment:
      SR_HOSTNAME: schema-registry
      SR_LISTENERS: http://schema-registry:8081
      SR_DEBUG: 'true'
      SR_KAFKASTORE_TOPIC_REPLICATION_FACTOR: '1'
      SR_KAFKASTORE_TOPIC_SERVERS: PLAINTEXT://kafka:9092
networks:
  test-net:
    driver: bridge