I am new to Kafka. I tried to set up my first cluster on a VPS with Docker Compose, but I still cannot access it from my local PC (outside the host).
Here is my docker-compose file:
version: '2'
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  zookeeper-2:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 32181:2181
  kafka-1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      - zookeeper-2
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181,zookeeper-2:2181
      KAFKA_LISTENERS: EXTERNAL_SAME_HOST://:29092,EXTERNAL_DIFFERENT_HOST://:29093,INTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL_SAME_HOST://localhost:29092,EXTERNAL_DIFFERENT_HOST://XXX.XXX.XXX.XXX:29093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT,EXTERNAL_DIFFERENT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  kafka-2:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      - zookeeper-2
    ports:
      - 39092:39092
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181,zookeeper-2:2181
      KAFKA_LISTENERS: EXTERNAL_SAME_HOST://:39092,EXTERNAL_DIFFERENT_HOST://:39093,INTERNAL://:9093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9093,EXTERNAL_SAME_HOST://localhost:39092,EXTERNAL_DIFFERENT_HOST://XXX.XXX.XXX.XXX:39093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT,EXTERNAL_DIFFERENT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
I searched the logs and found that there are no available brokers (always 0) because the server couldn't connect to "kafka:9092", and Zookeeper keeps failing to connect to the brokers.
[2022-04-13 14:56:59,422] WARN Session 0x0 for sever My-vps-URL/XXX.XXX.XXX.XXX:2181, Closing socket connection. Attempting reconnect except
it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
org.apache.zookeeper.ClientCnxn$SessionTimeoutException: Client session timed out, have not
heard from server in 30006ms for session id 0x0
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1258)
KeeperErrorCode = ConnectionLoss for /brokers/ids
How can I fix this?
Note that I tried a similar config with a different Docker image (Bitnami's) and with a different cluster config (1 Zookeeper, 1 broker), and it still doesn't work.
You have a Zookeeper error because you are running an even number of them. The number of Zookeeper nodes doesn't need to match the number of brokers, and it should always be an odd number, up to 7 at most. You also shouldn't need to publish ports on the Zookeeper services.
For your Kafka connection:
KAFKA_LISTENERS needs to use 0.0.0.0 so the server binds on all interfaces.
You need to expose ports 29093 and 39093, since those are your "different host" listeners; you currently only publish the ports used to connect from the same machine.
Your clients need to connect to the EXTERNAL_DIFFERENT_HOST address you've set, not kafka:9092.
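Putting those changes together for kafka-1, here is a sketch (kafka-2 is analogous with its 39092/39093 ports); XXX.XXX.XXX.XXX stands for your VPS IP as in your config, and note that the INTERNAL listener now advertises the actual service name kafka-1 rather than kafka, which is the address your logs show failing:
  kafka-1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      - zookeeper-2
    ports:
      - 29092:29092
      - 29093:29093   # publish the "different host" port as well
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181,zookeeper-2:2181
      KAFKA_LISTENERS: EXTERNAL_SAME_HOST://0.0.0.0:29092,EXTERNAL_DIFFERENT_HOST://0.0.0.0:29093,INTERNAL://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-1:9092,EXTERNAL_SAME_HOST://localhost:29092,EXTERNAL_DIFFERENT_HOST://XXX.XXX.XXX.XXX:29093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT,EXTERNAL_DIFFERENT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
Your local PC would then bootstrap against XXX.XXX.XXX.XXX:29093 (and XXX.XXX.XXX.XXX:39093 for kafka-2).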
Further reading - Connect to Kafka running in Docker
tried a similar config with a different docker image (Bitnami's)
That image uses different variable names, but the same basic answer as above applies.
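For reference, Bitnami's image generally prefixes broker settings with KAFKA_CFG_ and needs ALLOW_PLAINTEXT_LISTENER for unauthenticated listeners; a rough sketch of the equivalent environment block (verify the exact variable names against Bitnami's documentation):
    environment:
      ALLOW_PLAINTEXT_LISTENER: "yes"
      KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper-1:2181
      KAFKA_CFG_LISTENERS: EXTERNAL_SAME_HOST://0.0.0.0:29092,EXTERNAL_DIFFERENT_HOST://0.0.0.0:29093,INTERNAL://0.0.0.0:9092
      KAFKA_CFG_ADVERTISED_LISTENERS: INTERNAL://kafka-1:9092,EXTERNAL_SAME_HOST://localhost:29092,EXTERNAL_DIFFERENT_HOST://XXX.XXX.XXX.XXX:29093
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT,EXTERNAL_DIFFERENT_HOST:PLAINTEXT
      KAFKA_CFG_INTER_BROKER_LISTENER_NAME: INTERNAL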
different cluster config (1 Zookeeper, 1 broker)
There is little benefit to running multiple of each of those on the same machine, so I suggest getting that configuration working first.
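As a starting point, here is a minimal single-Zookeeper, single-broker sketch for a remote VPS, assuming the same Confluent images (XXX.XXX.XXX.XXX is a placeholder for your VPS IP):
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:29092,EXTERNAL://XXX.XXX.XXX.XXX:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
Clients outside the VPS would then use XXX.XXX.XXX.XXX:9092 as their bootstrap server.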
I'm new to Kafka and I'm trying to run a Kafka service on my local machine and use it to transfer some data from one .NET project to another.
I'm using a docker-compose.yml file to create two Docker containers for Zookeeper and Kafka from wurstmeister.
In the Kafka definition's environment variables there is KAFKA_ADVERTISED_HOST_NAME, which I set to 127.0.0.1.
The Docker Hub page for wurstmeister/kafka says, I quote:
"modify the KAFKA_ADVERTISED_HOST_NAME in docker-compose.yml to match your docker host IP (Note: Do not use localhost or 127.0.0.1 as the host IP if you want to run multiple brokers.)".
When I set my topic to have more than 1 replica and more than 1 partition, I get this message:
"Error while executing topic command : Replication factor: 2 larger than available brokers: 1."
What is the right IP address to define in KAFKA_ADVERTISED_HOST_NAME that will allow me to get more than 1 broker?
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_CREATE_TOPICS: "simpletalk_topic:2:2"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Note - Running multiple brokers on one machine does not provide true fault tolerance.
use it to transfer some data from one .NET project to another
You only need one broker for this.
First, I suggest reading https://github.com/wurstmeister/kafka-docker/wiki/Connectivity
KAFKA_ADVERTISED_HOST_NAME is deprecated. Don't use it.
It has been replaced by KAFKA_ADVERTISED_LISTENERS, which for both brokers can contain ${DOCKER_HOST_IP:-127.0.0.1} (or your host IP from ifconfig), since this is the address your client will use from the host.
Since brokers are clients of each other, you also need to advertise the container names. And if you containerize your client, your app would use those addresses as well.
Beyond the Kafka networking configs, KAFKA_BROKER_ID needs to be different for each broker, and your error suggests you also need to override the other replication-factor broker configs.
All in all:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
  kafka-1:
    image: wurstmeister/kafka
    depends_on: [zookeeper]
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://${DOCKER_HOST_IP:-127.0.0.1}:9092,PLAINTEXT_INTERNAL://kafka-1:29092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,PLAINTEXT_INTERNAL://:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT_INTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_DEFAULT_REPLICATION_FACTOR: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 2
  kafka-2:
    image: wurstmeister/kafka
    ports:
      - "9093:9093"
    environment:
      KAFKA_BROKER_ID: 2 # unique
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://${DOCKER_HOST_IP:-127.0.0.1}:9093,PLAINTEXT_INTERNAL://kafka-2:29093
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9093,PLAINTEXT_INTERNAL://:29093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT_INTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_DEFAULT_REPLICATION_FACTOR: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 2
      KAFKA_CREATE_TOPICS: "simpletalk_topic:2:2"
    depends_on: # ensure this joins the other
      - zookeeper
      - kafka-1
I'm trying to run a Kafka service on my local machine
Set bootstrap.servers="localhost:9092,localhost:9093"
Further reading - Connect to Kafka running in Docker
Example usage
Note that topic creation doesn't seem to work automatically. Ideally, your application should create/check topic existence itself.
$ kcat -b localhost:9093 -L
Metadata for all topics (from broker -1: localhost:9093/bootstrap):
2 brokers:
broker 2 at 127.0.0.1:9093
broker 1 at 127.0.0.1:9092 (controller)
0 topics:
producing some data... and consuming it
$ kcat -b localhost:9093 -t test -o earilest
% Auto-selecting Consumer mode (use -P or -C to override)
hello
world
sample
data
of
no
importance
It should be the IP address of your machine.
On Linux, use the ifconfig command; your IP will be shown as inet <IP_ADDRESS>.
On Windows, you can get it with ipconfig.
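For example, a sketch of the relevant part of the compose file (192.168.1.25 is a placeholder for the inet address your machine reports):
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.25   # your machine's IP, not localhost or 127.0.0.1
      KAFKA_CREATE_TOPICS: "simpletalk_topic:2:2"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181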
My docker-compose file looks like this:
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    expose:
      - "2181"
  kafka:
    build: .
    depends_on:
      - zookeeper
    expose:
      - "8778"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:8778
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_OPTS: '-javaagent:/usr/jolokia/agents/jolokia-jvm.jar'
  telegraf:
    image: telegraf:latest
    links:
      - "kafka"
      - "zookeeper"
    environment:
      JOLOKIA_AGENT_URL: http://kafka:8778/jolokia/
      ZOOKEEPER_CONNECTION_STRING: http://zookeeper:2181
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf
Example: I can ping kafka successfully from telegraf. I can successfully hit the endpoint I want from the kafka container when I'm exec'd into that container (curl against localhost from inside it). I cannot, however, reach the /jolokia/read endpoint exposed by the kafka container on port 8778 from the telegraf container.
What am I missing?
I suggest you remove the links section. This has been deprecated by Compose for years.
Then, Compose starts its own bridge network, so if you exec into the telegraf or zookeeper containers, ping kafka should work; therefore DNS is working, and curl should work as well...
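For example, the telegraf service only needs the following (a sketch of your service with the links section dropped, relying on the default Compose network for name resolution):
  telegraf:
    image: telegraf:latest
    environment:
      JOLOKIA_AGENT_URL: http://kafka:8778/jolokia/
      ZOOKEEPER_CONNECTION_STRING: http://zookeeper:2181
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf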
Note: adding Jolokia to Zookeeper should also be done
I'll also point out that the Confluent Helm Charts already provide Prometheus and Jolokia integration
I've read a lot of similar questions, but they don't answer my problem.
I'm trying to run some short integration tests using docker-compose 3 and a single-node Kafka. On the client side I'm using Go's shopify/sarama to consume/produce.
zookeeper:
  image: confluentinc/cp-zookeeper:5.2.2
  hostname: zookeeper
  container_name: zookeeper
  ports:
    - "2181:2181"
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
kafka:
  image: confluentinc/cp-enterprise-kafka:5.2.2
  hostname: kafka
  container_name: kafka
  depends_on:
    - zookeeper
  ports:
    - "29092:29092"
  expose:
    - 9092
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
I have another container in the docker-compose file that will listen to:
- "BROKERS_URL=kafka:9092"
The consumer is working just fine:
Sarama consumer up and running. {"brokers": ["kafka:9092"], "topics": ["validated"], "group": "event-service"}
But on the producer part, running directly from my machine:
kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
producer, err := sarama.NewSyncProducer([]string{"http://localhost:29092"}, nil)
...
msg := &sarama.ProducerMessage{
    Topic: "validated",
    Key:   sarama.StringEncoder(""),
    Value: sarama.ByteEncoder(payload),
}
partition, offset, err := producer.SendMessage(msg)
...
Nothing weird or extravagant here, but it's not working and I'm confused.
Also:
nc -vz localhost 29092
Connection to localhost port 29092 [tcp/*] succeeded!
Instead of
KAFKA_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
you need
KAFKA_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://0.0.0.0:29092
Testing connectivity from my host machine using kafkacat shows that this works:
➜ kafkacat -b localhost:29092 -L
Metadata for all topics (from broker 1: localhost:29092/1):
1 brokers:
broker 1 at localhost:29092 (controller)
0 topics:
The difference is that the listener now binds to all available interfaces (0.0.0.0). With your original configuration, it binds only to the loopback interface (lo) for localhost, and so it only accepts traffic locally, not externally.
I'm trying to start a Kafka cluster using Docker Compose with the following configuration:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    command: [start-kafka.sh]
    ports:
      - "15092:9092"
    hostname: kafka
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:15092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://:15092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - "zookeeper"
Both services are up and running, but when I try to produce a message from an external source using the broker address external-ip:15092, I receive the following error:
dial tcp: lookup kafka: no such host
Can you help me figure out what the configuration is missing?
Thanks
You're getting "no such host", which occurs before the port is even used. You need to run code in another container in the same Docker network for the service name to resolve
Kafka doesn't work like that (a simple port forward), anyway
Both the listeners are still set set at 9092
You'll need to add / change the advertised listener containing externalIP:15092 for it to work, and you can find multiple places where the difference of listeners is documented (including the wiki pages for that container)
So, KAFKA_LISTENERS: INSIDE://:9092 is all you need (or more appropriately, put INSIDE://0.0.0.0:9092)
But you need to edit
KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://<your_external_IP>:15092
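Putting that together, one working variant might look like the sketch below. Note that this also changes the host port mapping so that it points at the OUTSIDE listener (15092 inside the container) rather than at 9092; that change is my assumption, not part of the original config. Replace <your_external_IP> with the machine's public address:
  kafka:
    image: wurstmeister/kafka
    ports:
      - "15092:15092"   # host port now maps to the OUTSIDE listener
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:15092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://<your_external_IP>:15092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE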
Here's a configuration that exposes different ports:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "49815:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: wurstmeister/kafka
    container_name: broker
    ports:
      - "49816:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,INTERNAL:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://broker:9092,INTERNAL://broker:29092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:49816,INTERNAL://broker:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
Key points:
Your external source needs to connect to the address exposed by Docker (localhost:49816), which is bound to the broker's port 9092. So you need to set KAFKA_ADVERTISED_LISTENERS to that value for external connections.
The broker should be listening on 9092 to receive connections coming from outside through the Docker port binding, so you need to set PLAINTEXT://broker:9092 as the external listener in KAFKA_LISTENERS.
You still need to listen for inter-container connections for the broker and Zookeeper to function, so KAFKA_LISTENERS and KAFKA_ADVERTISED_LISTENERS also need the internal endpoint INTERNAL://broker:29092, which is visible from inside Docker.
Here's the link to the complete broker configs: https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#brokerconfigs_listeners (it's for the Confluent image, but it works the same with wurstmeister's).
I am creating two docker-compose files (mainly because I don't want to keep restarting my infrastructure while developing my application) that need to reside on the same Docker network so they can use alias names to connect.
The files look similar to the following:
APP:
version: '3.5'
networks:
  default:
    name: kafka_network
    driver: bridge
services:
  client:
    build:
      context: .
      dockerfile: ./Dockerfile
    working_dir: /app/
    command: ./client
    environment:
      BADDR: kafka:9092
      CGROUP: test_group
      TOPICS: my-topic
INFRASTRUCTURE:
version: '3.5'
networks:
  default:
    name: kafka_network
    driver: bridge
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
My issue is that the client doesn't resolve kafka:9092 correctly... it always resolves to 127.0.0.1:9092.
ERROR:
Broker: kafka:9092
Consumer_Group: my_group
Topics: [my-topic]
Created Consumer rdkafka#consumer-1
% Error: GroupCoordinator: Connect to ipv4#127.0.0.1:9092 failed: Connection refused (after 0ms in state CONNECT)
When run locally, it appears to run fine, so I am really confused as to what the issue might be. If anyone knows anything about this I would be very grateful!
LOCAL:
[procyclinsur#P-428 client]$ ./client
Broker: localhost:9092
Consumer_Group: my-group
Topics: [my-topic]
Created Consumer rdkafka#consumer-1
% AssignedPartitions: [my-topic[0]#unset]
% Message on my-topic[0]#0:
hello mate
That's a problem related to your Kafka config, not to Docker at all.
Look at:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
It means that you set up 2 advertised listeners for your Kafka, which your clients will receive via Kafka's protocol when connecting.
So when you connect on port 9092, your clients will try to reach Kafka at "localhost", and when you connect on port 29092, your clients will try to reach Kafka at the "kafka" DNS name.
It's working locally for you because your Kafka container is exposed on localhost:9092 via the Docker ports section.
Here is an article that describes this topic well: https://rmoff.net/2018/08/02/kafka-listeners-explained/
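In other words, the containerized client should use the listener that advertises the kafka DNS name rather than the localhost one. A minimal sketch of the change in the APP compose file, assuming everything else stays the same:
  client:
    build:
      context: .
      dockerfile: ./Dockerfile
    working_dir: /app/
    command: ./client
    environment:
      BADDR: kafka:29092   # the listener advertised as the container name, not localhost
      CGROUP: test_group
      TOPICS: my-topic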