Socket server failed to bind to kafka:29092: Cannot assign requested address - docker

I am trying to create a Kafka server from the following YAML file:
version: "3.9"
services:
zookeeper:
image: 'bitnami/zookeeper:latest'
ports:
- '2181:2181'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: 'bitnami/kafka:latest'
ports:
- '9092:9092'
environment:
- KAFKA_BROKER_ID=1
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
- KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
- KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
depends_on:
- zookeeper
But I am getting an error:
ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Socket server failed to bind to kafka:29092: Cannot assign requested address.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:667)
at kafka.network.Acceptor.<init>(SocketServer.scala:560)
at kafka.network.SocketServer.createAcceptor(SocketServer.scala:288)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:261)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:259)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:259)
at kafka.network.SocketServer.startup(SocketServer.scala:131)
at kafka.server.KafkaServer.startup(KafkaServer.scala:296)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.BindException: Cannot assign requested address
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:455)
at java.base/sun.nio.ch.Net.bind(Net.java:447)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:80)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:73)
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:663)
... 12 more
What is wrong with my configuration?

The error is in binding the listeners. KAFKA_LISTENERS is not the right variable name for the Bitnami containers: as noted in their README, all broker properties must be prefixed with KAFKA_CFG_.
Also, you should set KAFKA_CFG_LISTENERS to bind to 0.0.0.0 so the broker accepts traffic on all interfaces.
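For illustration, here is a minimal sketch of the asker's kafka service with that advice applied. The listener names, ports and hostnames are carried over from the question; binding to 0.0.0.0 is an assumption about the intended fix, not a verified configuration:

  kafka:
    image: 'bitnami/kafka:latest'
    ports:
      - '9092:9092'   # only the host-facing listener is published
    environment:
      - KAFKA_BROKER_ID=1
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      # bind both listeners on all interfaces inside the container
      - KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
      # advertise the internal listener under the service name, the external one under localhost
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=PLAINTEXT
    depends_on:
      - zookeeper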

For what it's worth, this was the right configuration for me:
version: "3.9"
services:
zookeeper:
image: 'bitnami/zookeeper:latest'
deploy:
replicas: 1
ports:
- "2181:2181"
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: 'bitnami/kafka:latest'
deploy:
replicas: 1
ports:
- 9092:9092
depends_on:
- zookeeper
environment:
KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_CFG_LISTENERS: INTERNAL://:9093,OUTSIDE://:9092
KAFKA_CFG_ADVERTISED_LISTENERS: INTERNAL://kafka:9093,OUTSIDE://sub.domain.ltd:9092
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
KAFKA_CFG_INTER_BROKER_LISTENER_NAME: INTERNAL
ALLOW_PLAINTEXT_LISTENER: "yes"

Related

Unexpected error from /172.18.0.1; closing connection (org.apache.kafka.common.network.Selector)

I created a docker-compose for raising 10 containers aimed at log collection, and they are working as expected, but both the Kafka and Zookeeper containers keep popping up warnings and exceptions. The container stack is: local volume with log files -> Filebeat -> Kafka/Zookeeper balanced (3 Kafka and 3 Zookeeper containers) -> Logstash -> Elasticsearch -> Kibana.
It is working as expected: if I paste a new file into my local volume or add new log lines to an existing file, I can see the result in Kibana.
Nevertheless, I keep getting the errors mentioned in this question's title. Today I am running everything on my laptop since this is just a POC, but from now on I have to split this Docker Compose into separate files, one per OpenShift VM each container will be deployed to. I want to understand/fix these messages (full logs and files pasted below) before promoting to the next level.
I found this discussion (issues with SSL) somewhat close to mine, pointing to SSL as the cause. As far as I can see, I am not using SSL at all, although I will probably have to deal with SSL when I move to production.
So my question is: why am I getting the Zookeeper message "...Exception causing close of session 0x0 due to java.io.IOException: Len error 1212498244..." and the Kafka message "... Unexpected error from /172.18.0.1; closing connection (org.apache.kafka.common.network.Selector)..." while all connections still work properly?
If you want to reproduce the environment, here is the GitHub repository with all the yml files: all files
Docker-compose.yml
version: '3.2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.2
    volumes:
      - "./kibana.yml:/usr/share/kibana/config/kibana.yml"
    restart: always
    environment:
      - SERVER_NAME=kibana.localhost
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
      - xpack.watcher.enabled=false
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - "./esdata:/usr/share/elasticsearch/data"
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.2
    volumes:
      - "./logstash.conf:/config-dir/logstash.conf"
    restart: always
    command: logstash -f /config-dir/logstash.conf
    ports:
      - "9600:9600"
      - "7777:7777"
    links:
      - elasticsearch
      - kafka1
      - kafka2
      - kafka3
  kafka1:
    image: wurstmeister/kafka
    command: [start-kafka.sh]
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    links:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - "9092:9092"
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:9092
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_CREATE_TOPICS: "log:3:3"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
  kafka2:
    image: wurstmeister/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    links:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - "9093:9092"
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka2:9092
      KAFKA_BROKER_ID: 2
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_CREATE_TOPICS: "log:3:3"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
  kafka3:
    image: wurstmeister/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    links:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - "9094:9092"
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka3:9092
      KAFKA_BROKER_ID: 3
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_CREATE_TOPICS: "log:3:3"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
  zoo1:
    image: elevy/zookeeper:latest
    environment:
      MYID: 1
      SERVERS: zoo1,zoo2,zoo3
    ports:
      - "2181:2181"
  zoo2:
    image: elevy/zookeeper:latest
    environment:
      MYID: 2
      SERVERS: zoo1,zoo2,zoo3
    ports:
      - "2182:2181"
  zoo3:
    image: elevy/zookeeper:latest
    environment:
      MYID: 3
      SERVERS: zoo1,zoo2,zoo3
    ports:
      - "2183:2181"
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.5.2
    volumes:
      - "./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
      - "./sample-logs:/sample-logs"
    links:
      - kafka1
      - kafka2
      - kafka3
    depends_on:
      - kafka1
      - kafka2
      - kafka3
logstash.conf
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
    client_id => "logstash"
    group_id => "logstash"
    consumer_threads => 3
    topics => ["log"]
    codec => "json"
    tags => ["log", "kafka_source"]
    type => "log"
  }
}
filter {
  if [type] == "request-sample" {
    grok {
      match => { "message" => "%{COMMONAPACHELOG}" }
    }
    date {
      match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
      remove_field => ["timestamp"]
    }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-%{[type]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
Zookeeper 1 (same logs in all three Zookeeper containers)
2020-02-13 16:37:13,202 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1044] - Closed socket connection for client /172.18.0.1:35522 (no session established for client)
2020-02-13 16:37:13,553 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#192] - Accepted socket connection from /172.18.0.1:35546
2020-02-13 16:37:13,555 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#373] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1212498244
2020-02-13 16:37:13,556 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1044] - Closed socket connection for client /172.18.0.1:35546 (no session established for client)
.....
2020-02-13 16:38:00,988 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#373] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1212498244
2020-02-13 16:38:00,988 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1044] - Closed socket connection for client /172.18.0.1:36722 (no session established for client)
2020-02-13 16:38:01,783 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#192] - Accepted socket connection from /172.18.0.1:36730
2020-02-13 16:38:01,786 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#373] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1212498244
2020-02-13 16:38:01,787 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1044] - Closed socket connection for client /172.18.0.1:36730 (no session established for client)
Kafka 1 (same message in all three Kafka containers)
at java.lang.Thread.run(Thread.java:748)
[2020-02-13 16:35:45,777] WARN [SocketServer brokerId=1] Unexpected error from /172.18.0.1; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1212498244 larger than 104857600)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:104)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at kafka.network.Processor.poll(SocketServer.scala:890)
at kafka.network.Processor.run(SocketServer.scala:789)
[2020-02-13 16:35:46,794] WARN [SocketServer brokerId=1] Unexpected error from /172.18.0.1; closing connection (org.apache.kafka.common.network.Selector)
[2020-02-13 16:35:47,822] WARN [SocketServer brokerId=1] Unexpected error from /172.18.0.1; closing connection (org.apache.kafka.common.network.Selector)
[2020-02-13 16:35:53,296] WARN [SocketServer brokerId=1] Unexpected error from /172.18.0.1; closing connection (org.apache.kafka.common.network.Selector)
[2020-02-13 16:35:54,350] WARN [SocketServer brokerId
...

Error connecting to local Bitnami Docker Kafka from Spring Boot application

Spring Boot (version 2.2) application with Spring Kafka (version 2.4) can't establish a connection with Bitnami Docker Kafka (version 2) executed from the official docker-compose.yml
version: '2'
services:
  zookeeper:
    image: 'bitnami/zookeeper:3'
    ports:
      - '2181:2181'
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:2'
    ports:
      - '9092:9092'
    volumes:
      - 'kafka_data:/bitnami'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
Spring application keeps producing the following warning:
[kafka-admin-client-thread | adminclient-1] WARN o.apache.kafka.clients.NetworkClient.initiateConnect - [AdminClient clientId=adminclient-1] Error connecting to node 2228a9a3b8c5:9092 (id: 1001 rack: null) java.net.UnknownHostException: 2228a9a3b8c5
or
[kafka-admin-client-thread | adminclient-1] WARN o.apache.kafka.clients.NetworkClient.processDisconnection - [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
To establish a connection with Bitnami Docker Kafka running on localhost, add the following environment variables:
KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
docker-compose.yml kafka service:
  kafka:
    image: 'bitnami/kafka:2'
    ports:
      - '9092:9092'
    volumes:
      - 'kafka_data:/bitnami'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092

docker-compose: zipkin cannot connect to elasticsearch

I am trying to set up Zipkin, Elasticsearch, Prometheus and Grafana with docker-compose.yml.
When I run the containers, I see this in the log:
dependencies_zipkin | 19/09/30 14:37:09 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
I'm on macOS with Docker 2.1.0.3.
The content of my docker-compose.yml is:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - "xpack.security.enabled=false"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    restart: unless-stopped
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
    restart: unless-stopped
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    depends_on:
      - dependencies
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
    ports:
      - "9411:9411"
    restart: unless-stopped
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies_zipkin
    depends_on:
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
When I connect to localhost:9200, I see that Elasticsearch is working fine, and Zipkin is deployed on port 9411, but I get the error:
ERROR: cannot load service names: server error (Service Unavailable)(due to the network error
In the log, I have this information:
105 ^[[35mdependencies_zipkin |^[[0m 19/09/30 14:45:20 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
and this one
^[[31mzipkin |^[[0m java.lang.IllegalStateException: couldn't connect any of [Endpoint{storage:80, ipAddr=172.28.0.2, weight=1000}]
Any idea?
UPDATE
By using MySQL it works fine, so the problem is at the level of Elasticsearch.
I also tried using
"STORAGE_PORT_9200_TCP_ADDR=127.0.0.1"
but the issue still occurs.
UPDATE
As mentioned in the solution given by Brian, I have to use:
ES_HOSTS=http://storage:9300
The key is the port; I was using port 9200.
The error disappears between Zipkin and ES but still occurs between ES and zipkin-dependencies.
The problem lies in your ES_HOSTS variable, from the docs here:
ES_HOSTS: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
Defaults to "http://localhost:9200".
So you will need: ES_HOSTS=http://storage:9200
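As a hedged illustration (not taken from the docs), the two Zipkin services from the compose file above would carry that variable like this; note that the asker's update reports zipkin-dependencies ultimately needed port 9300 instead:

  zipkin:
    image: openzipkin/zipkin
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      # base URL per the quoted docs; "storage" is the Elasticsearch service name
      - ES_HOSTS=http://storage:9200
  dependencies:
    image: openzipkin/zipkin-dependencies
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      - ES_HOSTS=http://storage:9200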
Finally I have this file:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - 9200:9200
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
    ports:
      - 9411:9411
    depends_on:
      - storage
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
      - "ES_NODES_WAN_ONLY=true"
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    container_name: grafana
    depends_on:
      - prometheus
    ports:
      - "3000:3000"
Main differences are the usage of
"ES_HOSTS=elasticsearch:9300"
instead of
"ES_HOSTS=storage:9300"
and in the dependencies service I added the entrypoint:
entrypoint: crond -f
This one is really the key to avoiding the exception when I start docker-compose.
To solve this issue, I checked this project: https://github.com/openzipkin/docker-zipkin
The remaining question is: why do I need to use entrypoint: crond -f?

Instance always down when running in Docker

I run my microservice system in Docker for Windows; here is my docker-compose.yml:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    restart: always
    ports:
      - "8400:8400"
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:1.1.0
    restart: always
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  registry-jhipster:
    image: jhipster/jhipster-registry:v3.2.4
    restart: always
    ports:
      - "8761:8761"
    environment:
      - SPRING_PROFILES_ACTIVE=dev,native
      - JHIPSTER_REGISTRY_PASSWORD=admin
      - JHIPSTER_SECURITY_AUTHENTICATION_JWT_SECRET=secret
      - SPRING_CLOUD_CONFIG_SERVER_NATIVE_SEARCH_LOCATIONS=file:./central-config/
    volumes:
      - ./central-config:/central-config
  db:
    image: mariadb
    restart: always
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - ./db-init:/docker-entrypoint-initdb.d
  tomcat:
    image: tomcat:8.5-alpine
    environment:
      - JVM_OPTS=-Xmx12g -Xms12g -XX:MaxPermSize=1024m
    links:
      - db:mysql
      - registry-jhipster:registry
      - kafka:kafka
    ports:
      - "8080:8080"
    volumes:
      - ./tomcat/webapps/app.original.war:/usr/local/tomcat/webapps/app.original.war
      - ./tomcat/conf/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml:ro
After I run it, everything is set up and I can see my registry, but the app instance is always down.
With this in the Tomcat container logs:
2018-07-06 16:16:48.395 WARN 1 --- [nfoReplicator-0] o.a.k.clients.consumer.ConsumerConfig : The configuration 'value.serializer' was supplied but isn't a known config.
2018-07-06 16:16:48.396 WARN 1 --- [nfoReplicator-0] o.a.k.clients.consumer.ConsumerConfig : The configuration 'key.serializer' was supplied but isn't a known config.
2018-07-06 16:16:48.936 WARN 1 --- [nfoReplicator-0] c.n.discovery.InstanceInfoReplicator : There was a problem with the instance info replicator
2018-07-06T16:16:48.936993100Z
java.lang.OutOfMemoryError: Java heap space
What's happening, and how can I fix it and make my app available?
I found the solution. I added this to my docker-compose.yml
  jhipster-registry:
    container_name: registry
    hostname: registry
and in the bootstrap.yml of my app:
spring:
  cloud:
    config:
      uri: http://admin:${jhipster.registry.password}@registry:8761/config
and it works

Setting up a local Zookeeper and Kafka using Docker and wurstmeister's images

I'm really having a hard time configuring my Docker Compose to get Kafka running. I always get the following error in docker-compose logs:
java.lang.IllegalArgumentException: Error creating broker listeners
from 'PLAINTEXT://kafka:': Unable to parse PLAINTEXT://kafka: to a
broker endpoint
I have tried all possible IP addresses and names of my machine for KAFKA_ADVERTISED_HOST_NAME, but this does not change the situation. However, this is my current docker-compose.yml:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    hostname: zookeeper
    restart: unless-stopped
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    hostname: kafka
    restart: unless-stopped
    # links:
    #   - zookeeper:zookeeper
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_BROKER_ID=1
      - KAFKA_NUM_PARTITIONS=1
      - KAFKA_CREATE_TOPICS="test:1:1"
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/kafka
I stopped using wurstmeister and switched to Bitnami. This config works straight from their example:
version: '3'
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    hostname: zookeeper
    restart: unless-stopped
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    #volumes:
    #  - ./data/zookeeper:/bitnami/zookeeper
  kafka:
    image: 'bitnami/kafka:latest'
    hostname: kafka
    restart: unless-stopped
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    volumes:
      - ./data/kafka:/bitnami/kafka
