Setting up a local Zookeeper and Kafka using Docker and wurstmeister's images - docker

I'm really having a hard time configuring my Docker Compose file to get Kafka running. I always get the following error in docker-compose logs:
java.lang.IllegalArgumentException: Error creating broker listeners
from 'PLAINTEXT://kafka:': Unable to parse PLAINTEXT://kafka: to a
broker endpoint
I have tried every possible IP address and hostname of my machine for KAFKA_ADVERTISED_HOST_NAME, but that does not change the situation. This is my current docker-compose.yml:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    hostname: zookeeper
    restart: unless-stopped
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    hostname: kafka
    restart: unless-stopped
    # links:
    #   - zookeeper:zookeeper
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_BROKER_ID=1
      - KAFKA_NUM_PARTITIONS=1
      - KAFKA_CREATE_TOPICS="test:1:1"
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/kafka
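Judging by the error text, the broker is trying to parse PLAINTEXT://kafka: with no port, which suggests the image builds the advertised listener from KAFKA_ADVERTISED_HOST_NAME plus a KAFKA_ADVERTISED_PORT that was never set. A minimal sketch of the kafka environment with the port supplied, assuming the wurstmeister image (an alternative using explicit listener variables is shown commented out):
  kafka:
    image: wurstmeister/kafka
    hostname: kafka
    ports:
      - "9092:9092"
    environment:
      # pair the advertised host name with an explicit port...
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ADVERTISED_PORT=9092
      # ...or skip the HOST_NAME/PORT pair and set the listeners directly:
      # - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
      # - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181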

I have stopped using wurstmeister and switched to Bitnami. Here the config works straight from the example.
version: '3'
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    hostname: zookeeper
    restart: unless-stopped
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    #volumes:
    #  - ./data/zookeeper:/bitnami/zookeeper
  kafka:
    image: 'bitnami/kafka:latest'
    hostname: kafka
    restart: unless-stopped
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    volumes:
      - ./data/kafka:/bitnami/kafka
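As a usage note: other containers in the same compose file reach the broker by its service name. A hypothetical client service (the image name and variable below are placeholders, not part of the original setup) would look like this:
  my-consumer:
    image: my-consumer:latest   # placeholder image
    depends_on:
      - kafka
    environment:
      # inside the compose network the broker is reachable as kafka:9092
      - KAFKA_BOOTSTRAP_SERVERS=kafka:9092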

Related

TypeError: Error creating bean with name 'kafkaHighLevelConsumer': in docker

I am starting Zookeeper, Kafka and Kafdrop with docker-compose locally, and everything works.
When I try to do the same thing inside an EC2 instance, I get this error.
The EC2 instance type I'm using is t2.micro with an EBS volume in the default VPC and subnet.
docker-compose.yaml
version: "2"
services:
kafdrop:
image: obsidiandynamics/kafdrop
container_name: kafka-web
restart: "no"
ports:
- "9000:9000"
environment:
KAFKA_BROKERCONNECT: "kafka:9092"
JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify"
depends_on:
- "kafka"
networks:
- nesjs-network
zookeeper:
image: 'docker.io/bitnami/zookeeper:3-debian-10'
container_name: zookeeper
ports:
- 2181:2181
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
networks:
- nesjs-network
kafka:
image: 'docker.io/bitnami/kafka:2-debian-10'
container_name: kafka
ports:
- 9092:9092
- 9093:9093
environment:
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
- KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9093
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092,EXTERNAL://kafka:9093
- KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT
depends_on:
- zookeeper
networks:
- nesjs-network
`
This docker-compose.yaml works locally without any issue, but it does not work in my EC2 instance.
The problem is at the EC2 configuration level.
Kafka and Kafdrop need a certain amount of resources, such as RAM and vCPUs.
Instead of t2.micro, use t2.medium with a 30 GB EBS volume and the other resources (VPC, subnet, security group) left at their defaults.
This configuration works for me.
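If a small instance is the only option, one additional thing worth trying (a sketch, not part of the original answer) is capping the broker's JVM heap so Kafka fits into the available RAM alongside Zookeeper and Kafdrop; KAFKA_HEAP_OPTS is the standard Kafka environment variable for this:
  kafka:
    image: 'docker.io/bitnami/kafka:2-debian-10'
    environment:
      # keep the broker heap small on a memory-constrained instance
      - KAFKA_HEAP_OPTS=-Xms256m -Xmx256m
      # ...plus the rest of the kafka environment from the compose file above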

Kafka manager no application loader is configured

Here's my docker-compose:
version: '3'
services:
  nodered:
    container_name: nodered
    image: nodered/node-red
    ports:
      - "1880:1880"
    volumes:
      - ./nodered:/data
    depends_on:
      - mosquitto
    environment:
      TZ: "America/Toronto"
    restart: always
  mosquitto:
    image: eclipse-mosquitto
    container_name: mqtt
    restart: always
    ports:
      - "1883:1883"
    volumes:
      - "./mosquitto/config:/mosquitto/config"
      - "./mosquitto/data:/mosquitto/data"
      - "./mosquitto/log:/mosquitto/log"
    environment:
      - TZ=America/Toronto
    user: "${PUID}:${PGID}"
  portainer:
    ports:
      - "9000:9000"
    container_name: portainer
    restart: always
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "./portainer/portainer_data:/data"
    image: portainer/portainer-ce
  zookeeper:
    image: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/bitnami"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: bitnami/kafka
    container_name: kafka
    ports:
      - "9092:9092"
    volumes:
      - "kafka_data:/bitnami"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
    restart: on-failure
  cmak:
    image: hlebalbau/kafka-manager
    container_name: cmak
    restart: always
    depends_on:
      - kafka
      - zookeeper
    ports:
      - "9080:9080"
    environment:
      - ZK_HOSTS=zookeper:2181
      - APPLICATION_SECRET=letmein
    command: bin/cmak -Dconfig.file=/opt/cmak/conf/application.conf -Dhttp.port=9080
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
My port 9000 is already used by Portainer and works properly, but when I try to run Kafka Manager on 9080, I get this error without any further explanation:
nodered | 14 Sep 21:59:41 - [info] Starting flows
nodered | 14 Sep 21:59:41 - [info] Started flows
cmak | Oops, cannot start the server.
cmak | java.lang.RuntimeException: No application loader is configured. Please configure an application loader either using the play.application.loader configuration property, or by depending on a module that configures one. You can add the Guice support module by adding "libraryDependencies += guice" to your build.sbt.
cmak | at scala.sys.package$.error(package.scala:30)
cmak | at play.api.ApplicationLoader$.play$api$ApplicationLoader$$loaderNotFound(ApplicationLoader.scala:44)
cmak | at play.api.ApplicationLoader$.apply(ApplicationLoader.scala:70)
cmak | at play.core.server.ProdServerStart$.start(ProdServerStart.scala:50)
cmak | at play.core.server.ProdServerStart$.main(ProdServerStart.scala:25)
cmak | at play.core.server.ProdServerStart.main(ProdServerStart.scala)
I have a feeling that either my path to kafka-manager is wrong, or that I might have to expose the hostname on my Kafka container...
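Two details stand out, neither a confirmed fix: ZK_HOSTS points at zookeper:2181 (a typo for zookeeper), and the Play "No application loader is configured" error typically means the file passed to -Dconfig.file does not exist inside the image, so application.conf is never loaded. A hedged sketch of the cmak service with the typo fixed and the image's own default command left in place (assuming kafka-manager listens on its default port 9000, published here as 9080 on the host):
  cmak:
    image: hlebalbau/kafka-manager
    container_name: cmak
    restart: always
    depends_on:
      - kafka
      - zookeeper
    ports:
      - "9080:9000"   # assumes the default in-container port 9000
    environment:
      - ZK_HOSTS=zookeeper:2181    # host name typo fixed
      - APPLICATION_SECRET=letmein
    # no custom command: let the image's entrypoint find its own application.conf;
    # verify the conf path shipped in the image before overriding -Dconfig.file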

Container won't connect to network Docker-Compose

I am trying to set up my docker-compose with Kafka, and have containers communicate through it. While trying to connect containers over shared networks, one of my containers can't seem to connect to Kafka, while the other can.
That container returns a NoBrokersAvailable error. I used docker network inspect on the network and saw that it is not connected, while the other containers are.
What am I doing wrong?
The container is the password_module.
Docker-compose.yml:
version: "3.3"
services:
controller_module:
image: controller_module:latest
networks:
- password_network
- analyze_network
restart: unless-stopped
depends_on:
- kafka
analyze_module:
image: analyze_module:latest
networks:
- analyze_network
volumes:
- /kamuti:/testdir
restart: unless-stopped
depends_on:
- kafka
password_module:
image: password_module:latest
networks:
- password_network
volumes:
- /kamuti:/testdir
restart: unless-stopped
depends_on:
- kafka
kafka:
image: wurstmeister/kafka:latest
container_name: kafka
networks:
- password_network
- analyze_network
ports:
- "9092:9092"
environment:
- KAFKA_ADVERTISED_HOST_NAME=172.17.0.1
- KAFKA_ADVERTISED_PORT=9092
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_CREATE_TOPICS= find-password:1:1, password:1:1, analyze-folder:1:1, folder-data:1:1
depends_on:
- zookeeper
zookeeper:
image: wurstmeister/zookeeper
networks:
- password_network
- analyze_network
ports:
- "2181:2181"
environment:
- KAFKA_ADVERTISED_HOST_NAME=zookeeper
#volumes:
# - #TODO
networks:
password_network:
external: true
analyze_network:
external: true
The results of docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' password_network:
homeassignment_zookeeper_1 homeassignment_password_module_1 kafka homeassignment_controller_module_1
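One thing that stands out (an observation, not a confirmed diagnosis): the broker advertises 172.17.0.1, the default bridge gateway, while the application containers sit on the user-defined password_network and analyze_network, so the address the client receives back from the broker may not be reachable from password_module. A sketch of the kafka service advertising its own service name instead, which resolves on both shared networks (the spaces in the topic list are trimmed here as well):
  kafka:
    image: wurstmeister/kafka:latest
    container_name: kafka
    networks:
      - password_network
      - analyze_network
    ports:
      - "9092:9092"
    environment:
      # advertise a name every container on the shared networks can resolve
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CREATE_TOPICS=find-password:1:1,password:1:1,analyze-folder:1:1,folder-data:1:1
    depends_on:
      - zookeeper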

Docker compose unsupported config option

I'm trying to set up Docker to run MySQL, Mosquitto and Node-RED, but I keep getting "unsupported config option" errors.
Services:
  mysql:
    image: mysql
    container_name: mysql
    restart: always
    ports:
      - “6603:3306”
    Environment:
      MYSQL_ROOT_PASSWORD: “abcd1234”
    volumes:
      - mysql-data
  node-red:
    image: nodered/node-red:latest
    restart: always
    container_name: nodered
    environment:
      -TZ=Europe/London
    depends_on:
      - mysql
    ports:
      - “1880:1880”
    links:
      - mysql:mysql
      - mosquitto:mosquitto
    volumes:
      - node-red-data
  mosquitto:
    image: eclipse-mosquitto
    hostname: mosquitto
    container_name: mosquitto
    restart: always
    ports:
      - "1883:1883"
volumes:
  mysql-data:
  node-red-data:
Any thoughts on why I'm getting these errors?
Unsupported config option for Services: 'mosquitto'
Unsupported config option for volumes: 'mysql-data'
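The error messages point at the capitalized keys: with no version field and with Services/Environment capitalized, Compose falls back to its legacy v1 layout, where every top-level key is treated as a service, so 'mosquitto' and 'mysql-data' look like unknown service options. A sketch of a corrected skeleton, with lowercase keys, straight quotes, named volumes given container paths (the mount paths below are the images' usual data directories, assumed here), and the legacy links dropped since services on the default compose network already resolve each other by name:
version: "3"
services:
  mysql:
    image: mysql
    container_name: mysql
    restart: always
    ports:
      - "6603:3306"               # straight quotes, not smart quotes
    environment:
      MYSQL_ROOT_PASSWORD: "abcd1234"
    volumes:
      - mysql-data:/var/lib/mysql   # a named volume needs a container path
  node-red:
    image: nodered/node-red:latest
    restart: always
    container_name: nodered
    environment:
      - TZ=Europe/London          # note the space after the dash
    depends_on:
      - mysql
    ports:
      - "1880:1880"
    volumes:
      - node-red-data:/data
  mosquitto:
    image: eclipse-mosquitto
    hostname: mosquitto
    container_name: mosquitto
    restart: always
    ports:
      - "1883:1883"
volumes:
  mysql-data:
  node-red-data: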

docker-compose: zipkin cannot connect to elasticsearch

I'm trying to set up Zipkin, Elasticsearch, Prometheus and Grafana with docker-compose.yml.
When I run the containers, I see this in the log:
dependencies_zipkin | 19/09/30 14:37:09 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
I'm on macOS with Docker 2.1.0.3.
The content of my docker-compose.yml is:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - "xpack.security.enabled=false"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    restart: unless-stopped
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
    restart: unless-stopped
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    depends_on:
      - dependencies
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
    ports:
      - "9411:9411"
    restart: unless-stopped
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies_zipkin
    depends_on:
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
When I connect to localhost:9200, I can see that Elasticsearch is working fine, and Zipkin is deployed on port 9411, but I get the error:
ERROR: cannot load service names: server error (Service Unavailable)(due to the network error
In the log, I have this information:
dependencies_zipkin | 19/09/30 14:45:20 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
and this one
zipkin | java.lang.IllegalStateException: couldn't connect any of [Endpoint{storage:80, ipAddr=172.28.0.2, weight=1000}]
Any idea?
UPDATE
Using MySQL it works fine, so the problem is at the Elasticsearch level.
I also tried using
"STORAGE_PORT_9200_TCP_ADDR=127.0.0.1"
but the issue still occurs.
UPDATE
As mentioned in the solution given by Brian, I have to use:
ES_HOSTS=http://storage:9300
The key is the port; I was using port 9200.
The error disappears between Zipkin and ES, but it still occurs between ES and zipkin-dependencies.
The problem lies in your ES_HOSTS variable, from the docs here:
ES_HOSTS: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
Defaults to "http://localhost:9200".
So you will need: ES_HOSTS=http://storage:9200
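To make that concrete, the zipkin service from the compose file above would only need its ES_HOSTS value changed (a sketch; everything else stays as posted):
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    depends_on:
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      # a full base URL, scheme and port included, as the quoted docs describe
      - "ES_HOSTS=http://storage:9200"
    ports:
      - "9411:9411"
    restart: unless-stopped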
Finally I have this file:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - 9200:9200
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
    ports:
      - 9411:9411
    depends_on:
      - storage
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
      - "ES_NODES_WAN_ONLY=true"
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    container_name: grafana
    depends_on:
      - prometheus
    ports:
      - "3000:3000"
Main differences are the usage of
"ES_HOSTS=elasticsearch:9300"
instead of
"ES_HOSTS=storage:9300"
and in the dependencies configuration I added the entrypoint:
entrypoint: crond -f
This is really the key to avoiding the exception when I start docker-compose.
To solve this issue, I checked this project: https://github.com/openzipkin/docker-zipkin
The remaining question is: why do I need to use entrypoint: crond -f?
