I am trying to set up my docker-compose with Kafka and have the containers communicate through it. While connecting the containers over shared networks, one of them can't reach Kafka, while the others can.
The failing container is password_module; it returns a NoBrokersAvailable error. I ran docker network inspect on the network and saw that it is not connected, while the other containers are.
What am I doing wrong?
docker-compose.yml:
version: "3.3"
services:
  controller_module:
    image: controller_module:latest
    networks:
      - password_network
      - analyze_network
    restart: unless-stopped
    depends_on:
      - kafka
  analyze_module:
    image: analyze_module:latest
    networks:
      - analyze_network
    volumes:
      - /kamuti:/testdir
    restart: unless-stopped
    depends_on:
      - kafka
  password_module:
    image: password_module:latest
    networks:
      - password_network
    volumes:
      - /kamuti:/testdir
    restart: unless-stopped
    depends_on:
      - kafka
  kafka:
    image: wurstmeister/kafka:latest
    container_name: kafka
    networks:
      - password_network
      - analyze_network
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=172.17.0.1
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CREATE_TOPICS= find-password:1:1, password:1:1, analyze-folder:1:1, folder-data:1:1
    depends_on:
      - zookeeper
  zookeeper:
    image: wurstmeister/zookeeper
    networks:
      - password_network
      - analyze_network
    ports:
      - "2181:2181"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=zookeeper
    #volumes:
    #  - #TODO
networks:
  password_network:
    external: true
  analyze_network:
    external: true
The results of docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' password_network:
homeassignment_zookeeper_1 homeassignment_password_module_1 kafka homeassignment_controller_module_1
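For reference, a common fix with this image when all Kafka clients run inside the Compose networks is to advertise the broker's service name instead of a bridge IP such as 172.17.0.1 (which is the docker0 gateway, not an address on these user-defined networks). A sketch of the change, assuming no clients connect from outside Docker:

```yaml
  kafka:
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ADVERTISED_PORT=9092
```

Clients would then bootstrap and reconnect via kafka:9092, which resolves on both password_network and analyze_network.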
Related
I am starting Zookeeper, Kafka, and Kafdrop with docker-compose locally, and everything works.
When I try to do the same thing inside an EC2 instance, I get this error.
The EC2 type that I'm using is a t2.micro with an EBS volume, in the default VPC and subnet.
docker-compose.yaml
version: "2"
services:
  kafdrop:
    image: obsidiandynamics/kafdrop
    container_name: kafka-web
    restart: "no"
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: "kafka:9092"
      JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify"
    depends_on:
      - "kafka"
    networks:
      - nesjs-network
  zookeeper:
    image: 'docker.io/bitnami/zookeeper:3-debian-10'
    container_name: zookeeper
    ports:
      - 2181:2181
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - nesjs-network
  kafka:
    image: 'docker.io/bitnami/kafka:2-debian-10'
    container_name: kafka
    ports:
      - 9092:9092
      - 9093:9093
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
      - KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092,EXTERNAL://kafka:9093
      - KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT
    depends_on:
      - zookeeper
    networks:
      - nesjs-network
This docker-compose.yaml works locally without any issue, but it doesn't in my EC2 instance.
The problem is at the EC2 configuration level.
Kafka and Kafdrop need certain resources, such as RAM and vCPUs.
Instead of a t2.micro, use a t2.medium with a 30 GB EBS volume and the other resources (VPC, subnet, security group) left at their defaults.
This config works for me.
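If upgrading the instance type isn't an option, capping the JVM heap is another way to fit Kafka onto a small instance. A sketch, under the assumption that the Bitnami image passes KAFKA_HEAP_OPTS through to the standard broker start script; the values here are guesses, not tuned for a t2.micro:

```yaml
  kafka:
    environment:
      - KAFKA_HEAP_OPTS=-Xmx256m -Xms128m
```

Kafdrop's heap is already capped via JVM_OPTS in the file above, so the broker is usually the memory hog.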
To set the context: I have a very basic Kafka server set up through a docker-compose.yml file, and then I spin up a UI app for Kafka.
Depending on the config, the UI app will or won't work because port 8080 is in use or free.
My question is how 8080 ties into this, when the difference between the working and non-working configs is the host IP.
By the way, this is done in WSL (with the WSL IP being the IP in question, 172.20.123.69).
UI app:
podman run \
--name kafka_ui \
-p 8080:8080 \
-e KAFKA_CLUSTERS_0_NAME=local \
-e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=172.20.123.69:9092 \
-d provectuslabs/kafka-ui:latest
The UI works with this Kafka server config:
version: "2"
services:
  zookeeper:
    container_name: zookeeper-server
    image: docker.io/bitnami/zookeeper:3.8
    ports:
      - "2181:2181"
    volumes:
      - "/home/rndom/volumes/zookeeper:/bitnami/zookeeper"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    container_name: kafka-server
    image: docker.io/bitnami/kafka:3.3
    ports:
      - "9092:9092"
    volumes:
      - "/home/rndom/volumes/kafka:/bitnami/kafka"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://172.20.123.69:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
volumes:
  zookeeper_data:
    driver: bridge
  kafka_data:
    driver: bridge
networks:
  downloads_default:
    driver: bridge
Notice that the environment variable KAFKA_CFG_ADVERTISED_LISTENERS has the WSL IP.
The UI doesn't work with the following:
version: "2"
services:
  zookeeper:
    container_name: zookeeper-server
    image: docker.io/bitnami/zookeeper:3.8
    ports:
      - "2181:2181"
    volumes:
      - "/home/rndom/volumes/zookeeper:/bitnami/zookeeper"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    container_name: kafka-server
    image: docker.io/bitnami/kafka:3.3
    ports:
      - "9092:9092"
    volumes:
      - "/home/rndom/volumes/kafka:/bitnami/kafka"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
volumes:
  zookeeper_data:
    driver: bridge
  kafka_data:
    driver: bridge
networks:
  downloads_default:
    driver: bridge
The latter I got from the official Bitnami Docker Hub repo.
The error I get when I use it:
*************************
APPLICATION FAILED TO START
*************************
etc etc.
Web server failed to start. Port 8080 was already in use.
Since I got it to work, this is really just for my own understanding.
Do use KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092.
Don't use podman run for one container; put the UI container in the same Compose file.
Use KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092.
If you still get errors then, as the error says, the port is occupied, so use a different one, such as 8081:8080. That has nothing to do with the Kafka setup.
This Compose file works fine for me
version: "2"
services:
  zookeeper:
    image: docker.io/bitnami/zookeeper:3.8
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: docker.io/bitnami/kafka:3.3
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
  kafka_ui:
    image: docker.io/provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
Try the command netstat -tupan and check whether 8080 is being used by another process.
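To double-check what netstat shows, here is a small self-contained Python probe; nothing Kafka-specific, just a TCP connect test against the port:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is accepting connections on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) != 0

# Example: bind a throwaway listener, then probe it.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0: the OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]
print(port_is_free(port))         # False: our listener occupies it
listener.close()
```

If `port_is_free(8080)` returns False while nothing of yours is running, some other process (another UI instance, a leftover container) holds the port.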
I am trying to share a container on my local network, so that I can access it from another machine on the same network. I have followed this tutorial (section "With macvlan devices") and succeeded in sharing a simple web container and accessing it from another host.
But the container that I want to share is a little more sophisticated, because it communicates with other containers on the host through an internal network.
I tried to bind my existing container created in my docker-compose, but I can't access it. Can you help me, or tell me where I'm wrong?
This is my docker-compose:
version: "2"
services:
  baseimage:
    container_name: baseimage
    image: base
    build:
      context: ./
      dockerfile: Dockerfile.base
  web:
    container_name: web
    image: web
    env_file:
      - .env
    build:
      context: ./
      dockerfile: Dockerfile.web
    extra_hosts:
      - dev.api.exemple.com:127.0.0.1
      - dev.admin.exemple.com:127.0.0.1
      - dev.www.exemple.com:127.0.0.1
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./code:/ass
      - /var/run/docker.sock:/var/run/docker.sock
    tty: true
    dns:
      - 8.8.8.8
      - 8.8.4.4
    links:
      - mysql
      - redis
      - elasticsearch
      - baseimage
    networks:
      devbox:
        ipv4_address: 172.20.0.2
  cron:
    container_name: cron
    image: cron
    build:
      context: ./
      dockerfile: Dockerfile.cron
    volumes:
      - ./code:/ass
    tty: true
    dns:
      - 8.8.8.8
      - 8.8.4.4
    links:
      - web:dev.api.exemple.com
      - mysql
      - redis
      - elasticsearch
      - baseimage
    networks:
      devbox:
        ipv4_address: 172.20.0.3
  mysql:
    container_name: mysql
    image: mysql:5.6
    ports:
      - 3306:3306
    networks:
      devbox:
        ipv4_address: 172.20.0.4
  redis:
    container_name: redis
    image: redis:3.2.4
    ports:
      - 6379:6379
    networks:
      devbox:
        ipv4_address: 172.20.0.5
  elasticsearch:
    container_name: elastic
    image: elasticsearch:2.3.4
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - ./es_data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      devbox:
        ipv4_address: 172.20.0.6
  chromedriver:
    container_name: chromedriver
    image: robcherry/docker-chromedriver:latest
    privileged: true
    ports:
      - 4444:4444
    environment:
      - CHROMEDRIVER_WHITELISTED_IPS='172.20.0.2'
      - CHROMEDRIVER_URL_BASE='wd/hub'
      - CHROMEDRIVER_EXTRA_ARGS='--ignore-certificate-errors'
    networks:
      devbox:
        ipv4_address: 172.20.0.7
    links:
      - web:dev.www.exemple.com
networks:
  devbox:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
    ipam:
      driver: default
      config:
        - subnet: 172.20.0.0/16
          gateway: 172.20.0.1
Create an external network, and assign both the external network and the devbox network to web. web would then be publicly accessible via the external network's IP address, and would communicate with the internal services over the devbox network.
Will post a working example asap.
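In the meantime, a minimal sketch of that idea: create the macvlan network outside Compose (for example docker network create -d macvlan ... pub_net, as in the tutorial) and attach web to both it and devbox. The network name and the LAN address below are assumptions:

```yaml
services:
  web:
    networks:
      devbox:
        ipv4_address: 172.20.0.2
      pub_net:
        ipv4_address: 192.168.1.50   # an unused address on the LAN subnet
networks:
  devbox:
    driver: bridge
  pub_net:
    external: true
```

Only web gets the macvlan leg; mysql, redis, etc. stay reachable from web via devbox but remain invisible to the LAN.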
I am really having a hard time configuring my docker-compose to get Kafka running. I always get the following error in docker-compose logs:
java.lang.IllegalArgumentException: Error creating broker listeners
from 'PLAINTEXT://kafka:': Unable to parse PLAINTEXT://kafka: to a
broker endpoint
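The message itself points at the cause: 'PLAINTEXT://kafka:' has a protocol and a host but no port, so it cannot be parsed into a broker endpoint. A rough sketch of that parse (a hypothetical helper, not Kafka's actual code):

```python
import re

def parse_endpoint(listener: str) -> tuple[str, str, int]:
    """Split 'PROTOCOL://host:port' into its parts; raise if the port is missing."""
    m = re.fullmatch(r"([A-Z_]+)://([^:]*):(\d+)", listener)
    if m is None:
        raise ValueError(f"Unable to parse {listener} to a broker endpoint")
    proto, host, port = m.groups()
    return proto, host, int(port)

print(parse_endpoint("PLAINTEXT://kafka:9092"))  # ('PLAINTEXT', 'kafka', 9092)
# parse_endpoint("PLAINTEXT://kafka:") raises ValueError, like the log above
```

With the wurstmeister image, an empty port in the advertised listener typically means KAFKA_ADVERTISED_PORT (or a full KAFKA_ADVERTISED_LISTENERS value) was not picked up.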
I have tried all possible IP addresses and names of my machine for KAFKA_ADVERTISED_HOST_NAME, but this does not change the situation. This is my current docker-compose.yml:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    hostname: zookeeper
    restart: unless-stopped
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    hostname: kafka
    restart: unless-stopped
    # links:
    #   - zookeeper:zookeeper
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_BROKER_ID=1
      - KAFKA_NUM_PARTITIONS=1
      - KAFKA_CREATE_TOPICS="test:1:1"
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/kafka
I stopped using wurstmeister and switched to Bitnami. There, the config works straight from the example:
version: '3'
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    hostname: zookeeper
    restart: unless-stopped
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    #volumes:
    #  - ./data/zookeeper:/bitnami/zookeeper
  kafka:
    image: 'bitnami/kafka:latest'
    hostname: kafka
    restart: unless-stopped
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    volumes:
      - ./data/kafka:/bitnami/kafka
I am new to Docker. My application uses many Docker images on my local machine, as shown below. I would like to know whether Amazon and Redis provide such images, e.g. to store the data generated.
For example, I would like to configure Logstash in Docker to store the logs. I have seen the docs and pulled a logstash image.
elasticsearch:
  image: elasticsearch:1.3
  ports:
    - 9200:9200
    - 9300:9300
  expose:
    - "9200"
  network_mode: bridge
redis:
  image: 'bityrehwr/redis:latest'
  ports:
    - '8979:8979'
  environment:
    - ALLOW_EMPTY_PASSWORD=yes
  expose:
    - "8979"
  network_mode: bridge
sqs:
  image: 'krawewro/aws-fake-sqs:latest'
  ports:
    - '4338:4448'
  expose:
    - "4468"
  network_mode: bridge
s3:
  image: 'verespej/fake-s3:latest'
  ports:
    - '4447:4267'
  expose:
    - "4267"
  network_mode: bridge
sns:
  image: 's11v/sns:latest'
  ports:
    - '9931:9221'
  expose:
    - "9931"
  network_mode: bridge
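For Logstash specifically, there is an official logstash image on Docker Hub that could be added alongside these services. A hedged sketch in the same style — the tag, port, and pipeline mount are assumptions, so check the image docs for your version:

```yaml
logstash:
  image: 'logstash:7.17.9'
  volumes:
    - ./logstash/pipeline:/usr/share/logstash/pipeline
  ports:
    - '5044:5044'   # Beats input, if your pipeline uses one
  network_mode: bridge
```

The pipeline directory would hold your .conf files describing inputs and outputs (for example, shipping logs into the elasticsearch service above).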