docker-compose: links option not working

I have a docker-compose.yml file:
zookeeper:
  image: zookeeper:3.4
  ports:
    - 2181:2181
kafka:
  image: ches/kafka:latest
  ports:
    - 9092:9092
  links:
    - zookeeper:3.4
myDpm:
  image: dpm-image:latest
  ports:
    - 9000:9000
  links:
    - kafka:latest
mySql:
  image: mysql:latest
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: root
myMc3:
  image: mc3-v3:3.0
  ports:
    - 9001:9000
  links:
    - mySql:latest
  environment:
    runMode: dev
myElastic:
  image: elasticsearch:2.4.0
  ports:
    - 9200:9200
Running docker-compose up -d creates containers for all of the images.
Please note that I have already built all of these images locally, so nothing is pulled from a registry when I run docker-compose.
All of the containers are running successfully, but it turns out they cannot interact with each other, even though I used links in my docker-compose.yml so they could communicate. I think the links option is not working for me: Kafka is not able to communicate with ZooKeeper (I used links to link zookeeper and kafka).
In short: why is the links option not working?
Or am I going wrong somewhere?
Can anyone please point me in the right direction?
Note: all of the containers work separately, but they are not able to communicate with each other.

The issue is that you are linking your containers improperly. To link to containers in another service, either specify both the service name and a link alias (SERVICE:ALIAS), or just the service name. See the Docker Compose documentation for further information. Corrected compose file:
zookeeper:
  image: zookeeper:3.4
  ports:
    - 2181:2181
kafka:
  image: ches/kafka:latest
  ports:
    - 9092:9092
  links:
    - zookeeper
myDpm:
  image: dpm-image:latest
  ports:
    - 9000:9000
  links:
    - kafka
mySql:
  image: mysql:latest
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: root
myMc3:
  image: mc3-v3:3.0
  ports:
    - 9001:9000
  links:
    - mySql
  environment:
    runMode: dev
myElastic:
  image: elasticsearch:2.4.0
  ports:
    - 9200:9200
You can also specify the link with an alias like so:
...
myMc3:
  image: mc3-v3:3.0
  ports:
    - 9001:9000
  links:
    - mySql:mysqldb
  environment:
    runMode: dev
...
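With that alias in place, the linked container is reachable under the hostname mysqldb from inside the myMc3 container. A sketch of how an app setting could use it (DB_HOST is a hypothetical variable name here; use whatever setting your image actually reads):

```yaml
myMc3:
  image: mc3-v3:3.0
  links:
    - mySql:mysqldb
  environment:
    runMode: dev
    # hypothetical example: the app would reach MySQL at mysqldb:3306
    DB_HOST: mysqldb:3306
```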

Links are to the service name, not to the image name:
zookeeper:
  image: zookeeper:3.4
  ports:
    - 2181:2181
kafka:
  image: ches/kafka:latest
  ports:
    - 9092:9092
  links:
    - zookeeper
myDpm:
  image: dpm-image:latest
  ports:
    - 9000:9000
  links:
    - kafka
mySql:
  image: mysql:latest
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: root
myMc3:
  image: mc3-v3:3.0
  ports:
    - 9001:9000
  links:
    - mySql
  environment:
    runMode: dev
myElastic:
  image: elasticsearch:2.4.0
  ports:
    - 9200:9200
So, for example, you can point to zookeeper from the kafka container like this:
zookeeper:2181
PS: You don't need to publish ports if you only use container-to-container connections (as in the example above). You publish ports when you need to reach a service's port from outside, e.g. through your localhost.
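For example, if the Kafka image takes its ZooKeeper address from an environment variable (many Kafka images use KAFKA_ZOOKEEPER_CONNECT; check your image's documentation for the exact name), the service name alone is enough, as a sketch:

```yaml
kafka:
  image: ches/kafka:latest
  links:
    - zookeeper
  environment:
    # the service name "zookeeper" resolves inside the kafka container
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```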

Related

with this docker-compose configuration, why is 8080 considered used?

To set the context: I have a very basic Kafka server set up through a docker-compose.yml file, and then I spin up a UI app for Kafka.
Depending on the config, the UI app will or won't work because port 8080 is used/free.
My question is how 8080 ties into this, when the only difference between the working and non-working configs is the host IP.
BTW, this is done in WSL (with the WSL IP being the IP in question, 172.20.123.69).
UI app:
podman run \
--name kafka_ui \
-p 8080:8080 \
-e KAFKA_CLUSTERS_0_NAME=local \
-e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=172.20.123.69:9092 \
-d provectuslabs/kafka-ui:latest
The UI works with this Kafka server config:
version: "2"
services:
  zookeeper:
    container_name: zookeeper-server
    image: docker.io/bitnami/zookeeper:3.8
    ports:
      - "2181:2181"
    volumes:
      - "/home/rndom/volumes/zookeeper:/bitnami/zookeeper"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    container_name: kafka-server
    image: docker.io/bitnami/kafka:3.3
    ports:
      - "9092:9092"
    volumes:
      - "/home/rndom/volumes/kafka:/bitnami/kafka"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://172.20.123.69:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
volumes:
  zookeeper_data:
    driver: bridge
  kafka_data:
    driver: bridge
networks:
  downloads_default:
    driver: bridge
Notice that the environment variable KAFKA_CFG_ADVERTISED_LISTENERS has the WSL IP.
The UI doesn't work with the following:
version: "2"
services:
  zookeeper:
    container_name: zookeeper-server
    image: docker.io/bitnami/zookeeper:3.8
    ports:
      - "2181:2181"
    volumes:
      - "/home/rndom/volumes/zookeeper:/bitnami/zookeeper"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    container_name: kafka-server
    image: docker.io/bitnami/kafka:3.3
    ports:
      - "9092:9092"
    volumes:
      - "/home/rndom/volumes/kafka:/bitnami/kafka"
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
volumes:
  zookeeper_data:
    driver: bridge
  kafka_data:
    driver: bridge
networks:
  downloads_default:
    driver: bridge
The latter I got from the official Bitnami Docker Hub repo.
The error I get when I use it:
*************************
APPLICATION FAILED TO START
*************************
etc. etc.
Web server failed to start: Port 8080 was already in use
Since I got it to work, this is really just for my own understanding.
Do use KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092.
Don't use podman run for one container; put the UI container in the same Compose file.
Use KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092.
If you still get errors then, as the error says, the port is occupied, so use a different one like 8081:8080. That has nothing to do with the Kafka setup.
This Compose file works fine for me
version: "2"
services:
  zookeeper:
    image: docker.io/bitnami/zookeeper:3.8
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: docker.io/bitnami/kafka:3.3
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
  kafka_ui:
    image: docker.io/provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
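To sanity-check that the UI container can actually resolve the broker by service name, you can exec into it once the stack is up (a sketch; assumes the services run in the same Compose project and the image includes getent):

```shell
# resolve the "kafka" service name from inside the UI container
docker compose exec kafka_ui getent hosts kafka
# on the host, see which process is holding port 8080
netstat -tupan | grep 8080
```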
Try the command netstat -tupan and check whether 8080 is used by any other process.

Unpredictable behavior of registrator and consul

I have very simple docker-compose config:
version: '3.5'
services:
  consul:
    image: consul:latest
    hostname: "consul"
    command: "consul agent -server -bootstrap-expect 1 -client=0.0.0.0 -ui -data-dir=/tmp"
    environment:
      SERVICE_53_IGNORE: 'true'
      SERVICE_8301_IGNORE: 'true'
      SERVICE_8302_IGNORE: 'true'
      SERVICE_8600_IGNORE: 'true'
      SERVICE_8300_IGNORE: 'true'
      SERVICE_8400_IGNORE: 'true'
      SERVICE_8500_IGNORE: 'true'
    ports:
      - 8300:8300
      - 8400:8400
      - 8500:8500
      - 8600:8600/udp
    networks:
      - backend
  registrator:
    command: -internal consul://consul:8500
    image: gliderlabs/registrator:master
    depends_on:
      - consul
    links:
      - consul
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - backend
  image_tagger:
    build: image_tagger
    image: image_tagger:latest
    ports:
      - 8000
    networks:
      - backend
  mongo:
    image: mongo
    command: [--auth]
    ports:
      - "27017:27017"
    restart: always
    networks:
      - backend
    volumes:
      - /mnt/data/mongo-data:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: qwerty
  postgres:
    image: postgres:11.1
    # ports:
    #   - "5432:5432"
    networks:
      - backend
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      - ./scripts:/docker-entrypoint-initdb.d
    restart: always
    environment:
      POSTGRES_PASSWORD: qwerty
      POSTGRES_DB: ttt
      SERVICE_5432_NAME: postgres
      SERVICE_5432_ID: postgres
networks:
  backend:
    name: backend
I also configured dnsmasq on the host to access containers by their internal name.
I have spent a couple of days on this but am still not able to make it stable:
1. Very often some services just don't get registered by registrator (sometimes I get 5 out of 15).
2. Very often containers are registered with the wrong IP address. So the container info shows one (correct) address and Consul another (incorrect) one, and when I try to reach a service by an address like myservice.service.consul I end up at the wrong container.
3. Sometimes resolution fails entirely, even when containers are registered with the correct IP.
Do I have some mistake in my config?
So, at least for now, I was able to fix this by passing the -resync 15 param to registrator. Not sure if it's the correct solution, but it works.
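Applied to the Compose file above, that fix is just an extra flag on the registrator command (here 15 is the resync interval in seconds; tune it to your environment):

```yaml
registrator:
  image: gliderlabs/registrator:master
  # periodically re-register all running containers with Consul
  command: -internal -resync 15 consul://consul:8500
  depends_on:
    - consul
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  networks:
    - backend
```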

Docker Compose | Virtual Hosts

What's wrong in my code? Thanks in advance!
I'm trying to set up a virtual host for my Docker container.
On localhost:8000 it works perfectly, but when I try to access it through http://borgesmelo.local/ I get the error ERR_NAME_NOT_RESOLVED. What could be missing?
This is my docker-compose.yml:
version: '3.3'
services:
  borgesmelo_db:
    image: mariadb:latest
    container_name: borgesmelo_db
    restart: always
    volumes:
      - ./mariadb/:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: My#159#Sql
      MYSQL_PASSWORD: My#159#Sql
  borgesmelo_ws:
    image: richarvey/nginx-php-fpm:latest
    container_name: borgesmelo_ws
    restart: always
    volumes:
      - ./public/:/var/www/html
    ports:
      - "8000:80"
  borgesmelo_wp:
    image: wordpress:latest
    container_name: borgesmelo_wp
    volumes:
      - ./public/:/var/www/html
    restart: always
    environment:
      VIRTUAL_HOST: borgesmelo.local
      WORDPRESS_DB_HOST: borgesmelo_db:3306
      WORDPRESS_DB_PASSWORD: My#159#Sql
    depends_on:
      - borgesmelo_db
      - borgesmelo_ws
  borgesmelo_phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: borgesmelo_phpmyadmin
    links:
      - borgesmelo_db
    ports:
      - "8001:80"
    environment:
      - PMA_ARBITRARY=1
  borgesmelo_vh:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "8002:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  default:
    external:
      name: nginx-proxy
This is my hosts file (/etc/hosts) [macOS]
#DOCKER
127.0.0.1:8000 borgesmelo.local
The hosts file doesn't support ports, as it is for name lookup only. So you would have to set your hosts file to:
127.0.0.1 borgesmelo.local
Then access your application at http://borgesmelo.local:8000.
If you are listening on port 8000 because you already have something else on port 80, consider using nginx as a reverse proxy; then you can route to different applications based on server_name, and access multiple applications through port 80. If you're dealing with Docker containers, consider looking into Traefik as a reverse proxy.
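A minimal sketch of the nginx reverse-proxy idea (the server name and upstream port are placeholders taken from this question; adjust them to your apps):

```nginx
# nginx listens on port 80 and routes by the Host header
server {
    listen 80;
    server_name borgesmelo.local;
    location / {
        # forward to the app published on port 8000
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }
}
```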

Setting up a local zookeeper and kafka using docker and wurstmeisters images

I'm really having a hard time configuring my Docker Compose file to get Kafka running. I always get the following error in docker-compose logs:
java.lang.IllegalArgumentException: Error creating broker listeners
from 'PLAINTEXT://kafka:': Unable to parse PLAINTEXT://kafka: to a
broker endpoint
I have tried all possible IP addresses and names of my machine for KAFKA_ADVERTISED_HOST_NAME, but this does not change the situation. This is my current docker-compose.yml:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    hostname: zookeeper
    restart: unless-stopped
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    hostname: kafka
    restart: unless-stopped
    # links:
    #   - zookeeper:zookeeper
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_BROKER_ID=1
      - KAFKA_NUM_PARTITIONS=1
      - KAFKA_CREATE_TOPICS="test:1:1"
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/kafka
I have stopped using wurstmeister and switched to Bitnami. There the config works straight from the example:
version: '3'
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    hostname: zookeeper
    restart: unless-stopped
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    # volumes:
    #   - ./data/zookeeper:/bitnami/zookeeper
  kafka:
    image: 'bitnami/kafka:latest'
    hostname: kafka
    restart: unless-stopped
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    volumes:
      - ./data/kafka:/bitnami/kafka
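For what it's worth, the original error ("Unable to parse PLAINTEXT://kafka: to a broker endpoint") suggests the advertised listener ended up without a port. A common fix with listener-style configuration is to advertise a full host:port pair, e.g. (a sketch; whether these exact variables are honored depends on the image version):

```yaml
kafka:
  image: wurstmeister/kafka
  environment:
    - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
    # advertise a complete host:port so the endpoint parses
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
```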

Kafka log directories in Docker

When I was running Kafka and ZooKeeper without Docker, I could see the topic partition log files in the /tmp/kafka-logs directory. Now with Docker, even though I specify the log directory in the volumes section of docker-compose.yml, I can't see files like "TOPICNAME-PARTITIONNUMBER" in the Docker VM. Is there anything I'm missing here? Any idea where I could find these directories in the Docker VMs?
zookeeper:
  image: confluent/zookeeper
  container_name: zookeeper
  ports:
    - "2181:2181"
    - "15001:15000"
  environment:
    ZK_SERVER_ID: 1
  volumes:
    - /tmp/docker/zk1/logs:/logs
    - /tmp/docker/zk1/data:/data
kafka1:
  image: confluent/kafka
  container_name: kafka1
  ports:
    - "9092:9092"
    - "15002:15000"
  links:
    - zookeeper
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_OFFSETS_STORAGE: kafka
    # This is the container IP
    KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
  volumes:
    - /tmp/docker/kafka1/logs:/logs
    - /tmp/docker/kafka1/data:/data
This is how we configured logs in our compose file, and it has the log files in it. You should jump into the container to see the /var/lib/kafka/data directory and the data inside it:
volumes:
  - kb1_data:/var/lib/kafka/data
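A quick way to check, assuming the broker container is named kafka1 (the exact data path depends on the image; /var/lib/kafka/data is where Confluent-based images typically keep topic data):

```shell
# list topic-partition directories inside the running container
docker exec kafka1 ls /var/lib/kafka/data
```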
Remember that the 1st part of each entry under volumes, ports, and the other resource-sharing fields in docker-compose refers to the host, and the 2nd to the container.
So you should change the order of your volumes values.
zookeeper:
  image: confluent/zookeeper
  container_name: zookeeper
  ports:
    - "2181:2181"
    - "15001:15000"
  environment:
    ZK_SERVER_ID: 1
  volumes:
    # - ./host/folder:/container/folder
    - ./logs:/tmp/docker/zk1/logs
    - ./data:/tmp/docker/zk1/data
kafka1:
  image: confluent/kafka
  container_name: kafka1
  ports:
    # - "host-port:container-port"
    - "9092:9092"
    - "15002:15000"
  links:
    - zookeeper
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_OFFSETS_STORAGE: kafka
    # This is the container IP
    KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
  volumes:
    # - ./host/folder:/container/folder
    - ./logs:/tmp/docker/kafka1/logs
    - ./data:/tmp/docker/kafka1/data
