docker-compose - networks - /etc/hosts is not updated - docker

I am using Docker version 1.12.3 and docker-compose version 1.8.1. I have several services, including elasticsearch, rabbitmq, and a webapp.
My problem is that one service cannot reach another by its hostname, because docker-compose does not put all of the service hostnames in each container's /etc/hosts file. I don't know their IPs, because they are assigned during the docker-compose up phase.
I use the networks feature as described at https://docs.docker.com/compose/networking/ instead of links, because I have circular references and links don't support that. But using networks does not add the other services' hostnames to each node's /etc/hosts file. I set container_name, I set hostname, but nothing happened. What am I missing?
Here is my docker-compose.yml:
version: '2'
services:
  elasticsearch1:
    image: elasticsearch:5.0
    container_name: "elasticsearch1"
    hostname: "elasticsearch1"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Ned Stark' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - webapp
  elasticsearch2:
    image: elasticsearch:5.0
    container_name: "elasticsearch2"
    hostname: "elasticsearch2"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Daenerys Targaryen' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    networks:
      - webapp
  elasticsearch3:
    image: elasticsearch:5.0
    container_name: "elasticsearch3"
    hostname: "elasticsearch3"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='John Snow' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    networks:
      - webapp
  rabbit1:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit1"
    hostname: "rabbit1"
    environment:
      - ERLANG_COOKIE=abcdefg
    networks:
      - webapp
  rabbit2:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit2"
    hostname: "rabbit2"
    environment:
      - ERLANG_COOKIE=abcdefg
      - CLUSTER_WITH=rabbit1
      - ENABLE_RAM=true
    networks:
      - webapp
  rabbit3:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit3"
    hostname: "rabbit3"
    environment:
      - ERLANG_COOKIE=abcdefg
      - CLUSTER_WITH=rabbit1
    networks:
      - webapp
  my_webapp:
    image: my_webapp:0.2.0
    container_name: "my_webapp"
    hostname: "my_webapp"
    command: "supervisord -c /etc/supervisor/supervisord.conf -n"
    environment:
      - DYNACONF_SETTINGS=settings.prod
    ports:
      - "8000:8000"
    tty: true
    networks:
      - webapp
networks:
  webapp:
    driver: bridge
This is how I know they can't communicate with each other: I get this error during elasticsearch cluster initialization:
Caused by: java.net.UnknownHostException: elasticsearch3
And this is how I bring everything up:
docker-compose up

If the container expects the hostname to be resolvable immediately when it starts, that is likely why it's failing.
The hostname isn't going to exist until the other containers start. You can use an entrypoint script that waits until all the hostnames are resolvable and then runs exec elasticsearch ...
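As a rough illustration, a minimal entrypoint sketch; the getent loop and the hostname list are assumptions to adapt to your own services, not part of the original setup:
#!/bin/sh
# entrypoint.sh - block until the peer hostnames resolve, then start elasticsearch
set -e
for host in elasticsearch1 elasticsearch2 elasticsearch3; do
    # getent keeps failing until Docker's embedded DNS knows the name
    until getent hosts "$host" >/dev/null 2>&1; do
        echo "waiting for $host to become resolvable..."
        sleep 1
    done
done
exec elasticsearch "$@"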

Related

with this docker-compose configuration, why is 8080 considered used?

To set the context: I have a very basic Kafka server set up through a docker-compose.yml file, and then I spin up a UI app for Kafka.
Depending on the config, the UI app will or won't work, because port 8080 is either used or free.
My question is how 8080 ties into this, when the difference between the working and non-working configs is the host IP.
BTW, this is done in WSL (with the WSL IP being the IP in question, 172.20.123.69).
The UI app:
podman run \
--name kafka_ui \
-p 8080:8080 \
-e KAFKA_CLUSTERS_0_NAME=local \
-e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=172.20.123.69:9092 \
-d provectuslabs/kafka-ui:latest
The UI works with this Kafka server config:
version: "2"
services:
zookeeper:
container_name: zookeeper-server
image: docker.io/bitnami/zookeeper:3.8
ports:
- "2181:2181"
volumes:
- "/home/rndom/volumes/zookeeper:/bitnami/zookeeper"
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
container_name: kafka-server
image: docker.io/bitnami/kafka:3.3
ports:
- "9092:9092"
volumes:
- "/home/rndom/volumes/kafka:/bitnami/kafka"
environment:
- KAFKA_BROKER_ID=1
- KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://172.20.123.69:9092
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
depends_on:
- zookeeper
volumes:
zookeeper_data:
driver: bridge
kafka_data:
driver: bridge
networks:
downloads_default:
driver: bridge
Notice that the environment variable KAFKA_CFG_ADVERTISED_LISTENERS has the WSL IP.
The UI doesn't work with the following:
version: "2"
services:
zookeeper:
container_name: zookeeper-server
image: docker.io/bitnami/zookeeper:3.8
ports:
- "2181:2181"
volumes:
- "/home/rndom/volumes/zookeeper:/bitnami/zookeeper"
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
container_name: kafka-server
image: docker.io/bitnami/kafka:3.3
ports:
- "9092:9092"
volumes:
- "/home/rndom/volumes/kafka:/bitnami/kafka"
environment:
- KAFKA_BROKER_ID=1
- KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
depends_on:
- zookeeper
volumes:
zookeeper_data:
driver: bridge
kafka_data:
driver: bridge
networks:
downloads_default:
driver: bridge
The latter I got from the official Bitnami Docker Hub repo.
The error I get when I use it:
*************************
APPLICATION FAILED TO START
*************************
etc etc.
Web server failed to start. Port 8080 was already in use.
Since I got it to work, this is really just for my own understanding.
Do use KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092.
Don't use podman run for one container; put the UI container in the same Compose file.
Use KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092.
If you still get errors, then, as the error says, the port is occupied, so use a different one, like 8081:8080. That has nothing to do with the Kafka setup.
This Compose file works fine for me:
version: "2"
services:
zookeeper:
image: docker.io/bitnami/zookeeper:3.8
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: docker.io/bitnami/kafka:3.3
environment:
- KAFKA_BROKER_ID=1
- KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
depends_on:
- zookeeper
kafka_ui:
image: docker.io/provectuslabs/kafka-ui:latest
ports:
- 8080:8080
environment:
- KAFKA_CLUSTERS_0_NAME=local
- KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
Also, try the command netstat -tupan and check that 8080 is not used by any other process.
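For example (netstat flags as in Linux net-tools; ss from iproute2 is an equivalent alternative):
netstat -tupan | grep 8080
ss -ltnp | grep :8080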

Docker: How to connect to a kafka container which is not defined in docker-compose.yml file

I have 3 docker-compose files: one to start Kafka, and the other two for a consumer and a producer. I added external_links to kafka in the other docker-compose files, but I'm still unable to access Kafka from inside those containers. From outside a container I can access it through localhost:9092, but what about inside a Docker container?
# docker-compose1.yml
version: "3.6"
services:
  zookeeper:
    image: 'docker.io/bitnami/zookeeper:3.7'
    container_name: zookeeper
    ports:
      - '2181:2181'
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'docker.io/bitnami/kafka:3'
    container_name: kafka
    ports:
      - '9092:9092'
    volumes:
      - 'kafka_data:/bitnami'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_ADVERTISED_HOST_NAME=localhost
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
    depends_on:
      - zookeeper
volumes:
  zookeeper_data:
    external: true
  kafka_data:
    external: true
# docker-compose2.yml
version: "3.6"
services:
  web:
    hostname: ocp-transmitter
    image: 'ocp/transmitter'
    command: bash -c "bundle install && foreman start"
    ports:
      - '3000:3000'
    volumes:
      - .:/app:cached
    stdin_open: true
    tty: true
    external_links:
      - kafka
First, remove these; they are deprecated:
- KAFKA_ADVERTISED_HOST_NAME=localhost
- KAFKA_ADVERTISED_PORT=9092
Second, read the Bitnami image documentation more carefully: all the Kafka properties start with KAFKA_CFG_. Then read the section about internal/external listeners.
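As a sketch, the listener-related variables from the question would be renamed like this (only the KAFKA_CFG_ prefix changes; the values are kept exactly as posted and may still need tuning for your network):
environment:
  - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
  - KAFKA_CFG_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
  - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
  - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=PLAINTEXT
  - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
  - ALLOW_PLAINTEXT_LISTENER=yes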
The linked answer(s) are correct: Communication between multiple docker-compose projects.
Run docker network create with a name to set up an external bridge network separately from Compose, then add a networks section to each service that should be on that network (Zookeeper, Kafka, and your Kafka clients). Then make sure the network is marked external:
networks:
  example-net:
    external: true
Then you'd use kafka:29092 in your apps, not localhost, and not port 9092.
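Putting it together, a minimal sketch (the network name example-net is illustrative; the web service is from docker-compose2.yml above):
# once, before running either compose project:
docker network create example-net
Then, in each docker-compose.yml, attach the relevant services and declare the network external:
services:
  web:
    image: 'ocp/transmitter'
    networks:
      - example-net
networks:
  example-net:
    external: true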

How do I direct docker-compose to not start a service if its already running?

I have two separate services, in different directories, configured for the same docker network.
Service PUB is a RabbitMQ publisher. Its docker-compose file starts service PUB and RabbitMQ.
Service WRK is a RabbitMQ worker. Its docker-compose file starts service WRK and RabbitMQ.
docker-compose up -d PUB will start PUB and RabbitMQ, but running docker-compose up -d WRK afterwards will fail, as the RabbitMQ port is already allocated: Bind for 0.0.0.0:15672 failed: port is already allocated.
However, docker-compose up -d WRK starts both WRK and RabbitMQ if I haven't already started PUB.
How do I configure the docker-compose.yml files so that if RabbitMQ is already running, it doesn't attempt to start RabbitMQ and just connects to the existing instance?
docker-compose.yml for service PUB:
services:
  PUB:
    image: pub-image
    networks:
      - myNet
    environment:
      RMQ_URI: amqp://guest@rabbitmq:5672//
    ports:
      - 127.0.0.1:5000:5000/tcp
    links:
      - rabbitmq:rabbitmq
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.8.14-management
    networks:
      - myNet
    ports:
      - 5672:5672
      - 15672:15672
networks:
  myNet:
    name: myNet
    driver: bridge
docker-compose.yml for service WRK:
services:
  WRK:
    image: wrk-image
    networks:
      - myNet
    environment:
      RMQ_URI: amqp://guest@rabbitmq:5672//
    links:
      - rabbitmq:rabbitmq
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.8.14-management
    networks:
      - myNet
    ports:
      - 5672:5672
      - 15672:15672
networks:
  myNet:
    name: myNet
    driver: bridge
Because you don't use volumes in these compose files, they can simply be merged:
services:
  WRK:
    image: wrk-image
    networks:
      - myNet
    environment:
      RMQ_URI: amqp://guest@rabbitmq:5672//
    links:
      - rabbitmq:rabbitmq
    depends_on:
      - rabbitmq
  PUB:
    image: pub-image
    networks:
      - myNet
    environment:
      RMQ_URI: amqp://guest@rabbitmq:5672//
    ports:
      - 127.0.0.1:5000:5000/tcp
    links:
      - rabbitmq:rabbitmq
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.8.14-management
    networks:
      - myNet
    ports:
      - 5672:5672
      - 15672:15672
networks:
  myNet:
    name: myNet
    driver: bridge
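With the merged file, Compose only starts the services you name (plus their depends_on dependencies), and it won't recreate a rabbitmq container that is already running. For example:
docker-compose up -d PUB    # starts PUB and rabbitmq
docker-compose up -d WRK    # reuses the running rabbitmq and starts WRK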

How to fix Kafka Docker container from throwing 0.0.0.0/0.0.0.0:2181: Connection refused?

I am trying to set up a Docker Compose environment for Kafka change data capture, and I am encountering this error:
Opening socket connection to server 0.0.0.0/0.0.0.0:2181. Will not attempt to authenticate using SASL (unknown error)
Socket error occurred: 0.0.0.0/0.0.0.0:2181: Connection refused
I have been following this tutorial, https://hevodata.com/learn/kafka-cdc-postgres/, but it runs the docker commands directly, using the link option, rather than a docker-compose.yml file.
I attempted to convert commands like this:
docker run -it --name kafka -p 9092:9092 --link zookeeper:zookeeper debezium/kafka
to the docker-compose.yml file below. However, it appears to be completely ignoring the KAFKA_ZOOKEEPER_CONNECT environment variable, as this is what I see in the log:
Using ZOOKEEPER_CONNECT=0.0.0.0:2181
even though the documentation at https://github.com/debezium/docker-images/tree/master/kafka/1.5 indicates it should work.
When I follow the tutorial using docker run instead of a docker-compose file, it works completely fine: it shows my local computer's IP address with port 2181 instead of 0.0.0.0:2181.
docker-compose.yml:
version: "3.7"
services:
postgres:
image: debezium/postgres:10
container_name: postgres
ports:
- "5000:5432"
environment:
POSTGRES_HOST_AUTH_METHOD: trust
# POSTGRES_USER: db_user
# POSTGRES_PASSWORD: db_password
zookeeper:
image: debezium/zookeeper:1.5
container_name: zookeeper
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
kafka:
image: debezium/kafka:1.5
container_name: kafka
ports:
- "9092:9092"
depends_on:
- zookeeper
environment:
KAFKA_ADVERTISED_HOST_NAME: kafka
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
connect:
image: debezium/connect:1.5
container_name: connect
ports:
- "8083:8083"
environment:
GROUP_ID: 1
CONFIG_STORAGE_TOPIC: my-connect-configs
OFFSET_STORAGE_TOPIC: my-connect-offsets
depends_on:
- postgres
- kafka
- zookeeper
networks:
default:
name: kafkaCDC
zoo.cfg on the Zookeeper container:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/zookeeper/data
dataLogDir=/zookeeper/txns
clientPort=2181
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
I have been looking at this issue for a while now, but I am getting totally lost, especially since so many examples use links.
This is the GitHub post that made me think to use KAFKA_ZOOKEEPER_CONNECT:
https://github.com/wurstmeister/kafka-docker/issues/512#issuecomment-505905161
Part of me feels like something in https://github.com/debezium/docker-images/blob/master/kafka/1.5/docker-entrypoint.sh is ignoring the environment variable, but it is probably just me not understanding something and having a configuration error.
For the debezium/kafka:1.5 image to work in Docker Compose, try passing the following environment variable:
ZOOKEEPER_CONNECT: "zookeeper:2181"
It addressed my problem; a sample docker-compose YAML is below:
version: "3.9"
services:
zookeeper:
image: debezium/zookeeper:1.5
ports:
- "2181:2181"
- "2888:2888"
kafka:
image: debezium/kafka:1.5
ports:
- "9092:9092"
environment:
ZOOKEEPER_CONNECT: "zookeeper:2181"
depends_on:
- zookeeper
Don't prefix your environment variables with KAFKA_.
Here is my working cluster:
version: '2'
services:
  postgres:
    image: debezium/postgres:13-alpine
    container_name: postgres
    hostname: postgres
    environment:
      POSTGRES_USER: nikamooz
      POSTGRES_PASSWORD: nikamooz
    ports:
      - 5432:5432
  zookeeper:
    image: debezium/zookeeper
    container_name: zookeeper
    hostname: zookeeper
    environment:
      ZOOKEEPER_SERVER_ID: 1
    ports:
      - 2182:2181
      - 2888:2888
      - 3888:3888
    volumes:
      - ./data/zoo/data:/zookeeper/data
      - ./data/zoo/log:/zookeeper/txns
  kafka:
    image: debezium/kafka
    container_name: kafka
    hostname: kafka
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      ZOOKEEPER_CONNECT: zookeeper:2181
      BOOTSTRAP_SERVERS: kafka:9092
    volumes:
      - ./data/kafka/data:/kafka/data
      - ./data/kafka/logs:/kafka/logs
  connect:
    image: debezium/connect
    container_name: connect
    hostname: connect
    depends_on:
      - kafka
      - postgres
    ports:
      - 8083:8083
    environment:
      GROUP_ID: holding_group
      CONFIG_STORAGE_TOPIC: holding_storage_topic
      OFFSET_STORAGE_TOPIC: holding_offset_topic
      BOOTSTRAP_SERVERS: kafka:9092
I was able to fix it by setting the Zookeeper connect address to the Docker container's IP address.
To get the IP address, run:
docker inspect <container-name> --format='{{ .NetworkSettings.IPAddress }}'
and then start Kafka as follows:
docker run --name some-kafka -p 9092:9092 -e KAFKA_ZOOKEEPER_CONNECT=<zookeeper-ip>:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka
This GitHub comment helped me find out what I was missing.

connection refused from host to docker container

I'm trying to run a web app within a Docker container. This is my docker-compose.yml:
version: '2'
services:
  web-app:
    image: org/webapp
    container_name: web-app
    ports:
      - "8080:8080"
    expose:
      - "8080"
    volumes:
      - ./code/source:/source
    command: tail -f /dev/null
  postgres:
    image: postgres:9.5
    container_name: local-postgres9.5
    volumes_from:
      - postgres-data
  postgres-data:
    image: busybox
    container_name: postgres9.5-data
    volumes:
      - /var/lib/postgresql/data
When I run
docker-compose up -d
I'm able to connect to the web app from within the container with a curl command. But when I try to connect from the host, I get a connection refused error.
