Docker Compose network: how to connect to another container?

I'm running Zabbix with Docker Compose, using multiple containers.
I have an issue connecting two containers to each other (see: docker containerized zabbix server monitoring same host running the zabbix server: connection refused).
So I'm wondering how connections between containers work in Docker Compose: do I need to use links in docker-compose.yml? Do I need to specify an IP address under networks in docker-compose.yml and then use that IP address in my apps?
In particular, if I want to connect container A, named containerA in docker-compose.yml, to container B, named containerB in docker-compose.yml, can I use the container name as it appears in docker ps -a? (That container name is often not the same as the service name in docker-compose.yml.) Or should I use the service name as it appears in docker-compose.yml? Or should I use links with service:alias so I can use the alias in my app?
I have tried using links, but I ran into a circular-link problem because I was linking the two containers to each other.
This is the YAML file (notice that the network alias is the same as the first service name...):
version: '3.5'
services:
zabbix-server:
container_name: zabbixserver
image: zabbix/zabbix-server-pgsql:centos-6.0-latest
ports:
- "10051:10051"
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
- ./zbx_env/usr/lib/zabbix/alertscripts:/usr/lib/zabbix/alertscripts:ro
- ./zbx_env/usr/lib/zabbix/externalscripts:/usr/lib/zabbix/externalscripts:ro
- ./zbx_env/var/lib/zabbix/export:/var/lib/zabbix/export:rw
- ./zbx_env/var/lib/zabbix/modules:/var/lib/zabbix/modules:rw
- ./zbx_env/var/lib/zabbix/enc:/var/lib/zabbix/enc:ro
- ./zbx_env/var/lib/zabbix/ssh_keys:/var/lib/zabbix/ssh_keys:ro
- ./zbx_env/var/lib/zabbix/mibs:/var/lib/zabbix/mibs:ro
- snmptraps:/var/lib/zabbix/snmptraps:rw
# - ./env_vars/.ZBX_DB_CA_FILE:/run/secrets/root-ca.pem:ro
# - ./env_vars/.ZBX_DB_CERT_FILE:/run/secrets/client-cert.pem:ro
# - ./env_vars/.ZBX_DB_KEY_FILE:/run/secrets/client-key.pem:ro
ulimits:
nproc: 65535
nofile:
soft: 20000
hard: 40000
deploy:
resources:
limits:
cpus: '0.70'
memory: 1G
reservations:
cpus: '0.5'
memory: 512M
env_file:
- ./env_vars/.env_db_pgsql
- ./env_vars/.env_srv
secrets:
- POSTGRES_USER
- POSTGRES_PASSWORD
depends_on:
- postgres-server
networks:
zbx_net_backend:
aliases:
- zabbix-server
- zabbix-server-pgsql
- zabbix-server-centos-pgsql
- zabbix-server-pgsql-centos
zbx_net_frontend:
# devices:
# - "/dev/ttyUSB0:/dev/ttyUSB0"
stop_grace_period: 30s
sysctls:
- net.ipv4.ip_local_port_range=1024 65000
- net.ipv4.conf.all.accept_redirects=0
- net.ipv4.conf.all.secure_redirects=0
- net.ipv4.conf.all.send_redirects=0
labels:
com.zabbix.description: "Zabbix server with PostgreSQL database support"
com.zabbix.company: "Zabbix LLC"
com.zabbix.component: "zabbix-server"
com.zabbix.dbtype: "pgsql"
com.zabbix.os: "centos"
zabbix-agent:
image: zabbix/zabbix-agent:centos-6.0-latest
ports:
- "10050:10050"
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
- ./zbx_env/etc/zabbix/zabbix_agentd.d:/etc/zabbix/zabbix_agentd.d:ro
- ./zbx_env/var/lib/zabbix/modules:/var/lib/zabbix/modules:ro
- ./zbx_env/var/lib/zabbix/enc:/var/lib/zabbix/enc:ro
- ./zbx_env/var/lib/zabbix/ssh_keys:/var/lib/zabbix/ssh_keys:ro
deploy:
resources:
limits:
cpus: '0.2'
memory: 128M
reservations:
cpus: '0.1'
memory: 64M
mode: global
links:
- zabbix-server:zabbix-server
env_file:
- ./env_vars/.env_agent
privileged: true
pid: "host"
networks:
zbx_net_backend:
aliases:
- zabbix-agent
- zabbix-agent-passive
- zabbix-agent-centos
stop_grace_period: 5s
labels:
com.zabbix.description: "Zabbix agent"
com.zabbix.company: "Zabbix LLC"
com.zabbix.component: "zabbix-agentd"
com.zabbix.os: "centos"

Use the other container's Compose service name and the port the process inside that container is listening on. In your example, assuming the container-side numbers in the two ports: mappings are both correct, both containers should be able to reach zabbix-server:10051 and zabbix-agent:10050. Also see Networking in Compose in the Docker documentation.
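For example, a minimal sketch of the agent pointing at the server purely by its Compose service name (assuming the standard ZBX_SERVER_HOST variable that the Zabbix agent images read, which in your setup presumably lives in ./env_vars/.env_agent) might look like:
services:
  zabbix-server:
    image: zabbix/zabbix-server-pgsql:centos-6.0-latest
    # ...rest of the service as in your file...
  zabbix-agent:
    image: zabbix/zabbix-agent:centos-6.0-latest
    environment:
      - ZBX_SERVER_HOST=zabbix-server   # the service name resolves via Docker's internal DNS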
Do I need to use links in docker-compose.yml?
The links: option is obsolete and you should delete it if present. expose: is similarly only used by the obsolete first-generation Docker networking, and there are no consequences to deleting it from your Compose file.
Do I need to specify an IP address in networks in docker-compose.yml?
No, Docker can assign container-private IP addresses on its own. These are an internal implementation detail of Docker. It's useful to know they exist (in particular, since each container has a private IP address, multiple containers can each use the same port internally) but you never need to directly specify them or look them up.
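As a quick illustration of that point, here is a hypothetical sketch (not from your file) where two services both listen on port 8080 inside their containers, and only the published host ports differ:
services:
  app-one:
    image: example/app        # hypothetical image
    ports:
      - "8001:8080"           # host port 8001 -> container port 8080
  app-two:
    image: example/app
    ports:
      - "8002:8080"           # different host port, same container-side port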
You rarely, if ever, need to specify networks: { aliases: } or to override a container's generated container_name:. The docker ps names won't match what's in the Compose file, but that's not a practical problem. If you need to directly manage an individual container you can, for example, run docker-compose stop zabbix-server, and as previously described you can use the Compose service names for container-to-container communication.
In fact, for most practical cases, you can delete all of the networks: blocks entirely. Compose provides a network named default for you, and you don't usually need to configure anything.
So, in the file you originally show, I'd suggest deleting all of the networks:, links:, and container_name: options. The ports: are required only if you want to call into these containers from outside of Docker. Even after deleting these, you can still use the host names and ports shown at the top of this answer.
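Pared down that way, a sketch of your two services (volumes, resource limits, sysctls, and labels omitted for brevity) could look like:
version: '3.5'
services:
  zabbix-server:
    image: zabbix/zabbix-server-pgsql:centos-6.0-latest
    ports:
      - "10051:10051"     # only needed for access from outside Docker
    env_file:
      - ./env_vars/.env_db_pgsql
      - ./env_vars/.env_srv
    depends_on:
      - postgres-server
  zabbix-agent:
    image: zabbix/zabbix-agent:centos-6.0-latest
    ports:
      - "10050:10050"     # only needed for access from outside Docker
    env_file:
      - ./env_vars/.env_agent
    privileged: true
    pid: "host"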

Related

Zabbix agent2 inside Docker for monitoring other containers

I have a VM on which Docker is installed and all services already run in containers. Zabbix is a separate VM, and Zabbix itself also runs in containers.
I tried to bring up a Docker container with Zabbix agent2 so that it monitors the containers on the first server, but Zabbix says that docker.service is not active.
All I found was that you need to "enable privileged mode and everything will work." But that doesn't work either... (screenshot compilation)
docker-compose file (vm1):
version: '3.5'
services:
zabbix-agent:
image: ${AGENT_IMAGE}
# profiles:
# - full
# - all
ports:
- "10050:10050"
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
- /var/run/docker.sock:/var/run/docker.sock
- ${ZBX_FOLDER}/etc/zabbix/zabbix_agentd.d:/etc/zabbix/zabbix_agentd.d:ro
- ${ZBX_FOLDER}/var/lib/zabbix/modules:/var/lib/zabbix/modules:ro
- ${ZBX_FOLDER}/var/lib/zabbix/enc:/var/lib/zabbix/enc:ro
- ${ZBX_FOLDER}/var/lib/zabbix/ssh_keys:/var/lib/zabbix/ssh_keys:ro
deploy:
resources:
limits:
cpus: '0.2'
memory: 128M
reservations:
cpus: '0.1'
memory: 64M
mode: global
env_file:
- ${ZBX_ENVFILES}/.env_agent
privileged: true
pid: "host"
networks:
my_net:
aliases:
- zabbix-agent
- zabbix-agent-passive
- zabbix-agent-alpine
stop_grace_period: 5s
labels:
com.zabbix.description: "Zabbix agent"
com.zabbix.company: "Zabbix LLC"
com.zabbix.component: "zabbix-agentd"
com.zabbix.os: "alpine"
networks:
my_net:
name: my_network
external: true
The .env file:
ZBX_FOLDER=/path/to/zabbix/data
ZBX_ENVFILES=/path/to/envfiles/zabbix_env
AGENT_IMAGE=zabbix/zabbix-agent2:alpine-6.2-latest
COMPOSE_PROJECT_NAME=zabbix
I managed to get this to work by creating a zabbix user on the Docker host and adding it to the root and docker groups, then creating a docker group inside the container and adding zabbix to it.
I used the same UID and GID on both sides.
A Docker restart may be needed too.
You already made docker.sock available inside the container, and it's privileged, so it should work.
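If changing users and groups on the host is undesirable, another sketch (an assumption on my part, not from the answers above) is to add the container's zabbix user to the Docker socket's group with group_add, which recent Compose versions support. The GID 998 below is only an example; use the actual GID of the docker group on your host (getent group docker):
services:
  zabbix-agent:
    image: zabbix/zabbix-agent2:alpine-6.2-latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    group_add:
      - "998"   # assumed GID of the host's docker group
    pid: "host"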

ERROR: Pool overlaps with other one on this address space

I'm trying to implement this tutorial. The docker-compose content is this:
# WARNING: Do not deploy this tutorial configuration directly to a production environment
#
# The tutorial docker-compose files have not been written for production deployment and will not
# scale. A proper architecture has been sacrificed to keep the narrative focused on the learning
# goals, they are just used to deploy everything onto a single Docker machine. All FIWARE components
# are running at full debug and extra ports have been exposed to allow for direct calls to services.
# They also contain various obvious security flaws - passwords in plain text, no load balancing,
# no use of HTTPS and so on.
#
# This is all to avoid the need of multiple machines, generating certificates, encrypting secrets
# and so on, purely so that a single docker-compose file can be read as an example to build on,
# not use directly.
#
# When deploying to a production environment, please refer to the Helm Repository
# for FIWARE Components in order to scale up to a proper architecture:
#
# see: https://github.com/FIWARE/helm-charts/
#
version: "3.5"
services:
# Orion is the context broker
orion:
image: fiware/orion:latest
hostname: orion
container_name: fiware-orion
depends_on:
- mongo-db
networks:
- default
expose:
- "1026"
ports:
- "1026:1026"
command: -dbhost mongo-db -logLevel DEBUG
healthcheck:
test: curl --fail -s http://orion:1026/version || exit 1
interval: 5s
# Tutorial displays a web app to manipulate the context directly
tutorial:
image: fiware/tutorials.context-provider
hostname: iot-sensors
container_name: fiware-tutorial
networks:
- default
expose:
- "3000"
- "3001"
ports:
- "3000:3000"
- "3001:3001"
environment:
- "DEBUG=tutorial:*"
- "PORT=3000"
- "IOTA_HTTP_HOST=iot-agent"
- "IOTA_HTTP_PORT=7896"
- "DUMMY_DEVICES_PORT=3001"
- "DUMMY_DEVICES_API_KEY=4jggokgpepnvsb2uv4s40d59ov"
- "DUMMY_DEVICES_TRANSPORT=HTTP"
iot-agent:
image: fiware/iotagent-ul:latest
hostname: iot-agent
container_name: fiware-iot-agent
depends_on:
- mongo-db
networks:
- default
expose:
- "4041"
- "7896"
ports:
- "4041:4041"
- "7896:7896"
environment:
- "IOTA_CB_HOST=orion"
- "IOTA_CB_PORT=1026"
- "IOTA_NORTH_PORT=4041"
- "IOTA_REGISTRY_TYPE=mongodb"
- "IOTA_LOG_LEVEL=DEBUG"
- "IOTA_TIMESTAMP=true"
- "IOTA_MONGO_HOST=mongo-db"
- "IOTA_MONGO_PORT=27017"
- "IOTA_MONGO_DB=iotagentul"
- "IOTA_HTTP_PORT=7896"
- "IOTA_PROVIDER_URL=http://iot-agent:4041"
# Database
mongo-db:
image: mongo:3.6
hostname: mongo-db
container_name: db-mongo
expose:
- "27017"
ports:
- "27017:27017"
networks:
- default
command: --bind_ip_all --smallfiles
volumes:
- mongo-db:/data
healthcheck:
test: |
host=`hostname --ip-address || echo '127.0.0.1'`;
mongo --quiet $host/test --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)' && echo 0 || echo 1
interval: 5s
networks:
default:
ipam:
config:
- subnet: 172.18.1.0/24
volumes:
mongo-db: ~
But when I run Docker Compose with the command docker-compose up -d, I get this error:
WARNING: The host variable is not set. Defaulting to a blank string.
Creating network "fiware_default" with the default driver
ERROR: Pool overlaps with other one on this address space
I also see these networks when running docker network ls:
NETWORK ID     NAME            DRIVER   SCOPE
78403834b9bd   bridge          bridge   local
1dc5b7d0534b   hadig_default   bridge   local
4162244c37b0   host            host     local
ac5a94a89bde   none            null     local
I see no conflict with the name "fiware_default". Where is the problem?
The "pool" the error message refers to is the 172.18.1.0/24 CIDR block that file manually specifies. If something else on your system is using that network space, it won't start up. (Docker might have assigned another Compose file's network to 172.18.0.0/16, for example.)
You don't usually need to manually specify IP addresses in Docker at all, and so you should remove that ipam: block. Having done that, you're telling Compose to configure the default network with default settings, and you can actually remove the entire networks: block at the end of the file.
The exception to this is if your host network environment is using some of the same IP address blocks, and then you do potentially need an override like this. If you run ifconfig or a similar command from the host (or look at your host's network settings from a desktop application) and your host or a VPN is using a 172.18.1.* address, you'll also get this message. In that case, change the network to something else; if you only need a /24 (254 addresses) then setting subnet: 192.168.123.0/24 (where "123" can be any number between 1 and 254) should get you past this.
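If you do need the override, a minimal sketch of the adjusted networks: block at the end of the file would be (192.168.123.0/24 is just an example range; pick anything unused on your host):
networks:
  default:
    ipam:
      config:
        - subnet: 192.168.123.0/24   # any private /24 not already in use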

How can I connect from `project` to `mysql` container in docker swarm?

I am trying to deploy a stack with Docker Swarm using the docker-compose.yaml file below, via the following command:
docker stack deploy --with-registry-auth -c docker-compose.yaml project
version: "3.9"
services:
mysql:
image: mysql:8.0
deploy:
replicas: 1
volumes:
- mysql_data:/var/lib/mysql
networks:
- internal
ports:
- 3306:3306
environment:
MYSQL_ROOT_HOST: '%'
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: project_production
MYSQL_USER: username
MYSQL_PASSWORD: password
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
environment:
- node.name=es01
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es02,es03
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- internal
website:
image: registry.gitlab.com/project/project-website:latest
networks:
- internal
deploy:
replicas: 1
ports:
- 3000:3000
environment:
- RAILS_ENV=production
- MYSQL_HOST=mysql
- ES_HOST=http://es01
- project_DATABASE_USERNAME=root
- project_DATABASE_PASSWORD=root
depends_on:
- es01
- mysql
volumes:
data01:
driver: local
data02:
driver: local
data03:
driver: local
mysql_data:
networks:
internal:
external: true
name: project
Before deploying the stack, I also created the network for the project with the following command:
docker network create -d overlay project
But when I check the project's logs with the docker logs command, I see the following error stopping my project from starting:
Mysql2::Error: Host '10.0.2.202' is not allowed to connect to this MySQL server
I followed the documentation exactly; I am not sure what is wrong with the settings I have come up with!
Question:
How can I connect from project to mysql container in docker swarm?
Based on the documentation, Docker Swarm automatically creates the overlay network for you. So I think you don't need to create an external network by default, unless you have specific needs:
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
an overlay network called ingress, which handles the control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
As Chris also mentioned in the comments, the DB credentials don't match.
OPTIONAL: MYSQL_ROOT_HOST is only necessary if you want to connect as the root user, which is not recommended in production environments. There's also no need to publish the port to the host machine, since the database service will only be used from inside the cluster. So if you still want to use the root user, you can set the variable to allow connections only from inside the cluster, e.g. MYSQL_ROOT_HOST=10.% (MySQL uses % as its host wildcard).
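Putting that together, a sketch of the relevant parts (assuming the Rails app really reads project_DATABASE_USERNAME and project_DATABASE_PASSWORD for its MySQL login) is to make both sides agree on the non-root account and drop the published port:
services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: root        # root stays usable only inside the container
      MYSQL_DATABASE: project_production
      MYSQL_USER: username
      MYSQL_PASSWORD: password
  website:
    image: registry.gitlab.com/project/project-website:latest
    environment:
      - MYSQL_HOST=mysql
      - project_DATABASE_USERNAME=username
      - project_DATABASE_PASSWORD=password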

How to get redis address from docker compose?

I'm trying to pass the Redis URL to a Docker container, but so far I couldn't get it to work. I did a little research and none of the answers worked for me.
version: '3.2'
services:
redis:
image: 'bitnami/redis:latest'
container_name: redis
hostname: redis
expose:
- 6379
links:
- api
api:
image: tufanmeric/api:latest
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- proxy
environment:
- REDIS_URL=redis
depends_on:
- redis
deploy:
mode: global
labels:
- 'traefik.port=3002'
- 'traefik.frontend.rule=PathPrefix:/'
- 'traefik.frontend.rule=Host:api.example.com'
- 'traefik.docker.network=proxy'
networks:
proxy:
Error: Redis connection to redis failed - connect ENOENT redis
You can only communicate between containers on the same Docker network. Docker Compose creates a default network for you, and absent any specific declaration your redis container is on that network. But you also declare a separate proxy network, and only attach the api container to that other network.
The single simplest solution to this is to delete all of the networks: blocks everywhere and just use the default network Docker Compose creates for you. You may need to format the REDIS_URL variable as an actual URL, maybe like redis://redis:6379.
If you have a non-technical requirement to have separate networks, add - default to the networks listing for the api container.
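That would look something like the sketch below (network names taken from your file, everything else unchanged):
services:
  api:
    image: tufanmeric/api:latest
    networks:
      - default
      - proxy
networks:
  proxy: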
You have a number of other settings in your docker-compose.yml that aren't especially useful. expose: does almost nothing at all, and is usually also provided in a Dockerfile. links: is an outdated way to make cross-container calls, and as you've declared it, it would make calls from Redis to your API server. hostname: has no effect outside the container itself and is usually totally unnecessary. container_name: does have some visible effects, but usually the container name Docker Compose picks is just fine.
This would leave you with:
version: '3.2'
services:
redis:
image: 'bitnami/redis:latest'
api:
image: tufanmeric/api:latest
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
- REDIS_URL=redis://redis:6379
depends_on:
- redis
deploy:
mode: global
labels:
- 'traefik.port=3002'
- 'traefik.frontend.rule=PathPrefix:/'
- 'traefik.frontend.rule=Host:api.example.com'
- 'traefik.docker.network=default'

Docker Compose: setting a container's hostname to the hostname of the node where it is running

Right now, I have a two-node swarm cluster:
$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
yey1njv9uz8adf33m7oz0h80f *   redis2     Ready    Active         Leader
lbo91v2l15h24isfd5jegoxu1     redis3     Ready    Active
This is the docker-compose.yml file:
version: "3"
services:
daggr:
# replace username/repo:tag with your name and image details
image: daggr
hostname: examplehostname
deploy:
replicas: 1
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "4000:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
redis:
image: redis
networks:
- webnet
networks:
webnet:
As you see, I am explicitly setting the hostname for the daggr service.
The daggr container basically runs a Python Tornado web server:
import socket
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write('Hello from daggr running on %s\n' % socket.gethostname())
I've tested it out as below:
$ curl redis2:4000
Hello from daggr running on examplehostname
Now, instead of statically setting the hostname, I want it to dynamically match the hostname of the node where the container is running, i.e. if the daggr container is running on redis2 it should say redis2, and on redis3 it should say redis3.
How can I specify that from the docker-compose.yml file?
If you are running at least Docker 17.10 then you can use something like this:
services:
daggr:
hostname: '{{.Node.Hostname}}'
See this for more information.
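Applied to the question's file, that looks like the sketch below. Note it only takes effect when deployed with docker stack deploy, because the {{.Node.Hostname}} template is resolved by Swarm when it schedules each task:
services:
  daggr:
    image: daggr
    hostname: '{{.Node.Hostname}}'   # filled in per node by Swarm
    deploy:
      replicas: 1
    ports:
      - "4000:80"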
The selected answer above did not work for me (presumably since I am not running docker/docker-compose in swarm mode).
I was able to set the container hostname for my reverse proxy to match the docker host FQDN by doing the following:
version: '3'
services:
  reverse-proxy:
    hostname: $HOSTNAME
This allowed me to easily configure Nginx to pick the correct server cert / key pair prior to startup.
I tried the answers above: Constantin Galbenu's solution only works on a swarm setup, and brandonsimpkins's lacks the HOSTNAME definition.
A workaround is to set an environment variable to your hostname:
export HOSTNAME="$(cat /etc/hostname)"
my-container:
hostname: ${HOSTNAME}
If, like me, your hostname is the same as your username, skip step 1 and only do:
my-container:
hostname: ${USER}
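For reference, a complete minimal sketch of this environment-variable approach (the service name and image are placeholders) would be:
# on the host, before starting: export HOSTNAME="$(cat /etc/hostname)"
version: '3'
services:
  my-container:
    image: nginx                # placeholder image
    hostname: ${HOSTNAME}       # substituted by docker-compose from the shell environment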
