I have a VM on which Docker is installed and all services already run in containers. Zabbix is on a separate VM, and Zabbix itself also runs in containers.
I tried to bring up a Docker container with Zabbix agent 2 so that it monitors the containers on the first server, but Zabbix reports that docker.service is not active.
All I found was advice to "enable privileged mode and everything will work", but that combination doesn't work for me (screenshot compilation).
docker-compose file (vm1):
version: '3.5'
services:
  zabbix-agent:
    image: ${AGENT_IMAGE}
    # profiles:
    #   - full
    #   - all
    ports:
      - "10050:10050"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - ${ZBX_FOLDER}/etc/zabbix/zabbix_agentd.d:/etc/zabbix/zabbix_agentd.d:ro
      - ${ZBX_FOLDER}/var/lib/zabbix/modules:/var/lib/zabbix/modules:ro
      - ${ZBX_FOLDER}/var/lib/zabbix/enc:/var/lib/zabbix/enc:ro
      - ${ZBX_FOLDER}/var/lib/zabbix/ssh_keys:/var/lib/zabbix/ssh_keys:ro
    deploy:
      resources:
        limits:
          cpus: '0.2'
          memory: 128M
        reservations:
          cpus: '0.1'
          memory: 64M
      mode: global
    env_file:
      - ${ZBX_ENVFILES}/.env_agent
    privileged: true
    pid: "host"
    networks:
      my_net:
        aliases:
          - zabbix-agent
          - zabbix-agent-passive
          - zabbix-agent-alpine
    stop_grace_period: 5s
    labels:
      com.zabbix.description: "Zabbix agent"
      com.zabbix.company: "Zabbix LLC"
      com.zabbix.component: "zabbix-agentd"
      com.zabbix.os: "alpine"
networks:
  my_net:
    name: my_network
    external: true

.env file:

ZBX_FOLDER=/path/to/zabbix/data
ZBX_ENVFILES=/path/to/envfiles/zabbix_env
AGENT_IMAGE=zabbix/zabbix-agent2:alpine-6.2-latest
COMPOSE_PROJECT_NAME=zabbix
I managed to get this to work by creating a zabbix user on the Docker host and adding it to the root and docker groups, then creating a docker group inside the container and adding zabbix to it.
I used the same UID and GID on both sides (sketched below).
A docker restart may be needed too.
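A sketch of those steps (the GID 998 and the container name are placeholders; the Zabbix images create the zabbix user themselves, so check its UID/GID first with docker exec <agent-container> id zabbix):

# On the host: look up the docker group's GID (varies per system; 998 is an example)
getent group docker                        # e.g. docker:x:998:

# On the host: create a zabbix user with the same UID/GID as inside the container,
# then add it to the root and docker groups
sudo useradd -u 1997 zabbix                # 1997 is an assumed UID -- verify first
sudo usermod -aG root,docker zabbix

# Inside the (Alpine-based) agent container: create a docker group with the
# host's GID and add zabbix to it
docker exec -u root <agent-container> addgroup -g 998 docker
docker exec -u root <agent-container> addgroup zabbix docker

# Restart the container so the new group membership takes effect
docker restart <agent-container>

Note that changes made with docker exec are lost when the container is recreated, so making this permanent would need a custom image or an entrypoint script.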
You already made docker.sock available inside the container and it's privileged so it should work.
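One quick way to verify that the agent can actually reach the Docker socket is the agent's built-in test mode (a sketch; docker.info is one of the agent 2 Docker plugin keys, and the container name is a placeholder):

docker exec <agent-container> zabbix_agent2 -t docker.info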
Related
I'm running Zabbix with docker-compose, with multiple containers.
I have an issue with connecting two containers to each other (see: docker containerized zabbix server monitoring same host running the zabbix server : connection refused).
So I'm wondering how connections between containers work in docker-compose: do I need to use links in the docker-compose.yml? Do I need to specify an IP address in networks in docker-compose.yml and then use that IP address in my apps?
In particular, if I want to connect from container A, named containerA in docker-compose.yml, to container B, named containerB in docker-compose.yml, can I use the container name as it appears in docker ps -a? (That name is often not the same as the name in docker-compose.yml.) Or should I use the service name as it appears in docker-compose.yml? Or should I use links service:alias so I can use the alias in my app?
I have tried using links, but I ran into a circular-link problem since I was linking the two containers to each other.
This is the yml file (notice the network alias is the same as the first service name...):
version: '3.5'
services:
  zabbix-server:
    container_name: zabbixserver
    image: zabbix/zabbix-server-pgsql:centos-6.0-latest
    ports:
      - "10051:10051"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./zbx_env/usr/lib/zabbix/alertscripts:/usr/lib/zabbix/alertscripts:ro
      - ./zbx_env/usr/lib/zabbix/externalscripts:/usr/lib/zabbix/externalscripts:ro
      - ./zbx_env/var/lib/zabbix/export:/var/lib/zabbix/export:rw
      - ./zbx_env/var/lib/zabbix/modules:/var/lib/zabbix/modules:rw
      - ./zbx_env/var/lib/zabbix/enc:/var/lib/zabbix/enc:ro
      - ./zbx_env/var/lib/zabbix/ssh_keys:/var/lib/zabbix/ssh_keys:ro
      - ./zbx_env/var/lib/zabbix/mibs:/var/lib/zabbix/mibs:ro
      - snmptraps:/var/lib/zabbix/snmptraps:rw
      # - ./env_vars/.ZBX_DB_CA_FILE:/run/secrets/root-ca.pem:ro
      # - ./env_vars/.ZBX_DB_CERT_FILE:/run/secrets/client-cert.pem:ro
      # - ./env_vars/.ZBX_DB_KEY_FILE:/run/secrets/client-key.pem:ro
    ulimits:
      nproc: 65535
      nofile:
        soft: 20000
        hard: 40000
    deploy:
      resources:
        limits:
          cpus: '0.70'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M
    env_file:
      - ./env_vars/.env_db_pgsql
      - ./env_vars/.env_srv
    secrets:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
    depends_on:
      - postgres-server
    networks:
      zbx_net_backend:
        aliases:
          - zabbix-server
          - zabbix-server-pgsql
          - zabbix-server-centos-pgsql
          - zabbix-server-pgsql-centos
      zbx_net_frontend:
    # devices:
    #   - "/dev/ttyUSB0:/dev/ttyUSB0"
    stop_grace_period: 30s
    sysctls:
      - net.ipv4.ip_local_port_range=1024 65000
      - net.ipv4.conf.all.accept_redirects=0
      - net.ipv4.conf.all.secure_redirects=0
      - net.ipv4.conf.all.send_redirects=0
    labels:
      com.zabbix.description: "Zabbix server with PostgreSQL database support"
      com.zabbix.company: "Zabbix LLC"
      com.zabbix.component: "zabbix-server"
      com.zabbix.dbtype: "pgsql"
      com.zabbix.os: "centos"
  zabbix-agent:
    image: zabbix/zabbix-agent:centos-6.0-latest
    ports:
      - "10050:10050"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./zbx_env/etc/zabbix/zabbix_agentd.d:/etc/zabbix/zabbix_agentd.d:ro
      - ./zbx_env/var/lib/zabbix/modules:/var/lib/zabbix/modules:ro
      - ./zbx_env/var/lib/zabbix/enc:/var/lib/zabbix/enc:ro
      - ./zbx_env/var/lib/zabbix/ssh_keys:/var/lib/zabbix/ssh_keys:ro
    deploy:
      resources:
        limits:
          cpus: '0.2'
          memory: 128M
        reservations:
          cpus: '0.1'
          memory: 64M
      mode: global
    links:
      - zabbix-server:zabbix-server
    env_file:
      - ./env_vars/.env_agent
    privileged: true
    pid: "host"
    networks:
      zbx_net_backend:
        aliases:
          - zabbix-agent
          - zabbix-agent-passive
          - zabbix-agent-centos
    stop_grace_period: 5s
    labels:
      com.zabbix.description: "Zabbix agent"
      com.zabbix.company: "Zabbix LLC"
      com.zabbix.component: "zabbix-agentd"
      com.zabbix.os: "centos"
Use the other container's Compose service name and the port the process inside that container is listening on. In your example, assuming the second (container-side) number in each ports: mapping is correct, both containers should be able to reach zabbix-server:10051 and zabbix-agent:10050. Also see Networking in Compose in the Docker documentation.
Do I need to use links in docker-compose.yml?
The links: option is obsolete and you should delete it if present. expose: is similarly only used by the obsolete first-generation Docker networking, and there are no consequences to deleting it from your Compose file.
Do I need to specify an IP address in networks in docker-compose.yml?
No, Docker can assign container-private IP addresses on its own. These are an internal implementation detail of Docker. It's useful to know they exist (in particular, since each container has a private IP address, multiple containers can each use the same port internally) but you never need to directly specify them or look them up.
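For instance (a sketch with hypothetical service names), two services can both listen on port 8080 internally as long as their published host ports differ:

services:
  app1:
    image: example/app1   # hypothetical image
    ports:
      - "8081:8080"       # host port 8081 -> container port 8080
  app2:
    image: example/app2   # hypothetical image
    ports:
      - "8082:8080"       # host port 8082 -> the same container port, no conflict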
You rarely if ever need to specify networks: { aliases: } or to override a container's generated container_name:. The docker ps names won't match what's in the Compose file but that's not a practical problem. If you need to directly manage an individual container you can e.g. docker-compose stop zabbix-server, and as previously described you can use the Compose service names for container-to-container communication.
In fact, for most practical cases, you can delete all of the networks: blocks entirely. Compose provides a network named default for you, and you don't usually need to configure anything.
So, in the file you originally show, I'd suggest deleting all of the networks:, links:, and container_name: options. The ports: are required only if you want to call into these containers from outside of Docker. Even after deleting all of these, you can still use the host names and ports shown initially.
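A trimmed sketch of the same file under those suggestions (volumes, env_file, secrets, and deploy settings elided for brevity):

version: '3.5'
services:
  zabbix-server:
    image: zabbix/zabbix-server-pgsql:centos-6.0-latest
    ports:              # only needed for access from outside Docker
      - "10051:10051"
  zabbix-agent:
    image: zabbix/zabbix-agent:centos-6.0-latest
    ports:
      - "10050:10050"
# no networks:, links:, or container_name: -- on the automatic "default" network
# the containers reach each other as zabbix-server:10051 and zabbix-agent:10050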
I am trying to deploy a stack to Docker Swarm with the docker-compose.yaml file below, via the command:
docker stack deploy --with-registry-auth -c docker-compose.yaml project
version: "3.9"
services:
mysql:
image: mysql:8.0
deploy:
replicas: 1
volumes:
- mysql_data:/var/lib/mysql
networks:
- internal
ports:
- 3306:3306
environment:
MYSQL_ROOT_HOST: '%'
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: project_production
MYSQL_USER: username
MYSQL_PASSWORD: password
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
environment:
- node.name=es01
- cluster.name=es-docker-cluster
- discovery.seed_hosts=es02,es03
- cluster.initial_master_nodes=es01,es02,es03
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- internal
website:
image: registry.gitlab.com/project/project-website:latest
networks:
- internal
deploy:
replicas: 1
ports:
- 3000:3000
environment:
- RAILS_ENV=production
- MYSQL_HOST=mysql
- ES_HOST=http://es01
- project_DATABASE_USERNAME=root
- project_DATABASE_PASSWORD=root
depends_on:
- es01
- mysql
volumes:
data01:
driver: local
data02:
driver: local
data03:
driver: local
mysql_data:
networks:
internal:
external: true
name: project
Before deploying the stack, I also created the network for the project with the following command:
docker network create -d overlay project
But when I check the logs with the docker logs command, I see the following error that stops my project from starting:
Mysql2::Error: Host '10.0.2.202' is not allowed to connect to this MySQL server
I followed the documentation exactly, so I am not sure what is wrong with the settings I came up with!
Question:
How can I connect from the project container to the mysql container in docker swarm?
Based on the documentation, Docker Swarm automatically creates the overlay network for you. So I think you don't need to create an external network by default, unless you have specific needs:
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
an overlay network called ingress, which handles the control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
As Chris also mentioned in the comments, the DB credentials don't match: the website service connects as root/root, while the mysql service defines MYSQL_USER=username and MYSQL_PASSWORD=password.
OPTIONAL: MYSQL_ROOT_HOST is only necessary if you want to connect as the root user, which is not recommended in production environments. There is also no need to expose the port to the host machine, since the database service will only be used from inside the cluster. So if you still want to use the root user, you can set the variable to allow connections only from inside the cluster, e.g. MYSQL_ROOT_HOST=10.*.*.*.
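A minimal sketch of the credentials fix on the website service, assuming the app should connect as the non-root user that the mysql service already defines:

  website:
    environment:
      - RAILS_ENV=production
      - MYSQL_HOST=mysql
      - ES_HOST=http://es01
      - project_DATABASE_USERNAME=username   # must match MYSQL_USER on the mysql service
      - project_DATABASE_PASSWORD=password   # must match MYSQL_PASSWORD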
I need to make an FTP connection to a host on the local 192.168... network, and a connection to a mongo container.
Docker in swarm mode blocks network_mode: host (and I can't see the remote FTP host inside the container).
The docker stack docs describe --publish mode=host,target=80,published=8080, but I can't find out how to write that in a docker-compose file.
My docker-compose.yml file
version: '3'
services:
  node:
    image: tgbot-test_node_1
    build:
      context: ..
    env_file: .env.test
    network_mode: host
    links:
      - mongo # works
    depends_on:
      - mongo
    deploy:
  mongo:
    image: mongo
    network_mode: "bridge"
    restart: on-failure
    ports:
      - 8080:80 # doesn't work; only 27017/tcp is exposed
      # doesn't work either:
      # - mode: host
      #   target: 27019
      #   published: 27017
    env_file:
      - .env.test
    volumes:
      - db:/data/db
    deploy:
      limits:
        cpus: '0.75'
volumes:
  db:
I need swarm mode for limiting resources.
How can I access the FTP host?
Docker version 19.03.12, build 48a66213fe
docker-compose version 1.26.2, build eefe0d31
UPDATE
With Joel Magnuson's answer I got PORTS: 27017/tcp on the mongo container. With stack deploy no ports are forwarded at all, whether written as "80:80" or just "27017".
I set
ports:
  - 27018:27017
and got
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ab58c781fdb9 mongo:latest "docker-entrypoint.s…" 3 seconds ago Up 2 seconds 27017/tcp tgbot-test_mongo.1.3i7yps3saqo3nk4xxyk0eka7h
43c0e3cfe960 tgbot-test_node_1:latest "docker-entrypoint.s…" 3 seconds ago Up 3 seconds tgbot-test_node.1.v23cufsrr683gdg2bicgf80q2
I think this is just a configuration issue. You mentioned an "FTP host" but didn't mention running an FTP server, so hopefully the below at least helps with your mongo database.
mongodb always runs on port 27017 inside the container by default unless configured otherwise, so you must publish the container's port 27017 to the host, not port 80.
version: '3'
services:
  node:
    image: tgbot-test_node_1
    env_file: .env.test # configure with mongodb://mongo:27017/<db name>
    networks:
      - tgbot-test
  mongo:
    image: mongo
    ports:
      - 27017:27017 # only needed if you want to access it outside of the stack
                    # otherwise it's always visible within the stack network as 'mongo'
    volumes:
      - /home/$USER/db:/data/db # can mount to host instead
    networks:
      - tgbot-test
networks:
  tgbot-test:
    driver: overlay # suggest overlay network
# volumes:
#   db: # this is not persistent by itself - can mount to host
You could also create an external volume.
docker volume create --name tgbot-db
...
volumes:
  tgbot-db:
    external: true
You should be able to connect to the mongodb instance from the host or a remote machine with mongodb://192.168.X.X:27017/<db name>, or from inside a container in the same stack using docker swarm's DNS name for the service, mongo, with mongodb://mongo:27017/<db name>.
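A quick sanity check of both paths (a sketch; the container, host address, and database names are placeholders):

# inside the stack: the service name 'mongo' should resolve via swarm DNS
docker exec -it <node-container> getent hosts mongo

# from the host or a remote machine, via the published port
mongo "mongodb://192.168.X.X:27017/<db name>"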
Right now, I have a two-node swarm cluster:
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
yey1njv9uz8adf33m7oz0h80f * redis2 Ready Active Leader
lbo91v2l15h24isfd5jegoxu1 redis3 Ready Active
This is the docker-compose.yml file:
version: "3"
services:
daggr:
# replace username/repo:tag with your name and image details
image: daggr
hostname: examplehostname
deploy:
replicas: 1
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "4000:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
redis:
image: redis
networks:
- webnet
networks:
webnet:
As you can see, I am explicitly setting the hostname for the daggr service.
The daggr container basically runs a Python Tornado web server:
class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write('Hello from daggr running on %s\n' % socket.gethostname())
I've tested it out as below:
$ curl redis2:4000
Hello from daggr running on examplehostname
Now, instead of setting the hostname statically, I want it to dynamically match the hostname of the node the container is running on, i.e. if the daggr container is running on redis2 it should say redis2, and on redis3 it should say redis3.
How can I specify that in the docker-compose.yml file?
If you are running at least Docker 17.10 then you can use something like this:
services:
  daggr:
    hostname: '{{.Node.Hostname}}'
See this for more information.
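If the service is redeployed with that template, the earlier test should then report the node's hostname (expected behavior, assuming the single replica landed on redis2):

$ curl redis2:4000
Hello from daggr running on redis2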
The selected answer above did not work for me (presumably because I am not running docker/docker-compose in swarm mode).
I was able to set the container hostname for my reverse proxy to match the Docker host's FQDN by doing the following:
version: '3'
services:
  reverse-proxy:
    hostname: $HOSTNAME
This allowed me to easily configure Nginx to pick the correct server cert / key pair prior to startup.
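As one way to wire that up: the official nginx image (1.19+) runs envsubst over /etc/nginx/templates/*.template at startup, so the hostname can be injected into the server block. A sketch, where the cert paths and file layout are assumptions:

# docker-compose.yml fragment (under services:)
reverse-proxy:
  image: nginx:1.23
  hostname: $HOSTNAME
  environment:
    NGINX_HOST: $HOSTNAME
  volumes:
    - ./default.conf.template:/etc/nginx/templates/default.conf.template:ro

# default.conf.template -- ${NGINX_HOST} is substituted at container start
server {
    listen 443 ssl;
    server_name ${NGINX_HOST};
    ssl_certificate     /etc/nginx/certs/${NGINX_HOST}.crt;
    ssl_certificate_key /etc/nginx/certs/${NGINX_HOST}.key;
}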
I tried Constantin Galbenu's solution, but it only works on a swarm setup, and brandonsimpkins's answer lacks the HOSTNAME definition.
A workaround is to set an environment variable to your hostname:
export HOSTNAME="$(cat /etc/hostname)"
my-container:
  hostname: ${HOSTNAME}
If, like me, your hostname is the same as your username, you can skip the export step and only do:
my-container:
  hostname: ${USER}
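To avoid exporting the variable in every new shell, it can also go into an .env file next to docker-compose.yml, which docker-compose reads automatically (a sketch; unlike the export above, this value is static):

# .env
HOSTNAME=my-docker-host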
Let's say we have the following stack file:
version: "3"
services:
ubuntu:
image: ubuntu
deploy:
replicas: 2
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
entrypoint:
- tail
- -f
- /dev/null
logging:
driver: "json-file"
ports:
- "80:80"
networks:
- webnet
web:
image: httpd
ports:
- "8080:8080"
hostname: "apache"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
resources:
limits:
memory: 32M
reservations:
memory: 16M
depends_on:
- "ubuntu"
networks:
- webnet
networks:
webnet:
When I run docker service inspect mystack_web, the output does not show any reference to the depends_on entry.
Is that okay? And how can I print the dependencies of a given docker service?
depends_on isn't used in docker swarm:
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 compose file. - from Docker Docs
Another good explanation on GitHub:
depends_on is a no-op when used with docker stack deploy. Swarm mode services are restarted when they fail, so there's no reason to delay their startup. Even if they fail a few times, they will eventually recover. - from GitHub
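Since Swarm ignores depends_on, the usual pattern when startup order really matters is to have the dependent service wait for its dependency itself. A minimal sketch for the web service above, assuming bash is available in the image (it is in httpd) and that the dependency actually listens on the probed port (the sample ubuntu service here only runs tail, so the names are purely illustrative):

  web:
    image: httpd
    # poll the 'ubuntu' service on port 80, then start Apache in the foreground
    entrypoint:
      - bash
      - -c
      - "until (echo > /dev/tcp/ubuntu/80) 2>/dev/null; do sleep 1; done; exec httpd-foreground"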