Docker Compose 3 controlling resources (memory, cpu)

I'm trying to use the "resources" field from the Docker Compose version 3 documentation (https://docs.docker.com/compose/compose-file/), but I'm getting an error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.fstore_java: 'resources'
How can I set the memory limit with docker-compose?
fstore_java:
  depends_on:
    - fstore_db
    - rabbit_broker
  build: ./fstore
  ports:
    - "8080:8080"
  expose:
    - "8080"
  links:
    - fstore_db
    - rabbit_broker
  restart: always
  resources:
    limits:
      cpus: '0.001'
      memory: 50M

It has to be nested under the deploy key:
fstore_java:
  depends_on:
    - fstore_db
    - rabbit_broker
  build: ./fstore
  ports:
    - "8080:8080"
  expose:
    - "8080"
  links:
    - fstore_db
    - rabbit_broker
  restart: always
  deploy:
    resources:
      limits:
        cpus: '0.001'
        memory: 50M
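Note that in a version 3 Compose file the deploy section (including deploy.resources) is only honored by docker stack deploy against a swarm; plain docker-compose up ignores it. If you are not running a swarm, you can pass the --compatibility flag so the deploy.resources limits are translated into their non-swarm equivalents, e.g. (assuming docker-compose 1.20 or later):
docker-compose --compatibility -f docker-compose.yml up -d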

Related

Docker does not link virtual interfaces to the virtual network (bridge)

When I create and start all my instances everything looks fine, but even though routing is set up properly, my instances still cannot communicate with each other. I have used this command for each instance:
ip link set <interface> master <network>
e.g.
ip link set vethb3735ba@if14 master br-bdf6dd295e3a
Here are my operating system details:
Linux arch 6.0.2-arch1-1 #1 SMP PREEMPT_DYNAMIC Sat, 15 Oct 2022 14:00:49 +0000 x86_64 GNU/Linux
services:
  postgres:
    container_name: postgres
    image: postgres
    environment:
      POSTGRES_USER: amigoscode
      POSTGRES_PASSWORD: password
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
      PGADMIN_CONFIG_SERVER_MODE: 'False'
    volumes:
      - pgadmin:/var/lib/pgadmin
    ports:
      - "5050:80"
    networks:
      - postgres
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    ports:
      - "9411:9411"
    networks:
      - spring
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  rabbitmq:
    image: rabbitmq:3.9.11-management-alpine
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - spring
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  eureka-server:
    image: huseyinafsin/eureka-server:latest
    container_name: eureka-server
    ports:
      - "8761:8761"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
    depends_on:
      - zipkin
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  apigw:
    image: huseyinafsin/apigw:latest
    container_name: apigw
    ports:
      - "8083:8083"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  customer:
    image: huseyinafsin/customer:latest
    container_name: customer
    ports:
      - "8080:8080"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
      - postgres
    depends_on:
      - zipkin
      - postgres
      - rabbitmq
      - eureka-server
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  fraud:
    image: huseyinafsin/fraud:latest
    container_name: fraud
    ports:
      - "8081:8081"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
      - postgres
    depends_on:
      - zipkin
      - postgres
      - rabbitmq
      - eureka-server
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
  notification:
    image: huseyinafsin/notification:latest
    container_name: notification
    ports:
      - "8082:8082"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
    networks:
      - spring
      - postgres
    depends_on:
      - zipkin
      - postgres
      - rabbitmq
      - eureka-server
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 200M
networks:
  postgres:
    driver: bridge
  spring:
    driver: bridge
volumes:
  postgres:
  pgadmin:

Set /etc/hostname in a running container using docker-compose

My docker-compose.yml is as follows:
version: "3"
services:
write:
image: apachegeode/geode:1.11.0
container_name: write
hostname: a.b.net
expose:
- "8080"
- "10334"
- "40404"
- "1099"
- "7070"
ports:
- "10220:10334"
volumes:
- ./scripts/:/scripts/
command: /scripts/sleep.sh gfsh start locator ...
networks:
my-network:
deploy:
replicas: 1
resources:
limits:
cpus: '0.50'
memory: 512M
reservations:
cpus: '0.50'
memory: 512M
restart_policy:
condition: on-failure
depends_on:
- read
read:
image: apachegeode/geode:1.11.0
container_name: read
hostname: a.b.net
expose:
- "8080"
- "10334"
- "40404"
- "1099"
- "7070"
ports:
- "10221:10334"
volumes:
- ./scripts/:/scripts/
command: /scripts/sleep.sh gfsh start locator ...
networks:
my-network:
deploy:
replicas: 1
resources:
limits:
cpus: '0.50'
memory: 512M
reservations:
cpus: '0.50'
memory: 512M
restart_policy:
condition: on-failure
networks:
my-network:
container_name has to be "write" and "read" since they are unique containers running on the same host machine. Setting hostname: a.b.net in the docker-compose.yml puts "192.168.160.2 a.b.net a" in the /etc/hosts file, but /etc/hostname shows a, which is only the alias name. How can I set /etc/hostname to a.b.net using docker-compose.yml? I use
docker-compose -f my-docker-compose.yml up -d
to run the containers.

docker stack: Redis not working on worker node

I just completed the Docker documentation and created two instances on AWS (http://13.127.150.218, http://13.235.134.73). The first one is the manager and the second one is the worker. The following is the compose file I used to deploy:
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: username/repo:tag
deploy:
replicas: 5
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
redis:
image: redis
ports:
- "6379:6379"
volumes:
- "/home/docker/data:/data"
deploy:
placement:
constraints: [node.role == manager]
command: redis-server --appendonly yes
networks:
- webnet
networks:
webnet:
Here the redis service has a constraint that restricts it to run only on the manager node. Now my question is how the web service on the worker instance is supposed to use the redis service.
You need to set the hostname parameter on every container, so you can use that value to access services on the worker, or to access the services on the manager from the worker.
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: username/repo:tag
hostname: "web"
deploy:
replicas: 5
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
hostname: "visualizer"
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
redis:
image: redis
hostname: "redis"
ports:
- "6379:6379"
volumes:
- "/home/docker/data:/data"
deploy:
placement:
constraints: [node.role == manager]
command: redis-server --appendonly yes
networks:
- webnet
networks:
webnet:
Additionally, if you use Portainer instead of the visualizer you can control your Swarm stack with more options:
https://hub.docker.com/r/portainer/portainer
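For reference, a minimal single-node way to start Portainer looks like this (a sketch based on that image's documentation; on a swarm you would more likely deploy it as a service):
docker run -d -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer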
BR,
Carlos
Consider the stack file from the question above.
Regardless of where a service is placed (manager or worker), all the services in the stack file that are on the same network can use the embedded DNS functionality, which resolves each service by its defined service name.
In this case the web service can reach the redis service simply by its service name.
Here is an example of checking that the web service resolves from within the container associated with the redis service (a sketch; the stack name getstartedlab and the container ID are placeholders):
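# on the manager node, find the container that backs the redis service
docker ps --filter name=getstartedlab_redis
# from inside that container, the service name "web" resolves via the embedded DNS
docker exec -it <redis-container-id> ping -c 2 web
# if ping is not installed in the redis image, resolution can still be verified with:
docker exec -it <redis-container-id> getent hosts web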
Read more about the Swarm Native Service Discovery to understand this.

docker swarm list dependencies of a service

Let's say we have the following stack file:
version: "3"
services:
ubuntu:
image: ubuntu
deploy:
replicas: 2
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
entrypoint:
- tail
- -f
- /dev/null
logging:
driver: "json-file"
ports:
- "80:80"
networks:
- webnet
web:
image: httpd
ports:
- "8080:8080"
hostname: "apache"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
resources:
limits:
memory: 32M
reservations:
memory: 16M
depends_on:
- "ubuntu"
networks:
- webnet
networks:
webnet:
When I run docker service inspect mystack_web, the generated output does not show any reference to the depends_on entry.
Is that okay? And how can I print the dependencies of a given docker service?
depends_on isn't used in docker swarm:
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 compose file. - from Docker Docs
Another good explanation on GitHub:
depends_on is a no-op when used with docker stack deploy. Swarm mode services are restarted when they fail, so there's no reason to delay their startup. Even if they fail a few times, they will eventually recover. - from GitHub
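Since Swarm discards depends_on at deploy time, the deployed service objects carry no dependency information at all; the only place it survives is the stack file itself. One way to print it is to query the YAML directly, e.g. with yq (a sketch, assuming yq v4 is installed and the stack file is saved as docker-compose.yml):
yq '.services.web.depends_on' docker-compose.yml
# prints:
# - "ubuntu"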

Docker Stack Swarm - Service Replicas are not spread for Multi Service Stack

I have deployed a stack of five services on two hosts (docker compose version 3).
The services are Elasticsearch, Kibana, Redis, Visualiser and finally my Web App. I haven't set any resource restrictions yet.
I spun up two virtual hosts via docker-machine, one with 2GB and one with 1GB of memory.
Then I increased the replicas of my web app to 2, which resulted in the following distribution:
Host1 (Master):
Kibana, Redis, Web App, Visualiser, WebApp
Host2 (Worker):
Elasticsearch
Why is the Swarm manager scheduling both Web App containers on the same host? Wouldn't it be smarter if the Web App were distributed across both hosts?
Besides node tagging, I couldn't find any other way in the docs to influence the distribution.
Am I missing something?
Thanks
Bjorn
docker-compose.yml
version: "3"
services:
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:5.4.3
environment:
ES_JAVA_OPTS: -Xms1g -Xmx1g
ulimits:
memlock: -1
nofile:
hard: 65536
soft: 65536
nproc: 65538
deploy:
resources:
limits:
cpus: "0.5"
memory: 1g
volumes:
- esdata:/usr/share/elasticsearch/data
ports:
- 9200:9200
- 9300:9300
networks:
- webnet
web:
# replace username/repo:tag with your name and image details
image: bjng/workinseason:swarm
deploy:
replicas: 2
restart_policy:
condition: on-failure
ports:
- "80:6000"
networks:
- webnet
kibana:
image: docker.elastic.co/kibana/kibana:5.4.3
deploy:
placement:
constraints: [node.role == manager]
ports:
- "5601:5601"
networks:
- webnet
redis:
image: "redis:alpine"
networks:
- webnet
volumes:
esdata:
driver: local
networks:
webnet:
Docker schedules tasks (containers) based on available resources; if two nodes have enough resources, the container can be scheduled on either one.
Recent versions of Docker use "HA" scheduling by default, which means that tasks for the same service are spread over multiple nodes, if possible (see this pull request: https://github.com/docker/swarmkit/pull/1446).
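If you want to guarantee the spread rather than rely on the scheduler, newer Compose file versions (3.8 and later, i.e. newer than the "3" used above) support max_replicas_per_node under placement. A sketch for the web service from the question:
web:
  image: bjng/workinseason:swarm
  deploy:
    replicas: 2
    placement:
      max_replicas_per_node: 1 # schedule at most one task of this service per node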
