Can't deploy docker stack with compose file version 2.4 - docker

I am trying to deploy my docker stack using a compose file. When I deploy with a compose file of version 3 or higher, the deploy works perfectly fine. But when I try to use version 2.4 or lower, I get this error:
unsupported Compose file version: 2.4
I need to use version 2.4, because version 3 and higher do not support several parameters I need for my containers (such as cpuset and runtime).
My version of docker is 19.03.6 and docker-compose is 1.25.4.
Is there any way to deploy with an older compose file version on Docker 19.03.6? Am I missing something, or does the latest Docker version no longer support the older compose files?
UPDATE
It turns out that Docker 19.03.6 supports only compose file version 3+ for docker stack deploy. So I can't use anything but version 3+, which does not provide the same flexibility as version 2.4 in terms of CPU usage setup. The only solution in this situation (when you need parameters like cpuset and runtime) would be to run the containers manually or move to something like Kubernetes.
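For reference, a rough sketch of the manual approach (my own illustration, not from the original setup; it reuses the mongo service from the compose files below and pins it to CPUs 0 and 1 with docker run flags):
docker run -d --name mongo \
  --cpuset-cpus="0,1" \
  -v ~/ProcessingServerData/mongodb/db:/data/db \
  -v ~/ProcessingServerData/mongodb/configdb:/data/configdb \
  mongo
A custom runtime would be passed the same way, e.g. --runtime=<name>.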
Here are the compose file examples:
Version 3.7 (working)
version: '3.7'
services:
  mongo:
    image: mongo
    volumes:
      - ~/ProcessingServerData/mongodb/db:/data/db
      - ~/ProcessingServerData/mongodb/configdb:/data/configdb
    networks:
      - proc-net
  mongo-express:
    image: mongo-express
    depends_on:
      - mongo
    ports:
      - 8081:8081
    networks:
      - proc-net
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - 8082:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - proc-net
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  proc-net:
    driver: overlay
    attachable: true
Version 2.4 (not working)
version: '2.4'
services:
  mongo:
    image: mongo
    volumes:
      - type: bind
        source: ~/ProcessingServerData/mongodb/db
        target: /data/db
      - type: bind
        source: ~/ProcessingServerData/mongodb/configdb
        target: /data/configdb
    networks:
      - proc-net
    deploy:
      resources:
        cpuset: 0,1
  mongo-express:
    image: mongo-express
    depends_on:
      - mongo
    ports:
      - 8081:8081
    networks:
      - proc-net
    deploy:
      resources:
        cpuset: 0,1
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - 8082:8080
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
    networks:
      - proc-net
    deploy:
      resources:
        cpuset: 0,1
      placement:
        constraints: [node.role == manager]
networks:
  proc-net:
    driver: overlay

The deploy config option is not supported in compose file version 2.4; you need to change the file to this one:
version: '2.4'
services:
  mongo:
    image: mongo
    volumes:
      - type: bind
        source: ~/ProcessingServerData/mongodb/db
        target: /data/db
      - type: bind
        source: ~/ProcessingServerData/mongodb/configdb
        target: /data/configdb
    networks:
      - proc-net
  mongo-express:
    image: mongo-express
    depends_on:
      - mongo
    ports:
      - 8081:8081
    networks:
      - proc-net
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - 8082:8080
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
    networks:
      - proc-net
networks:
  proc-net:
    driver: overlay
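Note that in the 2.x file format (when running with plain docker-compose up rather than docker stack deploy), CPU pinning is expressed with a service-level cpuset key rather than under deploy.resources; a minimal sketch of what I mean:
version: '2.4'
services:
  mongo:
    image: mongo
    cpuset: "0,1"
This only helps with docker-compose, though; docker stack deploy rejects 2.x files outright, which is the limitation described in the question's update.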

Apparently docker stack deploy has no support for the older compose file versions.
https://forums.docker.com/t/cant-deploy-stack-with-compose-file-version-2-4-on-docker-19-03-6/90119

Related

Invalid docker.compose.yaml file

I am trying to get a couple of containers up and running, however I am running into some issues. I run this command:
docker-compose up -d --build itvdflab
and get this error
The Compose file './docker-compose.yaml' is invalid because:
Unsupported config option for services: 'itvdelab'
Unsupported config option for networks: 'itvdelabnw'
Here is the yaml file.
services:
  itvdelab:
    image: itversity/itvdelab
    hostname: itvdelab
    ports:
      - "8888:8888"
    volumes:
      - "./itversity-material:/home/itversity/itversity-material"
      - "./data:/data"
    environment:
      SHELL: /bin/bash
    networks:
      - itvdelabnw
    depends_on:
      - "cluster_util_db"
  cluster_util_db:
    image: postgres:13
    ports:
      - "6432:5432"
    volumes:
      - ./cluster_util_db_scripts:/docker-entrypoint-initdb.d
    networks:
      - itvdelabnw
    environment:
      POSTGRES_PASSWORD: itversity
  itvdflab:
    build:
      context: .
      dockerfile: images/pythonsql/Dockerfile
    hostname: itvdflab
    ports:
      - "8888:8888"
    volumes:
      - "./itversity-material:/home/itversity/itversity-material"
      - "./data:/data"
    environment:
      SHELL: /bin/bash
    networks:
      - itvdelabnw
    depends_on:
      - "pg.itversity.com"
  pg.itversity.com:
    image: postgres:13
    ports:
      - "5432:5432"
    networks:
      - itvdelabnw
    environment:
      POSTGRES_PASSWORD: itversity
networks:
  itvdelabnw:
    name: itvdelabnw
What changes do I need to make to get this working?
Your docker-compose.yml file is missing a version: line. Until very recently, this caused Docker Compose to interpret the file as the original "version 1" Compose format, which doesn't have a top-level services: key and doesn't support Docker networks. The much newer Compose Specification claims that a version: key is optional, but in practice, unless you can guarantee a very new version of Compose (built as a plugin to the docker binary), it is required. The most recent Compose file versions supported by the standalone Python docker-compose tool are 3.8 and 2.4 (you need the 2.x versions for some resource-related constraints in non-Swarm installations).
# Add at the very beginning
version: '3.8'
Here is the revised copy:
version: '3.4'
services:
  itvdelab:
    image: itversity/itvdelab
    hostname: itvdelab
    ports:
      - "8888:8888"
    volumes:
      - "./itversity-material:/home/itversity/itversity-material"
      - "./data:/data"
    environment:
      SHELL: /bin/bash
    networks:
      - itvdelabnw
    depends_on:
      - "cluster_util_db"
  cluster_util_db:
    image: postgres:13
    ports:
      - "6432:5432"
    volumes:
      - ./cluster_util_db_scripts:/docker-entrypoint-initdb.d
    networks:
      - itvdelabnw
    environment:
      POSTGRES_PASSWORD: itversity
  itvdflab:
    build:
      context: .
      dockerfile: images/pythonsql/Dockerfile
    hostname: itvdflab
    ports:
      - "8888:8888"
    volumes:
      - "./itversity-material:/home/itversity/itversity-material"
      - "./data:/data"
    environment:
      SHELL: /bin/bash
    networks:
      - itvdelabnw
    depends_on:
      - "pg.itversity.com"
  pg.itversity.com:
    image: postgres:13
    ports:
      - "5432:5432"
    networks:
      - itvdelabnw
    environment:
      POSTGRES_PASSWORD: itversity
networks:
  itvdelabnw:
    name: itvdelabnw
and now I get the following error:
ERROR: The Compose file './docker-compose.yaml' is invalid because:
services.pg.itversity.com.networks.itvdelabnw contains unsupported option: 'name'
For me, trying a different file version worked. In my case this worked:
version: '2.2'
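Another option, if you want to stay on the version 3 format rather than dropping to 2.2: as far as I know, the top-level network name: key was only added in file format 3.5, so bumping the version line should also clear that error. A sketch of just the relevant parts:
version: '3.5'
networks:
  itvdelabnw:
    name: itvdelabnw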

Docker-compose file builds same images twice

I am trying to build images for my app. However, when I run the "docker-compose up" command, it builds some of the containers twice. I couldn't figure out the reason for it. I think the tags cause this situation, but I couldn't figure out where the 'latest' tag comes from.
Here it is my docker-compose.yml:
version: '3.2'
services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      # Use single node discovery in order to disable production mode and avoid bootstrap checks.
      # see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - elk
  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5044:5044"
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    container_name: zookeeper
    ports:
      - "2181:2181"
    networks:
      - elk
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:latest'
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9093:9093"
    networks:
      - elk
    environment:
      KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_CFG_LISTENERS: CLIENT://:9092,EXTERNAL://:9093
      KAFKA_CFG_ADVERTISED_LISTENERS: CLIENT://kafka:9092,EXTERNAL://localhost:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: CLIENT
    links:
      - logstash
  app:
    container_name: "ml-pipeline"
    build: .
    ports:
      - "7000:7000"
      - "5001:5001"
    depends_on:
      - kafka
      - elasticsearch
      - logstash
    networks:
      - elk
    links:
      - kafka
networks:
  elk:
    driver: bridge
volumes:
  elasticsearch:
And the output of this is:
As you can see, there are duplicate images. How can I solve it?
Actually there is nothing that indicates that docker-compose built the images twice. Your screenshot shows that the images have multiple tag names. But without further context it's hard to say how this happened and how docker-compose was involved in this.
One possible cause for this:
the pre-built images from docker.elastic.co were downloaded by docker pull docker.elastic.co/... or another docker run command
docker-compose up was looking for images named twitter-stream-dl-docker_* and since it couldn't find them triggered a docker-compose build
docker-compose build built the images - but using the docker build cache it could re-use all layers of the existing docker.elastic.co/... images which must have been built from the same source
the newly built images resulted in the same final images, which were then tagged with the name expected by docker-compose, i.e. twitter-stream-dl-docker_*
If you want to force a new local build, either:
build without using the cache: docker-compose build --no-cache
delete the downloaded images: docker rmi docker.elastic.co/...
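One way to check whether two tags really point at the same image (rather than two separate builds) is to compare their image IDs; a quick sketch, where the exact tag names and ELK version are my assumptions based on the compose file:
# both commands should print the same sha256 ID if the tags share one image
docker image inspect --format '{{.Id}}' docker.elastic.co/logstash/logstash:7.9.1
docker image inspect --format '{{.Id}}' twitter-stream-dl-docker_logstash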
All 3 ELK containers have a build context with a Dockerfile that by default only consists of a FROM line. In the Dockerfiles you could add additional plugins.
part of your docker-compose.yml:
build:
  context: logstash/
  args:
    ELK_VERSION: $ELK_VERSION
logstash/Dockerfile:
ARG ELK_VERSION
# https://github.com/elastic/logstash-docker
FROM docker.elastic.co/logstash/logstash:${ELK_VERSION}
# Add your logstash plugins setup here
# Example: RUN logstash-plugin install logstash-filter-json
docker-compose pulls the image docker.elastic.co/logstash/logstash:${ELK_VERSION} and builds its own version twitter-stream-dl-docker_XXX. Since the build doesn't do anything, it simply tags the old image with the new tag, so they have the same image ID.
In case you're wondering: your folder's name is twitter-stream-dl-docker, so the images get that prefix (or you used docker-compose -p twitter-stream-dl-docker).
I hope that clears things up, but feel free to ask about anything that's ambiguous.

Docker does not support storing secrets on Windows home system using Docker toolbox

Using Docker Toolbox on Windows 10 Home with Docker version 19.03, we have created a docker-compose.yml and added a secrets file as JSON. It runs fine on a Mac system, but we are unable to run the same setup on Windows 10 Home.
Error after running docker-compose up:
ERROR: for orthancserver Cannot create container for service orthanc: invalid mount config for type
"bind": invalid mount path: 'C:/Users/ABC/Desktop/Project/orthanc.json' mount path must be absolute
docker-compose.yml:
version: "3.7"
services:
orthanc:
image: jodogne/orthanc-plugins:1.6.1
command: /run/secrets/
container_name: orthancserver
restart: always
ports:
- "4242:4242"
- "8042:8042"
networks:
- mynetwork
volumes:
- /tmp/orthanc-db/:/var/lib/orthanc/db/
secrets:
- orthanc.json
dcserver:
build: ./dc_node_server
depends_on:
- orthanc
container_name: dcserver
restart: always
ports:
- "5001:5001"
networks:
- mynetwork
volumes:
- localdb:/database
volumes:
localdb:
external: true
networks:
mynetwork:
external: true
secrets:
orthanc.json:
file: orthanc.json
The orthanc.json file is kept next to docker-compose.yml.
I found an alternative solution for Windows 10 Home with Docker Toolbox. As commented by @Schwarz54, file sharing works well with a Docker volume for the Dockerized Orthanc server.
Add shared folder:
Open Oracle VM manager
Go to setting of default VM
Click Shared Folders
Add C:\ drive to the list
Edit docker-compose.yml to pass the config file to Orthanc via a volume:
version: "3.7"
services:
orthanc:
image: jodogne/orthanc-plugins:1.6.1
command: /run/secrets/
container_name: orthancserver
restart: always
ports:
- "4242:4242"
- "8042:8042"
networks:
- mynetwork
volumes:
- /tmp/orthanc-db/:/var/lib/orthanc/db/
- /c/Users/ABCUser/Desktop/Project/orthanc.json:/etc/orthanc/orthanc.json:ro
dcserver:
build: ./dc_node_server
depends_on:
- orthanc
container_name: dcserver
restart: always
ports:
- "5001:5001"
networks:
- mynetwork
volumes:
- localdb:/database
volumes:
localdb:
external: true
networks:
mynetwork:
external: true
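One follow-up step that may be needed (an assumption on my part, since VirtualBox shared folders are normally only mounted when the VM boots): restart the Toolbox VM before bringing the stack up again.
# "default" is the usual Docker Toolbox machine name; adjust if yours differs
docker-machine restart default
docker-compose up -d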

How to share local host's files with docker machine

I am fairly new to Docker. I have been having issues for days now setting up docker-machine to share local files on my Windows PC through the use of volumes.
Basically, I am using this GitHub repo as a starting point: https://github.com/koutsoumposval/laravel-microservices. I noticed that when I do not use docker-machine, the files are shared using the 'volumes' configuration in my docker-compose file.
However, when I host the same project on the docker machine, the files do not show. I can see the top-level folders when I ssh into the docker machine, but they are all empty.
Also, I was able to get the local files to show up in the docker machine by using the 'COPY' directive in the Dockerfile, but I am not comfortable with this, as changes made to the local files are not automatically reflected in the docker machine.
So my question is: how can I synchronize the local files with the docker machine, since the 'volumes' directive is obviously not working? Also, please point me in the right direction if I am thinking about this in the wrong way.
DOCKER-COMPOSE.YML
version: '3'
services:
  proxy:
    image: traefik
    command: --web --docker --docker.domain=lm.local --docker.exposedbydefault=false --logLevel=DEBUG
    networks:
      - webgateway
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/null:/traefik.toml
  order:
    build:
      context: order/php-apache
    volumes:
      - ../order:/var/www/html
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:order.lm.local"
      - "traefik.backend=order"
    networks:
      - webgateway
      - web
    restart: always
  user:
    build:
      context: user/php-apache
    volumes:
      - ../user:/var/www/html
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:user.lm.local"
      - "traefik.backend=user"
    networks:
      - webgateway
      - web
    restart: always
  inventory:
    build:
      context: inventory/php-apache
    volumes:
      - ../inventory:/var/www/html
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:inventory.lm.local"
      - "traefik.backend=inventory"
    networks:
      - webgateway
      - web
    restart: always
  api:
    build:
      context: api-gateway/php-apache
    volumes:
      - ../api-gateway:/var/www/html
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:api.lm.local"
      - "traefik.backend=api"
    networks:
      - webgateway
      - web
    restart: always
networks:
  webgateway:
    driver: bridge
  web:
    external:
      name: traefik_webgateway
The image below shows the errors I am experiencing as a result of the local files not being copied to the virtual machine. The 'html' folder, which is supposed to contain the full microservice repo, is empty.

Netdata in a docker swarm environment

I'm quite new to Netdata and also Docker Swarm. I ran Netdata for a while on single hosts, but am now trying to stream Netdata from workers to a manager node in a swarm environment, where the manager should also act as a central Netdata instance. I'm aiming to monitor the data only from the manager.
Here's my compose file for the stack:
version: '3.2'
services:
  netdata-client:
    image: titpetric/netdata
    hostname: "{{.Node.Hostname}}"
    cap_add:
      - SYS_PTRACE
    security_opt:
      - apparmor:unconfined
    environment:
      - NETDATA_STREAM_DESTINATION=control:19999
      - NETDATA_STREAM_API_KEY=1x214ch15h3at1289y
      - PGID=999
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - netdata
    deploy:
      mode: global
      placement:
        constraints: [node.role == worker]
  netdata-central:
    image: titpetric/netdata
    hostname: control
    cap_add:
      - SYS_PTRACE
    security_opt:
      - apparmor:unconfined
    environment:
      - NETDATA_API_KEY_ENABLE_1x214ch15h3at1289y=1
    ports:
      - '19999:19999'
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - netdata
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
networks:
  netdata:
    driver: overlay
    attachable: true
Netdata on the manager works fine, and the client container runs on the one worker node I'm testing on. According to the log output it seems to run well and picks up the names of the running Docker containers, just as it does in a local environment.
Problem is that it can't connect to the netdata-central service running on the manager.
This is the error message:
2019-01-04 08:35:28: netdata INFO : STREAM_SENDER[7] : STREAM 7 [send to control:19999]: connecting...,
2019-01-04 08:35:28: netdata ERROR : STREAM_SENDER[7] : Cannot resolve host 'control', port '19999': Name or service not known,
I'm not sure why it can't resolve the hostname; I thought it should work that way on the overlay network. Maybe there's a better way to connect that doesn't rely on the hostname?
Any help is appreciated.
EDIT: as this question might come up - the firewall (ufw) on the control host is inactive. Also, I think the error message clearly points to a problem with name resolution.
Your API key is in the wrong format; it has to be a GUID. You can generate one with the uuidgen command.
https://github.com/netdata/netdata/blob/63c96aa96f96f3aea10bdcd2ecd92c889f26b3af/conf.d/stream.conf#L7
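For illustration, a rough sketch of generating a key and wiring it into the two environment variables already used in the compose file above (the GUID shown is just a placeholder):
# generate a GUID-formatted API key
uuidgen
# -> e.g. 9f8e2a4c-1b3d-4e5f-8a6b-7c9d0e1f2a3b
# worker side:  NETDATA_STREAM_API_KEY=9f8e2a4c-1b3d-4e5f-8a6b-7c9d0e1f2a3b
# manager side: NETDATA_API_KEY_ENABLE_9f8e2a4c-1b3d-4e5f-8a6b-7c9d0e1f2a3b=1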
In the latest image the environment variables do not work.
The solution is to create a configuration file for the stream; a sketch of the two files is included after the compose file below.
My working compose file is:
version: '3.7'
configs:
  netdata_stream_master:
    file: $PWD/stream-master.conf
  netdata_stream_client:
    file: $PWD/stream-client.conf
services:
  netdata-client:
    image: netdata/netdata:v1.21.1
    hostname: "{{.Node.Hostname}}"
    depends_on:
      - netdata-central
    configs:
      - mode: 444
        source: netdata_stream_client
        target: /etc/netdata/stream.conf
    security_opt:
      - apparmor:unconfined
    environment:
      - PGID=999
    volumes:
      - /proc:/host/proc:ro
      - /etc/passwd:/host/etc/passwd:ro
      - /etc/group:/host/etc/group:ro
      - /sys:/host/sys:ro
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: global
  netdata-central:
    image: netdata/netdata:v1.21.1
    hostname: control
    configs:
      - mode: 444
        source: netdata_stream_master
        target: /etc/netdata/stream.conf
    security_opt:
      - apparmor:unconfined
    environment:
      - PGID=999
    ports:
      - '19999:19999'
    volumes:
      - /etc/passwd:/host/etc/passwd:ro
      - /etc/group:/host/etc/group:ro
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
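The two config files referenced above aren't shown in the answer; as a rough sketch of what a minimal pair typically looks like (the GUID is a placeholder, and the layout follows Netdata's stream.conf as I understand it):
# stream-client.conf (worker side)
[stream]
    enabled = yes
    destination = control:19999
    api key = 11111111-2222-3333-4444-555555555555

# stream-master.conf (manager side)
[11111111-2222-3333-4444-555555555555]
    enabled = yes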
