Hi, and excuse my English. I'm using Docker Swarm. When I attempt to deploy my Docker application with this command
docker stack deploy -c docker-compose.yml -c docker-compose.prod.yml chatappapi
it shows the following error: services.chat-app-api Additional property pull_policy is not allowed
Why does this happen?
How do I solve it?
docker-compose.yml
version: "3.9"
services:
  nginx:
    image: nginx:stable-alpine
    ports:
      - "5000:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  chat-app-api:
    build: .
    image: username/myapp
    pull_policy: always
    volumes:
      - ./:/app
      - /app/node_modules
    environment:
      - PORT= 5000
      - MAIL_USERNAME=${MAIL_USERNAME}
      - MAIL_PASSWORD=${MAIL_PASSWORD}
      - CLIENT_ID=${CLIENT_ID}
      - CLIENT_SECRET=${CLIENT_SECRET}
      - REDIRECT_URI=${REDIRECT_URI}
      - REFRESH_TOKEN=${REFRESH_TOKEN}
    depends_on:
      - mongo-db
  mongo-db:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: 'username'
      MONGO_INITDB_ROOT_PASSWORD: 'password'
    ports:
      - "27017:27017"
    volumes:
      - mongo-db:/data/db
volumes:
  mongo-db:
docker-compose.prod.yml
version: "3.9"
services:
  nginx:
    ports:
      - "80:80"
  chat-app-api:
    deploy:
      mode: replicated
      replicas: 8
      restart_policy:
        condition: any
      update_config:
        parallelism: 2
        delay: 15s
    build:
      context: .
      args:
        NODE_ENV: production
    environment:
      - NODE_ENV=production
      - MONGO_USER=${MONGO_USER}
      - MONGO_PASSWORD=${MONGO_PASSWORD}
      - MONGO_IP=${MONGO_IP}
    command: node index.js
  mongo-db:
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
Information
docker-compose version 1.29.2
Docker version 20.10.8
Ubuntu 20.04.2 LTS
Thanks in advance.
The problem line is in docker-compose.yml:
chat-app-api:
  build: .
  image: username/myapp
  pull_policy: always # <== this is the bad line, delete it
The docker-compose file (version 3) reference doesn't have pull_policy anywhere in its API, because:
If the image does not exist, Compose attempts to pull it, unless you have also specified build, in which case it builds it using the specified options and tags it with the specified tag.
I think pull_policy used to be a thing for Compose? It's worth keeping the latest reference documentation open to refer to and search through while you're developing (things can and do change fairly frequently with Compose).
If you want to ensure that the most recent version of an image is pulled onto all servers in a swarm, then run docker compose -f ./docker-compose.yml pull on each server in turn (docker stack doesn't have functionality to run this over an entire swarm yet).
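A minimal sketch of that workflow, assuming the image has already been pushed to a registry and using node1/node2 as placeholder hostnames for the swarm nodes:

# build and push the image from your build machine (assumes you are logged in to the registry)
docker compose -f ./docker-compose.yml build
docker compose -f ./docker-compose.yml push

# pull the fresh image on each swarm node in turn (node1/node2 are placeholders)
for node in node1 node2; do
  ssh "$node" "docker pull username/myapp:latest"
done

# then (re)deploy the stack from a manager node
docker stack deploy -c docker-compose.yml -c docker-compose.prod.yml chatappapi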
As an aside: I wouldn't combine two .yml files in a single docker stack command without a very good reason to do so. You are mixing docker-compose and Docker Swarm ideas up in the same files. It is probably worth breaking your project up into three files:
docker-compose.yml
This would contain just the basic service definitions common to both compose and swarm.
docker-compose.override.yml
Conveniently, both docker-compose and docker compose should read this file automatically. This file should contain any "ports:", "depends_on:", and "build:" directives, and any convenience volumes used for development.
stack.production.yml
The override file to be used in stack deployments. It should contain (a) everything understood by Swarm but not Compose, and (b) everything required for production.
Here you would use configs: or even secrets:, rather than volume mappings to local folders, to inject content into containers. Rather than relying on ports: directives, you would install an ingress router on the swarm, such as Traefik, and so on.
With this arrangement, docker compose can be used to develop and build your compose stack locally, and docker stack deploy won't have to be exposed to compose syntax it doesn't understand.
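For a rough idea of what that split could look like for this project (contents abridged, nginx omitted), a sketch:

# docker-compose.yml - service definitions common to compose and swarm
version: "3.9"
services:
  chat-app-api:
    image: username/myapp
    environment:
      - MAIL_USERNAME=${MAIL_USERNAME}
  mongo-db:
    image: mongo

# docker-compose.override.yml - development conveniences, read automatically by docker compose
version: "3.9"
services:
  chat-app-api:
    build: .
    volumes:
      - ./:/app
    depends_on:
      - mongo-db

# stack.production.yml - swarm/production-only settings
version: "3.9"
services:
  chat-app-api:
    deploy:
      replicas: 8
      update_config:
        parallelism: 2
        delay: 15s

# local development:   docker compose up --build
# swarm deployment:    docker stack deploy -c docker-compose.yml -c stack.production.yml chatappapi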
pull_policy is in the latest version of docker-compose.
To upgrade your docker-compose, refer to:
https://docs.docker.com/compose/install/
See the spec for more info:
https://github.com/compose-spec/compose-spec/blob/master/spec.md#pull_policy
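For reference, under the Compose Specification the option is declared like this (note that docker stack deploy validates against the older version 3 schema, which is why it can reject the property even when docker compose accepts it):

services:
  chat-app-api:
    image: username/myapp
    # accepted values per the spec include: always, never, missing, build
    pull_policy: always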
Related
Docker noob here.
I have two files, docker-compose.build.yml and docker-compose.up.yml, in my docker folder. Following are the contents of both files.
docker-compose.build.yml
version: "3"
services:
  base:
    build:
      context: ../
      dockerfile: ./docker/Dockerfile.base
      args:
        DEBUG: "true"
    image: ottertune-base
    labels:
      NAME: "ottertune-base"
  web:
    build:
      context: ../
      dockerfile: ./docker/Dockerfile.web
    image: ottertune-web
    depends_on:
      - base
    labels:
      NAME: "ottertune-web"
    volumes:
      - ../server:/app
  driver:
    build:
      context: ../
      dockerfile: ./docker/Dockerfile.driver
    image: ottertune-driver
    depends_on:
      - base
    labels:
      NAME: "ottertune-driver"
docker-compose.up.yml
version: "3"
services:
  web:
    image: ottertune-web
    container_name: web
    expose:
      - "8000"
    ports:
      - "8000:8000"
    links:
      - backend
      - rabbitmq
    depends_on:
      - backend
      - rabbitmq
    environment:
      DEBUG: 'true'
      ADMIN_PASSWORD: 'changeme'
      BACKEND: 'postgresql'
      DB_NAME: 'ottertune'
      DB_USER: 'postgres'
      DB_PASSWORD: 'ottertune'
      DB_HOST: 'backend'
      DB_PORT: '5432'
      DB_OPTS: '{}'
      MAX_DB_CONN_ATTEMPTS: 30
      RABBITMQ_HOST: 'rabbitmq'
    working_dir: /app/website
    entrypoint: ./start.sh
    labels:
      NAME: "ottertune-web"
    networks:
      - ottertune-net
  driver:
    image: ottertune-driver
    container_name: driver
    depends_on:
      - web
    environment:
      DEBUG: 'true'
    working_dir: /app/driver
    labels:
      NAME: "ottertune-driver"
    networks:
      - ottertune-net
  rabbitmq:
    image: "rabbitmq:3-management"
    container_name: rabbitmq
    restart: always
    hostname: "rabbitmq"
    environment:
      RABBITMQ_DEFAULT_USER: "guest"
      RABBITMQ_DEFAULT_PASS: "guest"
      RABBITMQ_DEFAULT_VHOST: "/"
    expose:
      - "15672"
      - "5672"
    ports:
      - "15673:15672"
      - "5673:5672"
    labels:
      NAME: "rabbitmq"
    networks:
      - ottertune-net
  backend:
    container_name: backend
    restart: always
    image: postgres:9.6
    environment:
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'ottertune'
      POSTGRES_DB: 'ottertune'
    expose:
      - "5432"
    ports:
      - "5432:5432"
    labels:
      NAME: "ottertune-backend"
    networks:
      - ottertune-net
networks:
  ottertune-net:
    driver: bridge
Nothing is wrong with the Dockerfiles; I just have a few doubts about this approach.
What purpose does having multiple files serve instead of just one docker-compose.yml?
How does docker-compose work when used with multiple files?
When I do docker-compose -f docker-compose.build.yml build --no-cache, I get:
Building base
Step 1/1 : FROM ubuntu:18.04
---> 775349758637
[Warning] One or more build-args [DEBUG] were not consumed
Successfully built 775349758637
Successfully tagged ottertune-base:latest
Building web
Step 1/1 : FROM ottertune-base
---> 775349758637
Successfully built 775349758637
Successfully tagged ottertune-web:latest
Building driver
Step 1/1 : FROM ottertune-base
---> 775349758637
Successfully built 775349758637
Successfully tagged ottertune-driver:latest
and then with docker-compose up I get the error:
rabbitmq is up-to-date
backend is up-to-date
Starting web ... error
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:346:
starting container process caused "exec: \"./start.sh\": stat ./start.sh: no such file or
directory": unknown
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:346:
starting container process caused "exec: \"./start.sh\": stat ./start.sh: no such file or
directory": unknown
ERROR: Encountered errors while bringing up the project.
This entrypoint, start.sh, is defined in the docker-compose.up.yml file, which I didn't pass as an argument to docker-compose build.
So why is docker-compose up trying to run this entrypoint from a YAML file that wasn't even passed during the build? I'm really confused about this and didn't find much about it on Google or Stack Overflow.
If you docker-compose -f a.yml -f b.yml ..., Docker Compose merges the two YAML files. If you look at the two files you've posted, one has all of the run-time settings (ports:, environment:, ...), and if you happened to have the images already it would be enough to run the application. The second only has build-time settings (build:), but requires the source tree checked out locally to be able to run.
You probably need to specify both files on every docker-compose invocation:
docker-compose -f docker-compose.build.yml -f docker-compose.up.yml up --build
It does seem like the author of these files intended for them to be run separately:
docker-compose -f docker-compose.build.yml build
docker-compose -f docker-compose.up.yml up
but note that some of the run-time options in the build file, like volumes: to hide the application built into the image, will never take effect.
(You should be able to delete a large number of settings in the "up" YAML file that either duplicate what's in the image or that Docker Compose can provide for you: container_name:, expose:, links:, working_dir:, entrypoint:, networks:, and (probably) labels: are all unnecessary and can be deleted.)
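For illustration, a trimmed-down web service in the "up" file might look something like this (a sketch; it assumes the working directory and entrypoint are already baked into the ottertune-web image by its Dockerfile, per the point above about duplicating what's in the image):

web:
  image: ottertune-web
  ports:
    - "8000:8000"
  depends_on:
    - backend
    - rabbitmq
  environment:
    DEBUG: 'true'
    ADMIN_PASSWORD: 'changeme'
    DB_HOST: 'backend'
    RABBITMQ_HOST: 'rabbitmq'
    # ...remaining DB_* settings unchanged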
What purpose does having multiple files serve instead of just one docker-compose.yml?
You can share configuration across environments. For example, I keep the common configuration such as the network and server in a docker-compose.yml. I keep my development environment specifics such as a server with automatic reload and debugging enabled in a docker-compose.override.yml. I keep the production-specific configs in a docker-compose.prod.yml. Then I can run docker-compose up --build for my development environment (Docker Compose uses docker-compose.yml and docker-compose.override.yml by default). And I can run my prod environment with docker-compose -f docker-compose.yml -f docker-compose.prod.yml up --build. You can read about this in the dedicated docs page.
How does docker-compose work when used with multiple files?
It takes the first file as the base file, and adds or replaces configs from subsequent files to the base file. See the relevant docs.
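As a small illustration of that merge behavior (hypothetical files a.yml and b.yml):

# a.yml
services:
  web:
    image: myapp
    ports:
      - "8000:8000"

# b.yml
services:
  web:
    image: myapp:prod
    environment:
      DEBUG: 'false'

# docker-compose -f a.yml -f b.yml config   # prints the merged result:
# web ends up with image myapp:prod (single-value keys are replaced by the later
# file), keeps the ports from a.yml, and gains the environment block from b.yml.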
When I do docker-compose -f docker-compose.build.yml build --no-cache ...
As for your last question, I can't really tell by what I've seen. But unlike Dockerfiles which need two commands (docker build and docker run), docker-compose only needs one. So when you do docker-compose up, it looks for a file named docker-compose.yml (and also docker-compose.override.yml if it's present).
I have configured a distributed version of Cassandra using Docker Compose.
Here is my docker-compose.yml file:
version: '3.0'
services:
  cassandra-masters:
    image: strapdata/elassandra
    environment:
      CASSANDRA_LISTEN_ADDRESS: tasks.cassandra-masters
  cassandra-slaves1:
    image: strapdata/elassandra
    environment:
      CASSANDRA_SEEDS: tasks.cassandra-masters
      CASSANDRA_LISTEN_ADDRESS: tasks.cassandra-slaves1
    depends_on:
      - cassandra-masters
After running the docker-compose file using sudo docker stack deploy elassandra --compose-file docker-compose.yml, everything works well and I can see the services using the docker service ls command.
Problem: I don't know how to use volumes with this distributed setup of containers. Is it like the normal docker-compose configuration found on Docker's site, or is it different?
Solution: I have tried named volumes like the following. There isn't any difference between this (distributed) approach and the normal approach; the only thing to consider is that the volume should be shared:
version: '3.0'
services:
  cassandra-masters:
    image: strapdata/elassandra
    environment:
      CASSANDRA_LISTEN_ADDRESS: tasks.cassandra-masters
    volumes:
      - app-volume:/var/lib/cassandra
  cassandra-slaves1:
    image: strapdata/elassandra
    environment:
      CASSANDRA_SEEDS: tasks.cassandra-masters
      CASSANDRA_LISTEN_ADDRESS: tasks.cassandra-slaves1
    depends_on:
      - cassandra-masters
    volumes:
      - app-volume:/var/lib/cassandra
volumes:
  app-volume:
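Note that with docker stack deploy on a multi-node swarm, a plain named volume is created locally on each node that runs a task, so containers on different machines will not actually see the same data. If the volume really must be shared across machines, it needs to be backed by shared storage via driver_opts; a sketch using the local driver's NFS support (the server address and export path below are placeholders):

volumes:
  app-volume:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw"
      device: ":/exports/cassandra"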
I'm trying to migrate working Docker config files (Dockerfile and docker-compose.yml) so that they deploy my working local Docker configuration to Docker Hub.
I've tried multiple config file settings.
I have the following Dockerfile and, below it, the docker-compose.yml that uses it. When I run "docker-compose up", I successfully get two containers running that can either be accessed independently or will talk to each other via the "db" service name and the database's "container_name". So far so good.
What I cannot figure out is how to take this configuration (the files below) and modify it so I get the same behavior via Docker Hub. Being able to have working local containers is necessary for development, but others need to use these containers from Docker Hub, so I need to deploy there.
--
Dockerfile:
FROM tomcat:8.0.20-jre8
COPY ./services.war /usr/local/tomcat/webapps/
--
docker-compose.yml:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8089:8080"
    volumes:
      - /Users/user/Library/apache-tomcat-9.0.7/conf/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml
    depends_on:
      - db
  db:
    image: mysql:5.7
    container_name: test-mysql-docker
    ports:
      - 3307:3306
    volumes:
      - ./ZipCodeLookup.sql:/docker-entrypoint-initdb.d/ZipCodeLookup.sql
    environment:
      MYSQL_ROOT_PASSWORD: "thepass"
I expect to see these containers available on Docker Hub, but I cannot see how these files need to be modified to achieve that. Thanks.
Add an image attribute:
app:
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "8089:8080"
  image: docker-hub-username/app
Replace "docker-hub-username" with your username. Then run docker-compose push app
docker stack deploy isn't respecting the extra_hosts parameter in my compose file. When I do a simple docker-compose up, the entry is created in /etc/hosts; however, when I do docker deploy --compose-file docker-compose.yml myapp, it ignores extra_hosts. Any insights?
Below is the docker-compose.yml:
version: '3'
services:
  web:
    image: user-service
    deploy:
      labels:
        - the label
    build:
      context: ./
    environment:
      DATABASE_URL: jdbc:postgresql://dbhost:5432/postgres
    ports:
      - 9002:9002
    extra_hosts:
      - "dbhost: ${DB_HOST}"
    networks:
      - wellness_swarm
    env_file:
      - .env
networks:
  wellness_swarm:
    external:
      name: wellness_swarm
Also, docker-compose config displays the compose file properly.
This may not be a direct answer to the question as it doesn't use env variables but what I found was that the extra_hosts block in the compose file was ignored in swarm mode if entered in the format above.
i.e. this works for me and puts entries in /etc/hosts in the container:
extra_hosts:
  retisdev: 10.48.161.44
  retistesting: 10.48.161.44
whereas when entered in the other format, it gets ignored when deploying as a service:
extra_hosts:
  - "retisdev=10.48.161.44"
  - "retistesting=10.48.161.44"
I think it's an ordering issue. The ${} variable you've got in the compose file runs during the YAML processing before the service definition is created. Then stack deploy processes the .env file for running in the container as envvars, but the YAML variable is needed first...
To fix that, you should run the docker-compose config command first to process the YAML, and then send its output to docker stack deploy.
docker-compose config will show you the output you're likely wanting.
Then do a pipe to get a one-liner.
docker-compose config | docker stack deploy -c - myapp
Note: Ideally you wouldn't use the extra_hosts, but rather put the envvar directly in the connection string. Your way seems like unnecessary complexity and isn't the usual way I see a connection string created.
e.g.
version: '3'
services:
  web:
    image: user-service
    deploy:
      labels:
        - the label
    build:
      context: ./
    environment:
      DATABASE_URL: jdbc:postgresql://${DB_HOST}:5432/postgres
    ports:
      - 9002:9002
    networks:
      - wellness_swarm
    env_file:
      - .env
networks:
  wellness_swarm:
    external:
      name: wellness_swarm
As I see in https://github.com/moby/moby/issues/29133, it seems like this is by design: the compose command takes into consideration the environment variables mentioned in the .env file, whereas the deploy command ignores them :( Why is that so? Pretty lame reasons!
I have a Docker container that runs a simple web application. That container is linked to two other containers by Docker Compose with the following docker-compose.yml file:
version: '2'
services:
  mongo_service:
    image: mongo
    command: mongod
    ports:
      - '27017:27017'
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
  web:
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    # use the image from the Dockerfile in the cwd
    build: .
    ports:
      - '8000:8000'
Once the web container starts, I want to write some content to /bitnami/tomcat/data/ on the tomcat_service container. I tried just writing to that disk location from within the web container, but I am getting an exception:
No such file or directory: '/bitnami/tomcat/data/'
Does anyone know what I can do to be able to write to the tomcat_service container from the web container? I'd be very grateful for any advice others can offer on this question!
You have to use Docker volumes if you want one service to write to another service. If web writes to /someFolderName, the same files will exist in tomcat_service.
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    volumes:
      - my_shared_data:/bitnami/tomcat/data/
  web:
    volumes:
      - my_shared_data:/someFolderName
volumes:
  my_shared_data:
Data in volumes persists and will be available even the next time you re-create the Docker containers. You should always use Docker volumes when writing data from Docker containers.
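To sanity-check the shared volume, something like the following should work (a sketch using docker-compose exec, assuming the web image contains a shell):

# write a file from the web container into the shared volume...
docker-compose exec web sh -c 'echo hello > /someFolderName/hello.txt'

# ...and read it back from the tomcat_service container
docker-compose exec tomcat_service cat /bitnami/tomcat/data/hello.txt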