I have a Portainer stack running one container. Let's use microbin as an example.
The docker-compose yaml looks like this:
---
version: "3"
services:
paste:
image: danielszabo99/microbin:latest
container_name: microbin
restart: always
ports:
- "8525:8080"
volumes:
- /mnt/docker_volumes/microbin-data:/app/pasta_data
This particular container is hosted on docker hub, and the maintainer provides examples of command line arguments that can be appended to the dockerfile to activate various features easily. One example would be:
--no-listing
Disables the /pastalist endpoint, essentially making all pastas private.
So this brings me to my issue: I don't want to maintain my own custom Dockerfile, and in the past I have always inserted environment variables into the docker-compose YAML to enable features like this.
An example would be the stack I have running for Authentik (an SSO/SAML/IdP gateway with a pretty web interface). You can see the "environment:" section and the variables I am setting.
server:
  image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2022.5.3}
  restart: unless-stopped
  command: server
  environment:
    AUTHENTIK_REDIS__HOST: redis
    AUTHENTIK_POSTGRESQL__HOST: postgresql
    AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
    AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
    AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
    AUTHENTIK_ERROR_REPORTING__ENABLED: "true"
    # WORKERS: 2
  volumes:
    - ./media:/media
    - ./custom-templates:/templates
    - geoip:/geoip
  env_file:
    - stack.env
So, not knowing how the development side of building these containers and hosting them on Docker Hub works: is there a way for me to use these command line arguments for microbin as environment variables in my docker-compose YAML / stack configuration file, or am I going to have to wait for the maintainer to implement this as a feature?
Thanks for your help in advance.
You can pass command line arguments in your docker-compose.yml file using the command attribute. That assumes of course the process started within the Docker image can deal with those, but that seems to be the case for your image and should generally be the case.
version: "3"
services:
paste:
image: danielszabo99/microbin:latest
container_name: microbin
restart: always
ports:
- "8525:8080"
volumes:
- /mnt/docker_volumes/microbin-data:/app/pasta_data
command: my command line --args here
See Docker Compose Reference - command for details.
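Applied to the microbin example above, a minimal sketch of that last line (assuming microbin's entrypoint accepts the documented flags as plain arguments, which the maintainer's Docker Hub examples suggest) would be:
    command: --no-listing
That would disable the /pastalist endpoint without you having to maintain a custom Dockerfile; any other documented flag could be appended to the same command line.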
Related
Hi guys, and excuse my English. I'm using Docker Swarm. When I attempt to deploy a Docker application with this command
docker stack deploy -c docker-compose.yml -c docker-compose.prod.yml chatappapi
it shows the following error: services.chat-app-api Additional property pull_policy is not allowed
Why does this happen?
How do I solve it?
docker-compose.yml
version: "3.9"
services:
nginx:
image: nginx:stable-alpine
ports:
- "5000:80"
volumes:
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
chat-app-api:
build: .
image: username/myapp
pull_policy: always
volumes:
- ./:/app
- /app/node_modules
environment:
- PORT= 5000
- MAIL_USERNAME=${MAIL_USERNAME}
- MAIL_PASSWORD=${MAIL_PASSWORD}
- CLIENT_ID=${CLIENT_ID}
- CLIENT_SECRET=${CLIENT_SECRET}
- REDIRECT_URI=${REDIRECT_URI}
- REFRESH_TOKEN=${REFRESH_TOKEN}
depends_on:
- mongo-db
mongo-db:
image: mongo
environment:
MONGO_INITDB_ROOT_USERNAME: 'username'
MONGO_INITDB_ROOT_PASSWORD: 'password'
ports:
- "27017:27017"
volumes:
- mongo-db:/data/db
volumes:
mongo-db:
docker-compose.prod.yml
version: "3.9"
services:
nginx:
ports:
- "80:80"
chat-app-api:
deploy:
mode: replicated
replicas: 8
restart_policy:
condition: any
update_config:
parallelism: 2
delay: 15s
build:
context: .
args:
NODE_ENV: production
environment:
- NODE_ENV=production
- MONGO_USER=${MONGO_USER}
- MONGO_PASSWORD=${MONGO_PASSWORD}
- MONGO_IP=${MONGO_IP}
command: node index.js
mongo-db:
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
Information
docker-compose version 1.29.2
Docker version 20.10.8
Ubuntu 20.04.2 LTS
Thanks in advance.
Your problem line is in docker-compose.yml
chat-app-api:
  build: .
  image: username/myapp
  pull_policy: always # <== this is the bad line, delete it
The docker compose file reference doesn't have any pull_policy in the API, because:
If the image does not exist, Compose attempts to pull it, unless you have also specified build, in which case it builds it using the specified options and tags it with the specified tag.
I think pull_policy used to be a thing for compose? Maybe keep the latest API documentation open to refer to/search through whilst you're developing (things can and do change fairly frequently with compose).
If you want to ensure that the most recent version of an image is pulled onto all servers in a swarm then run docker compose -f ./docker-compose.yml pull on each server in turn (docker stack doesn't have functionality to run this over an entire swarm yet).
As an aside: I wouldn't combine two .yml files with a single docker stack command without a very good reason to do so.
You are mixing docker-compose and docker swarm ideas up in the same files:
It is probably worth breaking your project up into 3 files:
docker-compose.yml
This would contain just the basic service definitions common to both compose and swarm.
docker-compose.override.yml
Conveniently, both docker-compose and docker compose should read this file automatically. This file should contain any "ports:", "depends_on:", "build:" directives, and any convenience volumes used for development.
stack.production.yml
The override file to be used in stack deployments should contain (a) everything understood by swarm and not compose, and (b) everything required for production.
Here you would use configs: or even secrets: rather than volume mappings to local folders to inject content into containers. Rather than relying on ports: directives, you would install an ingress router on the swarm such as Traefik, and so on.
With this arrangement, docker compose can be used to develop and build your compose stack locally, and docker stack deploy won't have to be exposed to compose syntax it doesn't understand.
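A minimal sketch of that workflow, using the file names suggested above and the stack name from the question (the exact merge command is an assumption; adjust to taste):
# local development: docker compose reads docker-compose.yml and docker-compose.override.yml automatically
docker compose up --build
# swarm deployment: merge the base file with the production override, resolve it, and pipe the result into the stack deploy
docker compose -f docker-compose.yml -f stack.production.yml config | docker stack deploy -c - chatappapi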
pull_policy is in the latest version of docker-compose.
To upgrade your docker-compose refer to:
https://docs.docker.com/compose/install/
The spec for more info:
https://github.com/compose-spec/compose-spec/blob/master/spec.md#pull_policy
I have two django apps. Both are run as part of two different docker-compose files.
App 1 docker-compose.yml file:
services:
  django:
    build: .
    command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
    ports:
      - "8013:8000"
    volumes:
      - ./:/app
    depends_on:
      - db
App 2 docker-compose.yml file:
services:
  django:
    build: .
    container_name: "web"
    command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
    ports:
      - "8003:8000"
    volumes:
      - ./:/app
    depends_on:
      - db
So basically, my goal is to call App 2's Django endpoint from App 1. To do this, in App 1's code, I use the URL http://web:8003/app2_endpoint
Also, I have ALLOWED_HOSTS=['*'] in both projects.
Yet, I end up with a Max retries exceeded error.
I also came across this question, but I failed to figure out the solution for my case.
If you don't specify a custom Docker network in your compose files, each compose file creates a separate network for itself, so containers from the two compose projects can't see each other.
The solution is to use the same Docker network in both compose files. Something like:
services:
  ...
networks:
  default:
    external: true
    name: YOUR_DOCKER_NETWORK
And add it to the other compose file too.
This tells compose to use an external Docker network, named YOUR_DOCKER_NETWORK, as the default network.
Note that you should create this network by yourself, because it's external:
docker network create YOUR_DOCKER_NETWORK
You can also use custom networks
Docs at https://docs.docker.com/compose/networking/
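Applied to the two Django apps from the question, a minimal sketch (the network name shared_net is just an example) would be to create the network once and add the same block to both compose files:
docker network create shared_net

networks:
  default:
    external: true
    name: shared_net

Once both containers share that network, App 1 should call App 2 by its container name and container port, i.e. http://web:8000/app2_endpoint, not the published host port 8003.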
In my docker-compose I have multiple client and worker classes, specifically a client of type A, one of type B and another of type C, with their respective worker classes. Every time I execute docker-compose I need to use the option --scale a total of 6 times if I want to run a number of containers different from 1 for each class: --scale cliA=2 --scale cliB=3 [...]. Is there an alternative to having separate classes in my docker-compose.yml, and instead a unified class for a client which could be scaled differently for each class (and the same for the worker)?
I have reasoned about it, and I have come to the conclusion that it may be possible to do something like this (check the code at the end of the question for reference on the cli class):
cli:
  image: client
  // More stuff
  scale: 4
  environment:
    CLASSID=A
  scale: 2
  environment:
    CLASSID=B
  // [...]
This docker-compose.yml would be able to create classes as needed without the need of calling --scale every time. However, I have checked the reference for docker-compose but I haven't found anything that helps me. I found an insightful post which mentioned that I could use docker-swarm in order to accomplish this task, but I think it's out of the scope of the subject (this question is trying to answer an exercise).
Here is the code for the docker-compose.yml file:
version: '2'
services:
  cliA:
    image: client
    build: ./client/
    links:
      - bro
    environment:
      - BROKER_URL=tcp://bro:9998
      - CLASSID=A
  // Similar description for cliB, cliC; only CLASSID changes
  worA:
    image: worker
    build: ./worker/
    links:
      - bro
    environment:
      - BROKER_URL=tcp://bro:9999
      - CLASSID=A
  // Similar description for worB, worC; only CLASSID changes
  bro:
    image: broker
    build: ./broker/
    expose:
      - "9998"
      - "9999"
Any help is appreciated.
Services are a definition of how to run a container, along with all of its settings. If you need multiple containers running with different settings, you need different services. You can use YAML anchor and alias syntax to effectively copy one service to another and then apply changes, e.g.:
version: "3"
services:
app1: &app1
image: app
environment:
app: 1
app2:
<<*app1
environment:
app: 2
Once you have broken your problem into multiple services, you can follow the advice from your linked question.
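As a sketch of how that could look for the client services from the question (note that the YAML merge key copies the mapping, but an overridden list such as environment: replaces the anchored one entirely, so BROKER_URL has to be repeated):
version: '2'
services:
  cliA: &cli
    image: client
    build: ./client/
    links:
      - bro
    environment:
      - BROKER_URL=tcp://bro:9998
      - CLASSID=A
  cliB:
    <<: *cli
    environment:
      - BROKER_URL=tcp://bro:9998
      - CLASSID=B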
I also see the possibility of using variables in your compose file, e.g.
version: '2'
services:
  cli:
    image: client
    build: ./client/
    links:
      - bro
    environment:
      - BROKER_URL=tcp://bro:9998
      - CLASSID=${CLASSID}
    scale: ${SCALE}
And then you could deploy with various environment files:
$ cat envA.sh
CLASSID=A
SCALE=4
$ cat envB.sh
CLASSID=B
SCALE=2
$ set -a && . ./envA.sh && set +a && docker-compose -p projA up
$ set -a && . ./envB.sh && set +a && docker-compose -p projB up
You can scale your containers up or down with the following command:
docker-compose up --scale service=1 -d
Here, service=1 specifies how many containers you want to run for that service, and -d runs them in the background.
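Applied to the service names from the question, for example:
docker-compose up -d --scale cliA=2 --scale cliB=3 --scale worA=2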
This question already has an answer here:
Build a single image based on docker compose containers
(1 answer)
Closed 9 months ago.
I have an application composed of a front end, a back end and a mongodb database, each of these dockerized in a container. When I build them with docker compose I have as many images as parts in my application (3).
Is there any way to build a single container from these 3 images and therefore a single image?
Thanks
You can write a single Dockerfile if you want to run your application as a single container. It will give you a single image as well.
I guess you could do this if you really wanted to. The preferred way is to use docker-compose for this. I would suggest that you create a docker-compose.yml file that sets up this chain:
nginx -> frontend (possibly with server-side rendering) -> backend -> mongodb
The idea behind docker-compose is to easily get that multi-container application up and running using a docker-compose.yml file, then you can just bring up the application with:
$ docker-compose up
You could set it up with something like this:
(This is a hypothetical docker-compose.yml file, but with your correct values it should work. Let me know if you have any questions.)
version: '2'
services:
  frontend-container:
    image: frontend:latest
    links:
      - backend-container
    environment:
      - DEBUG=True
      - BASE_HOST=http://backend-container:8000/
    restart: always
  backend-container:
    image: nodejs-backend:latest
    links:
      - mongodb
    environment:
      - NODE_ENV=production
      - BASE_HOST=http://django-container:8000/
    restart: always
  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - ./data/db:/data/db
    command: mongod --smallfiles --logpath=/dev/null
  nginx-container:
    image: nginx-container-custom-config:latest
    links:
      - frontend-container
    ports:
      - "80:80"
docker stack deploy isn't respecting the extra_hosts parameter in my compose file. When I do a simple docker-compose up, the entry is created in /etc/hosts; however, when I do docker deploy --compose-file docker-compose.yml myapp it ignores extra_hosts. Any insights?
Below is the docker-compose.yml:
version: '3'
services:
  web:
    image: user-service
    deploy:
      labels:
        - the label
    build:
      context: ./
    environment:
      DATABASE_URL: jdbc:postgresql://dbhost:5432/postgres
    ports:
      - 9002:9002
    extra_hosts:
      - "dbhost: ${DB_HOST}"
    networks:
      - wellness_swarm
    env_file:
      - .env
networks:
  wellness_swarm:
    external:
      name: wellness_swarm
The docker-compose config output also displays the compose file properly.
This may not be a direct answer to the question as it doesn't use env variables but what I found was that the extra_hosts block in the compose file was ignored in swarm mode if entered in the format above.
i.e. this works for me and puts entries in /etc/hosts in the container:
extra_hosts:
  retisdev: 10.48.161.44
  retistesting: 10.48.161.44
whereas when entered in the other format it gets ignored when deploying as a service
extra_hosts:
  - "retisdev=10.48.161.44"
  - "retistesting=10.48.161.44"
I think it's an ordering issue. The ${} variable you've got in the compose file runs during the YAML processing before the service definition is created. Then stack deploy processes the .env file for running in the container as envvars, but the YAML variable is needed first...
To fix that, you should use the docker-compose config command first, to process the YAML, and then use the output of that to send to the stack deploy.
docker-compose config will show you the output you're likely wanting.
Then do a pipe to get a one-liner.
docker-compose config | docker stack deploy -c - myapp
Note: Ideally you wouldn't use the extra_hosts, but rather put the envvar directly in the connection string. Your way seems like unnecessary complexity and isn't the usual way I see a connection string created.
e.g.
version: '3'
services:
  web:
    image: user-service
    deploy:
      labels:
        - the label
    build:
      context: ./
    environment:
      DATABASE_URL: jdbc:postgresql://${DB_HOST}:5432/postgres
    ports:
      - 9002:9002
    networks:
      - wellness_swarm
    env_file:
      - .env
networks:
  wellness_swarm:
    external:
      name: wellness_swarm
As I see from https://github.com/moby/moby/issues/29133, it seems like it is by design: the compose command takes into consideration the environment variables mentioned in the .env file, whereas the deploy command ignores them :( Why is that so? Pretty lame reasons!
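For completeness, a sketch of the workaround, combining the set -a sourcing trick and the docker-compose config pipe already shown earlier in this thread (the explicit sourcing only matters if your .env isn't picked up automatically from the working directory):
set -a && . ./.env && set +a && docker-compose config | docker stack deploy -c - myapp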