I've got a very simple single-host docker compose setup:
version: "3"
services:
bukofka:
image: picoglavar
restart: always
environment:
- PORT=8000
- MODEL=/models/large
volumes:
- glavar:/models
chlenix:
image: picoglavar
restart: always
environment:
- PORT=8000
- MODEL=/models/small
volumes:
- glavar:/models
# ... other containers ...
As you can see, it's only two services based on a single image, so nothing special really. When I run docker ps I can see these two services churning along. But then I open htop and see that each Python application appears at least four times; this is very surprising, because I haven't set up any kind of in-container replication, and I'm not running in any kind of swarm mode.
Why does this happen?
I'm a complete idiot. And colour blind too, apparently.
Note that the lines in green are threads, not processes: https://superuser.com/a/1496571/173193
per @nick-odell
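For anyone else: to make htop show only processes, press Shift+H (or toggle "Hide userland process threads" under F2 → Display options). A quick cross-check from the shell (grepping for python is just an example):

ps -e | grep -c python    # one line per process
ps -eLf | grep -c python  # one line per thread (LWP)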
I have a Portainer stack running one container. Let's use microbin as an example.
The docker-compose yaml looks like this:
---
version: "3"
services:
  paste:
    image: danielszabo99/microbin:latest
    container_name: microbin
    restart: always
    ports:
      - "8525:8080"
    volumes:
      - /mnt/docker_volumes/microbin-data:/app/pasta_data
This particular container is hosted on Docker Hub, and the maintainer provides examples of command line arguments that can be appended to the container's command to activate various features easily. One example would be:
--no-listing
Disables the /pastalist endpoint, essentially making all pastas private.
So this brings me to my issue. I don't want to maintain my own custom Dockerfile, and in the past I have always inserted environment variables into the docker-compose YAML to enable features like this.
An example: I have a stack running for Authentik (an SSO/SAML/IdP gateway with a pretty web interface). You can see the environment: section and the variables I am setting.
server:
  image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2022.5.3}
  restart: unless-stopped
  command: server
  environment:
    AUTHENTIK_REDIS__HOST: redis
    AUTHENTIK_POSTGRESQL__HOST: postgresql
    AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
    AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
    AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
    AUTHENTIK_ERROR_REPORTING__ENABLED: "true"
    # WORKERS: 2
  volumes:
    - ./media:/media
    - ./custom-templates:/templates
    - geoip:/geoip
  env_file:
    - stack.env
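Side note: the ${PG_USER:-authentik} syntax above is Compose variable substitution, resolved when the file is parsed rather than inside the container; the value comes from the shell environment or a .env file next to the compose file, with the text after :- as the fallback. A hypothetical .env:

PG_USER=authentik
PG_PASS=change-me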
So, not knowing how the development side of building these containers and hosting them on Docker Hub works: is there a way for me to use these command line arguments for microbin as environment variables in my docker-compose YAML / stack configuration file, or am I going to have to wait for the maintainer to implement this as a feature?
Thanks for your help in advance.
You can pass command line arguments in your docker-compose.yml file using the command attribute. That assumes, of course, that the process started within the Docker image can deal with them, but that seems to be the case for your image and should generally be the case.
version: "3"
services:
paste:
image: danielszabo99/microbin:latest
container_name: microbin
restart: always
ports:
- "8525:8080"
volumes:
- /mnt/docker_volumes/microbin-data:/app/pasta_data
command: my command line --args here
See Docker Compose Reference - command for details.
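For the flag from your question, a minimal sketch (assuming the image's entrypoint is the microbin binary, so whatever you put in command is appended to it as arguments):

services:
  paste:
    image: danielszabo99/microbin:latest
    command: --no-listing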
I have two separate projects in two separate folders.
When I run one of them, the second one cannot run because of a port conflict.
The problem is with the Elasticsearch image.
The two docker-compose files follow:
# /home/folder_1/
version: '3'
services:
  elasticsearch_ci:
    image: elasticsearch:7.14.2
    restart: always
    expose:
      - 9200
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    env_file:
      - ./envs/ci.env
    container_name: elasticsearch_ci_pipeline
Second one:
# /home/folder_2/
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.14.2
    expose:
      - 9200
    volumes:
      - elastic_search_data_staging:/var/lib/elastic_search/data/
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
When I run docker ps, I see that the second Elasticsearch container was created, but it doesn't show its ports.
How can I solve the problem?
Update:
The problem is that in this situation my web application (Django-based) cannot connect to the second Elasticsearch instance.
Also, when I change the port number in the second docker-compose file for ES (for example, adding 9500 under expose), the exposed ports are still the defaults (9200, 9300) plus my new port (9500), and my web application cannot connect to any of them.
Finally I found what the problem was.
My server has only 4 GB of RAM, and when one Elasticsearch instance is running, other instances cannot start because the first one consumes most of the RAM.
If you want to run two separate instances of Elasticsearch, you should allow at least 6 GB of RAM per instance.
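Alternatively, you can cap each instance's JVM heap so that two fit into less RAM. A minimal sketch using the ES_JAVA_OPTS variable that the official Elasticsearch image honours (the heap sizes here are illustrative, not a recommendation):

services:
  elasticsearch:
    image: elasticsearch:7.14.2
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      # Pin the JVM heap so this instance doesn't claim most of the host's memory.
      - ES_JAVA_OPTS=-Xms512m -Xmx512m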
I have a Docker container which is brought up with the help of a docker-compose file along with a database container. I want to do this:
1. Keep the database container running
2. Schedule the container with my Python program to run daily, generate results, and stop again
This is my configuration file:
version: '3.7'
services:
  database:
    container_name: sql_database
    image: mysql:latest
    command: --init-file /docker-entrypoint-initdb.d/init.sql
    ports:
      - 13306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./backup.sql:/docker-entrypoint-initdb.d/init.sql
  python-container:
    container_name: python-container
    build: ./python_project
    command: python main.py
    depends_on:
      - database
    volumes:
      - myvol:/python_project/data
volumes:
  myvol:
Can someone please help me with this? Thanks!
I was just about to ask the same thing. It seems silly to keep a container going 24/7 just to run one job a day (in my case, certbot renew).
I think there may be a way to fake this using the restart_policy option with a long delay and no maximum retries, but I haven't tried it yet.
EDIT: Apparently restart_policy only works for swarm mode. Pity.
If the underlying container has a bash shell, you can set the command to run a loop with a delay, like this:
while true; do python main.py; sleep 1; done
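Wired into the compose file from the question, that could look like the sketch below (sleep 86400 approximates a daily run; this assumes the image provides a POSIX shell):

  python-container:
    container_name: python-container
    build: ./python_project
    # The container stays up; the program runs, then sleeps for a day.
    command: sh -c "while true; do python main.py; sleep 86400; done"
    depends_on:
      - database
    volumes:
      - myvol:/python_project/data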
I want to restart a container automatically if it crashes, and I am not sure how to go about doing this. I have a file docker-compose-deps.yml that has elasticsearch, redis, nats, and mongo. I run this in the terminal to set them up: docker-compose -f docker-compose-deps.yml up -d. After this I set up my containers by running docker-compose up -d. Is there a way to make these containers restart if they crash? I noticed that Docker has a built-in restart, but I don't know how to implement it.
After some feedback I added restart: always to my docker-compose file and my docker-compose-deps.yml file. Does this look correct? Is this how you would implement restart: always?
docker-compose sample
myproject-server:
  build: "../myproject-server"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5880:5880
    - 6971:6971
  volumes:
    - "../myproject-server/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
myproject-associate:
  build: "../myproject-associate"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5870:5870
  volumes:
    - "../myproject-associate/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
docker-compose-deps.yml sample
nats:
  image: nats
  container_name: nats
  restart: always
  ports:
    - 4222:4222
mongo:
  image: mongo
  container_name: mongo
  restart: always
  volumes:
    - "./data:/data"
  ports:
    - 27017:27017
If you're using Compose, it has a restart option analogous to the one on the docker run command, so you can use that. Here is a link to the documentation about this part:
https://docs.docker.com/compose/compose-file/
When you deploy, it depends where you deploy to. Most container clusters like Kubernetes, Mesos, or ECS have some configuration you can use to auto-restart your containers. If you don't use any of these tools, you are probably starting your containers manually, and can then just use the restart flag as you would locally.
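For reference, the same policy on plain docker run looks like this (using the nats image from your deps file as the example):

docker run -d --restart=always --name nats nats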
Looks good to me. What you want to understand when working with restart policies is what each one means. The always policy means that if the container stops for any reason, Docker automatically restarts it.
So if it stops for any reason, go ahead and restart it.
So why would you ever want to use always as opposed to, say, on-failure?
In some cases you might have a container that you always want to ensure is running, such as a web server. If you are running a public web application, chances are you want that server to be available 100% of the time.
So for a web application I expect you want always. On the other hand, if you are running a worker process that operates on a file and then exits naturally, that is a good use case for on-failure: the worker container may simply be finished processing the file, and you probably want to let it exit cleanly rather than have it restart.
That's where I would expect to use the on-failure policy. So it's not just about knowing the syntax, but about when to apply which policy and what each one means.
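A sketch contrasting the two policies, in the same v1 compose style as your samples (the worker service name and image are placeholders):

web:
  image: nginx
  # Long-running service: bring it back no matter how it stopped.
  restart: always
batch-worker:
  image: my-worker-image
  # One-shot job: restart only on a non-zero exit code.
  restart: on-failure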
I have a Docker-based system that comprises three containers:
1. The official PHP container, modified with some additional PEAR libs
2. mysql:5.7
3. alterrebe/postfix-relay (a Postfix container)
The official PHP container has a volume that is linked to the host system's code repository, which should in theory allow me to work on this application the same as I would if it were hosted "locally".
However, every time the system is brought up, I have to run
docker-compose stop && docker-compose up -d
in order to see the changes that I just made to the system. It's possible that I don't understand Docker correctly and this is by design, but stopping and starting the containers after every code change slows down development substantially. Can anyone tell me what I am doing wrong (if anything)? Thanks in advance.
My docker-compose.yml is below (with variables and whatnot hidden, of course)
web:
  build: .
  links:
    - mysql
    - mailrelay
  environment:
    - HIDDEN_VAR=placeholder
    - ABC_ENV=development
  volumes:
    - ./html/:/var/www/html/
  ports:
    - "0.0.0.0:80:80"
mysql:
  image: mysql:5.7
  environment:
    - MYSQL_ROOT_PASSWORD=abcdefg
    - MYSQL_DATABASE=thedatabase
  volumes:
    - .:/db/:ro
mailrelay:
  hostname: mailrelay
  image: alterrebe/postfix-relay
  ports:
    - "25:25"
  environment:
    - EXT_RELAY_HOST=relay.relay.com
    - EXT_RELAY_PORT=25
    - SMTP_LOGIN=CLASSIFIED
    - SMTP_PASSWORD=ABCDEFGHIK
    - ACCEPTED_NETWORKS=172.0.0.0/8
Eventually I just started running
docker stop {{ container name }} && docker start {{ container name }}
every time instead of docker-compose. Using Docker directly instead of docker-compose is super fast (under a second, as opposed to over a minute), so it stopped being a big deal.
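For what it's worth, Compose has an equivalent single-service shortcut that restarts the container without recreating it:

docker-compose restart web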