Here is my docker-compose file, mysql.yml:
# Use root/example as user/password credentials
version: '3'
services:
  db:
    image: mysql
    tty: true
    stdin_open: true
    command: --default-authentication-plugin=mysql_native_password
    container_name: db
    restart: always
    networks:
      - db
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: example1
    command: bash -c "apt update"
  adminer:
    image: adminer
    restart: always
    container_name: web
    networks:
      - db
    ports:
      - 8080:8080
    volumes:
      - ./data/db:/var/lib/mysql
networks:
  db:
    external: true
When I run this file with "docker-compose -f mysql.yml up -d" it starts working, but after 5 or 10 seconds it dies with exit code 0. Then it restarts due to the "restart: always" parameter.
I searched the internet for my problem and found some solutions.
The first one was adding the
tty: true
stdin_open: true
parameters, but they do not work. The container dies anyway.
The second one was:
entrypoint:
  - bash
  - -c
command:
  - |
    tail -f /dev/null
This solution works, but it overrides the default entrypoint, so my MySQL service does not work in the end.
Yes, I could concatenate entrypoints or create a Dockerfile (I actually want to keep all of this in a single file), but I don't think that's the right way, and I need some advice.
Thanks in advance!
When your Compose setup says:
command: bash -c "apt update"
This is the only thing the container does; this runs instead of the normal container process. Once that command completes (successfully) the container will exit (with status code 0).
In normal operation you shouldn't need to specify the command: for a container; the Dockerfile will have a CMD line that provides a useful default. (The notable exception is a setup where you have both a Web server and a background worker sharing substantial code, so you can set CMD to run, say, the Flask application but override command: to run a Celery worker.)
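As a rough sketch of that exception (the image name, module names, and Celery invocation here are invented for illustration, not taken from the question):

services:
  web:
    image: myapp       # the Dockerfile CMD runs the Flask app by default
  worker:
    image: myapp       # same image, same code
    command: celery -A myapp.tasks worker --loglevel=info   # only this service overrides command: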
Many of the other options you include in the docker-compose.yml file are unnecessary. You can safely remove tty:, stdin_open:, container_name:, and networks: with no ill effects. (You can configure the Compose-provided default network if you specifically need containers running on a pre-created network.)
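If you really do need the containers on the pre-created db network, one option (a sketch, not required for this setup) is to point the Compose-provided default network at it instead of listing networks: on every service:

networks:
  default:
    external:
      name: db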
The comments hint at trying to run package updates at container startup time. I'd echo @xdhmoore's comment here: you should only run APT or similar package managers during an image build, never on a running container. (You don't want your application startup to fail because a Debian mirror is down, or because an incompatible update has gotten deployed.)
The standard Docker Hub images generally update fairly frequently, especially if you're not pinning to a specific patch release. If you run
docker-compose pull
docker-compose up
it will ask Docker Hub for a newer version of the image, and recreate the container on it if needed.
The standard Docker Hub packages also frequently download and install the thing they're packaging outside their distribution's package manager system, so running an upgrade isn't necessarily useful.
If you must, though, the best way to do this is to write a minimal Dockerfile
FROM mysql
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get upgrade --assume-yes
and reference it in the docker-compose.yml file
services:
  db:
    build: .
    # replacing the image: line
    # do NOT leave `image: mysql` behind
Related
I am working on my django + celery + docker-compose project.
Problem
I changed Django code
The update only takes effect after docker-compose up --build
How can I enable code updates without a rebuild?
I found this answer, Developing with celery and docker, but didn't understand how to apply it.
docker-compose.yml
version: '3.9'
services:
  django:
    build: ./project # path to Dockerfile
    command: sh -c "
      gunicorn --bind 0.0.0.0:8000 core_app.wsgi"
    volumes:
      - ./project:/project
      - ./project/static:/project/static
      - media-volume:/project/media
    expose:
      - 8000
  celery:
    build: ./project
    command: celery -A documents_app worker --loglevel=info
    volumes:
      - ./project:/usr/src/app
      - media-volume:/project/media
    depends_on:
      - django
      - redis
  .........
volumes:
  pg_data:
  static:
  media-volume:
Updating code without a rebuild is achievable, and it is best practice when working with containers; otherwise it takes too much time and effort to create a new image every time you change the code.
The most popular way of doing this is to mount your code directory into the container using one of the two methods below.
In your docker-compose.yml
services:
  web:
    volumes:
      - ./codedir:/app/codedir # where 'codedir' is your code directory
In the CLI, when starting a new container
$ docker run -it --mount "type=bind,source=$(pwd)/codedir,target=/app/codedir" celery bash
So you're effectively mounting the directory that your code lives in on your computer inside the /app/codedir directory of the Celery container. Now you can change your code and...
the local directory overwrites the one from the image when the container is started. You only need to build the image once and use it until the installed dependencies or OS-level package versions need to be changed. Not every time your code is modified. - Quoted from this awesome article
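One caveat (an addition, assuming the gunicorn and Celery commands from the question): the bind mount puts the new code inside the container, but a running gunicorn or Celery worker will not pick it up until the process restarts. For development you can enable auto-reload in an override file, for example:

# docker-compose.override.yml (development only; paths come from the question's compose file)
services:
  django:
    command: sh -c "
      gunicorn --reload --bind 0.0.0.0:8000 core_app.wsgi"
  celery:
    # Celery has no reliable built-in autoreload; restart the worker after changes,
    # e.g. docker-compose restart celery, or wrap it with a file watcher.
    command: celery -A documents_app worker --loglevel=info

docker-compose reads docker-compose.override.yml automatically on top of docker-compose.yml, so a plain docker-compose up picks these settings up.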
I have two problems with a Flask app in Docker. The application works slowly and freezes after finishing the last request (for example: the first route works fine, but clicking the next link/page freezes the app; if I go to the homepage via the URL and load the page again it works OK). Outside Docker the app works very fast.
The second problem is that Docker does not sync the files in the container after I change them.
# Dockerfile
FROM python:3.9
# set work directory
WORKDIR /base
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update
RUN pip install --upgrade pip
COPY ./requirements.txt /base/requirements.txt
COPY ./base_app.py /base/base_app.py
COPY ./config.py /base/config.py
COPY ./certs/ /base/certs/
COPY ./app/ /base/app/
COPY ./tests/ /base/tests/
RUN pip install -r requirements.txt
# docker-compose
version: '3.3'
services:
  web:
    build: .
    command: tail -f /dev/null
    volumes:
      - ${PWD}/app/:/usr/src/app/
    networks:
      - flask-network
    ports:
      - 5000:5000
    depends_on:
      - flaskdb
  flaskdb:
    image: postgres:13-alpine
    volumes:
      - ${PWD}/postgres_database:/var/lib/postgresql/data/
    networks:
      - flask-network
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    ports:
      - "5432:5432"
    restart: always
networks:
  flask-network:
    driver: bridge
You have a couple of significant errors in the code you show.
The first problem is that your application doesn't run at all: the Dockerfile is missing the CMD line that tells Docker what to run, and you override it in the Compose setup with a meaningless tail command. You should generally set this in the Dockerfile:
CMD ["./base_app.py"]
You can remove most of the Compose settings you have. You do not need command: (it's in the Dockerfile), volumes: (what you have is ineffective and the code is in the image anyway), or networks: (Compose provides a network named default; delete all of the networks: blocks in the file).
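Putting that together, a minimal docker-compose.yml for this setup might look like the sketch below (it assumes the CMD above is in the Dockerfile and that the app listens on port 5000):

version: '3.3'
services:
  web:
    build: .
    ports:
      - 5000:5000
    depends_on:
      - flaskdb
  flaskdb:
    image: postgres:13-alpine
    volumes:
      - ./postgres_database:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    ports:
      - "5432:5432"
    restart: always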
Second problem is docker not synch files in container after change files.
I don't usually recommend trying to do actual development in Docker. You can tell Compose to just start the database
docker-compose up -d flaskdb
and then you can access it from the host (PGHOST=localhost, PGPORT=5432). This means you can use an ordinary non-Docker Python virtual environment for development.
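A sketch of that hybrid workflow (the virtual-environment commands and the way the app reads its database settings are assumptions, not taken from the question):

# start only the database in Docker
docker-compose up -d flaskdb

# develop the Flask app directly on the host
python -m venv venv
. venv/bin/activate
pip install -r requirements.txt
# point the app at the published Postgres port, however it reads its settings
PGHOST=localhost PGPORT=5432 python base_app.py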
If you do want to try to use volumes: to simulate a live development environment (you talk about performance; this specific path can be quite slow on non-Linux hosts) then you need to make sure the left side of volumes: is the host directory with your code (probably .), the right side is the container directory (your Dockerfile uses /base), and your Dockerfile doesn't rearrange, modify, or generate the files at all (the bind mount hides all of it).
# don't run the application in the image; use the Docker infrastructure
# to run something else
volumes:
  # v-------- left side: host path (matches COPY source directory)
  - .:/base
  #   ^^^^^-- right side: container path (matches WORKDIR/destination directory)
I have heard it said that
Docker compose is designed for development NOT for production.
But I have seen people use Docker Compose in production with bind mounts: they pull the latest changes from GitHub and they appear live in production without the need to rebuild. Others say that you need to COPY . . for production and rebuild.
But how does this work? In docker-compose.yaml you can specify depends_on, which doesn't start one container until the other is running. If I don't use docker-compose in production, then what about this? How would I push my docker-compose setup to production (I have 4 services / 4 images that I need to run)? With docker-compose up -d it is so easy.
How do I build each image individually?
How can I copy these images to my production server to run them (in the correct order)? I can't even find the built images on my machine anywhere.
This is my docker-compose.yaml file that works great for development
version: '3'
services:
  # Nginx client server
  nginx-client:
    container_name: nginx-client
    build:
      context: .
    restart: always
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
      - /var/www/node_modules
    networks:
      - app-network
  # MySQL server for the server side app
  mysql-server:
    image: mysql:5.7.22
    container_name: mysql-server
    restart: always
    tty: true
    ports:
      - "16427:3306"
    environment:
      MYSQL_USER: root
      MYSQL_ROOT_PASSWORD: BcGH2Gj41J5VF1
      MYSQL_DATABASE: todo
    volumes:
      - ./docker/mysql-server/my.cnf:/etc/mysql/my.cnf
    networks:
      - app-network
  # Nginx server for the server side app
  nginx-server:
    container_name: nginx-server
    image: nginx:1.17-alpine
    restart: always
    ports:
      - 49691:80
    volumes:
      - ./server:/var/www
      - ./docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d
    depends_on:
      - php-server
      - mysql-server
    networks:
      - app-network
  # PHP server for the server side app
  php-server:
    build:
      context: .
      dockerfile: ./docker/php-server/Dockerfile
    container_name: php-server
    restart: always
    tty: true
    environment:
      SERVICE_NAME: php
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./server:/var/www
      - ./docker/php-server/local.ini:/usr/local/etc/php/conf.d/local.ini
      - /var/www/vendor
    networks:
      - app-network
    depends_on:
      - mysql-server
# Networks
networks:
  app-network:
    driver: bridge
How do you build the docker images? I assume you don't plan on using a registry, therefore you'll have to:
1. give an image name to all services
2. build the docker images somewhere (a CI/CD server, locally, it does not really matter)
3. save the images in a file
4. zip the file
5. copy the zipped file to the remote server
6. on the server, unzip and load the images
I'd create a script for this. Something like this:
#!/bin/bash
set -e
docker-compose build
docker save -o images.tar $( grep "image: .*" docker-compose.yml | awk '{ print $2 }' )  # unquoted so each image name is passed as a separate argument
gzip images.tar
scp images.tar.gz myserver:~
ssh myserver ./load_images.sh
on myserver, the load_images.sh would look like this:
#!/bin/bash
if [ ! -f images.tar.gz ] ; then
    echo "no file"
    exit 1
fi
gunzip images.tar.gz
docker load -i images.tar
Then you'll have to create the docker commands to emulate the docker-compose configuration (I won't spell out every option here since it's nothing difficult, just boring). How do you simulate the depends_on? Well, you'll have to start each container individually, so you'll either prepare another script or you'll do it manually.
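Purely as a rough illustration (the network name, custom image names, and environment values below are assumptions based on the compose file in the question, not a complete translation):

#!/bin/bash
set -e

# one shared network replaces the Compose-provided network
docker network create app-network || true

# start the containers in dependency order to emulate depends_on
docker run -d --name mysql-server --network app-network \
    -e MYSQL_ROOT_PASSWORD=change-me mysql:5.7.22
docker run -d --name php-server --network app-network my-registry/php-server:latest
docker run -d --name nginx-server --network app-network -p 49691:80 nginx:1.17-alpine
docker run -d --name nginx-client --network app-network -p 28874:3000 my-registry/nginx-client:latest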
About using docker-compose on production:
There's not really a big issue with using docker-compose in production as long as you do it properly. E.g. some of my production setups tend to look like this:
docker-compose.yml
docker-compose.dev.yml
docker-compose.prd.yml
The devs will use docker-compose -f docker-compose.yml -f docker-compose.dev.yml $cmd while on production you'll use docker-compose -f docker-compose.yml -f docker-compose.prd.yml $cmd.
Taking your file as an example, I'd move all volumes, ports, tty and stdin_open subsections from docker-compose.yml to docker-compose.dev.yml. E.g.
the docker-compose.dev.yml would look like this:
version: '3'
services:
  nginx-client:
    stdin_open: true
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
      - /var/www/node_modules
  mysql-server:
    tty: true
    ports:
      - "16427:3306"
    volumes:
      - ./docker/mysql-server/my.cnf:/etc/mysql/my.cnf
  nginx-server:
    ports:
      - 49691:80
    volumes:
      - ./server:/var/www
      - ./docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d
  php-server:
    restart: always
    tty: true
    volumes:
      - ./server:/var/www
      - ./docker/php-server/local.ini:/usr/local/etc/php/conf.d/local.ini
      - /var/www/vendor
On production, docker-compose.prd.yml will have only the strictly required ports subsections, define a production environment file where the required passwords are stored (the file will only be on the production server, not in git), etc.
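For completeness, a docker-compose.prd.yml following that pattern might look something like the sketch below (the env_file name, the named volume, and the choice of published port are assumptions):

version: '3'
services:
  mysql-server:
    env_file:
      - ./prod.env        # real passwords live here, outside git
    volumes:
      - mysql-data:/var/lib/mysql
  nginx-server:
    ports:
      - 80:80             # the only port published in production
volumes:
  mysql-data: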
Actually, you have so many different approaches you can take.
Generally, docker-compose is used as a container-orchestration tool in development. There are several other production-grade container orchestration tools available on most of the popular hosting services like GCP and AWS. Kubernetes is by far the most popular and most commonly used.
Based on the services used in your docker-compose file, it is advisable not to use it directly in production. Running a mysql container can lead to issues with data loss, as containers are meant to be temporary. It is better to opt for a managed MySQL service like RDS instead. Similarly, nginx is better handled by whatever reverse-proxy/load-balancer service your hosting provider offers.
When it comes to building the images, you can utilise your CI/CD pipeline to build them from their respective Dockerfiles, push them to an image registry of your choice, and let your hosting service pick up the images and deploy them with the container-orchestration tool it provides.
If you need a lightweight production environment, using Compose is probably fine. Other answers here have hinted at heavier tools that have advantages like supporting multi-host clusters and zero-downtime deployments, but they are also much more involved.
One core piece missing from your description is an image registry. Docker Hub fits this role, if you want to use it; major cloud providers have one; even GitHub has a container registry now (for public repositories); or you can run your own. This addresses a couple of your problems: (2) you docker build the images locally (or on a dedicated continuous-integration system) and docker push them to the registry, then (3) you docker pull the images on the production system, or let Docker do it on its own.
A good practice that goes along with this is to give each build a unique tag, perhaps a date stamp or commit ID. This makes it very easy to upgrade (or downgrade) by changing the tag and re-running docker-compose up.
For this you'd change your docker-compose.yml like:
services:
  nginx-client:
    # No `build:`
    image: registry.example.com/nginx:${NGINX_TAG:-latest}
  php-server:
    # No `build:`
    image: registry.example.com/php:${PHP_TAG:-latest}
And then you can update things like:
docker build -t registry.example.com/nginx:20201101 ./nginx
docker build -t registry.example.com/php:20201101 ./php
docker push registry.example.com/nginx:20201101
docker push registry.example.com/php:20201101
ssh production-system.example.com \
    NGINX_TAG=20201101 PHP_TAG=20201101 docker-compose up -d
You can use multiple docker-compose.yml files to also use docker-compose build and docker-compose push for your custom images, with a development-only overlay file. There is an example in the Docker documentation.
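A sketch of what that development-only overlay might look like, reusing the hypothetical registry names from above (the build contexts are assumptions):

# docker-compose.override.yml (only used where you build images)
services:
  nginx-client:
    build:
      context: .
  php-server:
    build:
      context: .
      dockerfile: ./docker/php-server/Dockerfile

With this in place, docker-compose build and docker-compose push on the build machine use the build: sections, while the production host only ever sees the image: lines.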
Do not separately copy your code; it's contained in the image. Do not bind-mount local code over the image code. Especially do not use an anonymous volume to hold libraries, since this will completely ignore any updates in the underlying image. These are good practices in development too, since if you replace everything interesting in an image with volume mounts then it doesn't really have any relation to what you're running in production.
You will need to separately copy the configuration files you reference and the docker-compose.yml itself to the target system, and take responsibility for backing up the database data.
Finally, I'd recommend removing unnecessary options from the docker-compose.yml file (don't manually specify container_name:, use the Compose-provided default network, prefer specifying the command: in an image, and so on). That's not essential but it can help trim down the size of the YAML file.
I'm taking over a website https://www.funfun.io. Unfortunately, I cannot reach the previous developer anymore.
This is an AngularJS+Node+Express+MongoDB application. He decided to use bitnami+docker+nginx on the server. Here is docker-compose.yml:
version: "3"
services:
funfun-node:
image: funfun
restart: always
build: .
environment:
- MONGODB_URI=mongodb://mongodb:27017/news
env_file:
- ./.env
depends_on:
- mongodb
funfun-nginx:
image: funfun-nginx
restart: always
build:
context: .
dockerfile: Dockerfile.nginx
ports:
- "3000:8443"
depends_on:
- funfun-node
mongodb:
image: mongo:3.4
restart: always
volumes:
- "10studio-mongo:/data/db"
ports:
- "27018:27017"
networks:
default:
external:
name: 10studio
volumes:
10studio-mongo:
driver: local
Dockerfile.nginx:
FROM bitnami/nginx:1.16
COPY ./funfun.io /opt/bitnami/nginx/conf/server_blocks/default.conf
COPY ./ssl/MyCompanyLocalhost.cer /opt/MyCompanyLocalhost.cer
COPY ./ssl/MyCompanyLocalhost.pvk /opt/MyCompanyLocalhost.pvk
Dockerfile:
FROM node:12
RUN npm install -g yarn nrm --registry=https://registry.npm.taobao.org && nrm use cnpm
COPY ./package.json /opt/funfun/package.json
WORKDIR /opt/funfun
RUN yarn
COPY ./ /opt/funfun/
CMD yarn start
On my local machine, I can use npm start to test the website in a web browser.
I have access to the Ubuntu server, but I'm new to bitnami+docker+nginx, so I have the following questions:
In the command line of Ubuntu server, how could I check if the service is running (besides launching the website in a browser)?
How could I shut down and restart the service?
Previously, without docker, we could start mongodb by sudo systemctl enable mongod. Now, with docker, how could we start mongodb?
First of all, to deploy the services mentioned in the compose file locally, you should run one of the commands below:
docker-compose up
docker-compose up -d # in the background
After running the above command, the Docker containers will be created and available on your machine.
To list the running containers
docker ps
docker-compose ps
To stop containers
docker stop ${container name}
docker-compose stop
mongodb is part of the docker-compose file and it will be running once you start other services. It will also be restarted automatically in case it crashes or you restarted your machine.
One final note, since you are using external networks you may need to create the network before starting the services.
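For example (the network name comes from the compose file above):

docker network create 10studio
docker-compose up -d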
1.
docker-compose ps will give you the state of your containers
2.
docker-compose stop will stop your containers, keeping their state; then you may start them as they were using docker-compose up
docker-compose kill will force-stop your containers; docker-compose down will stop and remove them
docker-compose restart will restart your containers
3.
By declaring your mongodb service with the official mongo image, your container starts when you run docker-compose up without any other intervention.
Or you can add command: mongod --auth directly into your docker-compose.yml
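That would look like this in the compose file (a sketch only; enabling --auth also means you have to create users first):

mongodb:
  image: mongo:3.4
  restart: always
  command: mongod --auth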
The official Docker documentation is very detailed and helps a lot with all of this; keep looking at it: https://docs.docker.com/compose/
I am quite new to docker but am trying to use docker compose to run automation tests against my application.
I have managed to get docker compose to run my application and run my automation tests; however, at the moment my application is running on localhost, when I need it to be reachable at a specific domain, example.com.
From research into docker it seems you should be able to hit the application on the hostname by setting it within links, but I still don't seem to be able to.
Below is the code for my docker compose files...
docker-compose.yml
abc:
  build: ./
  command: run container-dev
  ports:
    - "443:443"
  expose:
    - "443"
docker-compose.automation.yml
tests:
  build: test/integration/
  dockerfile: DockerfileUIAuto
  command: sh -c "Xvfb :1 -screen 0 1024x768x16 &>xvfb.log && sleep 20 && DISPLAY=:1.0 && ENVIRONMENT=qa BASE_URL=https://example.com npm run automation"
  links:
    - abc:example.com
  volumes:
    - /tmp:/tmp/
and am using the following command to run...
docker-compose -p tests -f docker-compose.yml -f docker-compose.automation.yml up --build
Is there something I'm missing to map example.com to localhost?
If the two containers are on the same Docker internal network, Docker will provide a DNS service where one can talk to the other by just its container name. As you show this with two separate docker-compose.yml files it's a little tricky, because Docker Compose wants to isolate each file into its own separate mini-Docker world.
The first step is to explicitly declare a network in the "first" docker-compose.yml file. By default Docker Compose will automatically create a network for you, but you need to control its name so that you can refer to it from elsewhere. This means you need a top-level networks: block, and also to attach the container to the network.
version: '3.5'
networks:
  abc:
    name: abc
services:
  abc:
    build: ./
    command: run container-dev
    ports:
      - "443:443"
    networks:
      abc:
        aliases:
          - example.com
Then in your test file, you can import that as an external network.
version: '3.5'
networks:
  abc:
    external: true
    name: abc
services:
  tests:
    build:
      context: test/integration/
      dockerfile: DockerfileUIAuto
    command: sh -c "Xvfb :1 -screen 0 1024x768x16 &>xvfb.log && sleep 20 && npm run automation"
    environment:
      DISPLAY: ":1.0"
      ENVIRONMENT: qa
      BASE_URL: "https://example.com"
    networks:
      - abc
Given the complexity of what you're showing for the "test" container, I would strongly consider running it not in Docker, or else writing a shell script that launches the X server, checks that it actually started, and then runs the test. The docker-compose.yml file isn't the only tool you have here.
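For example, a rough sketch of such a wrapper script (the file name and the xdpyinfo readiness check are assumptions; xdpyinfo would have to be installed in the test image):

#!/bin/sh
# run-tests.sh: start a virtual X server, wait for it, then run the suite
set -e

Xvfb :1 -screen 0 1024x768x16 >xvfb.log 2>&1 &

# wait until the display actually accepts connections instead of sleeping blindly
for i in $(seq 1 30); do
    if DISPLAY=:1.0 xdpyinfo >/dev/null 2>&1; then
        break
    fi
    sleep 1
done

DISPLAY=:1.0 ENVIRONMENT=qa BASE_URL=https://example.com npm run automation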