I have been working on a Docker environment for PHP development and I finally got it working the way I need. This environment relies on docker-compose and the config looks like:
version: '2'
services:
php-apache:
env_file:
- dev_variables.env
image: reynierpm/php55-dev
build:
context: .
args:
- PUID=1000
- PGID=1000
ports:
- "80:80"
extra_hosts:
- "dockerhost:xxx.xxx.xxx.xxx"
volumes:
- ~/var/www:/var/www
There are some configurations like extra_hosts and env_file that are giving me a headache. Why? Because I don't know if the image will work under such circumstances.
Let's say:
I have run docker-compose up -d and the image reynierpm/php55-dev with tag latest has been built
I have everything working as it should because I am setting the proper values in the docker-compose.yml file
I have logged in to my account and pushed the image to the repository: docker push reynierpm/php55-dev
What happens if tomorrow you clone the repository and try to run docker-compose up, but change the docker-compose.yml file to fit your settings? How does the image behave in this case? I mean, does it make sense to create/upload the image to Docker Hub if any time I run docker-compose up it will be built again due to the changes in the config file?
Maybe I am completely wrong and some magic happens behind the scenes, but I need to know if I am doing this right.
If people clone your git repository and do a docker-compose up -d, it will in fact build a new image. If you only want people to use your image from Docker Hub, drop the build section of docker-compose.yml and publish the compose file on your Docker Hub page. Below you can see the proposed docker-compose.yml.
Just paste this in your page:
version: '2'
services:
php-apache:
image: reynierpm/php55-dev
ports:
- "80:80"
environment:
DOCKERHOST: 'yourhostip'
PHP_ERROR_REPORTING: 'E_ALL & ~E_DEPRECATED & ~E_NOTICE'
volumes:
- ~/var/www:/var/www
If your env_file just has a couple of variables, it is better to declare them directly in the docker-compose.yml, as in the environment section above. It is also better to replace extra_hosts with an environment variable and, in your php.ini or wherever you use the extra host, reference the variable instead:
.....
xdebug.remote_host = ${DOCKERHOST}
.....
In your Dockerfile you can define a default value for this variable:
ENV DOCKERHOST=localhost
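The default can then be overridden at run time without rebuilding the image. A minimal sketch (the host IP here is just a placeholder):
# Pass the actual host IP when starting the container
docker run -d -e DOCKERHOST=192.168.1.10 -p 80:80 reynierpm/php55-dev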
Hope it helps
Regards
Related
I have heard it said that
Docker compose is designed for development NOT for production.
But I have seen people use Docker Compose in production with bind mounts: they pull the latest changes from GitHub and they appear live in production without the need to rebuild. But others say that you need to COPY . . for production and rebuild.
But how does this work? Because in docker-compose.yaml you can specify depends_on, which doesn't start one container until the other is running. If I don't use docker-compose in production, then what about this? How would I push my docker-compose setup to production (I have 4 services / 4 images that I need to run)? With docker-compose up -d it is so easy.
How do I build each image individually?
How can I copy these images to my production server and run them (in the correct order)? I can't even find the built images on my machine anywhere.
This is my docker-compose.yaml file that works great for development
version: '3'
services:
# Nginx client server
nginx-client:
container_name: nginx-client
build:
context: .
restart: always
stdin_open: true
environment:
- CHOKIDAR_USEPOLLING=true
ports:
- 28874:3000
volumes:
- ./client:/var/www
- /var/www/node_modules
networks:
- app-network
# MySQL server for the server side app
mysql-server:
image: mysql:5.7.22
container_name: mysql-server
restart: always
tty: true
ports:
- "16427:3306"
environment:
MYSQL_USER: root
MYSQL_ROOT_PASSWORD: BcGH2Gj41J5VF1
MYSQL_DATABASE: todo
volumes:
- ./docker/mysql-server/my.cnf:/etc/mysql/my.cnf
networks:
- app-network
# Nginx server for the server side app
nginx-server:
container_name: nginx-server
image: nginx:1.17-alpine
restart: always
ports:
- 49691:80
volumes:
- ./server:/var/www
- ./docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d
depends_on:
- php-server
- mysql-server
networks:
- app-network
# PHP server for the server side app
php-server:
build:
context: .
dockerfile: ./docker/php-server/Dockerfile
container_name: php-server
restart: always
tty: true
environment:
SERVICE_NAME: php
SERVICE_TAGS: dev
working_dir: /var/www
volumes:
- ./server:/var/www
- ./docker/php-server/local.ini:/usr/local/etc/php/conf.d/local.ini
- /var/www/vendor
networks:
- app-network
depends_on:
- mysql-server
# Networks
networks:
app-network:
driver: bridge
How do you build the Docker images? I assume you don't plan on using a registry, therefore you'll have to:
give an image name to all services
build the docker images somewhere (a CI/CD server, locally, it does not really matter)
save the images in a file
zip the file
copy the zipped file to the remote server
on the server, unzip and load
I'd create a script for this. Something like this:
#!/bin/bash
set -e
# Build all images defined in the compose file
docker-compose build
# Save every named image into a single tar archive
# (the image names must not be quoted into a single argument)
docker save -o images.tar $( grep "image: .*" docker-compose.yml | awk '{ print $2 }' )
gzip images.tar
# Ship the archive to the server and load it there
scp images.tar.gz myserver:~
ssh myserver ./load_images.sh
-----
on myserver, the load_images.sh would look like this:
#!/bin/bash
# Abort if the archive was not copied over
if [ ! -f images.tar.gz ] ; then
  echo "no file"
  exit 1
fi
gunzip images.tar.gz
# Load all the saved images into the local Docker daemon
docker load -i images.tar
Then you'll have to create the docker commands to emulate the docker-compose configuration (it's nothing difficult, just tedious). How do you simulate the depends_on? You'll have to start each container one by one, in order, so you'll either prepare another script or do it manually; a sketch follows below.
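A minimal sketch of what such a start-up script might look like, using the names from the compose file above (the PHP image name is a placeholder, since that service is built locally):
#!/bin/bash
set -e
# Recreate the network from the compose file
docker network create app-network
# Start the dependencies first to emulate depends_on
docker run -d --name mysql-server --network app-network \
  -e MYSQL_ROOT_PASSWORD=BcGH2Gj41J5VF1 -e MYSQL_DATABASE=todo mysql:5.7.22
docker run -d --name php-server --network app-network --restart always my-php-server-image
# nginx-server depends on php-server and mysql-server, so it starts last
docker run -d --name nginx-server --network app-network -p 49691:80 nginx:1.17-alpine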
About using docker-compose in production:
There's not really a big issue with using docker-compose in production as long as you do it properly. E.g. some of my production setups tend to look like this:
docker-compose.yml
docker-compose.dev.yml
docker-compose.prd.yml
The devs will use docker-compose -f docker-compose.yml -f docker-compose.dev.yml $cmd while on production you'll use docker-compose -f docker-compose.yml -f docker-compose.prd.yml $cmd.
Taking your file as an example, I'd move all volumes, ports, tty and stdin_open subsections from docker-compose.yml to docker-compose.dev.yml, e.g.
the docker-compose.dev.yml would look like this:
version: '3'
services:
nginx-client:
stdin_open: true
ports:
- 28874:3000
volumes:
- ./client:/var/www
- /var/www/node_modules
mysql-server:
tty: true
ports:
- "16427:3306"
volumes:
- ./docker/mysql-server/my.cnf:/etc/mysql/my.cnf
nginx-server:
ports:
- 49691:80
volumes:
- ./server:/var/www
- ./docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d
php-server:
restart: always
tty: true
volumes:
- ./server:/var/www
- ./docker/php-server/local.ini:/usr/local/etc/php/conf.d/local.ini
- /var/www/vendor
On production, the docker-compose.prd.yml will have only the strictly required port subsections, define a production environment file where the required passwords are stored (that file lives only on the production server, not in git), and so on; see the sketch below.
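A minimal sketch of what that docker-compose.prd.yml might look like (the env file name prod.env is an assumption):
version: '3'
services:
  mysql-server:
    env_file:
      - prod.env # holds MYSQL_ROOT_PASSWORD etc.; lives only on the server
  nginx-server:
    ports:
      - "80:80" # only the port the outside world actually needs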
There are many different approaches you can take.
Generally, docker-compose is used as a container-orchestration tool in development. There are several production-grade container orchestration tools available on most of the popular hosting services like GCP and AWS; Kubernetes is by far the most popular and most commonly used.
Based on the services used in your docker-compose file, it is advisable not to use it directly in production. Running a MySQL container can lead to data loss, as containers are meant to be ephemeral; it is better to opt for a managed MySQL service like RDS instead. Similarly, nginx is better replaced by whatever reverse-proxy/load-balancer service your hosting provider offers.
When it comes to building the images, you can use your CI/CD pipeline to build them from their respective Dockerfiles, push them to an image registry of your choice, and let your hosting service pick up the images and deploy them with the container-orchestration tool it provides.
If you need a lightweight production environment, using Compose is probably fine. Other answers here have hinted at more powerful tools, which have advantages like supporting multi-host clusters and zero-downtime deployments, but they are much more involved.
One core piece missing from your description is an image registry. Docker Hub fits this role, if you want to use it; major cloud providers have one; even GitHub has a container registry now (for public repositories); or you can run your own. This addresses a couple of your problems: (2) you docker build the images locally (or on a dedicated continuous-integration system) and docker push them to the registry, then (3) you docker pull the images on the production system, or let Docker do it on its own.
A good practice that goes along with this is to give each build a unique tag, perhaps a date stamp or commit ID. This makes it very easy to upgrade (or downgrade) by changing the tag and re-running docker-compose up.
For this you'd change your docker-compose.yml like:
services:
nginx-client:
# No `build:`
    image: registry.example.com/nginx:${NGINX_TAG:-latest}
php-server:
# No `build:`
    image: registry.example.com/php:${PHP_TAG:-latest}
And then you can update things like:
docker build -t registry.example.com/nginx:20201101 ./nginx
docker build -t registry.example.com/php:20201101 ./php
docker push registry.example.com/nginx:20201101
docker push registry.example.com/php:20201101
ssh production-system.example.com \
NGINX_TAG=20201101 PHP_TAG=20201101 docker-compose up -d
You can use multiple docker-compose.yml files to also use docker-compose build and docker-compose push for your custom images, with a development-only overlay file. There is an example in the Docker documentation.
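A minimal sketch of such a development-only overlay, reusing the build contexts from the question (docker-compose automatically merges docker-compose.override.yml on top of docker-compose.yml):
version: '3'
services:
  nginx-client:
    build:
      context: .
  php-server:
    build:
      context: .
      dockerfile: ./docker/php-server/Dockerfile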
Do not separately copy your code; it's contained in the image. Do not bind-mount local code over the image code. Especially do not use an anonymous volume to hold libraries, since this will completely ignore any updates in the underlying image. These are good practices in development too, since if you replace everything interesting in an image with volume mounts then it doesn't really have any relation to what you're running in production.
You will need to separately copy the configuration files you reference and the docker-compose.yml itself to the target system, and take responsibility for backing up the database data.
Finally, I'd recommend removing unnecessary options from the docker-compose.yml file (don't manually specify container_name:, use the Compose-provided default network, prefer specifying the command in the image via CMD rather than command: in the Compose file, and so on). That's not essential, but it can help trim down the size of the YAML file.
I have a Dockerfile to build my node container, it looks as follows:
FROM node:12.14.0
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 4500
CMD ["npm", "start"]
Based on this Dockerfile, I am using Docker Compose to run this container and link it to a mongo container so that it can refer to it as mongo-service. The docker-compose.yml looks as follows:
version: '3'
services:
backend:
container_name: docker-node-mongo-container
restart: always
build: .
ports:
- '4700:4500'
links:
- mongo-service
mongo-service:
container_name: mongo-container
image: mongo
ports:
- "27017:27017"
Expected behavior: Every time I make a new change to the project on my local computer, I want docker-compose to restart so that the new changes are reflected.
Current behavior: To make the new changes show up in docker-compose, I have to do docker-compose down and then delete the images. I am guessing that it has to rebuild the images. How do I make it so that whenever I make a change, a new image is built?
I understand that I need to use volumes. I am just failing to understand how. Could somebody please help me here?
When you make a change, you need to run docker-compose up --build. That will rebuild your image and restart containers as needed.
Docker has no facility to detect code changes, and it is not intended as a live-reloading environment. Volumes are not intended to hold code, and there are a couple of problems people run into attempting it (Docker file sync can be slow or inconsistent; putting a node_modules tree into an anonymous volume actively ignores changes to package.json; it ports especially badly to clustered environments like Kubernetes). You can use a host Node pointed at your Docker MongoDB for day-to-day development, and still use this Docker-based setup for deployment.
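For example, a day-to-day loop under that approach might look like this (the MONGO_URL variable name is an assumption about how the app reads its connection string):
# Start only the database container
docker-compose up -d mongo-service
# Run Node on the host, pointed at the containerized MongoDB
MONGO_URL=mongodb://localhost:27017 npm start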
In order for your Docker application to pick up local changes without a rebuild, you need to use Docker volumes.
Add into your docker-compose.yml file something like:
version: '3'
services:
backend:
container_name: docker-node-mongo-container
restart: always
build: .
ports:
- '4700:4500'
links:
- mongo-service
volumes:
- .:/usr/src/app
mongo-service:
container_name: mongo-container
image: mongo
ports:
- "27017:27017"
The volumes entry is simply saying: "Hey, map the current folder outside the container (the dot) to the working directory inside the container."
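One caveat: that bind mount will also hide the node_modules directory that npm install created inside the image. The usual workaround (seen in the compose files earlier on this page) is an extra anonymous volume:
volumes:
  - .:/usr/src/app
  - /usr/src/app/node_modules # keeps the image's node_modules from being hidden by the bind mount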
I'm trying to migrate my working Docker config files (Dockerfile and docker-compose.yml) so that the working local Docker configuration can be deployed through Docker Hub.
Tried multiple config file settings.
I have the following Dockerfile and, below, the docker-compose.yml that uses it. When I run "docker-compose up", I successfully get two containers running that can either be accessed independently or talk to each other via the "db" service name and the database "container_name". So far so good.
What I cannot figure out is how to take this configuration (the files below) and modify them so I get the same behavior on docker hub. Being able to have working local containers is necessary for development, but others need to use these containers on docker hub so I need to deploy there.
--
Dockerfile:
FROM tomcat:8.0.20-jre8
COPY ./services.war /usr/local/tomcat/webapps/
--
docker-compose.yml:
version: '3'
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "8089:8080"
volumes:
- /Users/user/Library/apache-tomcat-9.0.7/conf/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml
depends_on:
- db
db:
image: mysql:5.7
container_name: test-mysql-docker
ports:
- 3307:3306
volumes:
- ./ZipCodeLookup.sql:/docker-entrypoint-initdb.d/ZipCodeLookup.sql
environment:
MYSQL_ROOT_PASSWORD: "thepass"
I expect to be able to publish these containers' images to Docker Hub, but I cannot see how these files need to be modified to get that. Thanks.
Add an image attribute.
app:
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "8089:8080"
  image: docker-hub-username/app
Replace "docker-hub-username" with your username. Then run docker-compose push app
Let's say we have the following docker-compose.yml:
version: '3'
services:
db:
image: "postgres"
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=mysecretpassword
web:
build: web
depends_on: [ db ]
ports:
- "80:80"
The first service, db, just runs a container with the official postgres image from Docker Hub.
The second service, web, first builds a new image based on the Dockerfile in a folder also called web, then runs a container with that image.
While developing, we now can (repeatedly) make changes to whatever is in the web folder, then run docker-compose up --build to run our app locally.
Let's say we now want to deploy to production. My understanding is that docker-compose.yml can now be used to "define a stack in Docker's swarm mode" (see this answer, for instance). However, for the build step of the web service, Docker's compose file documentation states that
This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file. The docker stack command accepts only pre-built images.
It probably wouldn't be a great idea to build the image on the production machine anyways, as this would leave build artifacts (source code) behind; this should happen on a build server.
My question is, is there a recommended way to modify docker-compose.yml en route to production to swap out build: web with image: <id> somehow?
There is nothing about this in Use Compose in production. Is there something wrong with my approach in general?
docker-compose.yml should only contain canonical service definitions.
Anything that's specific to the build environment (e.g. dev vs prod) should be declared in a separate file docker-compose.override.yml. Each build environment can have its own version of that file.
The build: web declaration doesn't belong into docker-compose.yml, as it's only supposed to run locally (and possibly on a build server), not in production.
Therefore, in the example above, this is what docker-compose.yml should look like:
version: '3'
services:
db:
image: "postgres"
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=mysecretpassword
web:
depends_on: [ db ]
ports:
- "80:80"
And this would be the default docker-compose.override.yml for local development:
version: '3'
services:
web:
build: web
Running docker-compose up --build -d will now build the latest code changes and launch our app locally.
There could also be another version docker-compose.override.build.yml, targeting a build/CI server:
version: '3'
services:
web:
build: web
image: mydockeruser/web
Running docker-compose -f docker-compose.yml -f docker-compose.override.build.yml build followed by the same command with push will build the latest code changes and push the image to its registry/repository.
Finally, there could be another version docker-compose.override.prod.yml:
version: '3'
services:
web:
image: mydockeruser/web
Deploying to production (just to a single Docker host, not a cluster) can now be as simple as copying over only docker-compose.yml and docker-compose.override.prod.yml and running docker-compose -f docker-compose.yml -f docker-compose.override.prod.yml up -d.
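Sketched out as commands (the host name prod-host is a placeholder):
# Copy only the two files the production host needs
scp docker-compose.yml docker-compose.override.prod.yml prod-host:~/app/
# Pull the published image and start the stack
ssh prod-host 'cd app && docker-compose -f docker-compose.yml -f docker-compose.override.prod.yml pull && docker-compose -f docker-compose.yml -f docker-compose.override.prod.yml up -d'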
The correct way to do it (i.e. the way I do it :P) is to have different docker-compose files; for example, docker-compose.dev.yml and docker-compose.prod.yml. You can then push your production-ready image to a repository, say Docker Hub, and reference that image in docker-compose.prod.yml's web service. All the while you can use the dev docker-compose file (the one with the build option) for local development.
Also, in case you've thought about this, you cannot use env variables as keys in docker-compose (see here). So there is no way to conditionally set either image or build options.
I run my development project as a Docker image. I test it like this:
#docker-compose up
I edited one file locally, but when I run #docker-compose up again, I do not see my changes. What command do I need to run?
My docker-compose.yml
version: '2'
services:
app:
image: lobdocker/eps-portal:latest
volumes:
- /var/www/storage
env_file: '.env'
working_dir: /app
ports:
- 8089:80
Docker Compose will always use the existing image when you specify it like that.
Use the build property to point to a local folder, which you can then build before upping:
version: '2'
services:
app:
build: mydir/
image: lobdocker/eps-portal:latest
volumes:
- /var/www/storage
env_file: '.env'
working_dir: /app
ports:
- 8089:80
Before docker-compose up, you have to build the Docker image again so that the changes are reflected. It won't take much time even though you already built the image, because the layers are cached. The commands are:
$ docker-compose build
then
$ docker-compose up
Note: docker-compose up -d will run it in the background.
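The build and up steps can also be combined into one command:
# Rebuild images as needed and (re)start the containers
docker-compose up --build -d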
If you want your changes to be reflected in the image, you need to commit the container first. Here is the command:
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
To apply Dockerfile instructions while committing, you have to use the --change option; you can get more information here.
To change the code locally inside a container you can use the docker exec command to get console access; whatever changes you make will be there until the container is destroyed.
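For example, using the container name from the question's compose file:
# Open an interactive shell inside the running container
docker exec -it docker-node-mongo-container /bin/bash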