I have an application running with docker-compose:
version: '2'
services:
  db:
    image: postgres:commit1
  service:
    image: service:commit1
    ports:
      - "3000:3000"
    depends_on:
      - db
Each image is tagged with a git commit id; whenever the code changes, the CI/CD pipeline runs and pushes images tagged with the latest commit id.
Now let's say I have new images:
postgres:commit2 and service:commit2.
What is the best procedure to update the images, given that the running containers use the commit1 tags from the compose file?
Do I need to update the image tags manually in the compose file and then run:
docker-compose restart
and remove the old containers manually?
Is that the best way?
One way is to templatize your compose file and have a separate CI/CD step that generates a new file with the new image tags on every build.
For example:
# docker-compose.yml.template
version: '2'
services:
  db:
    image: postgres:{{ COMMIT_ID }}
  service:
    image: service:{{ COMMIT_ID }}
    ports:
      - "3000:3000"
    depends_on:
      - db
Then a sed or awk script can replace {{ COMMIT_ID }} with the latest commit id and generate the new file.
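A minimal sketch of that substitution step, assuming the latest commit id is exported in a COMMIT_ID environment variable:
sed "s/{{ COMMIT_ID }}/${COMMIT_ID}/g" docker-compose.yml.template > docker-compose.yml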
# docker-compose.yml (generated)
version: '2'
services:
  db:
    image: postgres:commit2
  service:
    image: service:commit2
    ports:
      - "3000:3000"
    depends_on:
      - db
Then you can finally pull and start the latest images with:
docker-compose pull && docker-compose up -d
I am assuming you are looking for a way to do a rolling update with docker-compose and that your service supports multiple instances. This can be achieved by first changing the image tag in the docker-compose file; you can use your CI/CD tool to generate a new docker-compose file with the updated commit hash.
Then you can use the command below to add new containers (with the new image) behind the services. Note that the service has to be scaled above its current instance count, otherwise no new container is started:
docker-compose up -d --no-recreate --scale db=1 --scale service=2
The next step is to delete the old containers:
docker rm old-container # the container still running the old image
The last step is to scale the services back to the number of instances you want.
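Putting the steps together, a rough sketch of the whole sequence (old-container is a placeholder name):
docker-compose up -d --no-recreate --scale service=2   # start an extra container with the new image
docker stop old-container && docker rm old-container   # remove the container still on the old image
docker-compose up -d --scale service=1                 # settle back at the desired instance count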
This is the best I can think of with just docker-compose. If you were using Docker Swarm or a similar system, this would be an out-of-the-box feature :)
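For comparison, in swarm mode the same rolling update would be a single command (the service name mystack_service is hypothetical):
docker service update --image service:commit2 mystack_service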
We have a docker-compose file with a service like:
services:
  utils-microservice:
    container_name: utils-microservice
    image: <Masked>.dkr.ecr.ap-south-1.amazonaws.com/utils-microservice:<Some Tag>
    ports:
      - '1024:1024'
    env_file:
      - './envs/utils-microservice.env'
Now, what do we want to do?
Once CI pushes a new tag to ECR, we want to run a shell script that does the following:
1. Stop the container
2. Replace utils-microservice:<Some Tag> with utils-microservice:<Some New Tag>
3. Restart the service with the new tag
Is this achievable? We are not looking to complicate things with Docker Swarm or k8s!
It’s possible to use environment variables in your shell to populate values inside a Compose file:
web:
  image: "webapp:${TAG}"
https://docs.docker.com/compose/environment-variables/
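Combined with the three steps from the question, a hedged sketch of the update script, assuming the compose file references image: <Masked>.dkr.ecr.ap-south-1.amazonaws.com/utils-microservice:${TAG} and that the host is already logged in to ECR:
#!/bin/sh
# usage: ./update.sh <new-tag>   (script name and argument handling are illustrative)
export TAG="$1"
docker-compose pull utils-microservice    # fetch the new :$TAG image from ECR
docker-compose up -d utils-microservice   # stop, replace, and restart in one step
docker-compose up -d recreates the container whenever the resolved image changes, so the stop/replace/restart steps collapse into one command.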
I'm running docker compose as follows:
docker-compose -f docker-compose.dev.yml up --build -d
the contents of docker-compose.dev.yml are:
version: '3'
services:
  client:
    container_name: client
    build:
      context: frontend
    environment:
      - CADDY_SUBDOMAIN=xxx
      - PRIVATE_IP=xxx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    links:
      - express
    volumes:
      - /home/ec2-user/.caddy:/root/.caddy
  express:
    container_name: express
    build: express
    environment:
      - NODE_ENV=development
    restart: always
Then I want to create images from these containers to use on a testing server, by pushing them to AWS ECR and pulling them on the test server, to avoid the time of building everything all over again. Simply using docker commit did not work.
What is the correct approach to creating images from the output of docker-compose?
Thanks
You should basically never use docker commit. The standard approach is to describe how to build your images using a Dockerfile, and check that file into source control. You can push the built image to a registry like Docker Hub, and you can check out the original source code and rebuild the image.
The good news is that you basically have this setup already. Each of your Compose services has a build: block that describes how to build the image. So it's enough to run
docker-compose build
and you'll get a separate Docker image for each component.
Often if you're doing this you'll also want to push the images to some Docker registry. In the Compose setup, you can specify an image: for each service as well. If you have both build: and image:, that specifies the image name to use for the built image (otherwise Compose will pick one based on the project name).
version: '3.8'
services:
  client:
    build:
      context: frontend
    image: registry.example.com/project/frontend
    et: cetera
  express:
    build: express
    image: registry.example.com/project/express
    et: cetera
Then you can have Compose both build and push the images
docker-compose build
docker-compose push
One final technique that can be useful is to split the Compose setup into two files. The main docker-compose.yml file has the setup you'd need to run the set of containers, on any system, with access to the container registry. A separate docker-compose.override.yml file would support developer use where you have a copy of the source code as well. If you're using Compose for deployment, you only need to copy the main docker-compose.yml file to the target system.
# docker-compose.yml
version: '3.8'
services:
  client:
    image: registry.example.com/project/frontend
    ports: [...]
    environment: [...]
    restart: always
    # volumes: [...]
  express:
    image: registry.example.com/project/express
    ports: [...]
    environment: [...]
    restart: always
# docker-compose.override.yml
version: '3.8'
services:
  client:
    build: frontend
    # all other settings come from the main docker-compose.yml
  express:
    build: express
    # all other settings come from the main docker-compose.yml
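With that split, deployment to the target system reduces to a pull-and-restart. A sketch, where deploy-host and /srv/app are hypothetical and the host must be logged in to the registry:
scp docker-compose.yml deploy-host:/srv/app/
ssh deploy-host 'cd /srv/app && docker-compose pull && docker-compose up -d'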
Hi, I have a stopped NiFi docker container and I want to update a property file.
Whenever I update a field and then run docker-compose start, the property file change does not take effect.
How is this possible?
Here is my docker-compose file:
version: "3.3"
services:
nifi:
image: apache/nifi
volumes:
- /home/ubuntu/nifi/conf:/opt/nifi/nifi-current/conf
ports:
- "8080:8080"
Thanks
We had this issue a while back as well. I believe mounting a volume there essentially bind-mounts the host folder over the container's, and when the container starts up, NiFi's startup scripts overwrite anything in that folder.
Have you considered baking the file into a custom image instead? That was our solution:
Dockerfile:
FROM apache/nifi:1.9.2
# bake the customized properties into the image (COPY is preferred over ADD for plain files)
COPY /path/to/your-props.properties /opt/nifi/nifi-current/conf
We then put the resulting image in our compose file.
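For illustration, the build command and resulting compose file might look like this; the tag mynifi:1.9.2 is a hypothetical name:
docker build -t mynifi:1.9.2 .

# docker-compose.yml
version: "3.3"
services:
  nifi:
    image: mynifi:1.9.2   # custom image with the properties baked in; no conf volume needed
    ports:
      - "8080:8080"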
Let's say we have the following docker-compose.yml:
version: '3'
services:
  db:
    image: "postgres"
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
  web:
    build: web
    depends_on: [ db ]
    ports:
      - "80:80"
The first service, db, just runs a container with the official postgres image from Docker Hub.
The second service, web, first builds a new image based on the Dockerfile in a folder also called web, then runs a container with that image.
While developing, we now can (repeatedly) make changes to whatever is in the web folder, then run docker-compose up --build to run our app locally.
Let's say we now want to deploy to production. My understanding is that docker-compose.yml can now be used to "define a stack in Docker's swarm mode" (see this answer, for instance). However, for the build step of the web service, Docker's compose file documentation states that
This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file. The docker stack command accepts only pre-built images.
It probably wouldn't be a great idea to build the image on the production machine anyway, as this would leave build artifacts (source code) behind; this should happen on a build server.
My question is: is there a recommended way to modify docker-compose.yml en route to production, to somehow swap out build: web for image: <id>?
There is nothing on Use Compose in production about that. Is there something wrong with my approach in general?
docker-compose.yml should only contain canonical service definitions.
Anything that's specific to the build environment (e.g. dev vs prod) should be declared in a separate file docker-compose.override.yml. Each build environment can have its own version of that file.
The build: web declaration doesn't belong in docker-compose.yml, as it's only supposed to run locally (and possibly on a build server), not in production.
Therefore, in the example above, this is what docker-compose.yml should look like:
version: '3'
services:
  db:
    image: "postgres"
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
  web:
    depends_on: [ db ]
    ports:
      - "80:80"
And this would be the default docker-compose.override.yml for local development:
version: '3'
services:
  web:
    build: web
Running docker-compose up --build -d will now build the latest code changes and launch our app locally.
There could also be another version docker-compose.override.build.yml, targeting a build/CI server:
version: '3'
services:
  web:
    build: web
    image: mydockeruser/web
Since docker-compose push only uploads images and does not build them, building the latest code changes and pushing the image to its registry/repository takes two commands:
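docker-compose -f docker-compose.yml -f docker-compose.override.build.yml build
docker-compose -f docker-compose.yml -f docker-compose.override.build.yml push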
Finally, there could be another version docker-compose.override.prod.yml:
version: '3'
services:
  web:
    image: mydockeruser/web
Deploying to production (just to a single Docker host, not a cluster) can now be as simple as copying over only docker-compose.yml and docker-compose.override.prod.yml and running docker-compose -f docker-compose.yml -f docker-compose.override.prod.yml up -d.
The correct way to do it (i.e. the way I do it :P) is to have different docker-compose files; for example, docker-compose.dev.yml and docker-compose.prod.yml. You can then push your production-ready image to a repository, say Docker Hub, and reference that image in docker-compose.prod.yml's web service. All the while you can use the dev docker-compose file (the one with the build option) for local development.
Also, in case you've thought about this: you cannot use environment variables as keys in docker-compose (see here), so there is no way to conditionally set either the image or the build option.
I have been working on a docker environment for PHP development and finally got it working as I need. This environment relies on docker-compose and the config looks like:
version: '2'
services:
  php-apache:
    env_file:
      - dev_variables.env
    image: reynierpm/php55-dev
    build:
      context: .
      args:
        - PUID=1000
        - PGID=1000
    ports:
      - "80:80"
    extra_hosts:
      - "dockerhost:xxx.xxx.xxx.xxx"
    volumes:
      - ~/var/www:/var/www
There are some configurations like extra_hosts and env_file that are giving me a headache. Why? Because I don't know if the image will work under such circumstances.
Let's say:
I have run docker-compose up -d and the image reynierpm/php55-dev with tag latest has been built
I have everything working as it should because I am setting the proper values in the docker-compose.yml file
I have logged in to my account and pushed the image to the repository: docker push reynierpm/php55-dev
What happens if tomorrow you clone the repository and try to run docker-compose up, but change the docker-compose.yml file to fit your settings? How does the image behave in this case? I mean, does it make sense to create/upload the image to Docker Hub if any time I run docker-compose up it will be built again due to the changes in the config file?
Maybe I am getting this completely wrong and some magic happens behind the scenes, but I need to know if I am doing this right.
If people clone your git repository and do docker-compose up -d, it will in fact build a new image. If you only want people to use your image from Docker Hub, drop the build section of the docker-compose.yml and publish it on your Docker Hub page. The proposed docker-compose.yml is below.
Just paste this on your page:
version: '2'
services:
  php-apache:
    image: reynierpm/php55-dev
    ports:
      - "80:80"
    environment:
      DOCKERHOST: 'yourhostip'
      PHP_ERROR_REPORTING: 'E_ALL & ~E_DEPRECATED & ~E_NOTICE'
    volumes:
      - ~/var/www:/var/www
If your env_file just has a couple of variables, it is better to set them directly in the Dockerfile. It is also better to replace extra_hosts with an environment variable, and to reference that variable in php.ini (or wherever you use the extra host):
.....
xdebug.remote_host = ${DOCKERHOST}
.....
You can define a default value for this variable in your Dockerfile:
ENV DOCKERHOST=localhost
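At runtime the default can then be overridden per environment; for example (the IP is a placeholder):
docker run -e DOCKERHOST=192.168.99.100 -p 80:80 reynierpm/php55-dev
or through the environment: block in the compose file shown above.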
Hope it helps
Regards