I'm running docker compose as follows:
docker-compose -f docker-compose.dev.yml up --build -d
the contents of docker-compose.dev.yml are:
version: '3'
services:
  client:
    container_name: client
    build:
      context: frontend
    environment:
      - CADDY_SUBDOMAIN=xxx
      - PRIVATE_IP=xxx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    links:
      - express
    volumes:
      - /home/ec2-user/.caddy:/root/.caddy
  express:
    container_name: express
    build: express
    environment:
      - NODE_ENV=development
    restart: always
Then I want to create images from these containers to use on a testing server, by pushing them to AWS ECR and pulling them on the test server, to avoid rebuilding everything there. Simply using docker commit did not work.
What is the correct approach to creating images from the output of docker compose?
Thanks
You should basically never use docker commit. The standard approach is to describe how to build your images using a Dockerfile, and check that file into source control. You can push the built image to a registry like Docker Hub, and you can check out the original source code and rebuild the image.
The good news is that you basically have this setup already. Each of your Compose services has a build: block that has the data on how to build the image. So it's enough to
docker-compose build
and you'll get a separate Docker image for each component.
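Since the question mentions Amazon ECR specifically: a typical flow would be to retag the Compose-built images with the ECR repository URL and push them. The account ID, region, and repository names below are placeholders, and note that without an image: setting Compose names the built images after the project directory (e.g. project_client), so adjust the source tag accordingly.

```shell
# Authenticate Docker against ECR (AWS CLI v2); placeholder account/region
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Retag a Compose-built image with its ECR repository name and push it
docker tag project_client:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/client:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/client:latest
```

On the test server, a docker pull of the same ECR name (or a docker-compose pull, if the compose file there references the ECR image) then retrieves the prebuilt image.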
Often if you're doing this you'll also want to push the images to some Docker registry. In the Compose setup, you can specify an image: for each service as well. If you have both build: and image:, that specifies the image name to use for the built image (otherwise Compose will pick one based on the project name).
version: '3.8'
services:
  client:
    build:
      context: frontend
    image: registry.example.com/project/frontend
    et: cetera
  express:
    build: express
    image: registry.example.com/project/express
    et: cetera
Then you can have Compose both build and push the images
docker-compose build
docker-compose push
One final technique that can be useful is to split the Compose setup into two files. The main docker-compose.yml file has the setup you'd need to run the set of containers, on any system, with access to the container registry. A separate docker-compose.override.yml file would support developer use where you have a copy of the source code as well. If you're using Compose for deployment, you only need to copy the main docker-compose.yml file to the target system.
# docker-compose.yml
version: '3.8'
services:
  client:
    image: registry.example.com/project/frontend
    ports: [...]
    environment: [...]
    restart: always
    # volumes: [...]
  express:
    image: registry.example.com/project/express
    ports: [...]
    environment: [...]
    restart: always
# docker-compose.override.yml
version: '3.8'
services:
  client:
    build: frontend
    # all other settings come from main docker-compose.yml
  express:
    build: express
    # all other settings come from main docker-compose.yml
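With the files split this way, a plain docker-compose up automatically merges docker-compose.yml with docker-compose.override.yml when the latter is present, so development needs no extra flags; on a target system where only the main file was copied, the same command simply pulls and runs the registry images. A sketch of both sides:

```shell
# Developer machine: both files present, the override is merged in automatically
docker-compose up --build -d

# Target system: only docker-compose.yml was copied, so images come from the registry
docker-compose pull
docker-compose up -d
```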
Related
As far as I understand, only images can be uploaded to Docker Hub; they then need to be pulled, and can be launched via docker run. But what if I have several images that I run through docker compose? I have a site built with Next.js and nginx. There is this docker-compose.yml:
version: '3'
services:
  nextjs:
    build: ./
    networks:
      - site_network
  nginx:
    build: ./.docker/nginx
    ports:
      - 80:80
      - 443:443
    networks:
      - site_network
    volumes:
      - /etc/ssl/certs:/etc/ssl/certs:ro
    depends_on:
      - nextjs
networks:
  site_network:
    driver: bridge
If I git clone the repository on the server and run docker-compose up --build -d, everything works. I want to automate all of this via GitLab CI/CD. I found an article describing how to install a runner on the server, plus a .gitlab-ci.yml that builds an image, uploads it to the registry, then pulls it on the server and launches it via docker run. So I see this approach: in .gitlab-ci.yml I build several images and upload them to the registry. Then I copy a docker-compose.yml from the repository to the server, with the following structure:
version: '3'
services:
  nextjs:
    image: registry.gitlab.com/path_to_project/next:latest
    networks:
      - site_network
  nginx:
    image: registry.gitlab.com/path_to_project/nginx:latest
    ports:
      - 80:80
      - 443:443
    networks:
      - site_network
    volumes:
      - /etc/ssl/certs:/etc/ssl/certs:ro
    depends_on:
      - nextjs
networks:
  site_network:
    driver: bridge
How correct is this approach? Maybe there is a more reliable, better way? I'm not considering a more advanced stack (Kubernetes, etc.) yet; I want to learn all the basics first before moving on.
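For reference, the build-and-push part of the pipeline described above might be sketched like this. The job name, stage, and the use of docker:dind are assumptions rather than anything from the article; the image names and build contexts come from the compose files above, and CI_REGISTRY_USER/CI_REGISTRY_PASSWORD are GitLab's predefined CI variables.

```yaml
# .gitlab-ci.yml sketch: build both images and push them to the GitLab registry
build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t registry.gitlab.com/path_to_project/next:latest ./
    - docker build -t registry.gitlab.com/path_to_project/nginx:latest ./.docker/nginx
    - docker push registry.gitlab.com/path_to_project/next:latest
    - docker push registry.gitlab.com/path_to_project/nginx:latest
```

A deploy job would then copy the image-only docker-compose.yml to the server and run docker-compose pull followed by docker-compose up -d there.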
I'm using docker-compose to build customised nginx and PHP images, and then I'd like to push them to Docker Hub.
I'm using COMPOSE_PROJECT_NAME in an .env file to set a prefix for the image names:
COMPOSE_PROJECT_NAME=myapp
and docker-compose.yml is something like:
version: '3'
services:
  php-fpm:
    build:
      context: ./php-fpm
      args:
        .....
    volumes:
      .....
    expose:
      .....
  nginx:
    build:
      context: ./nginx
      args:
        .....
    volumes:
      .....
    ports:
      .....
Running docker-compose up -d, the image names are:
myapp_nginx
myapp_php-fpm
and the container names are:
myapp_nginx_1
myapp_php-fpm_1
Now, to push these images to Docker Hub, I need to change the image names by adding my Docker Hub account name as a prefix:
myaccount/myapp_nginx
myaccount/myapp_php-fpm
To solve this problem, I added the image option to docker-compose.yml:
version: '3'
services:
  php-fpm:
    build:
      context: ./php-fpm
      args:
        .....
    volumes:
      .....
    expose:
      .....
    image: myaccount/${COMPOSE_PROJECT_NAME}_php-fpm
  nginx:
    build:
      context: ./nginx
      args:
        .....
    volumes:
      .....
    ports:
      .....
    image: myaccount/${COMPOSE_PROJECT_NAME}_nginx
Now, running docker-compose push, the images are pushed to Docker Hub.
OK, my questions are:
1) Is there a way to insert the Docker Hub account name myaccount into the COMPOSE_PROJECT_NAME variable? Something like COMPOSE_PROJECT_NAME=myaccount/myapp, to automatically create image names like myaccount/myapp_nginx and myaccount/myapp_php-fpm?
2) Is there a variable holding the service name, so I could retrieve nginx or php-fpm?
For example, in docker-compose.yml I could then set image: myaccount/${COMPOSE_PROJECT_NAME}_<service_name>, and if I renamed the service from nginx to nginx2, the image would automatically become myaccount/myapp_nginx2.
3) Is there a way to rename the images produced by docker-compose, just to permit the push?
Thank you
You'd typically use fixed image names in this context. A published image name shouldn't be dependent on details of the specific Compose file that launched it.
version: '3'
services:
  nginx:
    build: ./nginx
    image: myaccount/nginx  # no project name
Imagine you're planning to run this same setup on a different system. It has Docker and Compose installed, but none of your application source code. You should be able to copy the docker-compose.yml file there, delete the build: line, and run the same thing. It doesn't matter if the directory is named project or other, and it doesn't matter if the Compose setup actually names the service proxy instead of nginx; you'd use the same image: to refer to it.
# COMPOSE_PROJECT_NAME=other
version: '3'
services:
  proxy:
    image: myaccount/nginx  # same image as above
If you're unhappy with the names Compose produces, you can manually docker tag and docker push the images outside of Compose. You'll also need to do this if you want an image to have multiple tags (both a date stamp and latest for example) or if for whatever reason you need to push it to multiple repositories (both Docker Hub and Amazon ECR).
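A minimal sketch of that manual retag-and-push flow (the image names, date stamp, and registry URL are illustrative):

```shell
# Give the Compose-built image a date-stamped tag alongside latest, then push both
docker tag myaccount/nginx myaccount/nginx:20240101
docker push myaccount/nginx:20240101
docker push myaccount/nginx:latest

# The same local image can also be retagged for a second registry, e.g. Amazon ECR
docker tag myaccount/nginx 123456789012.dkr.ecr.us-east-1.amazonaws.com/nginx:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/nginx:latest
```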
I'm trying to migrate working Docker config files (Dockerfile and docker-compose.yml) so that the configuration that works locally can also be deployed via Docker Hub.
Tried multiple config file settings.
I have the following Dockerfile and, below, the docker-compose.yml that uses it. When I run docker-compose up, I successfully get two containers running that can either be accessed independently or talk to each other via the db service name and the database's container_name. So far so good.
What I cannot figure out is how to take this configuration (the files below) and modify them so I get the same behavior on docker hub. Being able to have working local containers is necessary for development, but others need to use these containers on docker hub so I need to deploy there.
--
Dockerfile:
FROM tomcat:8.0.20-jre8
COPY ./services.war /usr/local/tomcat/webapps/
--
docker-compose.yml:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8089:8080"
    volumes:
      - /Users/user/Library/apache-tomcat-9.0.7/conf/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml
    depends_on:
      - db
  db:
    image: mysql:5.7
    container_name: test-mysql-docker
    ports:
      - 3307:3306
    volumes:
      - ./ZipCodeLookup.sql:/docker-entrypoint-initdb.d/ZipCodeLookup.sql
    environment:
      MYSQL_ROOT_PASSWORD: "thepass"
I expected to be able to run these containers from images on Docker Hub, but I cannot see how these files need to be modified to get that. Thanks.
Add an image attribute.
app:
  build:
    context: .
    dockerfile: Dockerfile
  image: docker-hub-username/app
  ports:
    - "8089:8080"
Replace "docker-hub-username" with your Docker Hub username, then run docker-compose push app.
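Assuming you're logged in to Docker Hub first, the full sequence would be roughly:

```shell
docker login                 # authenticate with your Docker Hub credentials
docker-compose build app     # build the image under the name given by image:
docker-compose push app      # push it to the docker-hub-username/app repository
```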
Let's say we have the following docker-compose.yml:
version: '3'
services:
  db:
    image: "postgres"
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
  web:
    build: web
    depends_on: [ db ]
    ports:
      - "80:80"
The first service, db, just runs a container with the official postgres image from Docker Hub.
The second service, web, first builds a new image based on the Dockerfile in a folder also called web, then runs a container with that image.
While developing, we now can (repeatedly) make changes to whatever is in the web folder, then run docker-compose up --build to run our app locally.
Let's say we now want to deploy to production. My understanding is that docker-compose.yml can now be used to "define a stack in Docker's swarm mode" (see this answer, for instance). However, for the build step of the web service, Docker's compose file documentation states that
This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file. The docker stack command accepts only pre-built images.
It probably wouldn't be a great idea to build the image on the production machine anyways, as this would leave build artifacts (source code) behind; this should happen on a build server.
My question is, is there a recommended way to modify docker-compose.yml en route to production to swap out build: web with image: <id> somehow?
Nothing on Use Compose in production on that. Is there something wrong with my approach in general?
docker-compose.yml should only contain canonical service definitions.
Anything that's specific to the build environment (e.g. dev vs prod) should be declared in a separate file docker-compose.override.yml. Each build environment can have its own version of that file.
The build: web declaration doesn't belong into docker-compose.yml, as it's only supposed to run locally (and possibly on a build server), not in production.
Therefore, in the example above, this is what docker-compose.yml should look like:
version: '3'
services:
  db:
    image: "postgres"
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
  web:
    depends_on: [ db ]
    ports:
      - "80:80"
And this would be the default docker-compose.override.yml for local development:
version: '3'
services:
  web:
    build: web
Running docker-compose up --build -d will now build the latest code changes and launch our app locally.
There could also be another version docker-compose.override.build.yml, targeting a build/CI server:
version: '3'
services:
  web:
    build: web
    image: mydockeruser/web
Running docker-compose -f docker-compose.yml -f docker-compose.override.build.yml push will build the latest code changes and push the image to its registry/repository.
Finally, there could be another version docker-compose.override.prod.yml:
version: '3'
services:
  web:
    image: mydockeruser/web
Deploying to production (just to a single Docker host, not a cluster) can now be as simple as copying over only docker-compose.yml and docker-compose.override.prod.yml and running docker-compose -f docker-compose.yml -f docker-compose.override.prod.yml up -d.
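Put together, the build-server and production steps above look roughly like this (where the files run is assumed from the descriptions above):

```shell
# On the build/CI server: build the latest code and push the image to the registry
docker-compose -f docker-compose.yml -f docker-compose.override.build.yml build
docker-compose -f docker-compose.yml -f docker-compose.override.build.yml push

# On the production host: only docker-compose.yml and the prod override were copied
docker-compose -f docker-compose.yml -f docker-compose.override.prod.yml pull
docker-compose -f docker-compose.yml -f docker-compose.override.prod.yml up -d
```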
The correct way to do it (i.e. the way I do it :P) is to have different docker-compose files; for example, docker-compose.dev.yml and docker-compose.prod.yml. You can then push your production-ready image to a repository, say Docker Hub, and reference that image in docker-compose.prod.yml's web service. All the while you can use the dev docker-compose file (the one with the build option) for local development.
Also, in case you've thought about this, you cannot use env variables as keys in docker-compose (see here). So there is no way to conditionally set either image or build options.
I want to have two docker-compose files, where one overrides another.
(The motivation comes from Docker Compose Docs)
The use case comes from the buildbot environment. The first docker-compose file should define a simple service. This is a service that is going to be tested. Let's take
version: '2'
services:
  service-node:
    build:
      context: ./res
      dockerfile: Dockerfile
    image: my/server
    env_file: .env
The second docker-compose file (let's name it docker-compose.test.yml) overrides service-node to add a buildbot worker feature, and creates a second container, the buildbot master node, which will control the testing machinery. Let's take
version: '2'
services:
  service-node:
    build:
      context: ./res
      dockerfile: buildbot.worker.Dockerfile
    image: my/buildbot-worker
    container_name: bb-worker
    env_file: ./res/buildbot.worker.env
    environment:
      - BB_RES_DIR=/var/lib/buildbot
    networks:
      testlab:
        aliases:
          - bb-worker
    volumes:
      - ./vol/bldbot/worker:/home/bldbotworker
    depends_on:
      - bb-master
  bb-master:
    build:
      context: ./res
      dockerfile: buildbot.master.Dockerfile
    image: my/buildbot-master
    container_name: bb-master
    env_file: ./res/buildbot.master.env
    environment:
      - BB_RES_DIR=/var/lib/buildbot
    networks:
      - testlab
    expose:
      - "9989"
    volumes:
      - ./vol/bldbot/master:/var/lib/buildbot
networks:
  testlab:
    driver: bridge
Generally this configuration works, i.e. the command
docker-compose -f docker-compose.yml -f docker-compose.test.yml up -d
builds both images and runs both containers, but there is one shortcoming, i.e. the command
docker-compose ps
shows only one service, bb-worker. At the same time
docker ps
shows both.
Furthermore, the command
docker-compose down
stops only one service, and outputs the warning Found orphan containers. The message, of course, refers to bb-master.
How can I override the basic docker-compose.yml file to be able to add additional non-orphan service?
You need to run all docker-compose commands with the same -f flags, e.g.:
docker-compose -f docker-compose.yml -f docker-compose.test.yml down
Alternatively, you can make this the default by writing the following to a .env file in the same folder:
COMPOSE_FILE=docker-compose.yml:docker-compose.test.yml
NOTE: on Windows you need to use ";" as the separator (per #louisvno), e.g. COMPOSE_FILE=docker-compose.yml;docker-compose.test.yml