What's the best practice for using a Dockerfile with docker-compose.yml? And how do I do CI/CD with Jenkins?
I have 2 microservices and one Postgres database. I created this docker-compose.yml file:
version: '3.1'
services:
  myflashcards-service-dictionary:
    image: myflashcards-service-dictionary
  db:
    image: postgres
    restart: always
    ports:
      - 5434:5432
The question is what to write in the "image:" section. Should I first run
mvn clean install -DskipTests dockerfile:build? And what about the image name?
I'd like to know how to automate the whole CI/CD pipeline.
I have this Dockerfile:
FROM openjdk:8-jdk-alpine
ADD target/myflashcards-service-dictionary.jar myflashcards-service-dictionary.jar
ENTRYPOINT exec java -Djava.security.egd=file:/dev/./urandom -Dspring.profiles.active=$profile -jar /myflashcards-service-dictionary.jar
EXPOSE 8092
I also have a docker-compose.yml, but how does docker-compose know which image should be used?
Would you briefly outline the main process for deploying my microservices app to a server?
How do I use a Dockerfile and docker-compose together? When is each file necessary?
Do we need a Dockerfile only to create an image on Docker Hub?
Your Dockerfile is similar to a Maven POM file; it's a set of instructions for Docker to create an image with (docker build -t image-name .). A Dockerfile is a must; you cannot use Docker without one. It's like trying to use Maven without a POM file.
The name of the image is what you give to the Maven plugin (<repository>spotify/foobar</repository>) or to docker build -t <image-name> ., and it can be anything you like.
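For example, a minimal sketch of both options (the image name matches the compose file above; everything else is your choice):
# via the Spotify dockerfile-maven-plugin from the question
mvn clean install -DskipTests dockerfile:build
# or directly with the docker CLI
docker build -t myflashcards-service-dictionary .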
Docker Compose is a tool for managing an application that is composed of multiple (micro)services. It lets you write an orchestration plan that can be run later, scripting complex details of the Docker environment such as volumes, networking, restart policies and much more.
The Docker Compose file is optional and can be replaced with an alternative like HashiCorp Nomad, but Docker Compose is one of the easiest to use; stick with it if you're new to Docker.
Docker Compose can either build an image at runtime (useful for development) or run an image that already exists in a repository (recommended for production). The full Docker Compose documentation explains how to write one.
Build at runtime
version: '3.1'
services:
  myflashcards-service-dictionary:
    build: path/to/folder/of/Dockerfile
  db:
    image: postgres
    restart: always
    ports:
      - 5434:5432
Run a pre-existing image
version: '3.1'
services:
  myflashcards-service-dictionary:
    image: myflashcards-service-dictionary
  db:
    image: postgres
    restart: always
    ports:
      - 5434:5432
A Dockerfile can be used without Docker Compose; the only difference is that it's less practical in production, since it amounts to a single-service deployment. As far as I'm aware, a bare Dockerfile also cannot be used with Docker Swarm.
As far as CI/CD goes, you can use a Maven plugin like the Dockerfile Maven Plugin (you can find the docs here). The image can then be pushed to a repository like Docker Hub, AWS ECR, or even a self-hosted registry (I wouldn't recommend the latter unless you're comfortable setting up highly secure networks, especially if it's not an internal network).
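As a hedged sketch, a Jenkins job's build stage might boil down to shell steps like these (the registry address registry.example.com and the BUILD_NUMBER tag scheme are placeholders, not anything mandated by the plugin):
mvn clean install -DskipTests
docker build -t registry.example.com/myflashcards-service-dictionary:${BUILD_NUMBER} .
docker push registry.example.com/myflashcards-service-dictionary:${BUILD_NUMBER}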
Dockerfile is a spec to build a container image and is used by Docker:
docker build --tag=${REPO}/${IMAGE}:${TAG} --file=./Dockerfile .
The default ${REPO} is docker.io (aka Docker Hub) and is assumed if omitted.
You only need a Dockerfile for images that you wish to build. Existing images are docker pulled from container image registries (e.g. Docker Hub, Google Container Registry, Quay). Pulls are often performed implicitly, e.g. by docker-compose up.
Once built, you may reference this image from a docker-compose.yaml file.
Docker Compose looks in your local image cache (docker image ls) for images. If it doesn't find them (with your file), it will try to pull myflashcards-service-dictionary:latest and postgres:latest from the default repo (aka Docker Hub).
It's possible to include a build spec in docker-compose.yaml too, in which case, if the image is not found locally, Docker Compose will try to docker build the images for you.
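A sketch of that combination: when both image: and build: are present, Compose builds from the given context and tags the result with the given image name (the context path here is hypothetical):
services:
  myflashcards-service-dictionary:
    build: ./service-dictionary
    image: myflashcards-service-dictionary:latest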
Docker Compose is one tool that permits multiple containers to be configured, run, networked etc. Another, increasingly popular tool for orchestrating containers is Kubernetes.
There's lots of good documentation online for Docker, Docker-Compose and developing CI/CD pipelines.
Related
There are a few approaches to fix container startup order in docker-compose, e.g.
depends_on
docker-compose-wait
Docker Compose wait for container X before starting Y
...
However, if one of the services in a docker-compose file includes a build directive, it seems docker-compose will try to build the image first (basically ignoring depends_on, or interpreting depends_on as a start dependency, not a build dependency).
Is it possible for a build directive to specify that it needs another service to be up, before starting the build process?
Minimal Example:
version: "3.5"
services:
web:
build: # this will run before postgres is up
context: .
dockerfile: Dockerfile.setup # needs postgres to be up
depends_on:
- postgres
...
postgres:
image: postgres:10
...
Notwithstanding the general advice that programs should be written in a way that handles the unavailability of services (at least for some time) gracefully, are there any ways to allow builds to start only when other containers are up?
Some other related questions:
multi-stage build in docker compose?
Update/Solution: Solved the underlying problem by pushing all the (database) setup required to the CMD directive of a bootstrap container:
FROM undertest-base:latest
...
CMD ./wait && ./bootstrap.sh
where wait waits for postgres and bootstrap.sh contains the code for setting up the postgres database with fixtures, so the overall system becomes fully testable after that script runs.
With that, setting up an ephemeral test environment with database setup becomes a simple docker-compose up again.
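A minimal sketch of what such a wait script might look like, assuming the Postgres service is reachable at the hostname postgres and the pg_isready client is installed in the image:
#!/bin/sh
# poll until Postgres accepts connections; give up after ~60s
i=0
until pg_isready -h postgres -p 5432 -q; do
  i=$((i+1))
  [ "$i" -ge 60 ] && echo "postgres not ready in time" >&2 && exit 1
  sleep 1
done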
There is no option for this in Compose, and also it won't really work.
The output of an image build is a self-contained immutable image. You can do things like docker push an image to a registry, and Docker's layer cache will avoid rebuilding an image that it's already built. So in this hypothetical setup, if you could access the database in a Dockerfile, but you ran
docker-compose build
docker-compose down -v
docker-compose up -d --build
the down -v step will remove the storage the database uses. The up --build option will cause the image to be rebuilt, but the build sequence will hit the layer cache, skip all of the steps, and produce the same image as originally, so whatever changes you might have made to the database won't have happened.
At a more mechanical layer, the build sequence doesn't use the Compose-provided network, so you also wouldn't be able to connect to the database container.
There are occasional use cases where a dependency in build: would be handy, in particular if you're trying to build a base image that other images in your Compose setup share. But neither the stable Compose file v3 build: block nor the less-widely-supported Compose specification build: supports any notion of an image build depending on anything else.
I'm setting up a CI/CD solution. I would like to run a docker application on a production machine that has no access to the internet.
The constraints are as follows:
Build needs to happen on machine A
Resulting image/container needs to be exported and transported to machine B
Optionally: Run the container again with a docker-compose file
I know about docker commit and repos, but this is sadly not an option, as the resulting server does not have access to the internet.
Here's the docker-compose.yaml; this is not set in stone and can change however necessary
version: '2'
services:
  test_dev_app:
    image: testdevapp:latest
    container_name: test_dev_app
    hostname: test_dev_app
    environment:
      DJANGO_SETTINGS_MODULE: "settings.production"
      APPLICATION_RUN_TYPE: "uwsgi"
    volumes:
      - ./:/data/application
    ports:
      - "8000:8000"
      - "8080:8080"
I'd expect to be able to properly transport a container or image and use the same image on a different machine with docker-compose up.
Esteban is right about how to do it the registry way, but forgot to mention the "tar" way: you can save an image to a tar archive, then later load it into the local image store of another machine.
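A minimal sketch of that workflow, using the image name from the compose file above:
# on machine A: export the image to a tar archive
docker save -o testdevapp.tar testdevapp:latest
# transport testdevapp.tar to machine B however you like (scp, USB drive, ...)
# on machine B: load it into the local image store, then start as usual
docker load -i testdevapp.tar
docker-compose up -d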
The way to transport the image is up to you!
Still, if you plan to do it often, I recommend the private registry solution: it's definitely cleaner!
Pushing Docker images to a registry is the main way supported by docker out-of-the-box to share them between servers (but see the docker save / docker load note at the end).
If internet access is not an option, then take a look at having your own private docker registry.
Deploy it in a network segment accessible from both the pushing and pulling machine.
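A minimal sketch using the open-source registry image (a real deployment should add TLS and authentication on top of this):
docker run -d -p 5000:5000 --restart=always --name registry registry:2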
Then build your docker image including your registry's address and push it:
docker build -t <private_registry_address>/test_dev_app:latest .
docker push <private_registry_address>/test_dev_app:latest
When you push it the docker client will know that it has to use the specified address instead of the public registry.
Or, as mentioned by tgogos in the comments, see his link on how to use docker save / docker load in air-gapped environments.
What is the difference between docker-compose build and docker build?
Suppose there is a docker-compose.yml file in a dockerized project's path:
docker-compose build
and
docker build
docker-compose can be considered a wrapper around the docker CLI (in fact it is another implementation, in Python, as said in the comments) that saves time and avoids 500-character-long command lines (and also starts multiple containers at the same time). It uses a file called docker-compose.yml to retrieve its parameters.
You can find the reference for the docker-compose file format here.
So basically docker-compose build will read your docker-compose.yml, look for all services containing the build: statement and run a docker build for each one.
Each build: can specify a Dockerfile, a context and args to pass to docker.
To conclude, here is an example docker-compose.yml file:
version: '3.2'
services:
  database:
    image: mariadb
    restart: always
    volumes:
      - ./.data/sql:/var/lib/mysql
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    depends_on:
      - database
When calling docker-compose build, only the web service needs an image to be built. The equivalent docker build command would look like:
docker build -t web_myproject -f Dockerfile-alpine ./web
docker-compose build will build the services in the docker-compose.yml file.
https://docs.docker.com/compose/reference/build/
docker build will build the image defined by Dockerfile.
https://docs.docker.com/engine/reference/commandline/build/
Basically, docker-compose is a more convenient way to drive docker than issuing individual docker commands.
If the assumption here is that docker-compose build will produce a zip-like bundle containing multiple images, which would otherwise have been built separately with the usual Dockerfiles, then that thinking is wrong.
docker-compose build builds the individual images, going through each service entry in docker-compose.yml.
With the docker images command, you can see all of the individual images saved as well.
The real magic is docker-compose up.
It creates a network of interconnected containers that can talk to each other using the service name as a hostname.
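For example, with the docker-compose.yml shown a little earlier in this thread, a process inside the web container can reach the database service by its service name (a sketch, assuming a MariaDB/MySQL client is installed in that image):
# run from a shell inside the web container
mysql -h database -u root -p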
Adding to the first answer...
You can give the image name and container name under the service definition.
E.g. for the service called 'web' in the docker-compose example below, you can give the image name and container name explicitly, so that docker does not have to use the defaults.
Otherwise the image name that docker uses will be the concatenation of the folder (directory) name and the service name, e.g. myprojectdir_web.
So it is better to explicitly set the desired image name that will be generated when the docker build command is executed (note that image names must be lowercase), e.g.:
image: mywebserviceimage
container_name: my-webServiceImage-Container
Example docker-compose.yml file:
version: '3.2'
services:
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    image: mywebserviceimage
    container_name: my-webServiceImage-Container
    depends_on:
      - database
A few additional words about the difference between docker build and docker-compose build.
Both have an option for building images using an existing image as a cache of layers:
with docker build, the option is --cache-from <image>;
with docker-compose, there is a cache_from key in the build section.
Unfortunately, up until now, at this level, images made by one are not usable by the other as a layer cache (the image IDs are not compatible).
However, docker-compose v1.25.0 (2019-11-18) introduces an experimental feature, COMPOSE_DOCKER_CLI_BUILD, that makes docker-compose use the native docker builder (therefore, images made by docker build can be used as a layer cache for docker-compose build).
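A hedged sketch of both mechanisms (the image name myapp/web is a placeholder):
# docker build: seed the layer cache from a previously pulled image
docker pull myapp/web:latest || true
docker build --cache-from myapp/web:latest -t myapp/web:latest .
# docker-compose >= 1.25.0: opt in to the native docker builder
COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build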
I am learning docker now. I am trying to figure out what kind of problem Docker labels can solve.
I can understand why one would use labels in a Dockerfile, e.g. to add build-related metadata, but I still don't get why to use them in docker-compose.yml. What is the difference between using labels vs environment? I assume there are different use cases, but I just can't figure it out.
Can someone give me a practical example?
Thanks
docker-compose.yml is used by the docker-compose utility to build and run the services which you have defined in it.
While working with docker-compose we can use two things:
docker-compose build builds the services defined in docker-compose.yml, but to run these services there has to be an image in the docker engine. If you do docker image ls you will find the images built by docker-compose; inspect one of them and you will find a label that defines the metadata of that particular image.
docker-compose up runs the services that were built. A running container needs some run-time metadata, like environment variables, and these are set with environment in docker-compose.yml.
P.S.: This is my first answer on Stack Overflow. If something is unclear, just leave a comment and I will try my best to explain.
Another reason to use labels in docker-compose is to flag your containers as part of this docker-compose suite of containers, as opposed to other purposes each docker image might get used for.
Here's an example docker-compose.yml that shares labels across two services:
x-common-labels: &common-labels
  my.project.environment: "my project"
  my.project.maintainer: "me@example.com"

services:
  s1:
    image: somebodyelse/someimage
    labels:
      <<: *common-labels
    # ...
  s2:
    build:
      context: .
    image: my/s2
    labels:
      <<: *common-labels
    # ...
Then you can do things like this to just kill this project's containers.
docker rm -f $(docker container ls --format "{{.ID}}" --filter "label=my.project.environment")
re: labels vs. environment variables
Labels are only available to the docker and docker-compose commands on your host.
Environment variables are also available at run-time inside the docker container.
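A quick way to see that difference for a running container (my_container is a placeholder name):
# labels: visible from the host via inspect/filter, not inside the container
docker inspect --format '{{json .Config.Labels}}' my_container
# environment variables: visible inside the container at run time
docker exec my_container env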
LABEL can be utilized to embed as much metadata as possible about the Docker image, to make it easier to work with.
Some main purposes of adding a LABEL to a Docker image are:
As documentation.
You can provide the author, a description, a link to usage instructions, etc.
For versioning.
You can record which version an image represents, so that new features shipped under the same latest tag can be tied to specific versions and don't break existing setups.
Any other metadata for programmatic access.
This page provides a guideline and the most common usages of Docker LABEL.
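As a hedged Dockerfile sketch (the keys follow the common org.opencontainers.image.* convention; all values are placeholders):
FROM openjdk:8-jdk-alpine
LABEL org.opencontainers.image.authors="me@example.com" \
      org.opencontainers.image.version="1.0.0" \
      org.opencontainers.image.description="Flashcards dictionary service" \
      org.opencontainers.image.documentation="https://example.com/docs"
The labels can then be read back with docker image inspect --format '{{json .Config.Labels}}' <image>.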
docker and docker-compose seem to interact with the same Dockerfile. What is the difference between the two tools?
The docker cli is used when managing individual containers on a docker engine. It is the client command line to access the docker daemon api.
The docker-compose cli can be used to manage a multi-container application. It also moves many of the options you would enter on the docker run cli into the docker-compose.yml file for easier reuse. It works as a front end "script" on top of the same docker api used by docker, so you can do everything docker-compose does with docker commands and a lot of shell scripting. See this documentation on docker-compose for more details.
Update for Swarm Mode
Since this answer was posted, docker has added a second use of docker-compose.yml files. Starting with the version 3 yml format and docker 1.13, you can use the yml with docker-compose and also to define a stack in docker's swarm mode. To do the latter you need to use docker stack deploy -c docker-compose.yml $stack_name instead of docker-compose up and then manage the stack with docker commands instead of docker-compose commands. The mapping is a one for one between the two uses:
Compose Project -> Swarm Stack: A group of services for a specific purpose
Compose Service -> Swarm Service: One image and its configuration, possibly scaled up.
Compose Container -> Swarm Task: A single container in a service
For more details on swarm mode, see docker's swarm mode documentation.
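A minimal sketch of the swarm-mode path (the stack name mystack is a placeholder):
docker swarm init
docker stack deploy -c docker-compose.yml mystack
docker stack services mystack
docker stack rm mystack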
docker manages single containers
docker-compose manages multiple container applications
Usage of docker-compose requires 3 steps:
Define the app environment with a Dockerfile
Define the app services in docker-compose.yml
Run docker-compose up to start and run the app
Below is a docker-compose.yml example taken from the docker docs:
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
A Dockerfile is a text document that contains all the commands/Instruction a user could call on the command line to assemble an image.
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. By default, docker-compose expects the name of the Compose file as docker-compose.yml or docker-compose.yaml. If the compose file has a different name we can specify it with -f flag.
Check here for more details
docker, or more specifically the docker engine, is used when we want to handle only one container, whereas docker-compose is used when we have multiple containers to handle. We need multiple containers when we have more than one service to take care of, e.g. an application with a client-server model: one container for the server and another for the client. Docker Compose usually requires each container to have its own Dockerfile, plus a yml file that incorporates all the containers.