Push image to another registry with volume copy - docker

I am running an image in a Docker container locally. I pull it with the following command:
docker pull locustio/locust
My docker-compose.yml, which I start with docker-compose up, looks like this:
version: '3'
services:
  locust-service:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py -H http://master:8089
My volume holds locustfile.py, which contains all the code to test my system. Now I need to push and deploy this image to another, private repository along with the volume, that is, the file locustfile.py.
How can I do that with docker-compose push? Or is there any other way I can copy the volume? docker-compose push doesn't seem to work for the compose file above.

Volumes are generally intended to hold data, not application code. You should build your code into a derived Docker image, which then can be pushed.
You can write what you show here into a basic Dockerfile:
FROM locustio/locust
COPY locustfile.py /mnt/locust
# CMD must be a JSON array if it's passing additional options to an ENTRYPOINT
CMD ["-f", "/mnt/locust/locustfile.py", "-H", "http://master:8089"]
Then your docker-compose.yml file only needs to build and run the image, without duplicating any of these options:
version: '3.8'
services:
  locust-service:
    build: .
    image: my-docker-hub-name/locust
    ports:
      - "8089:8089"
Then docker-compose build && docker-compose push would build and push the image. On the target host you'd need to copy this docker-compose.yml file but remove the build: line.
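For reference, after removing the build: line, the file you copy to the target host would look like:

version: '3.8'
services:
  locust-service:
    image: my-docker-hub-name/locust
    ports:
      - "8089:8089"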
Glancing at the Locust documentation, this is similar to what it suggests under "Use docker image as a base image". You also may find it more flexible to set options with environment variables rather than command-line arguments, which would let you split options between the Dockerfile and the docker-compose.yml runtime configuration.
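As a sketch of that environment-variable approach (this assumes Locust's documented LOCUST_* variables, such as LOCUST_LOCUSTFILE and LOCUST_HOST; check the docs for your Locust version):

# Dockerfile: only bake the test code into the image
FROM locustio/locust
COPY locustfile.py /mnt/locust/

# docker-compose.yml: supply the options at runtime
version: '3.8'
services:
  locust-service:
    build: .
    image: my-docker-hub-name/locust
    ports:
      - "8089:8089"
    environment:
      - LOCUST_LOCUSTFILE=/mnt/locust/locustfile.py
      - LOCUST_HOST=http://master:8089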

Only Docker images can be pushed.
Volumes are created when you run the image, i.e. when a container is created with its volumes, as explained in the official documentation: https://docs.docker.com/storage/volumes/
Here is the example from that documentation:
docker run -d \
  --name=nginxtest \
  -v nginx-vol:/usr/share/nginx/html \
  nginx:latest

Related

Why does building an image with docker compose fail but succeed without it?

I am trying to build an image with docker compose and it fails; however, it works with just docker. I have read some SO posts saying that the error thrown on failure happens when a file/folder cannot be found in the Dockerfile. The build works when building with docker, so I don't know why it wouldn't work with docker-compose. Why is this happening?
The structure for this project is this:
parent_proj
|_ proj
|  |_ Dockerfile
|_ docker-compose.yml
Here is my docker-compose file:
version: '3.4'
services:
  integrations:
    build:
      context: .
      dockerfile: proj/Dockerfile
      network: host
    image: int
    ports:
      - "5000:5000"
Here is the Dockerfile inside proj/
FROM openjdk:11
USER root
#RUN apt-get bash
ARG JAR_FILE=target/proj-0.0.1-SNAPSHOT.jar
COPY ${JAR_FILE} /app2.jar
ENTRYPOINT ["java","-jar", "/app2.jar"]
When I'm inside the proj folder, I can run
docker build . -t proj
The above succeeds and I can subsequently run the container. However, when I am in parent_proj and run docker compose build, it fails with the error message
failed to compute cache key: failed to walk
/var/lib/docker/tmp/buildkit-mount316454722/target: lstat
/var/lib/docker/tmp/buildkit-mount316454722/target: no such file or
directory
Why does this happen? How can I build successfully with docker-compose without restructuring the project?
thanks
Your Compose build options and the docker build options you show are different. The successful command is (where -f Dockerfile is the default):
docker build ./proj -t proj    # -f Dockerfile is the default
#   context: ./proj, image: proj, dockerfile: Dockerfile
But your Compose setup is running
docker build . -t img -f proj/Dockerfile
#   context: ., image: img, dockerfile: proj/Dockerfile
Which one is right? In the Dockerfile, you
COPY target/proj-0.0.1-SNAPSHOT.jar /some/container/path
That target/... source path is always relative to the build-context directory (Compose context: option, the directory parameter to docker build), even if it looks like an absolute path and even if the Dockerfile is in a different directory. If that target directory is a subdirectory of proj then you need the first form.
There's a shorthand Compose build: syntax if the only thing you need to specify is the context directory, and I'd use that here. If you don't specifically care what the image name is (you're not pushing it to a registry) then Compose can pick a reasonable name on its own; you don't need to specify image:.
version: '3.8'
services:
  integrations:
    build: ./proj
    ports:
      - "5000:5000"

Docker: How to update your container when your code changes

I am trying to use Docker for local development. The problem is that when I make a change to my code, I have to run the following commands to see the updates locally:
docker-compose down
docker images # Copy the name of the image
docker rmi <IMAGE_NAME>
docker-compose up -d
That's quite a mouthful, and takes a while. (Possibly I could make it into a bash script, but do you think that is a good idea?)
My real question is: Is there a command that I can use (even manually each time) that will update the image & container? Or do I have to go through the entire workflow above every time I make a change in my code?
Just for reference, here is my Dockerfile and docker-compose.yml.
Dockerfile
FROM node:12.18.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 4000
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
web:
build:
context: .
dockerfile: Dockerfile
container_name: web
restart: always
ports:
- "3000:3000"
depends_on:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db
ports:
- "27017:27017"
Even though there are multiple good answers to this question, I think they missed the point, as the OP is asking about the local dev environment. The command I usually use in this situation is:
docker-compose up -d --build
If there aren't any errors in Dockerfile, it should rebuild all the images before bringing up the stack. It could be used in a shell script if needed.
#!/bin/bash
sudo docker-compose up -d --build
If you need to tear down the whole stack, you can have another script:
#!/bin/bash
sudo docker-compose down -v
The -v flag removes all the volumes so you can have a fresh start.
NOTE: In some cases, sudo might not be needed to run the command.
When a Docker image is built, the artifacts are already copied in, and no new change can be reflected until you rebuild the image.
But
If it is only for local development, then you can leverage volume sharing to update code inside the container at runtime. The idea is to share your app/repo directory on the host machine with /usr/src/app (as per your Dockerfile); with this approach, your code (and new changes) will appear on both the host and the running container.
Also, you will need to restart the server on every change; for this you can run your app using nodemon, which watches for changes in the code and restarts the server.
Changes required in docker-compose.yml:
services:
  web:
    ...
    container_name: web
    ...
    volumes:
      - /path/in/host/machine:/usr/src/app
    ...
    ports:
      - "3000:3000"
    depends_on:
      - mongo
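A minimal sketch of that dev setup (the entry file server.js and the anonymous node_modules volume are assumptions; adjust them to your project):

services:
  web:
    build: .
    command: npx nodemon server.js   # nodemon restarts the server on code changes
    volumes:
      - ./:/usr/src/app              # share the host source tree with the container
      - /usr/src/app/node_modules    # keep the dependencies installed in the image
    ports:
      - "3000:3000"
    depends_on:
      - mongo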
You may use Docker Swarm as an orchestration tool to apply rolling updates. Check Apply rolling updates to a service.
Basically, you deploy the stack once (perhaps with a shell script), and once your containers are running you can create a Jenkinsfile or configure a CI/CD pipeline to pull the updated image and apply it to the running service with docker service update --image <NEW_IMAGE> <SERVICE>.
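A rough sequence for that approach (the stack, service, and image names here are placeholders):

# one-time setup on the host
docker swarm init
docker stack deploy -c docker-compose.yml mystack

# later, from the CI/CD pipeline, roll the service onto a newly pushed image
docker service update --image my-registry/web:new-tag mystack_web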

What is the difference between `docker-compose build` and `docker build`?

What is the difference between docker-compose build and docker build?
Suppose there is a docker-compose.yml file in a dockerized project's path:
docker-compose build
And
docker build
docker-compose can be considered a wrapper around the docker CLI (in fact it is another implementation in Python, as said in the comments) that saves time and avoids 500-character-long command lines (and also starts multiple containers at the same time). It uses a file called docker-compose.yml to retrieve its parameters.
You can find the reference for the docker-compose file format here.
So basically docker-compose build will read your docker-compose.yml, look for all services containing the build: statement and run a docker build for each one.
Each build: can specify a Dockerfile, a context and args to pass to docker.
To conclude with an example docker-compose.yml file:
version: '3.2'
services:
  database:
    image: mariadb
    restart: always
    volumes:
      - ./.data/sql:/var/lib/mysql
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    depends_on:
      - database
When calling docker-compose build, only the web target needs an image to be built. The docker build command would look like:
docker build -t web_myproject -f Dockerfile-alpine ./web
docker-compose build will build the services in the docker-compose.yml file.
https://docs.docker.com/compose/reference/build/
docker build will build the image defined by Dockerfile.
https://docs.docker.com/engine/reference/commandline/build/
Basically, docker-compose is a better way to use docker than just a docker command.
If the question is whether the docker-compose build command will produce some kind of bundle containing multiple images, which would otherwise have been built separately from their usual Dockerfiles, then that thinking is wrong.
docker-compose build builds individual images by going through each service entry in docker-compose.yml.
With the docker images command, we can see all the individual images being saved as well.
The real magic is docker-compose up.
This one basically creates a network of interconnected containers that can talk to each other using container names as hostnames.
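For instance (a minimal sketch; both service names are made up):

version: '3'
services:
  api:
    build: ./api
  client:
    image: alpine
    depends_on:
      - api
    # the name "api" resolves to the api service's container on the compose network
    command: ping -c 1 api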
Adding to the first answer...
You can give the image name and container name under the service definition.
e.g. for the service called 'web' in the docker-compose example below, you can give the image name and container name explicitly, so that Docker does not have to use the defaults.
Otherwise, the image name that Docker uses will be the concatenation of the folder (directory) name and the service name, e.g. myprojectdir_web.
So it is better to explicitly set the desired image name that will be generated when the docker build command is executed.
e.g.
image: mywebserviceImage
container_name: my-webServiceImage-Container
Example docker-compose.yml file:
version: '3.2'
services:
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    image: mywebserviceImage
    container_name: my-webServiceImage-Container
    depends_on:
      - database
A few additional words about the difference between docker build and docker-compose build.
Both have an option for building images using an existing image as a cache of layers:
with docker build, the option is --cache-from <image>
with docker-compose, there is a cache_from key in the build section.
Unfortunately, up until now, at this level, images made by one are not usable by the other as a cache of layers (the IDs are not compatible).
However, docker-compose v1.25.0 (2019-11-18) introduces an experimental feature, COMPOSE_DOCKER_CLI_BUILD, so that docker-compose uses the native Docker builder (therefore, images made by docker build can be used as a cache of layers for docker-compose build).
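A sketch of the two mechanisms side by side (image and service names are placeholders). With plain docker build:

docker build --cache-from myapp:latest -t myapp:new .

The docker-compose.yml equivalent:

services:
  web:
    build:
      context: .
      cache_from:
        - myapp:latest

And to opt in to the native builder (docker-compose >= 1.25.0):

COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build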

Docker Compose does not allow to use local images

The following command fails, trying to pull image from the Docker Hub:
$ docker-compose up -d
Pulling web-server (web-server:staging)...
ERROR: repository web-server not found: does not exist or no pull access
But I just want to use a local version of the image, which exists:
$ docker images
REPOSITORY    TAG       IMAGE ID       CREATED       SIZE
web-server    staging   b94573990687   7 hours ago   365MB
Why Docker doesn't search among locally stored images?
This is my Docker Compose file:
version: '3'
services:
  chat-server:
    image: chat-server:staging
    ports:
      - "8110:8110"
  web-server:
    image: web-server:staging
    ports:
      - "80:80"
      - "443:443"
      - "8009:8009"
      - "8443:8443"
and my .env file:
DOCKER_HOST=tcp://***.***.**.**:2376
DOCKER_TLS_VERIFY=true
DOCKER_CERT_PATH=/Users/Victor/Documents/Development/projects/.../target/docker
In general, this should work as you describe it. Tried to reproduce it, but it simply worked...
Folder structure:
.
├── docker-compose.yml
└── Dockerfile
Content of Dockerfile:
FROM alpine
CMD ["echo", "i am groot"]
Build and tag image:
docker build -t groot .
docker tag groot:latest groot:staging
with docker-compose.yml:
version: '3.1'
services:
  groot:
    image: groot:staging
and start docker-compose:
$ docker-compose up
Creating groot_groot ...
Creating groot_groot_1 ... done
Attaching to groot_groot_1
groot_1 | i am groot
groot_groot_1 exited with code 0
Version >1.23 (2019 and newer)
The easiest way is to change image: to build: and reference the Dockerfile in its relative directory, as shown below:
version: '3.0'
services:
  custom_1:
    build:
      context: ./my_dir
      dockerfile: Dockerfile
This allows docker-compose to manage the entire build and image orchestration in a single command.
# Rebuild all images
docker-compose build
# Run system
docker-compose up
In your docker-compose.yml, you can specify build: . instead of image: <username>/<repo> for local builds (rather than pulling from Docker Hub). I can't verify this yet, but I believe you may be able to use relative paths from the docker-compose file for multiple services.
services:
  app:
    build: .
Reference: https://github.com/gvilarino/docker-workshop
March-09-2020 EDIT:
(docker version 18.09.9-ce, build 039a7df;
docker-compose version 1.24.0, build 0aa59064)
I found that to just create a Docker container, you can simply run docker-compose up -d after tagging the image with a fake local registry server tag (localhost:5000/{image}).
$ docker tag {imagename}:{imagetag} localhost:5000/{imagename}:{imagetag}
You don't need to run the local registry server, but you do need to change the image URL in the docker-compose YAML file to the fake local registry server URL:
version: '3'
services:
  web-server:
    image: localhost:5000/{your-image-name} # changed from {imagename}:{imagetag} to localhost:5000/{imagename}:{imagetag}
    ports:
      - "80:80"
and then just up -d:
$ docker-compose -f {yamlfile}.yaml up -d
This creates the container if you already have the image (localhost:5000/{imagename}) on your local machine.
Adding to #Tom Saleeba's response,
I still got errors after tagging the image with a "/"
(for example: victor-dombrovsky/docker-image:latest).
It kept looking for the image on the remote docker.io server:
registry_address/docker-image
It seems the URL before the "/" is the registry address and the part after it is the image name, and without a "/" provided, docker-compose by default looks for the image on the remote docker.io.
I guess it's a known bug with docker-compose.
I finally got it working by running the local registry, pushing the image to the local registry with the registry tag, and pulling the image from the local registry.
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
$ docker tag your-image-name:latest localhost:5000/your-image-name
$ docker push localhost:5000/your-image-name
and then change the image URL in the docker-compose file:
version: '3'
services:
  chat-server:
    image: chat-server:staging
    ports:
      - "8110:8110"
  web-server:
    image: localhost:5000/{your-image-name} ##### change here
    ports:
      - "80:80"
      - "443:443"
      - "8009:8009"
      - "8443:8443"
Similarly for the chat-server image.
You might need to change your image tag to have two parts separated by a slash /. So instead of
chat-server:staging
do something like:
victor-dombrovsky/chat-server:staging
I think there's some logic behind Docker tags and "one part" tags are interpreted as official images coming from DockerHub.
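For example, retagging the image from the question (the Docker Hub username is only a placeholder):

docker tag chat-server:staging victor-dombrovsky/chat-server:staging

and then reference victor-dombrovsky/chat-server:staging in docker-compose.yml.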
For me, putting build: . did the trick. My working docker-compose file looks like this:
version: '3.0'
services:
  terraform:
    build: .
    image: tf:staging
    env_file: .env
    working_dir: /opt
    volumes:
      - ~/.aws:/.aws
You have a DOCKER_HOST entry in your .env 👀
From the looks of your .env file you seem to have configured docker-compose to use a remote docker host:
DOCKER_HOST=tcp://***.***.**.**:2376
Moreover, this .env is only loaded by docker-compose, but not docker. So in this situation your docker images output doesn't represent what images are available when running docker-compose.
When running docker-compose you're actually running Docker on the remote host tcp://***.***.**.**:2376, yet when running docker by itself you're running Docker locally.
When you run docker images, you're indeed seeing a list of the images that are stored locally on your machine. But docker-compose up -d is going to attempt to start the containers not on your local machine, but on ***.***.**.**:2376. docker images won't show you what images are available on the remote Docker host unless you set the DOCKER_HOST environment variable, like this for example:
DOCKER_HOST=tcp://***.***.**.**:2376 docker images
Evidently the remote Docker host doesn't have the web-server:staging image stored there, nor is the image available on Docker hub. That's why Docker complains it can't find the image.
Solutions
Run the container locally
If your intention was to run the container locally, then simply remove the DOCKER_HOST=... line from your .env and try again.
Push the image to a repository.
However, if you plan on running the image remotely on the given DOCKER_HOST, then you should probably push it to a repository. You can create a free repository at Docker Hub, or you can host your own repository somewhere, and use docker push to push the image there; then make sure your docker-compose.yml references the correct repository.
Save the image, load it remotely.
If you don't want to push the image to Docker Hub or host your own repository you can also transfer the image using a combination of docker image save and docker image load:
docker image save web-server:staging | DOCKER_HOST=tcp://***.***.**.**:2376 docker image load
Note that this can take a while for big images, especially on a slow connection.
You can use pull_policy:
image: img:tag
pull_policy: if_not_present
My issue when getting this was that I had built using docker without sudo, and ran docker compose with sudo. Running docker images and sudo docker images gave me two different sets of images, where sudo docker compose up gave me access only to the latter.

Docker-compose.yml file that builds a base image, then children based on it?

For clarification, when I say base image, I mean the parent image that has all the common configurations, so that the children based on it don't need to download the dependencies individually.
From my understanding, docker-compose.yml files are the run-time configurations, while Dockerfiles are the build-time configurations. However, there is a build option using docker-compose, and I was wondering how I could use this to build a base image.
As of right now, I use a shellscript that runs other shellscripts. One builds all my images, from a base image that it also creates. The other runs them as containers with the necessary configurations. However, the base image is never run as a container.
Currently, the shellscript I hope to change into a docker-compose file, looks like so:
echo "Creating docker network net1"
docker network create net1
echo "Running api as a container with port 5000 exposed on net1"
docker run --name api_cntr --net net1 -d -p 5000:5000 api_img
echo "Running redis service with port 6379 exposed on net1"
docker run --name message_service --net net1 -p 6379:6379 -d redis
echo "Running celery worker on net1"
docker run --name celery_worker1 --net net1 -d celery_worker_img
echo "Running flower HUD on net1 with port 5555 exposed"
docker run --name flower_hud --net net1 -d -p 5555:5555 flower_hud_img
The shellscript that makes the images, is as follows:
echo "Building Base Image"
docker build -t base ../base-image
echo "Building api image from Dockerfile"
docker build -t api_img ../api
echo "Building celery worker image"
docker build -t celery_worker_img ../celery-worker
echo "Building celery worker HUD"
docker build -t flower_hud_img ../flower-hud
My question comes down to one thing: can I create this base image with docker-compose without ever running it in a container? (All the Dockerfiles other than the base itself start with FROM base:latest.) I'm looking to make it as easy as possible for other people, so that they only have to run a single command.
EDIT: I am using version 3, and according to the docs, build: is ignored, and docker-compose only accepts pre-built images.
Yes, kind of. Use it like this:
version: '2'
services:
  wls-admin:
    container_name: wls-admin
    image: weblogic-domain
    build:
      context: wls-admin
      args:
        - ADMIN_PORT=${WLS_ADMIN_PORT}
        - CLUSTER_NAME=${WLS_CLUSTER_NAME}
        - PRODUCTION_MODE=dev
    networks:
      - wls-network
The image clause here makes docker-compose build generate a Docker image named weblogic-domain for this service. This image can be reused by other services' Dockerfiles, even in the same build process.
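For instance, another service's Dockerfile could start from it (a sketch; the copied configuration is hypothetical):

FROM weblogic-domain
# hypothetical: layer service-specific setup on top of the shared base
COPY config/ /opt/config/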
Doing a bit more research based on #amiasato's answer, it looks as if there is a replicas key, which you can set to 0 like so:
version: "3"
services:
base-image:
build:
context: .
dockerfile: Dockerfile-base
deploy:
mode: replicated
replicas: 0
See https://docs.docker.com/compose/compose-file/compose-file-v3/#replicas
Just a minor addition to Kanedias' answer. If you choose to follow his approach (which was my choice), you can avoid instantiating a container for the base image with the --scale flag from the docker-compose up command:
docker-compose up --scale wls-admin=0
From the up command documentation:
--scale SERVICE=NUM    Scale SERVICE to NUM instances. Overrides the
                       `scale` setting in the Compose file if present.
One important thing to note is that the scale setting in the docker-compose.yml was removed in v3, so there is actually nothing to override in v3.
Instead of running docker-compose directly, you can implement a script which first builds the base image with a specific tag (docker build ... -t your_tag) and then runs docker-compose. In the children's Dockerfiles you can use FROM your_tag.
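A minimal sketch of such a script, reusing the paths from the question:

#!/bin/bash
set -e
# build the shared parent image first so the children can FROM it
docker build -t base:latest ../base-image
# then let docker-compose build and start everything else
docker-compose up -d --build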
As per the documentation, the build option of a service takes as its argument a directory containing the famous Dockerfile. There is no way to build a base image and then the actual image of the service.
Docker is an environment in which your application runs. When you are creating a base image, it should contain things which are not going to change often. Then you build the base image once, upload it to your repository, and use FROM baseimage:latest in the Dockerfile.
For example, if you are building a python application you can create it from python and install requirements:
FROM python:3.6
COPY requirements.txt .
RUN pip install -r requirements.txt
Here, python:3.6 is the base image; it is not going to change often, so you need not build it every time you run docker-compose commands.
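If the base image does need to be rebuilt and shared, you would build and push it once (the repository name is a placeholder):

docker build -t myrepo/python-base:3.6 ./base-image
docker push myrepo/python-base:3.6

and the children's Dockerfiles would then start with FROM myrepo/python-base:3.6.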
From the shellscript that makes the images, we can see that you have different Dockerfiles in different directories. You can use that to create a docker-compose.yml file. The build settings tell Docker how it should build the image.
You can use those dockerfiles in your compose file in this manner:
version: '3'
services:
  api_cntr:
    image: api_img
    build:
      context: ./api
    container_name: api_cntr
    ports:
      - 5000:5000
Here, I have assumed that your docker-compose.yml file is placed in a folder which also contains the api directory (and likewise the other service directories, such as base-image), each with a Dockerfile used to build its image.
This can be the structure of one of your services. In a similar manner, you can create the other services as well. And while using docker-compose you will not need to specify a network for each, because all services declared within a docker-compose.yml file are part of an isolated network.
