My app depends on secrets, which I have stored in the folder .credentials (e.g. .credentials/.env, .credentials/.google_api.json, etc.). I don't want these files built into the docker image; however, they need to be visible to the docker container.
My solution is:
Add .credentials to my .dockerignore
Mount the credentials folder in read-only mode with a volume:
# docker-compose.yaml
version: '3'
services:
  app:
    build: .
    volumes:
      - ./.credentials:/app/.credentials:ro
This is not working (I do not see any credentials inside the docker container). I'm wondering if the .dockerignore is causing the volume to break, or if I've done something else wrong?
Am I going about this the wrong way? For example, I could just pass the .env file with docker run --env-file .env IMAGE_NAME
Edit:
My issue was to do with how I was running the image. I was doing docker-compose build and then docker run IMAGE_NAME, assuming that the volumes were built into the image. However, this seems not to be the case.
Instead, the above code works when I do docker-compose run app (where app is the service name) after building.
From the comments, the issue here is in looking at the docker-compose.yml file for your container definition while starting the container with docker run. The docker run command does not use the compose file, so no volumes were defined on the resulting container.
The build process itself creates an image, and at that point the source of a volume is never specified. Only the Dockerfile and your build context are used as input to the build. The rest of the compose file consists of run-time settings that apply to containers. Many projects do not even use the compose file for building the image, so for those projects all the settings in the compose file are simply a way to define default settings for the containers being created.
The solution is to use docker-compose up -d to test your docker-compose.yml.
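For illustration, a minimal sketch of the two ways to get the mount at run time; IMAGE_NAME is a stand-in for whatever tag your build actually produced:
# Let Compose create the container, so it reads docker-compose.yaml (including the volumes):
docker-compose up -d
# Or, if you really want plain docker run, repeat the volume flag yourself:
docker run -v "$(pwd)/.credentials:/app/.credentials:ro" IMAGE_NAME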
Related
I'm trying to understand volumes.
When I build and run this image with docker build -t myserver . and docker run -dp 8080:80 myserver, the web server on it prints "Hallo". When I change "Hallo" to "Huhu" in the Dockerfile and rebuild & run the image/container, it shows "Huhu". So far, no surprises.
Next, I added a docker-compose.yaml file that has two volumes. One volume is mounted on an existing path of where the Dockerfile creates the index.html. The other is mounted on a new and unused path. I build and run everything with docker compose up --build.
On the first build, the web server prints "Hallo" as expected. I can also see the two volumes in Docker GUI and its contents. The index.html that was written to the image, is now present in the volume. (I guess the volume gets mounted before the Dockerfile can write to it.)
On the second build (swap "Hallo" with "Huhu" and run docker compose up --build again) I was expecting the web server to print "Huhu". But it prints "Hallo". So I'm not sure why the data on the volume was not overwritten by the Dockerfile.
Can you explain?
Here are the files:
Dockerfile
FROM nginx
# First build
RUN echo "Hallo" > /usr/share/nginx/html/index.html
# Second build
# RUN echo "Huhu" > /usr/share/nginx/html/index.html
docker-compose.yaml
services:
  web:
    build: .
    ports:
      - "8080:80"
    volumes:
      - html:/usr/share/nginx/html
      - persistent:/persistent

volumes:
  html:
  persistent:
There are three different cases here:
When you build the image, it knows nothing about volumes. Whatever string is in that RUN echo line is stored in the image. Volumes are not mounted when you run the docker-compose build step, and the Dockerfile cannot write to a volume at all.
The first time you run a container with the volume mounted, and the first time only, if the volume is empty, Docker copies content from the mount point in the image into the volume. This only happens with named volumes and not bind mounts; it only happens on native Docker and not Kubernetes; the volume content is never updated at all after this happens.
The second time you run a container with the volume mounted, since the volume is already populated, the content from the volume hides the content in the image.
You routinely see setups that use named volumes to "pass through" content from the image (especially Node applications) or to "share files" with another container (frequently an Nginx server). These only work because Docker (and only Docker) automatically populates empty named volumes, and therefore they only work the first time. If you change your package.json, your Node application that mounts a volume over node_modules won't see updates; if you change the static assets that you're sharing with a web server, the named volume will hide those changes in both the application and HTTP-server containers.
Since the named-volume auto-copy only happens in this one very specific case, I'd try to avoid using it, and more generally try to avoid mounting anything over non-empty directories in your image.
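As a concrete check, since the copy only happens into an empty volume, the only way to see the new "Huhu" through the html volume is to delete that volume and let Docker repopulate it on the next start. A minimal sketch of that workflow:
# Stop the containers and remove the named volumes declared in the compose file
docker compose down -v
# Rebuild the image and start again; the now-empty html volume is repopulated
# from the image, so the new index.html becomes visible
docker compose up --build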
We have a setup, which I have taken over from a colleague who has left, where our Jenkins server runs within a Docker container.
I am seeing some behaviour which I do not understand and have not been able to work out what is going on from the documentation.
My folder structure looks like this:
└── Master
├── docker-compose.yml
└── jenkins-master
└── Dockerfile
My docker-compose.yml file looks like this (this is just a snippet of the relevant part):
version: '3'
services:
  master:
    build: ./jenkins-master
I have updated the version of the base Jenkins image in jenkins-master/Dockerfile and then rebuilt using docker-compose build.
This succeeds and results in an image called master_master
If I run docker images I see this new image as well as a previous image:
REPOSITORY       TAG      IMAGE ID   CREATED         SIZE
master_master    latest   <id1>      16 hours ago    704MB
jenkins_master   latest   <id2>      10 months ago   707MB
As I understand it, the name master_master is a result of the base folder name (i.e. Master) and the service name master in the docker-compose.yml file.
I don't know how the existing image ended up with the name jenkins_master. Would the folder name have had to be Jenkins rather than Master, or is there another way that would have resulted in this name?
When I run docker-compose up -d it uses the master_master image to launch a container (called master_master_1).
When I run docker-compose -p jenkins up -d it uses the jenkins_master image to launch a container (called jenkins_master_1).
Apart from the different container names, the resultant running containers are different as I can see that the Jenkins versions are different (as per the change I made in the Dockerfile).
I do not change the docker-compose file at all between running these 2 commands and yet different images are run.
The documentation that I have found for specifying the -p (--project-name) flag states:
Sets the project name. This value is prepended along with the service
name to the container on start up. For example, if your project name
is myapp and it includes two services db and web, then Compose
starts containers named myapp_db_1 and myapp_web_1 respectively.
Setting this is optional. If you do not set this, the
COMPOSE_PROJECT_NAME defaults to the basename of the project
directory.
There is nothing that leads me to believe that the -p flag will result in a different image being run.
So what is going on here?
How does docker-compose choose which image to run?
Is this happening due to the names of the images master_master vs jenkins_master?
If you're going to use the docker-compose -p option, you need to use it with every docker-compose command, including docker-compose build.
If your docker-compose.yml file doesn't specify an image:, Compose constructs an image name from the current project name and the Compose service name. The project name and Docker object metadata are the only way it has to remember anything. So what's happening here is that the plain docker-compose build builds the image for the master service in the master project, but then docker-compose -p jenkins up looks for the master service in the jenkins project, and finds the other image.
docker-compose -p jenkins build
docker-compose -p jenkins up -d
It may or may not be easier to set the COMPOSE_PROJECT_NAME environment variable, possibly putting this in a .env file. In a Jenkins context, I also might consider using Jenkins's Docker integration to build (and push) the image, and only referring to image: in the docker-compose.yml file.
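For example, one sketch of the .env approach is to pin the project name next to the compose file, so that every command (build, up, down) resolves to the same image and container names without needing -p on each invocation:
# .env, in the same directory as docker-compose.yml
COMPOSE_PROJECT_NAME=jenkins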
Alternatively, add an image: option to the docker-compose.yml file. Compose will then tag the image it builds with that name and use it when creating the container, regardless of the project name.
services:
  master:
    build: ./jenkins-master
    image: dockerimage_name:tag
I use docker-compose for a simple keycloak container and I've been trying to install a new theme for keycloak.
However, I've been unable to copy even a single file to the container using a Dockerfile. The Dockerfile and docker-compose.yml are in the same directory.
Neither of these commands works or causes any events or warnings in the logs.
COPY test /tmp/
COPY /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login/. /opt/jboss/keycloak/themes/keycloak/login/
Copying manually with
sudo docker cp test docker_keycloak_1:/tmp
works without any issues.
A quick recap of Docker basics:
docker build: create an image from a Dockerfile.
docker run: create a container from an image.
(You can build the image yourself or use an existing image from Docker Hub.)
Based on what you said, you have two options.
The first option is to create a new Docker image based on the existing one and add the theme,
something like:
# Dockerfile
FROM jboss/keycloak
COPY test /tmp/
COPY /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login/. /opt/jboss/keycloak/themes/keycloak/login/
and then use docker build to create your new image
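For example, assuming the Dockerfile above sits in the same directory as your docker-compose.yml, a sketch of the build step (the tag keycloak-with-theme is just an illustrative name):
# build the extended image from the Dockerfile above
docker build -t keycloak-with-theme .
Then point the image: entry of your Compose service at keycloak-with-theme instead of the stock Keycloak image, or replace image: with build: . so that docker-compose up --build rebuilds it for you.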
The second option is to mount the theme into the correct directory
using a docker-compose volume:
version: '3'
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    volumes:
      - "./docker/kctheme/theme/login:/opt/jboss/keycloak/themes/keycloak/login"
If you use COPY, the source files have to live inside the build context (by default, the directory that contains your Dockerfile, or a subdirectory of it), and they have to be present at build time. Absolute host paths outside the context will not work.
/tmp as a destination is also a bit tricky, because the container's startup process might clean out /tmp, which means you would never see that file in a running container.
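If the theme really has to stay where it is on the host, one workaround is to widen the build context in the compose file so the files fall inside it; the paths below are placeholders based on the paths in the question, so adjust them to where your Dockerfile actually lives:
services:
  keycloak:
    build:
      context: /home/adm/workspace/docker/keycloak-cluster   # everything under here is sent to the build
      dockerfile: path/to/Dockerfile                         # relative to the context above
With that context, the COPY source is written relative to it, e.g. COPY docker/kctheme/theme/login/ /opt/jboss/keycloak/themes/keycloak/login/.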
I want to deploy a Vue.js app inside a Docker nginx container, but before that container runs, the Vue.js source has to be compiled via npm run build. I want the compilation to run in a container and then exit, leaving only the compiled result for the nginx container.
Every time docker-compose up is run, the Vue.js app has to be recompiled, because there is a .env file on the host OS that has to be volume mounted and the variables in it could be updated.
The ideal way, I think, would be some way of creating stages for docker-compose, like in GitLab CI, so there would be a build stage and, when that finishes, the nginx container starts. But when I looked this up I couldn't see a way to do it.
What would be the best way to compile my vueJS app every time docker-compose up is run?
If you're already building your Vue.js app into an image (with a Dockerfile), you can make use of the build directive in your docker-compose.yml file. That way, you can use docker-compose build to build the images manually, or use up --build to rebuild them right before the containers launch.
For example, this Compose file defines a service built from a Dockerfile instead of a prebuilt image:
version: '3'
services:
  vueapp:
    build: ./my_app # There should be a Dockerfile in this directory
That means I can build the images and run the services separately:
docker-compose build
docker-compose up
Or, I can use the build-before-run option:
# Build containers, and recreate if necessary (build cache will be used)
docker-compose up --build
If your .env file changes (and containers don't pick up changes on restart), you might consider defining those variables in the container build file instead. Otherwise, consider putting the .env file into a directory and mounting the directory, not the file, because some editors write through a swap file and change the inode, which breaks a file mount. If you mount a directory and change files within that directory, the changes will be reflected in the container, because the parent directory's inode didn't change.
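A minimal sketch of that directory mount, where ./config on the host and /app/config in the container are placeholder paths:
version: '3'
services:
  vueapp:
    build: ./my_app
    volumes:
      # mount the folder that holds .env, not the .env file itself,
      # so editor swap-file/inode changes don't break the mount
      - ./config:/app/config:ro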
I ended up having an nginx container that reads the files from a volume mount, and a container that builds the app and places the files in the same volume mount. While the app is compiling, nginx serves the old version, and when the compilation is finished, the files get replaced with the new ones.
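A rough sketch of that layout, with placeholder service names, paths, and ports (the exact output directory depends on your Vue build configuration):
version: '3'
services:
  builder:
    build: ./vue-app              # image containing node/npm and the app source
    command: npm run build        # compiles the app into /app/dist, then exits
    volumes:
      - dist:/app/dist
      - ./.env:/app/.env:ro
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - dist:/usr/share/nginx/html:ro   # nginx serves whatever the builder produced
volumes:
  dist: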
Maybe I'm missing this when reading the docs, but is there a way to overwrite files on the container's file system when issuing a docker run command?
Something akin to the Dockerfile COPY command? The key desire here is to be able to take a particular Docker image, and spin several of the same image up, but with different configuration files. (I'd prefer to do this with environment variables, but the application that I'm Dockerizing is not partial to that.)
You have a few options. Using something like docker-compose, you could automatically build a unique image for each container, using your base image as a template. For example, if you had a docker-compose.yml that looked like:
container0:
  build: container0
container1:
  build: container1
And then inside container0/Dockerfile you had:
FROM larsks/thttpd
COPY index.html /index.html
And inside container0/index.html you had whatever content you wanted, then running docker-compose build would generate unique images for each entry (and running docker-compose up would start everything up).
I've put together an example of the above here.
Using just the Docker command line, you can use host volume mounts, which allow you to mount files into a container as well as directories. Using my thttpd as an example again, you could use the following -v argument to override /index.html in the container with the content of your choice (note that the host path has to be absolute, otherwise Docker treats it as a named volume):
docker run -v "$(pwd)/index.html:/index.html" larsks/thttpd
And you could accomplish the same thing with docker-compose via the volume entry:
container0:
  image: larsks/thttpd
  volumes:
    - ./container0/index.html:/index.html
container1:
  image: larsks/thttpd
  volumes:
    - ./container1/index.html:/index.html
I would suggest that using the build mechanism makes more sense if you are trying to override many files, while using volumes is fine for one or two files.
A key difference between the two mechanisms is that when you build the files into the image, each container gets its own copy of them, while with volume mounts, changes made to the file inside the container are also reflected on the host filesystem (and vice versa).