Good day everyone.
My task is to use a volume that is shared among several services. The volume is populated by an ADD or COPY command in one service's Dockerfile. The problem I encountered is that the volume is not updated when the services are started via docker-compose up.
Consider the following setup:
# docker-compose.yml
version: "3"
services:
  service-a:
    build:
      context: service-a
      dockerfile: Dockerfile
    volumes:
      - shared:/test/dir
  service-b:
    build:
      context: service-b
      dockerfile: Dockerfile
    volumes:
      - shared:/another-test/dir
volumes:
  shared:
# ./service-a/Dockerfile
FROM alpine:3.8
COPY test-dir /test/dir
CMD [ "tail", "-f", "/dev/null" ]
# ./service-b/Dockerfile
FROM alpine:3.8
CMD [ "tail", "-f", "/dev/null" ]
We also create an empty ./service-a/test-dir folder. Now let's build it:
> docker-compose build
Building service-a
Step 1/3 : FROM alpine:3.8
---> 196d12cf6ab1
Step 2/3 : COPY test-dir /test/dir
---> ac66ed92b442
Step 3/3 : CMD [ "tail", "-f", "/dev/null" ]
---> Running in 932eb32b6184
Removing intermediate container 932eb32b6184
---> 7e0385d17f96
Successfully built 7e0385d17f96
Successfully tagged docker-compose-test_service-a:latest
Building service-b
Step 1/2 : FROM alpine:3.8
---> 196d12cf6ab1
Step 2/2 : CMD [ "tail", "-f", "/dev/null" ]
---> Running in 59a8b91c6b2d
Removing intermediate container 59a8b91c6b2d
---> 4e2c16ea5a80
Successfully built 4e2c16ea5a80
Successfully tagged docker-compose-test_service-b:latest
And start services:
> docker-compose up --no-build -d
Creating network "docker-compose-test_default" with the default driver
Creating volume "docker-compose-test_shared" with default driver
Creating docker-compose-test_service-a_1 ... done
Creating docker-compose-test_service-b_1 ... done
Let's check mapped directories:
> docker-compose exec service-a ls -lah /test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:14 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:14 ..
> docker-compose exec service-b ls -lah /another-test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:14 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:14 ..
Now let's put a few text files in ./service-a/test-dir on the host machine and build again:
> docker-compose build
Building service-a
Step 1/3 : FROM alpine:3.8
---> 196d12cf6ab1
Step 2/3 : COPY test-dir /test/dir
---> bd168b0fc8cc
Step 3/3 : CMD [ "tail", "-f", "/dev/null" ]
---> Running in 6e81b32243e1
Removing intermediate container 6e81b32243e1
---> cc28fc6de9ac
Successfully built cc28fc6de9ac
Successfully tagged docker-compose-test_service-a:latest
Building service-b
Step 1/2 : FROM alpine:3.8
---> 196d12cf6ab1
Step 2/2 : CMD [ "tail", "-f", "/dev/null" ]
---> Using cache
---> 4e2c16ea5a80
Successfully built 4e2c16ea5a80
Successfully tagged docker-compose-test_service-b:latest
As you can see, the cache is not used for the COPY step in service-a, meaning the changes are baked into the image. Now let's start the services:
> docker-compose up --no-build -d
Recreating docker-compose-test_service-a_1 ...
Recreating docker-compose-test_service-a_1 ... done
Once again service-b remains untouched; only service-a gets recreated. Let's check the running services (this is where the problem shows up):
> docker-compose exec service-a ls -lah /test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:17 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:20 ..
> docker-compose exec service-b ls -lah /another-test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:17 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:14 ..
So the files are not reflected in the volume... However, if we launch a temporary container based on the service-a image, it shows the proper listing:
> docker run --rm docker-compose-test_service-a ls -lah /test/dir
total 8
drwxr-xr-x 2 root root 4.0K Dec 12 06:20 .
drwxr-xr-x 3 root root 4.0K Dec 12 06:20 ..
-rwxr-xr-x 1 root root 0 Dec 12 06:20 rrrrrr.txt
-rwxr-xr-x 1 root root 0 Dec 12 06:16 test.txt
Any ideas for a workaround? So far it seems that only a complete shutdown via docker-compose down with volume destruction helps. That is not the best solution, though, as in a real project it would cause serious downtime.
I hope the configuration is readable, but I can put it into a small git repo if needed.
Thanks in advance.
Docker will only auto-populate a volume when it is first created. In the workflow you describe, when you delete and recreate the "first" container with its new image, you need to delete and recreate the volume too, which probably means deleting and recreating the whole stack.
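A hedged sketch of that full-teardown workflow (which, as the question already notes, implies downtime):
# Tear the stack down including its named volumes, then rebuild and restart;
# the shared volume is recreated and repopulated from the new image.
docker-compose down -v
docker-compose up --build -d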
There are a couple of ways around this issue:
Rearchitect your application so it doesn't need to share files. (Obviously the most work.) If there's some sort of semi-static content, you might need to bake it into the consuming image, or set up the two parts of your system to communicate via HTTP. (This sort of data sharing across containers is not something Docker is great at, and it gets worse when you start looking at multi-host solutions like Swarm or Kubernetes.)
If there are only two parts involved, build a single image that runs both processes. I've seen other SO questions that do this for a PHP-FPM component serving dynamic content plus an nginx server that serves static content and forwards some requests to PHP-FPM, where all of the static and dynamic content is "part of the application" and HTTP via nginx is the single entry point into the container. supervisord is the de facto default control process when this is necessary.
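A minimal, hypothetical supervisord.conf for that layout (the program names and commands are assumptions, not taken from any particular setup in this thread):
; supervisord.conf: run both processes in the foreground,
; with supervisord itself as the container's main process.
[supervisord]
nodaemon=true

[program:php-fpm]
command=php-fpm -F

[program:nginx]
command=nginx -g "daemon off;"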
Inside the "producing" container, write some startup-time code that copies data from something in the image into a shared-data location
#!/bin/sh
# Run me as the image's ENTRYPOINT.
if [ -d /data ]; then
cp -r /app/static /data
done
exec "$#"
This will repopulate the data volume on every startup.
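For completeness, a hedged sketch of how such a script could be wired into the producing image from the question (the entrypoint.sh file name is an assumption, and the shared volume is assumed to be mounted at /data, matching the script above):
# ./service-a/Dockerfile (sketch)
FROM alpine:3.8
COPY test-dir /app/static
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# The entrypoint copies /app/static into the mounted volume, then execs the CMD.
ENTRYPOINT ["/entrypoint.sh"]
CMD [ "tail", "-f", "/dev/null" ]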
Related
I am going through a Docker course and have a simple Dockerfile which sets up an image:
FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package*.json ./
RUN npm install
COPY . .
ENV APP_URL=http://api.myapp.com
EXPOSE 3000
CMD ["npm", "start"]
Now, in the Dockerfile it switches to USER app, and when I open a shell in the container using docker exec -it 187 sh and run whoami, I get the response app, which is correct. The problem comes when I try to write a file using the echo command:
echo data > data.txt
sh: can't create data.txt: Permission denied
So then I run ls -la to view files and perms:
/app $ ls -la
total 1456
drwxr-xr-x 1 root root 4096 Oct 20 16:38 .
drwxr-xr-x 1 root root 4096 Oct 20 19:54 ..
-rw-rw-r-- 1 root root 13 Oct 20 13:46 .dockerignore
drwxr-xr-x 7 root root 4096 Mar 9 2021 .git
-rw-r--r-- 1 root root 310 Mar 5 2021 .gitignore
-rw-rw-r-- 1 root root 311 Oct 20 16:38 Dockerfile
-rw-r--r-- 1 root root 3362 Mar 5 2021 README.md
drwxr-xr-x 1 root root 4096 Oct 20 16:38 node_modules
-rw-rw-r-- 1 root root 1434378 Oct 20 16:10 package-lock.json
-rw-r--r-- 1 root root 814 Oct 20 16:10 package.json
drwxr-xr-x 2 root root 4096 Mar 9 2021 public
drwxr-xr-x 2 root root 4096 Oct 20 13:22 src
This shows that root is the owning user and group for these files and directories. That was obviously intended, as we don't want to be logged in as root. So what should I do to be able to add this file to the container? What is the best practice here? Maybe I missed a step somewhere?
Edit: Should /app be owned by the app user? If so, what is the point of adding a new user? Should I add this to the Dockerfile:
RUN chown app /app
Thanks!
Should the /app be owned by the USER app?
Definitely not. You want to prevent the application from overwriting its own code and static assets, intentionally or otherwise.
So what should I do to be able to add this file to the container?
Create a dedicated directory to hold your application's data. This should be a different directory from the directory with the source code; a subdirectory of your normal application directory is fine. In the Dockerfile, make this directory (only) be owned by your non-root user.
FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
# don't switch to this user quite yet
WORKDIR /app
# usual setup and build stuff
COPY package*.json ./
RUN npm ci
COPY . ./
RUN npm run build
# create the data directory and set its owner
RUN mkdir data && chown app data
# _now_ switch to the non-root user when running the container
EXPOSE 3000
USER app
CMD ["npm", "start"]
In practice, you probably want to persist the application's data beyond the lifespan of a single container. One approach to this is to use a Docker named volume. If you do this, the volume will be initialized from the image, including its ownership, and so you don't need any special setup here.
docker volume create app-data
docker run -v app-data:/app/data ...
For several reasons you may prefer to use a bind mount (if you need to access the files directly from outside of Docker; it may be easier to back up and restore the files; ...). You can use the docker run -v option to bind-mount a host directory into the container, but the mounted directory keeps its host-system numeric uid as its owner. However, notice that the only thing in the image owned by app is the data directory, and the code is otherwise world-readable, so if we run the container with the same uid as the host user, this will still work.
docker run -v "$PWD/data:/app/data" -u $(id -u) ...
You should not normally need Docker volumes for your application code (it is contained in the image), nor should you need to build a specific host uid into the image.
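If you are using Compose rather than plain docker run, a hedged equivalent of the named-volume approach (the service and volume names here are assumptions):
# docker-compose.yml (sketch)
version: "3"
services:
  app:
    build: .
    volumes:
      - app-data:/app/data
volumes:
  app-data: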
After investigating usermod, this, and github, there seems to be no acceptable way to give Spring Boot write access to the /opt/service/log directory/volume, which ends up in java.io.FileNotFoundException: log/app.log (Permission denied).
Dockerfile:
FROM openjdk:8-alpine
RUN apk update && apk add --no-cache bash curl busybox
EXPOSE 8080
#1 RUN mkdir -p /opt/service/log ; chown -R user /opt/service/log
VOLUME ["/opt/service/log"]
# a few COPY commands
RUN adduser -D -S -u 1000 user && chown -R 1000 /opt/service/
#2 RUN chmod -R 777 /opt/service
RUN chmod 755 /opt/service/entrypoint.sh
USER 1000
RUN ls -la .
RUN touch /opt/service/log/test.log
ENTRYPOINT ["/opt/service/entrypoint.sh"]
#1: this commented-out fix works, but it is not acceptable since the directory can be changed later on.
The output of building the Dockerfile:
[INFO] DOCKER> Step 13/15 : RUN ls -la .
[INFO] DOCKER>
[INFO] DOCKER> ---> Running in a99022c07da2
[INFO] DOCKER> total 28088
drwxr-xr-x 1 user root 4096 Oct 15 11:05 .
drwxr-xr-x 1 root root 4096 Oct 15 11:02 ..
-rw-r--r-- 1 user root 4367 Sep 17 10:18 entrypoint.sh
drwxr-xr-x 2 root root 4096 Oct 15 11:05 log
-rw-r--r-- 1 user root 28741050 Oct 15 11:05 service.jar
[INFO] DOCKER> Removing intermediate container a99022c07da2
[INFO] DOCKER> ---> d0831197c79c
[INFO] DOCKER> Step 14/15 : RUN touch /opt/service/log/test.log
[INFO] DOCKER>
[INFO] DOCKER> ---> Running in 54f5d57499fc
[INFO] DOCKER> touch: /opt/service/log/test.log: Permission denied
How can I make the volume writable by the user user / Spring Boot?
You defined /opt/service/log as a volume. Once you have done that, no further changes are possible from RUN commands. Your recursive chown and chmod commands do run, in a temporary container with a temporary anonymous volume mounted, but then the anonymous volume is discarded along with the permission changes you've made.
This is detailed in the Dockerfile documentation:
Changing the volume from within the Dockerfile: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
My best practice is to remove the VOLUME definition from the Dockerfile entirely because it causes issues like this, and breaks the ability for downstream users to make changes. You can always define a volume mount in your docker-compose.yml or docker run command line, at run time, rather than when building the image. If you must define the volume inside your Dockerfile, then move it to the end of the file, and realize that you will break the ability to extend this image in a later Dockerfile.
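For example, a hedged sketch of declaring the log volume at run time in a Compose file instead of with VOLUME in the Dockerfile (the service and volume names are assumptions):
# docker-compose.yml (sketch)
version: "3"
services:
  service:
    build: .
    volumes:
      - service-logs:/opt/service/log
volumes:
  service-logs: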
I have a Dockerfile as follows:
FROM jenkins/jenkins:2.119
USER jenkins
ENV HOME /var/jenkins_home
COPY --chown=jenkins:jenkins ssh ${HOME}/.ssh/
RUN chmod 700 ${HOME}/.ssh && \
chmod 600 ${HOME}/.ssh/*
The ssh directory has 755/644 permissions on the directory/files on the build machine. However, when I build with
docker build -t my/temp .
and start the image with an ls command
docker run -it --rm my/temp ls -la /var/jenkins_home/.ssh
neither of the chmod commands is applied to the image:
drwxr-xr-x 2 jenkins jenkins 4096 May 3 12:46 .
drwxr-xr-x 4 jenkins jenkins 4096 May 3 12:46 ..
-rw-r--r-- 1 jenkins jenkins 391 May 3 11:42 known_hosts
During the build I see
Step 4/6 : COPY --chown=jenkins:jenkins ssh ${HOME}/.ssh/
---> 58e0d8242fac
Step 5/6 : RUN chmod 700 ${HOME}/.ssh && chmod 600 ${HOME}/.ssh/*
---> Running in 0c805d4d4252
Removing intermediate container 0c805d4d4252
---> bbfc828ace79
It looks like the chmod is discarded. How can I stop this from happening?
I'm using latest Docker (Edge) on Mac OSX
Version 18.05.0-ce-rc1-mac63 (24246); edge 3b5a9a44cd
EDIT
Using --rm=false didn't work either (after deleting the image and rebuilding), although the "Removing intermediate container" message no longer appeared:
docker build -t my/temp --rm=false .
docker run -it --rm my/temp ls -la /var/jenkins_home/.ssh
drwxr-xr-x 2 jenkins jenkins 4096 May 3 15:42 .
drwxr-xr-x 4 jenkins jenkins 4096 May 3 15:42 ..
-rw-r--r-- 1 jenkins jenkins 391 May 3 11:42 known_hosts
EDIT 2
So basically this is a bug in Docker where a base image with a VOLUME causes chmod to fail; similarly, RUN mkdir on the volume failed, but COPY worked, though it left the directory with the wrong permissions. Thanks to bkconrad.
EDIT 3
Created fork with a fix here https://github.com/systematicmethods/jenkins-docker
build.sh will build an image locally
This has to do with how Docker handles VOLUMEs for images.
From docker inspect my/temp:
"Volumes": {
"/var/jenkins_home": {}
},
There's a helpful ticket about this from the moby project:
https://github.com/moby/moby/issues/12779
Basically you'll need to do your chmod at run time.
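A hedged sketch of what that might look like with a small wrapper entrypoint (the file name and the handoff are assumptions; you would COPY it into the image, mark it executable, set it as the ENTRYPOINT, and have it invoke the base image's original startup command):
#!/bin/sh
# fix-perms.sh: tighten the .ssh permissions at container start,
# then exec whatever command the container was asked to run.
chmod 700 "${HOME}/.ssh" && chmod 600 "${HOME}"/.ssh/*
exec "$@"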
Setting your HOME envvar to a non-volume path like /tmp shows the expected behavior:
$ docker run -it --rm my/temp ls -la /tmp/.ssh
total 8
drwx------ 2 jenkins jenkins 4096 May 3 17:31 .
drwxrwxrwt 6 root root 4096 May 3 17:31 ..
-rw------- 1 jenkins jenkins 0 May 3 17:24 dummy
Step 5/6 : RUN chmod 700 ${HOME}/.ssh && chmod 600 ${HOME}/.ssh/*
---> Running in 0c805d4d4252
Removing intermediate container 0c805d4d4252
As you can see "intermediate container " is being removed , which is a normal behavior of the docker to keep , if you wanted to keep those use below command.
docker build -t my/temp --rm=false .
It's also explained in this post:
Why docker build image from docker file will create container when build exit incorrectly?
I'm trying to use a mounted volume directory in the build process, but it's either not mounted at that point or mounted incorrectly.
docker-compose.yml
version: '2'
services:
  assoi:
    restart: on-failure
    build:
      context: ./assoi
    expose:
      - "4129"
    links:
      - assoi-redis
      - assoi-postgres
      - assoi-mongo
      - assoi-rabbit
    volumes:
      - ./ugmk:/www
    command: pm2 start /www/ugmk.json
...
Dockerfile
...
WORKDIR /www
RUN ls -la
RUN npm i
RUN node install.js
...
sudo docker-compose build output:
...
Step 12 : WORKDIR /www
---> Using cache
---> 73504ed64194
Step 13 : RUN ls -al
---> Running in 37bb9f70d4ac
total 8
drwxr-xr-x 2 root root 4096 Aug 22 13:31 .
drwxr-xr-x 65 root root 4096 Aug 22 14:05 ..
---> be1ac6edce56
...
During the build you do not mount anything; more specifically, you cannot mount any volume at build time.
What you do instead is COPY, so in your case:
COPY ./ugmk /www
WORKDIR /www
RUN ls -la
RUN npm i
RUN node install.js
Volumes are for containers, not for images: volumes should store persistent user-generated data, and by definition that can only happen at runtime, i.e. for containers.
Nevertheless, the COPY above is the standard practice for what you want to achieve: building an image with the application pre-deployed and its assets compiled.
I'm having trouble with Docker.
Here is my Dockerfile:
FROM alpine:3.3
WORKDIR /
COPY myinit.sh /myinit.sh
ENTRYPOINT ["/myinit.sh"]
myinit.sh
#!/bin/bash
set -e
echo 123
That's how I build my image:
docker build -t test --no-cache .
Build log:
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM alpine:3.3
---> d7a513a663c1
Step 2 : WORKDIR /
---> Running in eda8d25fe880
---> 1dcad4f11062
Removing intermediate container eda8d25fe880
Step 3 : COPY myinit.sh /myinit.sh
---> 49234cc3903a
Removing intermediate container ffe6227c921f
Step 4 : ENTRYPOINT /myinit.sh
---> Running in 56d3d748b396
---> 060f6da19513
Removing intermediate container 56d3d748b396
Successfully built 060f6da19513
That's how I run the container:
docker run --rm --name test2 test
docker: Error response from daemon: Container command '/myinit.sh' not found or does not exist..
myinit.sh definitely exists. Here is ls -al:
ls -al
total 16
drwxr-xr-x 4 lorddaedra staff 136 10 май 19:43 .
drwxr-xr-x 25 lorddaedra staff 850 10 май 19:42 ..
-rw-r--r-- 1 lorddaedra staff 82 10 май 19:51 Dockerfile
-rwxr-xr-x 1 lorddaedra staff 29 10 май 19:51 myinit.sh
Why can't it see my entrypoint script? Any solutions?
Thanks
It's not the entrypoint script that Docker can't find, but the shell it references: alpine:3.3 does not include bash by default. Change your myinit.sh to:
#!/bin/sh
set -e
echo 123
i.e. referencing /bin/sh instead of /bin/bash
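Alternatively, if the script genuinely needs bash features, a hedged sketch of installing bash into the image instead (this assumes network access during the build and adds a few megabytes):
FROM alpine:3.3
# Install bash so the #!/bin/bash shebang can be resolved
RUN apk add --no-cache bash
WORKDIR /
COPY myinit.sh /myinit.sh
ENTRYPOINT ["/myinit.sh"]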