Docker build ignore file permissions

I'm building my docker image from a Jenkins job.
I use ADD to copy an index.html file into the html directory of nginx.
The permissions on the jenkins host are
-rw-r----- 1 jenkins jenkins 3.3K Nov 10 14:12 index.html
and also the permissions inside the container are set to
-rw-r----- 1 root root 3.2K Nov 10 13:12 index.html
so the webserver serves a 403 Forbidden instead of the file.
Can I ignore the permissions on the host and rely on a default umask (rwxr-xr-x), or do I have to chmod every file I want to serve via nginx?

The Docker Documentation for ADD states the following:
All new files and directories are created with a UID and GID of 0.
This means that you have to run either chown or chmod after copying the files.
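For example, a minimal Dockerfile sketch, assuming the stock nginx image and its default docroot:
FROM nginx
# ADD keeps the source file's restrictive mode (640) but sets the owner to root:root
ADD index.html /usr/share/nginx/html/index.html
# make the file readable by the nginx worker user
RUN chmod 644 /usr/share/nginx/html/index.html
With BuildKit, COPY --chmod=644 index.html /usr/share/nginx/html/ achieves the same in one step.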
There are some further discussions here:
https://github.com/docker/docker/issues/7537
https://github.com/docker/docker/pull/9934

Related

File system permissions in a docker container

How does docker create a file system for a Linux container? And how are permissions set up on the root file system?
I encountered a situation when starting a docker container on a particular machine with Ubuntu Server. For some reason, /tmp in the container doesn't have write permissions:
$ docker run -it python:3.11-slim-buster /bin/bash
root@5d5fefe9b9a2:/# ls -la /tmp
total 8
drwxr-xr-t 1 root root 4096 Jan 26 06:58 .
drwxr-xr-x 1 root root 4096 Jan 29 04:31 ..
Note that this has 755 permissions rather than the expected 1777.
However, when I start the same docker image as a container on WSL, I get 777:
$ docker run -it python:3.11-slim-buster /bin/bash
root@201dfe147e5a:/# ls -la /tmp
total 8
drwxrwxrwt 1 root root 4096 Nov 16 06:56 .
drwxr-xr-x 1 root root 4096 Jan 29 04:36 ..
This was fine a few weeks ago on the Ubuntu machine. I recently moved all the files from /var/lib/docker to /docker because the partition mounted at /var was full. Would this have caused the behavior with the permissions of /tmp inside a container? If so, why? And how do I fix it? If not, what else would cause this and...how do I fix it?
Docker uses a so-called union file system for a running container. The recommended driver on Linux is called overlay2. The files and directories for each layer of an image are stored under /var/lib/docker/overlay2, assuming the default config. The directory structure for each layer is combined to create the final file system for the container. See https://docs.docker.com/storage/storagedriver/overlayfs-driver/ for more details.
As for the permissions of the files in the container, they are derived from the permissions of the corresponding files under this directory on the host file system. When I copied the files from /var/lib/docker to /docker, I failed to preserve ownership and permissions. My best guess is that the umask was applied as each new file was created.
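A permission-preserving copy avoids this. A minimal sketch, assuming a systemd-managed daemon and the /docker target from the question:
sudo systemctl stop docker
# -a preserves ownership, modes, and timestamps; -H keeps hard links intact
sudo rsync -aH /var/lib/docker/ /docker/
# point the daemon at the new location in /etc/docker/daemon.json:
#   { "data-root": "/docker" }
sudo systemctl start docker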

docker command not available in custom pipe for BitBucket Pipeline

I'm working on a build step that handles common deployment tasks in a Docker Swarm Mode cluster. As this is a common problem for us and for others, we've shared this build step as a BitBucket pipe: https://bitbucket.org/matchory/swarm-secret-pipe/
The pipe needs to use the docker command to work with a remote Docker installation. This doesn't work, however, because the docker executable cannot be found when the pipe runs.
The following holds true for our test repository pipeline:
The docker option is set to true:
options:
  docker: true
The docker service is enabled for the build step:
main:
  - step:
      services:
        - docker
Docker works fine in the repository pipeline itself, but not within the pipe.
Pipeline log shows the docker path being mounted into the pipe container:
docker container run \
  --volume=/opt/atlassian/pipelines/agent/build:/opt/atlassian/pipelines/agent/build \
  --volume=/opt/atlassian/pipelines/agent/ssh:/opt/atlassian/pipelines/agent/ssh:ro \
  --volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
  --volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes \
  --volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/matchory/swarm-secret-pipe:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/matchory/swarm-secret-pipe \
  --workdir=$(pwd) \
  --label=org.bitbucket.pipelines.system=true \
  radiergummi/swarm-secret-pipe:1.3.7@sha256:baf05b25b38f2a59b044e07f4ad07065de90257a000137a0e1eb71cbe1a438e5
The pipe is pretty standard and uses a recent Alpine image; nothing special in that regard. The PATH is never overwritten. Now for the fun part: if I list /usr/local/bin/docker inside the pipe, it turns out to be an empty directory:
ls /usr/local/bin
total 16K
drwxr-xr-x 1 root root 4.0K May 13 13:06 .
drwxr-xr-x 1 root root 4.0K Apr 4 16:06 ..
drwxr-xr-x 2 root root 4.0K Apr 29 09:30 docker
ls /usr/local/bin/docker
total 8K
drwxr-xr-x 2 root root 4.0K Apr 29 09:30 .
drwxr-xr-x 1 root root 4.0K May 13 13:06 ..
ls: /usr/local/bin/docker/docker: No such file or directory
As far as I understand pipelines and Docker, /usr/local/bin/docker should be the docker binary file. Instead, it appears to be an empty directory for some reason.
What is going on here?
I've also looked at other, official pipes. They don't do anything differently, but seem to be using the docker command just fine (e.g. the Azure pipe).
After talking to BitBucket support, I solved the issue. As it turns out, if the docker context is changed, any docker command is sent straight to the remote docker binary, which (on our servers) lives at a different path than in BitBucket Pipelines!
Because we had changed the docker context before using the pipe, the docker client mounted into the pipe still had the remote context set, but it looked for the docker binary in a place that only exists on the remote host, hence the No such file or directory error.
TL;DR: Always restore the default docker host/context before passing control to a pipe, e.g.:
script:
  - export DEFAULT_DOCKER_HOST=$DOCKER_HOST
  - unset DOCKER_HOST
  - docker context create remote --docker "host=ssh://${DEPLOY_SSH_USER}@${DEPLOY_SSH_HOST}"
  - docker context use remote
  # do your thing
  - export DOCKER_HOST=$DEFAULT_DOCKER_HOST # <------ restore the default host
  - pipe: matchory/swarm-secret-pipe:1.3.16

Why do I get an nginx 404 when pointing a symlink to a path?

I want nginx to serve a directory that is not contained in the docroot.
The Situation:
I've got a folder which contains files:
Command:
ls -lah /var/www/
Output:
drwxr-x---. 2 docker-www docker-www 6 Jul 24 16:56 some_folder
And I've got a path which should deliver the content of the folder above:
Command:
ls -lah /var/www/typo3/releases/current/typo3-web/web/info/symlink
Output:
lrwxrwxrwx. 1 docker-www docker-www 32 Jul 24 16:56 symlink -> /var/www/some_folder
My nginx config:
root /usr/share/nginx/current/typo3-web/web;
...
location /info/symlink/ {
    allow all;
    autoindex on;
    disable_symlinks off;
}
Problem:
nginx delivers a "404 Not Found".
Things I've tried so far:
Creating a normal folder in /var/www/typo3/releases/current/typo3-web/web/info/ works: nginx delivers the file index.
Checking the file permissions: the user docker-www has access to the files.
Sorry guys. nginx runs inside a docker container, and the symlink target /var/www/some_folder is not visible in the container's file system, so I have to mount the folder into the docker container using another docker volume.
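A sketch of that fix; the container name, image, and first mount are assumptions, the second -v line is the point:
docker run -d --name typo3-web \
  -v /var/www/typo3:/var/www/typo3 \
  -v /var/www/some_folder:/var/www/some_folder \
  nginx
With the target mounted at the same path, the symlink resolves to an existing directory inside the container.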

Recreate container on stop with docker-compose

I am trying to set up a multi-container service with docker-compose.
Some of the containers need to restart from a fresh container (e.g. the file system should be as in the image) when they restart.
How can I achieve this?
I've found the restart: always option I can put on my service in the docker-compose.yml file, but that doesn't give me a fresh file system as it uses the same container.
I've also seen the --force-recreate option of docker-compose up, but that doesn't apply as it only recreates the containers when the command is run.
EDIT:
This is probably not a docker-compose issue, but more of a general docker question: what is the best way to make sure a container is in a fresh state when it is restarted? By fresh state, I mean a state identical to that of a brand-new container from the same image; by restarted, I mean docker restart, or docker stop followed by docker start.
In docker, immutability typically refers to the image layers. They are immutable, and any changes are pushed to a container specific copy-on-write layer of the filesystem. That container specific layer will last for the lifetime of the container. So to have those files not-persist, you have two options:
Recreate the container instead of just restart it
Don't write the changes to the container filesystem, and don't write them to any persistent volumes.
You cannot do #1 with a restart policy by its very definition. A restart policy gives you the same container filesystem, with the application restarted. But if you use docker's swarm mode, it will recreate containers when they exit, so if you can migrate to swarm mode, you can achieve this result.
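A minimal sketch of that swarm behavior (service name and image are placeholders):
docker swarm init
# with --restart-condition any, swarm replaces an exited task with a brand-new
# container (fresh copy-on-write layer) instead of restarting the old one
docker service create --name app --restart-condition any busybox sleep 300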
Option #2 looks more difficult than it is. If you aren't writing to the container filesystem, or to a volume, then where? The answer is a tmpfs volume that is only stored in memory and is lost as soon as the container exits. In compose, this is a tmpfs: /data/dir/to/not/persist line (a compose sketch follows the command-line example below). Here's an example on the docker command line.
First, let's create a container with a tmpfs mounted at /data, add some content, and exit the container:
$ docker run -it --tmpfs /data --name no-persist busybox /bin/sh
/ # ls -al /data
total 4
drwxrwxrwt 2 root root 40 Apr 7 21:50 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
/ # echo 'do not save' >>/data/tmp-data.txt
/ # cat /data/tmp-data.txt
do not save
/ # ls -al /data
total 8
drwxrwxrwt 2 root root 60 Apr 7 21:51 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
-rw-r--r-- 1 root root 12 Apr 7 21:51 tmp-data.txt
/ # exit
Easy enough; it behaves like a normal container. Let's restart it and check the directory contents:
$ docker restart no-persist
no-persist
$ docker attach no-persist
/ # ls -al /data
total 4
drwxr-xr-x 2 root root 40 Apr 7 21:51 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
/ # echo 'still do not save' >>/data/do-not-save.txt
/ # ls -al /data
total 8
drwxr-xr-x 2 root root 60 Apr 7 21:52 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
-rw-r--r-- 1 root root 18 Apr 7 21:52 do-not-save.txt
/ # exit
As you can see, the directory came back empty, and we can add data to it again as needed. The only downside is that the directory will be empty even if the image has content at that location. I've tried combinations of named volumes, and the --mount syntax with the volume-nocopy option set to 0, without luck. So if you need the directory to be initialized, you'll need to do that in your container entrypoint/cmd by copying from another location.
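For completeness, the compose equivalent mentioned above might look like this (service name and image are assumptions):
services:
  app:
    image: busybox
    command: sleep infinity
    tmpfs:
      - /data   # held in memory only; emptied on every restart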
In order not to persist any changes to your containers, it is enough that you don't map any directory from the host into the container.
That way, every time the container is created (with docker run or docker-compose up), it starts with a fresh file system.
docker-compose down also removes the containers, deleting any data.
The best solution I have found so far, is for the container itself to make sure to clean up when starting or stopping. I solve this by cleaning up when starting.
I copy my app files to /srv/template with the docker COPY directive in my Dockerfile, and have something like this in my ENTRYPOINT script:
rm -rf /srv/server/
cp -r /srv/template /srv/server
cd /srv/server
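A fuller sketch of such an entrypoint; the exec line is an assumption about how the main process is launched:
#!/bin/sh
set -e
# discard whatever the previous run left behind and start from the pristine copy
rm -rf /srv/server
cp -r /srv/template /srv/server
cd /srv/server
exec "$@"   # hand off to the image's CMD as PID 1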

Riak container does not start when its data volume is mounted

The following command works perfectly and the riak service starts as expected:
docker run --name=riak -d -p 8087:8087 -p 8098:8098 -v $(pwd)/schemas:/etc/riak/schema basho/riak-ts
The local schemas directory is mounted successfully and the sql file in it is read by riak. However, if I try to mount riak's data or log directories, the riak service does not start and times out after 15 seconds:
docker run --name=riak -d -p 8087:8087 -p 8098:8098 -v $(pwd)/logs:/var/log/riak -v $(pwd)/schemas:/etc/riak/schema basho/riak-ts
Output of docker logs riak:
+ /usr/sbin/riak start
riak failed to start within 15 seconds,
see the output of 'riak console' for more information.
If you want to wait longer, set the environment variable
WAIT_FOR_ERLANG to the number of seconds to wait.
Why does riak not start when its log or data directories are mounted to local directories?
This issue is with the ownership of the mounted log folder. The folder's user and group are expected to be riak, as follows:
root@20e489124b9a:/var/log# ls -l
drwxr-xr-x 2 riak riak 4096 Jul 19 10:00 riak
but with the volumes mounted you get:
root@3546d261a465:/var/log# ls -l
drwxr-xr-x 2 root root 4096 Jul 19 09:58 riak
One way to solve this is to set the directory ownership to the riak user and group on the host before starting the container. I looked up riak's UID/GID in /etc/passwd inside the docker image:
riak:x:102:105:Riak user,,,:/var/lib/riak:/bin/bash
Now change the ownership of the host directories before starting the container:
sudo chown 102:105 logs/
sudo chown 102:105 data/
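With the ownership fixed, the original command should come up with the log and data mounts in place; the /var/lib/riak data path is an assumption based on the image's passwd entry:
docker run --name=riak -d -p 8087:8087 -p 8098:8098 \
  -v $(pwd)/logs:/var/log/riak \
  -v $(pwd)/data:/var/lib/riak \
  -v $(pwd)/schemas:/etc/riak/schema \
  basho/riak-ts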
This should solve it, at least for now.
