I want to CIFS-mount a directory into a Docker container.
Following the existing solutions to this, I tried the --privileged flag and set the required capabilities:
docker-compose.yaml:
version: '2.0'
services:
  mounttest:
    image: test
    privileged: true
    cap_add:
      - SYS_ADMIN
      - DAC_READ_SEARCH
    restart: unless-stopped
    container_name: test
    mem_limit: 500m
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/apps/docker-test/
Dockerfile:
FROM ubuntu:18.04
ADD . /apps/docker-test/
# APT-GET
RUN apt-get update && apt-get install -y \
sudo \
cifs-utils
# CHMOD SHELL SCRIPTS
RUN chmod 0755 /apps/docker-test/run.sh
RUN chmod 0755 /apps/docker-test/build.sh
RUN /apps/docker-test/build.sh
CMD bash /apps/docker-test/run.sh
build.sh:
mkdir -p /test_folder
echo "Mount"
sudo mount -t cifs -o username=XXX,password=XXX,workgroup=XX //server/adress$ /test_folder
run.sh starts a python script
This does not work, instead:
docker-compose build
gives me the error:
Unable to apply new capability set
All the solutions I found only mention the privileged flag or capabilities, which are set. Can anyone help?
This error happens because you're trying to mount a filesystem inside the build step. At that point, the capabilities you configured aren't available to the build container; support is being rolled out as a flag for disabling security in BuildKit rather than as a way of enabling custom capabilities at build time.
The usual way to do this is to have your CIFS share mounted before you start the build process: that way the image doesn't expose any credentials, devices, or mount points, and it's easier for Docker to handle changes and react to them (since the build process works hard to cache every step).
If you still want to do this, you'll need a few extra steps to enable the insecure flags in both buildkitd and docker buildx:
Mind that, as of today (2020-09-09), the support is still experimental and unforeseen consequences can happen.
Ensure that you're using docker version 19.03 or later.
Enable experimental features by adding the key "experimental": "enabled" to your ~/.docker/config.json.
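A minimal ~/.docker/config.json would then contain (merged with any keys you already have):
{
  "experimental": "enabled"
}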
Create and use a builder that has the security.insecure entitlement enabled:
docker buildx create --driver docker-container --name local \
--buildkitd-flags '--allow-insecure-entitlement security.insecure' \
--use
Change your Dockerfile to use the experimental syntax by adding this before your first line:
# syntax = docker/dockerfile:experimental
Change the Dockerfile instruction so it runs the code without security constraints:
RUN --security=insecure /apps/docker-test/build.sh
Build your docker image using BuildKit and the --allow security.insecure flag:
docker buildx build --allow security.insecure .
That way your build will be able to break free of the security constraints.
I must reiterate that this is not a recommended practice, for a few reasons:
It exposes the build step as a hole through which other images can escalate permissions.
The builder cannot properly cache that layer, since it's using insecure features.
Keep that in mind and happy mounting :)
The answer I found is to put the mount command into the run.sh file. As the command (or CMD) in the Dockerfile is only executed when running
docker-compose up
the mount is only executed after the build, done beforehand, has already finished.
Therefore, the mount command is executed before the python script starts.
In my case, that only worked with the privileged flag set to true.
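For illustration, run.sh might then start with something like this (the python script name is a made-up placeholder):
#!/bin/bash
# Runs via CMD at container start, where privileged/cap_add apply,
# rather than at build time where they do not.
mkdir -p /test_folder
mount -t cifs -o username=XXX,password=XXX,workgroup=XX //server/adress$ /test_folder
python3 /apps/docker-test/script.py  # placeholder for the actual python script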
I am trying to make the binary file /bin/wkhtmltopdf from the container wkhtmltopdf available in the web container. I try to achieve this with a named volume.
I have the following docker container setup in my docker-compose.yml:
services:
  web:
    image: php:7.4-apache
    command: sh -c "mkdir -p /usr/local/bin && touch /usr/local/bin/wkhtmltopdf"
    entrypoint: sh -c "exec 'apache2-foreground'"
    volumes:
      - wkhtmltopdfvol:/usr/local/bin/wkhtmltopdf
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    command: sh -c "touch /bin/wkhtmltopdf"
    entrypoint: sh -c "tail -f /dev/null" # workaround to keep container running
    volumes:
      - wkhtmltopdfvol:/bin/wkhtmltopdf
volumes:
  wkhtmltopdfvol:
However, I get the following error when running docker-compose up:
ERROR: for wkhtmltopdf Cannot create container for service wkhtmltopdf:
source /var/lib/docker/overlay2/42e7082b8024ae4ebb13a4f0003a9e17bc18b33ef0677431dd002da3c21dde88/merged/bin/wkhtmltopdf is not directory
Does that mean that I can't share one file between containers but only directories through a named volume? How do I achieve this?
Edit: I also noticed that /usr/local/bin/wkhtmltopdf inside the web container is a directory and not a file as I expected.
It can be tricky to share binaries between containers like this. Volumes probably aren't the mechanism you're looking for.
If you look at the Docker Hub page for the php image you can see that php:7.4-apache is an alias for (currently) php:7.4.15-apache-buster, where "Buster" is the name of a Debian release. You can then search on https://packages.debian.org/ to discover that Debian has a prepackaged wkhtmltopdf package. You can install this using a custom Dockerfile:
FROM php:7.4-apache
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --assume-yes --no-install-recommends \
      wkhtmltopdf
# COPY ...
# Base image provides EXPOSE, CMD
Then your docker-compose.yml file needs to build this image:
version: '3.8'
services:
  web:
    build: .
    # no image:, volumes:, or command: override
Just in terms of the mechanics of sharing binaries like this, you can run into trouble where a binary needs a shared library that's not present in the target container. The apt-get install mechanism handles this for you. There are also potential troubles if a container has a different shared-library ecosystem (especially Alpine-based containers), or if you try to use host binaries from a different operating system.
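If you want to see why a copied binary might fail, ldd lists the shared libraries it needs; as a quick check (run inside the source container; output will vary):
ldd /bin/wkhtmltopdf
Every library listed there must also be present in the target container, which is exactly what tends to break across glibc/musl boundaries.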
The Compose file you show mixes several concepts in a way that doesn't really work. A named volume is always a directory, so trying to mount that over the /bin/wkhtmltopdf file in the second container causes the error you see. There's a dependency issue for which container starts up first and gets to create the volume. A container only runs a single command, and if you have both entrypoint: and command: then the command gets passed as extra arguments to the entrypoint (and if the entrypoint is an sh -c ... invocation, effectively ignored).
If you really wanted to try this approach, you should make web: {depends_on: [wkhtmltopdf]} to force the dependency order. The second container should mount the volume somewhere else, it probably shouldn't have an entrypoint:, and it should do something like command: cp -a /bin/wkhtmltopdf /export. (It will exit immediately once this cp finishes, but that shouldn't matter.) The first container can then mount the volume on, say, /usr/local/bin, and not specially set command: or entrypoint:. There will still be a minor race condition (you're not guaranteed the cp command will complete before Apache starts) but it probably wouldn't be a practical problem.
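For reference, an untested sketch of that variant, using the images from the question and overriding the second image's entrypoint so the container only performs the copy:
services:
  web:
    image: php:7.4-apache
    depends_on:
      - wkhtmltopdf
    volumes:
      - wkhtmltopdfvol:/usr/local/bin    # mount the volume as a directory
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    entrypoint: ["cp", "-a", "/bin/wkhtmltopdf", "/export"]  # copies, then exits
    volumes:
      - wkhtmltopdfvol:/export
volumes:
  wkhtmltopdfvol: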
Yes, this question has been posted before. Yes, I've applied the relevant solutions and ignored the irrelevant ones. I just don't understand what is going on here.
Problem: I make a change to my source code by modifying the default return message. I COPY the src tree into my baseImage (multi-stage builds). I build a stagingImage from the baseImage. The baseImage has the code change, but the container running the stagingImage returns the pre-code-change message. I've replicated this behavior on macOS (Catalina) and Amazon Linux 2.
Note that I am doing this manually, i.e. I'm not relying on IDEs, inotify and the like. The only tools involved are the CLI, vim, make and docker(-compose).
Details: I have a Dockerfile that is capable of doing multi-stage builds. Here's the relevant part of the baseImage build:
COPY ./composer.json /var/www/html/composer.json
COPY ./php-apache/auth.json /root/.composer
COPY ./ /var/www/html
and here's a sample of how I build my other images :
FROM php:7.4-apache AS cleanImage
COPY --from=baseImage / /
FROM cleanImage AS stagingImage
COPY ./.env.staging /var/www/html/.env
RUN /bin/bash -c 'rm -rf /var/lib/apt/lists/*'
# Set entrypoint
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["apachectl"]
I modified the Makefile from Chapter 10 of "Docker In Action". Here's a sample:
## image-base : Build the base image that the others are based on
.PHONY: image-base
image-base: metadata
	@echo "Building Base Image"
	docker image build --no-cache --force-rm --tag src:$(BUILD_ID)-base \
		-f src/Dockerfile \
		--target baseImage \
		--build-arg BUILD_ID='$(BUILD_ID)' \
		--build-arg BUILD_DATE='$(BUILD_TIME_RFC_3339)' \
		--build-arg VCS_REF='$(VCS_REF)' \
		./src
	@echo "Built Base Image. BUILD_ID: $(BUILD_ID)"

## image-staging : Build the staging image
.PHONY: image-staging
image-staging: metadata image-base
	@echo "Building Staging App Image"
	docker image build -t src:$(BUILD_ID)-staging \
		-f src/Dockerfile \
		--target=stagingImage \
		--build-arg BUILD_ID='$(BUILD_ID)' \
		--build-arg BUILD_DATE='$(BUILD_TIME_RFC_3339)' \
		--build-arg VCS_REF='$(VCS_REF)' \
		./src
	@echo "Built Staging App Image. BUILD_ID: $(BUILD_ID)-staging"

## up env=<env> : Bring up environments. env values are prod, local, staging.
.PHONY: up
up:
ifeq ($(strip $(BUILD_ID)),)
	$(error BUILD_ID environment variable is not set. Run `make metadata` to generate one)
endif
	docker-compose -f docker-compose.yml -f docker-$(env).yml up -d
Where BUILD_ID is of the form YYYYMMDD-epoch-git_SHA. Note that the baseImage uses the --no-cache flag.
So far, so good (I think).
My docker-compose.yml file looks like this:
version: '3.7'
volumes:
  web_app:
services:
  php-apache:
    logging:
      driver: "json-file"
      options:
        max-file: "5"
        max-size: "10m"
    volumes:
      - web_app:/var/www/html
    env_file:
      - ./src/.env.prod
and my docker-staging.yml file looks like this:
version: '3.7'
services:
  php-apache:
    image: 'src:20200924-174443-3f16358-staging'
    container_name: perfboard-staging
    ports:
      - 8888:80
      - 9261:443
    env_file:
      - ./src/.env.staging
Yes, I've hardcoded the name of the stagingImage for the purposes of debugging.
What I expect to see: when I hit localhost:8888 I expect to see my modified message. I do not.
When inspecting the baseImage, the modified message is there. I cannot directly inspect the stagingImage because I keep getting an Apache error presumably because of the entrypoint.
If I delete every image and container from my system, this behaves as expected.
Deleting the above specific baseImage and stagingImage does not fix the problem.
Any ideas on where to look?
Your docker-compose.yml file specifies
volumes:
  - web_app:/var/www/html
That causes the contents of the web_app volume to be mounted over the /var/www/html directory in the container, hiding whatever was initially in the image.
You should delete this volume declaration.
The first time (and only the first time) you run the container, Docker copies the contents of the image into the empty named volume. From that point onward, it treats the volume as user data and never makes any changes to it; even if the underlying image is updated, the volume contents are not, and when you run the container, the volume takes precedence over the updated image code.
There are a number of practical problems with depending on the "copy from an image into a volume" behavior (it doesn't work with host bind mounts; it doesn't work on Kubernetes; it causes image updates to be ignored) and I'd try very hard to avoid mounting any sort of volume over your application code.
If you think it's important to have the volume for some other reason, you need to cause Compose to delete it, so that it will get recreated and have the "first time only" behavior again. docker-compose down -v will delete all containers, networks, and volumes, but that will also include things like your database data. You might be able to use docker-compose down (without -v) to stop the container, then use docker volume ls; docker volume rm dirname_web_app to manually delete the volume.
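As a concrete sketch of that manual cleanup (assuming the Compose project directory is named dirname, so the volume really is dirname_web_app as above):
docker-compose down                # stop and remove containers, keep volumes
docker volume ls                   # confirm the volume name
docker volume rm dirname_web_app   # delete it so the next "up" re-copies from the image
docker-compose up -d --build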
I have a docker-compose setup that uses an image built from the base Dockerfile below for the application.
The Dockerfile looks similar to the one below; some lines are omitted for brevity.
FROM ubuntu:18.04
RUN set -e -x ;\
apt-get -y update ;\
apt-get -y upgrade ;
...
USER service
When using this image in docker-compose and adding a named volume to the service, the folder in the named volume is not accessible, failing with Permission denied. The relevant part of the docker-compose file looks as below.
version: "3.1"
services:
myapp:
image: myappimage
command:
- /myapp
ports:
- 12345:1234
volumes:
- logs-folder:/var/log/myapp
volumes:
logs-folder:
My assumption was that the USER service line is the issue, which I confirmed by setting user: root on the myapp service.
Now for the question. I would like to avoid manually creating the volume and setting permissions; I would like it to be automated using docker-compose.
Is this possible, and if yes, how can it be done?
Yes, there is a trick. It's not really in the docker-compose file, but in the Dockerfile. You need to create the /var/log/myapp folder and set its permissions before switching to the service user:
FROM ubuntu:18.04
RUN useradd myservice
RUN mkdir /var/log/myapp
RUN chown myservice:myservice /var/log/myapp
...
USER myservice:myservice
Docker-compose will preserve permissions.
See Docker Compose mounts named volumes as 'root' exclusively
I had a similar issue, but mine was related to a file shared via a volume with a service I was not building from a Dockerfile, but pulling. I had shared a shell script that I used in docker-compose, but when I executed it, it lacked execute permission.
I resolved it by using chmod in the command of docker-compose:
command: -c "chmod a+x ./app/wait-for-it.sh && ./app/wait-for-it.sh -t 150 -h ..."
volumes:
- ./wait-for-it.sh:/app/wait-for-it.sh
You can change the volume source's permissions to avoid the Permission denied error:
chmod a+x logs-folder
I was reading Quickstart: Compose and Django when I came across "defining a build in a compose file". I've seen it before, but what I'm curious about here is: what's the purpose of it? I just can't get it.
Why don't we just build the image once (or update it whenever we want) and use it multiple times in different docker-compose files?
Here is the Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
And here is docker-compose.yml:
version: '3'
services:
  web:
    # <<<<
    # Why not build the image and use it here, like "image: my/django"?
    # <<<<
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
You might say: "well, do as you wish!" The reason I'm asking is that I think there might be some benefits I'm not aware of.
PS:
I mostly use Docker for bringing up services (DNS, monitoring, etc.); I've never used it for development.
I have already read this: What is the difference between `docker-compose build` and `docker build`?
There's no technical difference between building an image with docker build and referencing it with image: in the docker-compose.yml file, versus specifying the build: metadata directly in the docker-compose.yml.
The benefits to using docker-compose build to build images are more or less the same as using docker-compose up to run containers. If you have a complex set of -f path/Dockerfile --build-arg ... options, you can write those out in the build: block and not have to write them repeatedly. If you have multiple custom images that need to be built then docker-compose build can build them all in one shot.
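For example, a set of docker build flags like -f path/Dockerfile --build-arg SOME_ARG=value could be written out once in the Compose file (the Dockerfile path and argument name here are illustrative):
services:
  web:
    build:
      context: .
      dockerfile: path/Dockerfile
      args:
        SOME_ARG: value
    image: my/django   # optional: also tag the built image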
In practice you'll frequently be iterating on your containers, which means you will need to run local unit tests, then rebuild images, then relaunch containers. Being able to drive the Docker end of this via docker-compose down; docker-compose up --build will be more convenient than remembering all of the individual docker build commands you need to run.
The one place where this doesn't work well is if you have a custom base image. So if you have a my/base image, and your application image is built FROM my/base, you need to explicitly run
docker build -t my/base base
docker build -t my/app app
docker run ... my/app
Compose doesn't help with the multi-level docker-build sequence; you'll have to explicitly docker build the base image.
Here's my problem: I want to build a chroot environment inside a docker container. The problem is that debootstrap cannot run, because it cannot mount proc in the chroot:
W: Failure trying to run: chroot /var/chroot mount -t proc proc /proc
(in the log the problem turns out to be: mount: permission denied)
If I run the container with --privileged, it (of course) works...
I'd really really really like to debootstrap the chroot in the Dockerfile (much much cleaner). Is there a way I can get it to work?
Thanks a lot!
You could use the fakechroot variant of debootstrap, like this:
fakechroot fakeroot debootstrap --variant=fakechroot ...
Cheers!
No, this is not currently possible.
Issue #1916 (which concerns running privileged operations during docker build) is still an open issue. There was discussion at one point of adding a command-line flag and a RUNP command, but neither of these has been implemented.
Adding --cap-add=SYS_ADMIN --security-opt apparmor:unconfined to the docker run command works for me.
See moby/moby issue 16429
This still doesn't work (2018-05-31).
Currently the only option is to run debootstrap on the host, followed by docker import (see "Import from a local directory" in the docker import documentation):
# mkdir /path/to/target
# debootstrap bionic /path/to/target
# tar -C /path/to/target -c . | docker import - ubuntu:bionic
debootstrap version 1.0.107, available since Debian 10 Buster (July 2019) or in Debian 9 Stretch-Backports, has native support for Docker and allows building a Debian root image without requiring privileges.
Dockerfile:
FROM debian:buster-slim AS builder
RUN apt-get -qq update \
&& apt-get -q install --assume-yes debootstrap
ARG MIRROR=http://deb.debian.org/debian
ARG SUITE=sid
RUN debootstrap --variant=minbase "$SUITE" /work "$MIRROR"
RUN chroot /work apt-get -q clean
FROM scratch
COPY --from=builder /work /
CMD ["bash"]
docker build -t my-debian .
docker build -t my-debian:bullseye --build-arg SUITE=bullseye .
There is a fun workaround, but it involves running Docker twice.
The first time, using a standard docker image like ubuntu:latest, only run the first stage of debootstrap by using the --foreign option.
debootstrap --foreign bionic /path/to/target
Then prevent it from doing anything that would require privileges and isn't needed anyway, by stubbing out the functions that will be used in the second stage:
sed -i '/setup_devices ()/a return 0' /path/to/target/debootstrap/functions
sed -i '/setup_proc ()/a return 0' /path/to/target/debootstrap/functions
The last step for that docker run is to have the container tar itself up into a directory that is mounted as a volume:
tar --exclude='dev/*' -cvf /guestpath/to/volume/rootfs.tar -C /path/to/target .
Ok, now prep for a second run. First load your tar file as a docker image.
cat /hostpath/to/volume/rootfs.tar | docker import - my_image:latest
Then run Docker using FROM my_image:latest and execute the second debootstrap stage:
/debootstrap/debootstrap --second-stage
That might be obtuse, but it does work without requiring --privileged. You are effectively replacing running chroot with running a second docker container.
This does not address the OP's requirement of doing chroot in a container without --privileged set, but it is an alternative method that may be of use.
See Docker Moby for heterogeneous rootfs builds. It creates a native temp directory and creates a rootfs in it using debootstrap, which needs sudo. THEN it creates a docker image using:
FROM scratch
ADD rootfs.tar.xz /
CMD ["/bin/bash"]
This is a common recipe for running a pre-made rootfs in a docker image. Once the image is built, it does not need special permissions. AND it's supported by the docker devel team.
Short answer: without privileged mode, no, there isn't a way.
Docker is targeted at micro-services and is not a drop-in replacement for virtual machines. Having multiple installations in one container is definitely not congruent with that. Why not use multiple docker containers instead?