Container volume not mounting to local path - docker

Python Code:
with open("/var/lib/TestingVolume.txt", "r") as outFile:
    data = outFile.read()
with open("/var/lib/TestingVolume.txt", "w") as outFile:
    outFile.write("Hi, Hello")
Dockerfile
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app /var
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "WriteData.py"]
docker-compose.yml
version: '3.4'
services:
  newfoldercopy:
    image: newfoldercopy
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - D:/PythonService:/var/lib/:cached
      - ~:/host-home-folder:cached
      - ./data-subfolder:/data:cached
I am using VS Code and have added all the Docker files to my workspace. I am trying to mount a local path into the container, but the code above is not writing to it: the container writes the data to its own virtual filesystem instead.
The documentation at https://code.visualstudio.com/remote/advancedcontainers/add-local-file-mount says to use the :cached consistency option; still, it is not working.

You mount to /var/lib/data but write to /var/lib, so the path you write to is not at or below the mounted path.
The easiest way to fix it is probably to change your code so that you write to /var/lib/data, like this:
with open("/var/lib/data/TestingVolume.txt", "r") as outFile:
    data = outFile.read()
with open("/var/lib/data/TestingVolume.txt", "w") as outFile:
    outFile.write("Hi, Hello")
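If it still looks like nothing is written, a quick sanity check is to list the mounted directory from inside the container (assuming the newfoldercopy service name from the compose file above):
docker-compose run --rm newfoldercopy ls -la /var/lib/data
Anything the host put in the mounted folder should show up here; if the directory is empty or missing, the mount itself is the problem rather than the Python code.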

Related

Permission denied while executing binaries in tmp folder (Docker)

Hello, I am trying to build an image which can compile and run a C++ program securely.
FROM golang:latest as builder
WORKDIR /app
COPY . .
RUN go mod download
RUN env CGO_ENABLED=0 go build -o /worker
FROM alpine:latest
RUN apk update && apk add --no-cache g++ && apk add --no-cache tzdata
ENV TZ=Asia/Kolkata
WORKDIR /
COPY --from=builder worker /bin
ARG USER=default
RUN addgroup -S $USER && adduser -S $USER -G $USER
USER $USER
ENTRYPOINT [ "worker" ]
version: "3.9"
services:
gpp:
build: .
environment:
- token=test_token
- code=#include <iostream>\r\n\r\nusing namespace std;\r\n\r\nint main() {\r\n int a = 10;\r\n int b = 20;\r\n cout << a << \" \" << b << endl;\r\n int temp = a;\r\n a = b;\r\n b = temp;\r\n cout << a << \" \" << b << endl;\r\n return 0;\r\n}
network_mode: bridge
privileged: false
read_only: true
tmpfs: /tmp
security_opt:
- "no-new-privileges"
cap_drop:
- "all"
Here worker is a Golang binary which reads the code from the environment variable, stores it in the /tmp folder as main.cpp, and then tries to compile and run it with g++ /tmp/main.cpp && /tmp/a.out (using Golang's exec).
I am getting this error: scratch_4-gpp-1 | Error : fork/exec /tmp/a.out: permission denied, from which I understand that executing anything from the tmp directory is restricted.
Since I am using a read-only root filesystem, I can only write to the tmp directory. Please guide me on how I can achieve the above task while keeping my container secure.
Docker's default options for a tmpfs include noexec. docker run --tmpfs allows an extended set of mount options, but neither Compose tmpfs: nor the extended syntax of volumes: allows changing anything other than the size option.
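For comparison, plain docker run does let you relax noexec on a tmpfs, which Compose's tmpfs: key cannot express. A sketch:
# rw, exec and size are tmpfs mount options; Compose only honors size
docker run --rm --read-only --tmpfs /tmp:rw,exec,size=64m alpine sh -c 'cp /bin/busybox /tmp/busybox && /tmp/busybox echo ok'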
One straightforward option here is to use an anonymous volume. Syntactically this looks like a normal volumes: line, except it only has a container path. The read_only: option will make the container's root filesystem be read-only, but volumes are exempted from this.
version: '3.8'
services:
  ...
    read_only: true
    volumes:
      - /build # which will be read-write
This will be a "normal" Docker volume, so it will be disk-backed and you'll be able to see it in docker volume ls.
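Once the container is up, you can confirm this, e.g. (the container name is a placeholder):
docker volume ls
docker inspect --format '{{ json .Mounts }}' <container>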
Complete summary of the solution:
@davidmaze suggested adding an anonymous volume using
version: '3.8'
services:
  ...
    read_only: true
    volumes:
      - /build # which will be read-write
As I replied, I was still getting the error Cannot create temporary file in ./: Read-only file system when I tried to compile my program. When I debugged my container in read_only: false mode to inspect the filesystem changes, I found that the compiler was trying to save the a.out file in the /bin folder, which is supposed to be read-only.
So I added this additional line before the entry point and my issue was solved.
FROM golang:latest as builder
WORKDIR /app
COPY . .
RUN go mod download
RUN env CGO_ENABLED=0 go build -o /worker
FROM alpine:latest
RUN apk update && apk add --no-cache g++ && apk add --no-cache tzdata
ENV TZ=Asia/Kolkata
WORKDIR /
COPY --from=builder worker /bin
ARG USER=default
RUN addgroup -S $USER && adduser -S $USER -G $USER
USER $USER
# this is the additional line
WORKDIR /build
ENTRYPOINT [ "worker" ]
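For context, with WORKDIR /build the compile-and-run sequence the worker performs behaves roughly like this (a sketch; the question does not show the actual Go exec calls):
cd /build         # the writable anonymous volume
g++ /tmp/main.cpp # g++ writes a.out to the current directory, i.e. /build
./a.out           # runs from the volume, so the noexec tmpfs at /tmp no longer matters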

How to use GCP service account json files in Docker

I am dockerizing a FastAPI application which uses Firebase. I need to access the service account JSON file, and I have configured my Docker container as follows.
Dockerfile
FROM python:3.10-slim
ENV PYTHONUNBUFFERED 1
WORKDIR /app
# Install dependencies
COPY ./requirements.txt /requirements.txt
EXPOSE 8000
RUN pip install --no-cache-dir --upgrade -r /requirements.txt
RUN mkdir /env
# Setup directory structure
COPY ./app /app/app
COPY ./service_account.json /env
CMD ["uvicorn", "app.app:app", "--host", "0.0.0.0", "--port", "8000"]
Docker-compose file
version: "3.9"
services:
app:
build:
context: .
restart: always
environment:
- GOOGLE_APPLICATION_CREDENTIALS_CLOUDAPI=${GOOGLE_APPLICATION_CREDENTIALS_CLOUDAPI}
- GOOGLE_APPLICATION_CREDENTIALS=${GOOGLE_APPLICATION_CREDENTIALS}
volumes:
- ./env:/env
volumes:
env:
Now when I run docker-compose up -d --build, the container fails with the error FileNotFoundError: [Errno 2] No such file or directory: '/env/service_account.json'. When I inspect the container, I can see the environment variable set successfully: "GOOGLE_APPLICATION_CREDENTIALS=/env/service_account.json". Why is this failing?
You have context: . and COPY ./service_account.json /env, but when you run the container, you have
volumes:
  - ./env:/env
meaning your service_account file is not in the ./env folder on the host, and is instead outside of it.
When you mount a volume, it replaces the directory inside the container, so if you need a local env folder mounted as /env in the container, then you should move your JSON file somewhere else such as /opt (COPY ./service_account.json /opt), and then set GOOGLE_APPLICATION_CREDENTIALS=/opt/service_account.json
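That first variant would only change two lines, roughly (a sketch):
# Dockerfile
COPY ./service_account.json /opt
# docker-compose.yml environment: GOOGLE_APPLICATION_CREDENTIALS=/opt/service_account.json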
If you don't need the whole folder, then you only need
volumes:
  - ./service_account.json:/env/service_account.json:ro
Otherwise, move the JSON file into ./env on your host and change the COPY line to COPY ./env/service_account.json /env.
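Putting the single-file option together, a minimal compose sketch (assuming service_account.json sits next to docker-compose.yml and GOOGLE_APPLICATION_CREDENTIALS is set to /env/service_account.json in your .env) could look like:
version: "3.9"
services:
  app:
    build:
      context: .
    restart: always
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS=${GOOGLE_APPLICATION_CREDENTIALS}
    volumes:
      # mount just the file, read-only; no named volume needed
      - ./service_account.json:/env/service_account.json:ro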

Docker-compose volumes working with Dockerfile : Device or resource busy

I have an issue with a docker-compose service while using a Dockerfile build.
I provide a .env file in my app/ folder, and I want the TAG value from the .env file to be propagated/rendered into the config.ini file. I tried to achieve this using entrypoint.sh (which is launched just after the volumes are mounted), but it failed.
Here is my docker-compose.yml file:
# file docker-compose.yml
version: "3.4"
services:
  app-1:
    build:
      context: ..
      dockerfile: deploy/Dockerfile
    image: my_image:${TAG}
    environment:
      - TAG=${TAG}
    volumes:
      - ../config.ini:/app/config.ini
And then my Dockerfile:
# file Dockerfile
FROM python:3.9
RUN apt-get update -y
RUN apt-get install -y python-pip
COPY ./app /app
WORKDIR /app
RUN pip install -r requirements.txt
RUN chmod 755 entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["python", "hello_world.py"]
In my case, I mount a config.ini file with configuration like this:
# file config.ini
[APP_INFO]
name = HELLO_WORLD
version = {TAG}
And finally, in my app folder, I have a .env file where you can find the version of the app, which evolves over time.
# file .env
TAG=1.0.0
Finally, here is my entrypoint.sh:
#!/bin/bash
echo "TAG:${TAG}"
# substitute the {TAG} placeholder with the value of the TAG environment variable
awk -v tag="$TAG" '{sub("{TAG}", tag)}1' /app/config.ini > /app/final_config.ini
mv /app/final_config.ini /app/config.ini
exec "$@" # hand off to the image's CMD
I want my entrypoint.sh (which is called after the docker-compose volumes are mounted and before the CMD on the last Dockerfile line) to overwrite my mounted file with a new one created using awk.
Unfortunately, while I do recover the tag and can create a final_config.ini file, I am not able to overwrite config.ini with it.
I get this error:
mv: cannot move '/app/final_config.ini' to '/app/config.ini': Device or resource busy
How can I overwrite config.ini without getting this error? Is there a simpler solution?
Because /app/config.ini is a mountpoint, you can't replace it. You should be able to rewrite it, like this...
cat /app/final_config.ini > /app/config.ini
...but that would, of course, modify the original file on your host. For what you're doing, a better solution is probably to mount the template configuration in an alternate location, and then generate /app/config.ini. E.g., mount it on /app/template_config.ini:
volumes:
  - ../config.ini:/app/template_config.ini
And then modify your script to output to the final location:
#!/bin/bash
echo "TAG:${TAG}"
# substitute the placeholder in the template and write the real config
awk -v tag="$TAG" '{sub("{TAG}", tag)}1' /app/template_config.ini > /app/config.ini
exec "$@" # hand off to the image's CMD

Why do some Dockerfiles copy files instead of mounting them as a volume

Can somebody explain to me why some Dockerfiles have steps to copy files rather than just mounting a volume with the files on it?
I have been looking at the setup for a Django project with Docker, and the Dockerfile has steps with COPY commands in it:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
In other Dockerfiles I have used (Home Assistant) I have just mounted a directory as a volume and it worked. What's going on here?
Can't I just keep the code and requirements in the same folder and mount them?
I just can't get my head around it.
Edit:
For reference, I'm looking at the Docker site tutorial for Django, which mounts the root directory as /code:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Why is that volume mounted to /code if we copy the files there anyway? Maybe that is what is throwing me off.
Volumes are used to manage files stored by the Docker container; they allow the container to write to that specific location on the host filesystem. If the only thing you want is to execute a piece of code, it is better to just copy it into the image so that the container does not have write access to the filesystem of the host.
Edit:
I do not actually know why they specify the volume in the docker compose setup. The build: . specifies it should use the Dockerfile in the current directory, which already includes the copy statement. It seems a bit pointless. Might be a mistake in the tutorial.
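That said, a common pattern that reconciles the two (my illustration, not something the tutorial states) is to COPY the code into the image for production and overlay a bind mount only during development, for example via a docker-compose.override.yml:
# docker-compose.override.yml (hypothetical dev-only overlay)
version: '3'
services:
  web:
    volumes:
      - .:/code # shadows the code baked into the image, enabling live edits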

Operation of the mkdir command with dockerfile

I cannot create a directory with the mkdir command in a container using a Dockerfile.
My Dockerfile is simply:
FROM php:fpm
WORKDIR /var/www/html
VOLUME ./code:/var/www/html
RUN mkdir -p /var/www/html/foo
In this way I created a simple php:fpm container and told it to create a directory called foo.
I built it with:
docker build -t phpx .
My docker-compose file is as follows:
version: '3'
services:
  web:
    container_name: phpx
    build: .
    ports:
      - "80:80"
    volumes:
      - ./code:/var/www/html
Later, I ran the following command and entered the container's shell:
docker exec -it phpx /bin/bash
but there is no directory called foo in /var/www/html.
I wonder where I'm going wrong. Can you help me?
The reason is that you are mounting a volume from your host to /var/www/html.
Step by step:
RUN mkdir -p /var/www/html/foo creates the foo directory inside the filesystem of your container.
The volume ./code:/var/www/html in docker-compose.yml "hides" the content of /var/www/html in the container filesystem behind the contents of ./code on the host filesystem.
So actually, when you exec into your container you see the contents of the ./code directory on the host when you look at /var/www/html.
Fix: Either you remove the volume from your docker-compose.yml or you create the foo-directory on the host before starting the container.
Additional Remark: In your Dockerfile you declare a volume as VOLUME ./code:/var/www/html. This does not work and you should probably remove it. In a Dockerfile you cannot specify a path on your host.
Quoting from docker:
The host directory is declared at container run-time: The host directory (the mountpoint) is, by its nature, host-dependent. This is to preserve image portability. since a given host directory can’t be guaranteed to be available on all hosts. For this reason, you can’t mount a host directory from within the Dockerfile. The VOLUME instruction does not support specifying a host-dir parameter. You must specify the mountpoint when you create or run the container.
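For example, the second fix (creating the directory on the host so the bind mount already contains it) could look like this:
mkdir -p code/foo                 # create foo inside the host's ./code
docker-compose up -d
docker exec phpx ls /var/www/html # foo is now visible in the container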
I am able to create a directory inside the WORKDIR for Docker as follows:
Dockerfile content
COPY src/ /app
COPY logging.conf /app
COPY start.sh /app/
COPY Pipfile /app/
COPY Pipfile.lock /app/
COPY .env /app/
RUN mkdir -p /app/logs
COPY logs/some_log.log /app/logs/
WORKDIR /app
I have not mentioned the volume parameter in my docker-compose.yaml file.
So here is what I suggest: remove the volume declaration from the Dockerfile, as correctly pointed out by Fabian Braun.
FROM php:fpm
RUN mkdir -p /var/www/html/foo
WORKDIR /var/www/html
And remove the volumes entry from the docker-compose file; it will work. Additionally, I would like to know how you tested whether there is a directory named 'foo'.
Docker-compose file content
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile # The name of your Docker file
    container_name: phpx
    ports:
      - "80:80"
You can use the SHELL instruction of Dockerfile.
ENV HOME /usr/local
SHELL ["/bin/sh", "-c"]
RUN mkdir $HOME/logs
