COPY failed: forbidden path outside the build context docker compose [duplicate]

This question already has answers here:
How to include files outside of Docker's build context?
This is the project structure:
Project
/deployment
/Dockerfile
/docker-compose.yml
/services
/ui
/widget
Here is the Dockerfile:
FROM node:14
WORKDIR /app
USER root
# create a new user (only root can do this) and assign ownership to the newly created user
RUN echo "$(date '+%Y-%m-%d %H:%M:%S'): ======> Setup Appusr" \
&& groupadd -g 1001 appusr \
&& useradd -r -u 1001 -g appusr appusr \
&& mkdir /home/appusr/ \
&& chown -R appusr:appusr /home/appusr/ \
&& chown -R appusr:appusr /app
# switch to the newly created user so that appusr owns all files and has access
USER appusr:appusr
COPY ../services/ui/widget/ /app/
COPY ../.env /app/
# installing deps
RUN npm install
and here is the docker-compose.yml:
version: "3.4"
x-env: &env
HOST: 127.0.0.1
services:
widget:
build:
dockerfile: Dockerfile
context: .
ports:
- 3002:3002
command:
npm start
environment:
<<: *env
restart: always
and running docker-compose up from project/deployment shows
Step 6/8 : COPY ../services/ui/widget/ /app/
ERROR: Service 'widget' failed to build : COPY failed: forbidden path outside the build context: ../services/ui/widget/ ()
Am I setting the wrong context?

You cannot COPY or ADD files from outside the build context (here, the deployment directory where the Dockerfile lives).
You should either move these two directories to where the Dockerfile is and then change your Dockerfile to:
COPY ./services/ui/widget/ /app/
COPY ./.env /app/
Or use volumes in docker-compose, and remove the two COPY lines.
So, your docker-compose should look like this:
x-env: &env
HOST: 127.0.0.1
services:
widget:
build:
dockerfile: Dockerfile
context: .
ports:
- 3002:3002
command:
npm start
environment:
<<: *env
restart: always
volumes:
- /absolute/path/to/services/ui/widget/:/app/
- /absolute/path/to/.env/:/app/
And this should be your Dockerfile if you use volumes in docker-compose:
FROM node:14
WORKDIR /app
USER root
# create a new user (only root can do this) and assign ownership to the newly created user
RUN echo "$(date '+%Y-%m-%d %H:%M:%S'): ======> Setup Appusr" \
&& groupadd -g 1001 appusr \
&& useradd -r -u 1001 -g appusr appusr \
&& mkdir /home/appusr/ \
&& chown -R appusr:appusr /home/appusr/ \
&& chown -R appusr:appusr /app
# switch to the newly created user so that appusr owns all files and has access
USER appusr:appusr
# installing deps
RUN npm install

Your problem is that you are referencing a file which is outside the build context. By default, the context is the location from which you execute the build command.
From docker documentation - Copy section:
The path must be inside the context of the build; you cannot COPY ../something /something, because the first step of a docker build is to send the context directory (and subdirectories) to the docker daemon.
However, you can use the -f parameter to specify the Dockerfile independently of the folder from which you run the build. So you could use the following command, executing it from the Project directory:
docker build -f ./deployment/Dockerfile .
You will need to modify your copy lines as well to point at the right location.
COPY ./services/ui/widget/ /app/
COPY ./.env /app/
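If you prefer to keep building through docker-compose instead of calling docker build directly, the same fix can be expressed there: point the build context at the Project root and give the Dockerfile path relative to that context. A minimal sketch (assuming docker-compose.yml stays in deployment/ and the Project root is one level up):
services:
  widget:
    build:
      context: ..                         # the Project root becomes the build context
      dockerfile: deployment/Dockerfile   # Dockerfile path, relative to that context
With that context, COPY services/ui/widget/ /app/ and COPY .env /app/ resolve inside the context and the error goes away.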

Related

Permission denied while executing binaries in tmp folder (Docker)

Hello, I am trying to build an image which can compile and run a C++ program securely.
FROM golang:latest as builder
WORKDIR /app
COPY . .
RUN go mod download
RUN env CGO_ENABLED=0 go build -o /worker
FROM alpine:latest
RUN apk update && apk add --no-cache g++ && apk add --no-cache tzdata
ENV TZ=Asia/Kolkata
WORKDIR /
COPY --from=builder worker /bin
ARG USER=default
RUN addgroup -S $USER && adduser -S $USER -G $USER
USER $USER
ENTRYPOINT [ "worker" ]
version: "3.9"
services:
gpp:
build: .
environment:
- token=test_token
- code=#include <iostream>\r\n\r\nusing namespace std;\r\n\r\nint main() {\r\n int a = 10;\r\n int b = 20;\r\n cout << a << \" \" << b << endl;\r\n int temp = a;\r\n a = b;\r\n b = temp;\r\n cout << a << \" \" << b << endl;\r\n return 0;\r\n}
network_mode: bridge
privileged: false
read_only: true
tmpfs: /tmp
security_opt:
- "no-new-privileges"
cap_drop:
- "all"
Here worker is a Go binary which reads the code from an environment variable, stores it in the /tmp folder as main.cpp, and then tries to compile and run it using g++ /tmp/main.cpp && /tmp/a.out (via Go's exec).
I am getting this error: scratch_4-gpp-1 | Error : fork/exec /tmp/a.out: permission denied, from which I understand that executing anything from the /tmp directory is restricted.
Since I am using a read-only root file system, I can only work in the /tmp directory. Please guide me on how I can achieve the above task while keeping my container secure.
Docker's default options for a tmpfs include noexec. docker run --tmpfs allows an extended set of mount options, but neither Compose tmpfs: nor the extended syntax of volumes: allows changing anything other than the size option.
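For reference, the docker run form with the extended mount options looks roughly like this (a sketch; the image name is a placeholder, and rw,exec,size= are the tmpfs mount options being set):
docker run --rm --read-only \
  --tmpfs /tmp:rw,exec,size=64m \
  my-gpp-image
Compose offers no equivalent way to pass these options through tmpfs:.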
One straightforward option here is to use an anonymous volume. Syntactically this looks like a normal volumes: line, except it only has a container path. The read_only: option will make the container's root filesystem be read-only, but volumes are exempted from this.
version: '3.8'
services:
  ...
    read_only: true
    volumes:
      - /build # which will be read-write
This will be a "normal" Docker volume, so it will be disk-backed and you'll be able to see it in docker volume ls.
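If you want to confirm the volume exists, the standard volume commands work; the anonymous volume shows up under a generated name (placeholder below):
docker volume ls                       # lists all volumes, including the anonymous one
docker volume inspect <volume-name>    # shows the Mountpoint backing /build on the host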
Complete summary of solution -
@davidmaze suggested adding an anonymous volume using
version: '3.8'
services:
  ...
    read_only: true
    volumes:
      - /build # which will be read-write
As I replied, I was still getting the error Cannot create temporary file in ./: Read-only file system when I tried to compile my program. When I debugged the container with read_only: false to see the file system changes, I found that the compiler was trying to save the a.out file in the /bin folder, which is supposed to be read-only.
So I added this additional line before the entry point and my issue was solved.
FROM golang:latest as builder
WORKDIR /app
COPY . .
RUN go mod download
RUN env CGO_ENABLED=0 go build -o /worker
FROM alpine:latest
RUN apk update && apk add --no-cache g++ && apk add --no-cache tzdata
ENV TZ=Asia/Kolkata
WORKDIR /
COPY --from=builder worker /bin
ARG USER=default
RUN addgroup -S $USER && adduser -S $USER -G $USER
USER $USER
# <---- this is the added line
WORKDIR /build
ENTRYPOINT [ "worker" ]

docker image is not rebuilt automatically on file change

I am running docker containers with WSL2. When I make changes to my files in the /client directory the changes are not reflected, and I have to run docker compose stop client, docker compose build client and docker compose start client. If I cat a file after changing something, I can see the change.
Here is my Dockerfile:
FROM node:16.17.0-alpine
RUN mkdir -p /client/node_modules
RUN chown -R node:node /client/node_modules
RUN chown -R node:node /root
WORKDIR /client
# Copy Files
COPY . .
# Install Dependencies
COPY package.json ./
RUN npm install --force
USER root
I also have a /server directory with the following Dockerfile, and there the automatic rebuild on file change works just fine:
FROM node:16.17.0-alpine
RUN mkdir -p /server/node_modules
RUN chown -R node:node /server/node_modules
WORKDIR /server
COPY . .
# Install Dependencies
COPY package.json ./
RUN npm install --force --verbose
USER root
Any help is appreciated.
Solved by adding the following to my docker-compose.yml:
environment:
  WATCHPACK_POLLING: "true"
Docker does not take care of hot reloading.
You should look into the hot-reload documentation of the tools you are building with.
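For completeness, a minimal compose sketch of how the pieces fit together (the service name, context and paths are assumptions based on the question's /client directory); the bind mount is what delivers edited files into the container, and the polling flag is what makes webpack notice them on WSL2:
services:
  client:
    build: ./client
    volumes:
      - ./client:/client          # bind mount so host edits reach the container
      - /client/node_modules      # anonymous volume keeps the image's node_modules
    environment:
      WATCHPACK_POLLING: "true"   # webpack falls back to polling, which works across the WSL2 boundary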

Docker context on remote server “Error response from daemon: invalid volume specification”

I am using docker context to deploy my local container to my debian webserver. I use Docker Desktop for Windows on Windows 10. The app is written using Flask.
At some point I tried “docker-compose up --build” after “docker context use remote” and I was getting the following error:
Error response from daemon: invalid volume specification: ‘C:\Users\user\fin:/fin:rw’
Locally everything works fine; the error only pops up when I try to deploy to the production server.
The Dockerfile looks like the following:
FROM python:3.8-slim-buster
ENV INSTALL_PATH /app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
ENV PATH="/home/user/.local/bin:${PATH}"
COPY . ./
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN useradd -ms /bin/bash user && chown -R user $INSTALL_PATH
USER user
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
RUN pip install --upgrade pip
CMD gunicorn -c "python:config.gunicorn" "fin.app:create_app()"
while an excerpt of the docker-compose.yml looks like the following:
version: '3.8'
services:
  flask-app:
    container_name: flask-app
    restart: always
    build: .
    command: >
      gunicorn -c "python:config.gunicorn" "fin.app:create_app()"
    environment:
      PYTHONUNBUFFERED: 'true'
    volumes:
      - '.:/fin'
    ports:
      - 8000:8000
    env_file:
      - '.env'
In the .env file the option
COMPOSE_CONVERT_WINDOWS_PATHS=1 is set.
At some point I tried the same procedure using WSL2 with Ubuntu installed, which led to the following message:
Error response from daemon: create \\wsl.localhost\Ubuntu-20.04\home\user\fin: "\\\\wsl.localhost\\Ubuntu-20.04\\home\\user\\fin" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path
Based on this message I changed the Dockerfile to:
FROM python:3.8-slim-buster
ENV INSTALL_PATH=/usr/src/app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
ENV PATH=/home/user/.local/bin:${PATH}
COPY . /usr/src/app/
# set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
#ENV COMPOSE_CONVERT_WINDOWS_PATHS=1
RUN useradd -ms /bin/bash user && chown -R user $INSTALL_PATH
USER user
COPY requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
RUN pip install --upgrade pip
CMD gunicorn -c "python:config.gunicorn" "fin.app:create_app()"
But still the error remains, and I have no clue how to solve it.
Thank you in advance for your help.
You are getting invalid volume specification: ‘C:\Users\user\fin:/fin:rw’ in your production environment because the host path C:\Users\user\fin isn't available there. You can remove the volume when you deploy, or change it to an absolute path that exists in your production environment, as below.
volumes:
  - '/root:/fin:rw'
where /root is a directory available in my production environment.
/path:/path/in/container mounts the host directory /path at /path/in/container.
path:/path/in/container creates a volume named path with no relationship to the host.
Note the slash at the beginning: if the source starts with /, it is treated as a host directory; otherwise it is treated as a named volume.
Use this (without quotes, and with a leading ./ so Docker knows you mean this folder):
volumes:
  - ./:/fin

Automate project in Laravel

I have a Laravel app with an .env.local file, and I made the following docker-compose file:
api:
  container_name: nadal_api
  build:
    context: .
    dockerfile: Dockerfile
  volumes:
    - .:/var/www/html/app
  ports:
    - ${APP_PORT}:80
  links:
    - db
    - redis
And my Dockerfile:
FROM composer:latest AS composer
WORKDIR /var/www/html/app/
FROM php:7.2-fpm-stretch
RUN apt-get update && apt-get install -y \
supervisor \
nginx \
zip
ADD docker/nginx.conf /etc/nginx/nginx.conf
ADD docker/virtualhost.conf /etc/nginx/conf.d/default.conf
ADD docker/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
ARG enviroment
COPY --from=composer /usr/bin/composer /usr/bin/composer
COPY .env.local .env
RUN chmod -R g+w /var/www/html/app/bootstrap
RUN composer install
RUN php artisan key:generate
EXPOSE 80
CMD ["/usr/bin/supervisord"]
I want to clone the repository and have docker-compose build do the following in the Dockerfile:
rename .env.local to .env
give permissions to the storage folder. I get an error on this line:
RUN chmod -R g+w /var/www/html/app/bootstrap
chmod: cannot access '/var/www/html/app/bootstrap': No such file or directory
docker-compose.yaml: ${APP_PORT} takes its value from .env.local (I tried with env_file but it does not work).
In your Dockerfile there is no COPY instruction that copies your project code into the image, so the bootstrap folder does not exist in the image, and chmod tells you exactly that.
Volumes (this line: - .:/var/www/html/app) only sync your current directory with the container later, when the container is created from the image. So if you want to give permissions to the bootstrap folder, copy the project code into the image before changing permissions on it.
Add this line before permission operations to make folders accessible.
COPY . /var/www/html/app
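Putting it together, the relevant part of the Dockerfile would look roughly like this (a sketch using the paths from the question):
WORKDIR /var/www/html/app/
# copy the project code into the image first...
COPY . /var/www/html/app
COPY .env.local .env
# ...so the permission change and the composer/artisan steps can find the files
RUN chmod -R g+w /var/www/html/app/bootstrap
RUN composer install
RUN php artisan key:generate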

File not found from Dockerfile, docker-compose

I have these codes in my Dockerfile.
FROM python:3
# Create user named "airport".
RUN adduser --disabled-password --gecos "" airport
# Login as the newly created "airport" user.
RUN su - airport
# Change working directory.
WORKDIR /home/airport/mount_point/
# Install Python packages at system-wide level.
RUN pip install -r requirements.txt
# Make sure to migrate all static files to the root of the project.
RUN python manage.py collectstatic --noinput
# This utility.sh script is used to reset the project environment. This includes
# removing unnecessary .pyc and __pycache__ folders. This is optional and not
# necessary, I just prefer to have my environment clean before Docking.
RUN utility_scripts/utility.sh
When I call docker-compose build it returns /bin/sh: 1: requirements.txt: not found, despite having loaded the necessary volume in my docker-compose.yml. I am sure that requirements.txt is in ./
web:
  build:
    context: ./
    dockerfile: Dockerfile
  command: /home/airport/mount_point/start_web.sh
  container_name: django_airport
  expose:
    - "8080"
  volumes:
    - ./:/home/airport/mount_point/
    - ./timezone:/etc/timezone
How can I solve this problem?
Before running RUN pip install -r requirements.txt, you need to add the requirements.txt file to the image.
...
ADD requirements.txt requirements.txt
RUN pip install -r requirements.txt
...
For a sample of how to dockerize a Django application, check https://docs.docker.com/compose/django/. You need to add the requirements.txt and the code to the image.
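A common ordering that avoids this problem and also keeps layer caching effective looks roughly like this (a sketch reusing the question's working directory):
FROM python:3
WORKDIR /home/airport/mount_point/
# copy only the dependency list first, so this layer is cached until requirements.txt changes
COPY requirements.txt .
RUN pip install -r requirements.txt
# then copy the rest of the project
COPY . .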
