Dockerfile user creation permission not granted - docker

I am learning Docker, and I have created a Dockerfile like this:
FROM node:alpine
RUN addgroup -g 1001 -S appuser && adduser -u 1001 -S appuser -G appuser
RUN apk update && apk add bash
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY ./wait-for-it.sh /usr/wait-for-it.sh
RUN chmod +x /usr/wait-for-it.sh
RUN chmod ugo+rwx /usr
RUN chown -R appuser:appuser /usr/src/app
USER appuser
COPY . .
And I have a docker-compose.override.yml like this:
version: '3'
services:
  main:
    command: bash -c "/usr/src/app/wait-for-it.sh --timeout=0 mongo:27017 && npm run dev"
    volumes:
      - ./api/src:/usr/src/app/src
This gives the following error in the main container:
bash: /usr/src/app/wait-for-it.sh: Permission denied
How can I give permission to appuser? If I remove the user creation, everything works fine.

In Docker it is common to run application containers as the root user. Remember that this is the root user of the container, not of the host machine. If you do want to run the application as a separate user within the container, I suggest you move the USER statement to the end of the Dockerfile and, just before it, add a chown to update the ownership of the files the application needs.
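A minimal sketch of that suggestion, applied to the Dockerfile from the question (it assumes wait-for-it.sh sits in the build context, so COPY . . places it at /usr/src/app/wait-for-it.sh, the path the compose command calls):
FROM node:alpine
RUN addgroup -g 1001 -S appuser && adduser -u 1001 -S appuser -G appuser
RUN apk update && apk add bash
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# copy the rest of the application while still running as root
COPY . .
# make the script executable and hand the app directory to the unprivileged user
RUN chmod +x /usr/src/app/wait-for-it.sh && chown -R appuser:appuser /usr/src/app
# switch users only after all files are in place
USER appuser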

Related

Permission denied when trying to run an executable in a Docker container

I'm trying to make an Express server with access to sqlPackage for DACPAC-deployments.
This is the final stage in my Docker build:
FROM node:16-alpine
WORKDIR /usr/app
ARG SQLPACKAGE_URL=https://download.microsoft.com/download/f/0/9/f091c731-45be-48fa-ae84-bc28388e3ef8/sqlpackage-linux-x64-en-16.0.6161.0.zip
# Install sqlPackage
RUN apk add --no-cache wget unzip
RUN wget -progress=bar:force -q -O sqlpackage.zip $SQLPACKAGE_URL \
&& unzip -qq sqlpackage.zip -d /usr/app/sqlpackage \
&& chmod 777 /usr/app/sqlpackage \
&& chown -R node.node /usr/app/sqlpackage
COPY --from=ts-remover /usr/app ./
USER node
CMD ["main.js"]
ARG PORT=8080
EXPOSE $PORT
My node project cannot run the executable though. I've tried running it manually from within the container, but I get permission denied there too.
/usr/app $ whoami
node
/usr/app $ ls -l
total 41272
drwxrwxrwx 2 node node 20480 Sep 12 14:12 sqlpackage
/usr/app $ ./sqlpackage
/bin/sh: ./sqlpackage: Permission denied
Why can I not execute this file, even though the permissions seem to be correct?
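Not part of the original post, but worth noting when reading the output above: sqlpackage in that listing is a directory, so ./sqlpackage asks the shell to execute the directory itself, which is reported as Permission denied. The zip normally extracts the actual binary inside that directory, and the single (non-recursive) chmod 777 in the Dockerfile only touches the directory, not its contents. A hedged sketch of the check, assuming the binary sits at sqlpackage/sqlpackage:
/usr/app $ ls -l sqlpackage/sqlpackage     # assumed path of the extracted binary
/usr/app $ chmod +x sqlpackage/sqlpackage  # add the execute bit (or use chmod -R in the Dockerfile)
/usr/app $ sqlpackage/sqlpackage           # run the binary itself, not the directory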

How do I use my Personal Google ID with docker build when using Google Artifact Registry?

Currently I have the following Dockerfile:
FROM node:16-alpine
RUN apk add curl gnupg
RUN (curl -Ls https://cli.doppler.com/install.sh || wget -qO- https://cli.doppler.com/install.sh) | sh
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY --chown=node:node . .
USER node
ENV GOOGLE_APPLICATION_CREDENTIALS=/home/node/app/google-key.json
RUN --mount=type=secret,id=googlekey,target=/home/node/app/google-key.json
RUN npx --yes google-artifactregistry-auth
RUN npm ci
EXPOSE 3000
CMD ["doppler", "run", "--", "npm", "start"]
This works great with a service account key; however, I don't want all of my developers to have permission to create a service account key, and I can't check the key into source control for obvious reasons.
Is there a way to pass each developer's personal credentials into docker build? All of the users have read access to the registry.
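Not from the original thread, but one approach that fits the Dockerfile above is to have each developer create Application Default Credentials with their own account and feed that file in as the BuildKit secret the Dockerfile already mounts. The secret id googlekey comes from the Dockerfile; the credentials path below is gcloud's default location and is an assumption about the developer machine:
# each developer authenticates with their personal Google account
gcloud auth application-default login

# pass the resulting credentials file to the build as the "googlekey" secret
DOCKER_BUILDKIT=1 docker build \
  --secret id=googlekey,src="$HOME/.config/gcloud/application_default_credentials.json" \
  -t my-app .
Note that a secret mount is only visible during the RUN instruction it is attached to, so the --mount=type=secret line and the npx google-artifactregistry-auth / npm ci steps would likely need to be combined into a single RUN for the credentials to be available when they are needed.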

Permissions problem in Docker container built in Ubuntu VM composed of files created on Windows host

I work on a project that has a large number of Java SpringBoot services (and other types) running in k8s clusters. Each service has a small start script that executes a more complex script that is provided in a configmap. This all works fine in builds and at runtime.
I need to make some changes to that complex script. I've already made the changes and tested the concept in an isolated script. I still need to do more testing of it. I am attempting to take some of the command lines that run in our Linux build system and run them on my VirtualBox Ubuntu VM that runs on my Windows 10 laptop. Although I am running this on the VM, most of the files were created and written on the host Windows 10 laptop that I get to using a VirtualBox Shared Folder.
When I look at the "ls -l" output of "startService.sh", I just get this:
-rwxrwx--- 1 root vboxsf 634 Aug 24 15:07 startService.sh*
Note that I am running docker with my own uid, and I have that uid in the "vboxsf" group.
It seems like when the file gets copied into the image, either the owner or the permissions get changed in a way that makes it inaccessible from within the container.
I tried adding a "RUN chmod 777 startService.sh" in the Dockerfile, just before the ENTRYPOINT, but that fails at build time with this:
Step 23/26 : RUN chmod 777 startService.sh
---> Running in 6dbb89c930c1
chmod: startService.sh: Operation not permitted
The command '/bin/sh -c chmod 777 startService.sh' returned a non-zero code: 1
I don't know why this is happening, or whether there is something that might mitigate it.
My "docker build" command looks like it went fine; I saw it execute all the steps that the normal build shows. The "docker run" step also seemed to go fine, but it finished very quickly. When I looked at the "docker logs" output for the container, the entire output was:
/bin/sh: ./startService.sh: Permission denied
Note that everything here is done the same way it is on the build server. There seems to be something funny with the fact that I'm running this in an Ubuntu VM against files that were created on the Windows host.
You have to run chmod +x startService.sh on the host before building the image, i.e. before docker build or docker-compose up -d --build.
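For example, on the host, before building (the image name is just a placeholder; also note that chmod may be silently ignored on a VirtualBox shared folder, so the files may need to be copied to a native Linux directory inside the VM first):
chmod +x startService.sh
docker build -t my-service .
# or, when using compose
docker-compose up -d --build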
Here is an example Dockerfile for Django. Look at how wait-for is handled; you need to do the same with your script.
###########
# BUILDER #
###########
# pull official base image
FROM python:3.8.3-slim as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
python3-dev musl-dev libffi-dev \
&& pip install psycopg2
# lint
RUN pip install --upgrade pip
COPY . .
# install dependencies
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
# copy project
COPY . .
#########
# FINAL #
#########
# pull official base image
FROM python:3.8.3-slim
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup --system app && adduser --system --group app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/static
RUN mkdir $APP_HOME/media
RUN mkdir $APP_HOME/currencies
WORKDIR $APP_HOME
# install dependencies
RUN apt-get update && apt-get install -y libpq-dev bash netcat rabbitmq-server
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
COPY wait-for /bin/wait-for
COPY /log /var/log
COPY /run /var/run
RUN pip install --no-cache /wheels/*
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
RUN chown -R app:app /var/log/
RUN chown -R app:app /var/run/
EXPOSE 3000
# change to the app user
USER app
# only for Django
CMD ["gunicorn", "Config.asgi:application", "--bind", "0.0.0.0:8000", "--workers", "3", "-k","uvicorn.workers.UvicornWorker","--log-file","-"]

Dockerfile for Meteor 2.2 project

I have been trying for almost 3 weeks to build and run a Meteor app (bundle) using Docker. I have tried all the major recommendations in the Meteor forums, on Stack Overflow and in the official documentation with no success. First I tried to make the bundle and put it inside the Docker image, but got awful results; then I realized that what I need to do is build the bundle inside Docker using a multi-stage Dockerfile. Here is the one I am using right now:
FROM chneau/meteor:alpine as meteor
USER 0
RUN mkdir -p /build /app
RUN chown 1000 -R /build /app
WORKDIR /app
COPY --chown=1000:1000 ./app .
COPY --chown=1000:1000 ./app/packages.json ./packages/
RUN rm -rf node_modules/*
RUN rm -rf .meteor/local/*
USER 1000
RUN meteor update --packages-only
RUN meteor npm ci
RUN meteor build --architecture=os.linux.x86_64 --directory /build
FROM node:lts-alpine as mid
USER 0
RUN apk add --no-cache python make g++
RUN mkdir -p /app
RUN chown 1000 -R /app
WORKDIR /app
COPY --chown=1000:1000 --from=meteor /build/bundle .
USER 1000
WORKDIR /app/programs/server
RUN rm -rf node_modules
RUN rm -f package-lock.json
RUN npm i
FROM node:lts-alpine
USER 0
ENV TZ=America/Santiago
RUN apk add -U --no-cache tzdata && cp /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir -p /app
RUN chown 1000 -R /app
WORKDIR /app
COPY --chown=1000:1000 --from=mid /app .
USER 1000
ENV LC_ALL=C.UTF-8
ENV ROOT_URL=http://localhost/
ENV MONGO_URL=mongodb://locahost:21027/meteor
ENV PORT=3000
EXPOSE 3000
ENTRYPOINT [ "/usr/local/bin/node", "/app/main.js" ]
If I build with docker build -t my-image:v1 . and then run my app with docker run -d --env-file .dockerenv --publish 0.0.0.0:3000:3000 --name my-bundle my-image:v1, it exposes port 3000, but when I try to navigate with my browser to http://127.0.0.1:3000 it redirects to https://localhost. If I do docker exec -u 0 -it my-bundle sh, then apk add curl, then curl 127.0.0.1:3000, I can see the Meteor app running inside Docker.
Has anyone had this issue before? Maybe I am missing some configuration? My bundle also works fine outside Docker with node main.js in the bundle folder, and I can visit http://127.0.0.1:3000 with my browser.
Thanks in advance
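Not an answer from the original thread, but a redirect from http://127.0.0.1:3000 to https://localhost is usually caused by the Meteor bundle itself (for example the force-ssl package) or by the ROOT_URL it was started with, rather than by Docker. A hedged sketch of overriding ROOT_URL at run time, reusing the image name from the question (the mongo hostname is an assumption):
docker run -d \
  -e ROOT_URL=http://localhost:3000 \
  -e MONGO_URL=mongodb://mongo:27017/meteor \
  --publish 3000:3000 \
  --name my-bundle my-image:v1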

How to change permission of a folder to 777 in Dockerfile?

I have a project directory like this:
|-var/www
|-docker-compose.yml
|-app
|--uploads
|---photos
|-Dockerfile
This is my docker-compose.yml file:
myapp:
  build:
    context: myfolder
    dockerfile: Dockerfile
  container_name: flask
  image: api/v1:01
  restart: unless-stopped
  environment:
    APP_ENV: "prod"
    APP_DEBUG: "False"
    APP_PORT: 5000
  volumes:
    - appdata:/var/www
What I want:
I want to change the permission of the app/uploads/photos folder to 777. This is an upload folder, so users can upload files to it.
My Dockerfile currently looks like this:
FROM python:3.6.8-alpine3.9
ENV GROUP_ID=1000 \
USER_ID=1000
WORKDIR /var/www/
ADD . /var/www/
RUN apk add --no-cache build-base libffi-dev openssl-dev ncurses-dev
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN addgroup -g $GROUP_ID www
RUN adduser -D -u $USER_ID -G www www -s /bin/sh
USER www
EXPOSE 5000
After looking at this question, in order to achieve what I want, I tried the following:
FROM python:3.6.8-alpine3.9
ENV GROUP_ID=1000 \
USER_ID=1000
WORKDIR /var/www/
ADD . /var/www/
RUN apk add --no-cache build-base libffi-dev openssl-dev ncurses-dev
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN addgroup -g $GROUP_ID www
RUN adduser -D -u $USER_ID -G www www -s /bin/sh
RUN chown -R www:www /var/www
RUN chmod -R 777 /var/www/uploads
RUN chmod -R 777 /var/www/uploads/photos
USER www
EXPOSE 5000
But it seems like the chmod commands in my Dockerfile are not taking effect, because whenever I upload files to app/uploads/photos from my code, my nginx server keeps getting this error:
PermissionError: [Errno 13] Permission denied: '/var/www/uploads/photos/myfilename.png'
Could somebody please help and provide a solution for how to change the permission of a folder in a Dockerfile?
UPDATE:
I tried to change the permission of /var/www/uploads after building the container, while it was running, by doing the following:
docker exec -it myapp /bin/sh
then run
chmod -R 777 /var/www/uploads
What I get is chmod: /var/www/uploads: Operation not permitted
Therefore I suspect the same error also happens while the Docker image is building, so, following this answer from Server Fault, I tried modifying the Dockerfile to this:
FROM python:3.6.8-alpine3.9
ENV GROUP_ID=1000 \
USER_ID=1000
WORKDIR /var/www/
ADD . /var/www/
USER root
RUN chmod -R 777 /var/www/uploads
RUN addgroup -g $GROUP_ID www
RUN adduser -D -u $USER_ID -G www www -s /bin/sh
USER www
RUN apk add --no-cache build-base libffi-dev openssl-dev ncurses-dev
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
EXPOSE 5000
But it still doesn't work, so it looks like my approach above is also wrong, even though I am already running as root in the Dockerfile. At the same time, when I access the running container from the host using docker exec, I also get Operation not permitted.
I am very new to Docker and just can't figure out how to get this done.
What I hope to know:
1) How do I change the permission of the folder var/www/uploads to 777?
2) Why can't I change the permission with my approach?
3) Is there a better way to achieve this?
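Not part of the original question, but two details in the setup above matter here. First, docker exec drops you into the container as the www user (the last USER in the Dockerfile), and an unprivileged user cannot chmod files it does not own; running the same command as root works. Second, /var/www is backed by the named volume appdata, which is populated from the image only when it is first created, so permission changes baked into a rebuilt image will not show up until the volume is recreated. A hedged sketch:
# run the chmod as root inside the running container
docker exec -u 0 myapp chmod -R 777 /var/www/uploads

# or recreate the named volume so it picks up the rebuilt image's ownership and permissions
# (this discards whatever is currently stored in the volume)
docker-compose down -v
docker-compose up -d --build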
