Docker can't give permission to script in entrypoint

I'm trying to deploy a Docker image on my Raspberry Pi 3+ (ARMv7). The entrypoint script works when run manually, but I can't manage to make it work directly from the Dockerfile. I always get this error:
Permission denied: unknown
Here's my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/runtime:2.2-bionic-arm32v7
WORKDIR /SenseAI.CollectionAgent
COPY /s .
USER root
CMD /bin/bash -c 'chmod +x /SenseAI.CollectionAgent/run.sh'
ENTRYPOINT ["/SenseAI.CollectionAgent/run.sh"]
The path of the file seems right.
I have tried so many different commands, but none worked.
The content of my run.sh is
#!/bin/bash
set -x #echo on
apt-get update
apt-get install libreadline-dev -y
chmod +x Gateway/SenseaiZ3Gateway
dotnet SenseAI.CollectionAgent.dll
but I think the error really comes from launching run.sh
Thank you!

You need to run the chmod command in a RUN directive. A CMD only sets the default command for the running container (and here it is displaced by the ENTRYPOINT anyway), so it never executes during the build:
FROM mcr.microsoft.com/dotnet/core/runtime:2.2-bionic-arm32v7
WORKDIR /SenseAI.CollectionAgent
COPY /s .
USER root
RUN chmod +x /SenseAI.CollectionAgent/run.sh
ENTRYPOINT ["/SenseAI.CollectionAgent/run.sh"]

Related

Giving execution right to entrypoint script does not work in Dockerfile

I have the following Dockerfile:
# pull official base image
FROM python:3.9.7-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev binutils \
&& apk add --no-cache proj-dev geos gdal
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# copy project
COPY . .
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
Building the image works fine, but when I try a docker run after building the image I get the following error:
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/usr/src/app/entrypoint.sh": permission denied: unknown.
ERRO[0000] error waiting for container: context canceled
The line RUN chmod +x /usr/src/app/entrypoint.sh doesn't seem to work: when I comment out the ENTRYPOINT statement, build the image, then start a container, I can see that the execute permission isn't set on that file.
Docker version is:
Docker version 20.10.23, build 7155243
I tried replacing the RUN chmod line with a --chmod=777 option on the COPY of the entrypoint, and it didn't change anything.
Any idea why a chmod +x would fail in Docker?
I found a workaround: removing the chmod line and replacing the ENTRYPOINT line with the following:
ENTRYPOINT ["/bin/sh", "/usr/src/app/entrypoint.sh"]
This way there is no need for execute permission on the entrypoint script, since the shell reads the file rather than the kernel executing it directly.
But I still don't know why the chmod doesn't work.
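A plausible explanation, reading the Dockerfile above (not confirmed in the thread): the final COPY . . recopies entrypoint.sh from the build context and overwrites the file that was just sed-ed and chmod-ed, which would also explain why COPY --chmod=777 appeared to have no effect. A minimal sketch of a reordering that avoids this, assuming the same paths:
# copy the whole project first...
COPY . .
# ...then fix line endings and set the execute bit last, so nothing overwrites them
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh && chmod +x /usr/src/app/entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]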

Permissions problem in Docker container built in Ubuntu VM composed of files created on Windows host

I work on a project that has a large number of Java SpringBoot services (and other types) running in k8s clusters. Each service has a small start script that executes a more complex script that is provided in a configmap. This all works fine in builds and at runtime.
I need to make some changes to that complex script. I've already made the changes and tested the concept in an isolated script. I still need to do more testing of it. I am attempting to take some of the command lines that run in our Linux build system and run them on my VirtualBox Ubuntu VM that runs on my Windows 10 laptop. Although I am running this on the VM, most of the files were created and written on the host Windows 10 laptop that I get to using a VirtualBox Shared Folder.
When I look at the "ls -l" output of "startService.sh", I just get this:
-rwxrwx--- 1 root vboxsf 634 Aug 24 15:07 startService.sh*
Note that I am running docker with my own uid, and I have that uid in the "vboxsf" group.
It seems like when the file gets copied into the image, either the owner or the permissions get changed in a way that makes it inaccessible from within the container.
I tried adding a "RUN chmod 777 startService.sh" in the Dockerfile, just before the ENTRYPOINT, but that fails at build time with this:
Step 23/26 : RUN chmod 777 startService.sh
---> Running in 6dbb89c930c1
chmod: startService.sh: Operation not permitted
The command '/bin/sh -c chmod 777 startService.sh' returned a non-zero code: 1
I don't know why this is happening, or whether there is something that might mitigate it.
My "docker build" command looks like it went fine. I saw it execute all the steps that the normal build shows. The "docker run" step seemed to go fine, but it finished very quickly. When I looked at the "docker log" for the container, it just said entirely:
/bin/sh: ./startService.sh: Permission denied
Note that everything here is done the same way it is on the build server. There seems to be something funny with the fact that I'm running this on an Ubuntu VM against files created on the Windows host through a VirtualBox shared folder.
You have to run chmod +x startService.sh on the host before docker run or docker-compose up -d --build.
Here is an example Dockerfile for Django. Look at how wait-for is handled; you need to do the same with your script:
###########
# BUILDER #
###########
# pull official base image
FROM python:3.8.3-slim as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
python3-dev musl-dev libffi-dev \
&& pip install psycopg2
# lint
RUN pip install --upgrade pip
COPY . .
# install dependencies
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
# copy project
COPY . .
#########
# FINAL #
#########
# pull official base image
FROM python:3.8.3-slim
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup --system app && adduser --system --group app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/static
RUN mkdir $APP_HOME/media
RUN mkdir $APP_HOME/currencies
WORKDIR $APP_HOME
# install dependencies
RUN apt-get update && apt-get install -y libpq-dev bash netcat rabbitmq-server
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
COPY wait-for /bin/wait-for
COPY /log /var/log
COPY /run /var/run
RUN pip install --no-cache /wheels/*
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
RUN chown -R app:app /var/log/
RUN chown -R app:app /var/run/
EXPOSE 3000
# change to the app user
USER app
# only for django
CMD ["gunicorn", "Config.asgi:application", "--bind", "0.0.0.0:8000", "--workers", "3", "-k","uvicorn.workers.UvicornWorker","--log-file","-"]

When executing a shell file in a Docker container, "cannot execute binary file" error

I'm trying to execute a shell file that runs a Python script, but I don't know why I get the following error.
File directory structure (/home/kwstat/workplace/analysis/report_me):
home
  kwstat
    workplace
      analysis
        report_me
          report_me.sh
          python_file
            python_code.py
            ...
$docker exec -it test /bin/bash -c "source /home/kwstat/workplace/analysis/report_me/report_me.sh"
# Error
/home/kwstat/workplace/analysis/report_me/report_me.sh: line 30: source: /usr/local/bin/python: cannot execute binary file
I tried several things in the Dockerfile, but the same error occurred.
# 1.CMD ["/bin/bash","-l","-c"]
CMD ["/bin/bash","-l","-c"]
# 2. CMD bin/bash
CMD bin/bash
#########My Dockerfile#############
FROM continuumio/miniconda3
# System packages
RUN apt-get update && apt-get install -y curl
RUN apt-get update && apt-get install -y subversion
WORKDIR /home/kwstat/workplace/analysis/report_me
COPY environments.yml /home/kwstat/workplace/analysis/report_me/environments.yml
RUN conda env create -f /home/kwstat/workplace/analysis/report_me/environments.yml
# Make RUN commands use the new environment:
SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
RUN echo "conda activate my_env" >> ~/.profile
# Activate the environment, and make sure it's activated:
#RUN echo "Make sure flask is installed:"
COPY requirements.txt /home/kwstat/me_report_dockerfile/requirements.txt
RUN pip install -r /home/kwstat/me_report_dockerfile/requirements.txt
WORKDIR /home/kwstat/workplace/analysis/report_me/python_file
COPY python_file/ /home/kwstat/workplace/analysis/report_me/python_file
WORKDIR /home/kwstat/workplace/analysis/report_me/
COPY report_me.sh ./report_me.sh
RUN chmod +x report_me.sh
CMD ["/bin/bash"]
Any help would be appreciated!
My problem was in the shell script itself: inside the script I set the conda env path, and that solved everything.
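The error message shows line 30 of report_me.sh applying source to the Python binary itself, which a shell cannot do. A minimal sketch of the kind of fix described, with the paths assumed from the Dockerfile above (continuumio/miniconda3 installs to /opt/conda; the env name follows the SHELL line):
#!/bin/bash
set -x
# put the conda env's bin directory on PATH instead of "source"-ing the interpreter
export PATH=/opt/conda/envs/myenv/bin:$PATH
python python_file/python_code.py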

How to run a specific file first, before anything else happens, in docker run

I have the following Dockerfile:
MAINTAINER Your Name "youremail@domain.tld"
RUN apt-get update -y && \
apt-get install -y python-pip python-dev
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
There is one file that I want to run even before app.py runs. How can I achieve that? I don't want to put that code inside app.py.
One solution could be to use a docker-entrypoint.sh script. Basically that entrypoint would allow you to define a set of commands to run to initialize your program.
For example, I could create the following docker-entrypoint.sh:
#!/bin/sh
set -e
if [ "$1" = 'app' ]; then
    # run the initialization step first, then hand control over to the app
    sh /run-my-other-file.sh
    exec python app.py
fi
exec "$@"
And I would use it like so in my Dockerfile:
FROM alpine
COPY ./docker-entrypoint.sh /
# the exec-form ENTRYPOINT needs the script to be executable
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["app"]
There are a lot of articles and examples online about Docker entrypoints. Give it a quick search; I am sure you will find plenty of interesting examples used by well-known production-grade containers.
You could try creating a bash script called run.sh that runs your other file first and then app.py, and change the CMD:
CMD [ "run.sh" ]
Note that with ENTRYPOINT [ "python" ] still in place this would execute python run.sh, so the ENTRYPOINT needs to be removed or changed to a shell as well. Also make sure run.sh has execute permission.
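A minimal sketch of such a run.sh (the name of the file run first is hypothetical):
#!/bin/bash
# run the prerequisite file first, then replace this shell with the app
python other_file.py
exec python app.py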

Change directory command in Docker on Windows is not working

I have a Docker image that runs the following file:
CMD /usr/local/bin/deploy.sh
The first command in the deploy.sh file is
cd /home/app
It works in Docker on Linux, however in Docker on Windows I have following error:
/usr/local/bin/deploy.sh: 1: cd: can't cd to /home/app
What is the reason?
Here is my dockerfile:
FROM node:8
RUN apt-get update
RUN npm install pm2 -g
WORKDIR /home/app
ADD ./deploy.sh /usr/local/bin/deploy.sh
RUN chmod g+x /usr/local/bin/deploy.sh
RUN chmod u+x /usr/local/bin/deploy.sh
RUN mkdir /root/.ssh/
RUN touch /root/.ssh/known_hosts
RUN chmod +x /usr/local/bin/deploy.sh
RUN chmod 777 /usr/local/bin/deploy.sh
CMD /usr/local/bin/deploy.sh
It looks like your file has CRLF (Windows) line endings instead of the LF (Linux) endings required for running .sh scripts (and for working in Linux in general; Docker runs Linux).
Open the deploy.sh file in Notepad++ or an IntelliJ IDE (I primarily work with Docker on Windows in IntelliJ) and check the line endings.
I always use LF endings globally for every project with IntelliJ on Windows.
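A way to guard against this inside the image itself, mirroring the sed trick from the entrypoint question above (a sketch against the Dockerfile shown):
ADD ./deploy.sh /usr/local/bin/deploy.sh
# strip the Windows CR characters so a Linux shell can run the script
RUN sed -i 's/\r$//' /usr/local/bin/deploy.sh && chmod +x /usr/local/bin/deploy.sh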
