Setup different user permissions on files copied in Dockerfile - docker

I have this Dockerfile setup:
FROM node:14.5-buster-slim AS base
WORKDIR /app
FROM base AS production
ENV NODE_ENV=production
RUN chown -R node:node /app
RUN chmod 755 /app
USER node
... other copies
COPY ./scripts/startup-production.sh ./
COPY ./scripts/healthz.sh ./
CMD ["./startup-production.sh"]
The problem I'm facing is that I can't execute ./healthz.sh because it's only executable by the node user. When I commented out the two RUN instructions and the USER instruction, I could execute the file just fine. But I want to restrict the execute permission to the node user for security reasons.
I need ./healthz.sh to be executable by Kubernetes' liveness & readiness probes.
How can I make it so? Folder restructuring or similar changes are fine with me.

In most cases, you probably want your code to be owned by root, but to be world-readable, and for scripts to be world-executable. The Dockerfile COPY directive will copy in a file with its existing permissions from the host system (hidden in the list of bullet points at the end is a note that a file "is copied individually along with its metadata"). So the easiest way to approach this is to make sure the script has the right permissions on the host system:
# mode 0755 is readable and executable by everyone but only writable by owner
chmod 0755 healthz.sh
git commit -am 'make healthz script executable'
Then you can just COPY it in, without any special setup.
# Do not RUN chown or chmod; just
WORKDIR /app
COPY ./scripts/healthz.sh .
# Then when launching the container, specify
USER node
CMD ["./startup-production.sh"]
You should be able to verify this locally by running your container and manually invoking the health-check script:
docker run -d --name app the-image
# possibly with a `docker exec -u` option to specify a different user
docker exec app /app/healthz.sh && echo OK
The important thing to check is that the file is world-executable. You can also double-check this by looking at the built container:
docker run --rm the-image ls -l /app/healthz.sh
That should print out one line, starting with a permission string -rwxr-xr-x; the last three characters, r-x, are the important part. If you can't get the permissions right another way, you can also fix them up in your image build:
COPY ./scripts/healthz.sh .
# If you can't make the permissions on the original file right:
RUN chmod 0755 *.sh
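On newer versions of Docker (20.10+, with BuildKit enabled) you may also be able to set the mode at copy time rather than with a separate chmod, e.g.:
COPY --chmod=0755 ./scripts/healthz.sh .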

You need to modify your Dockerfile's CMD command like this: ["sh", "./startup-production.sh"]
This will run the script through sh, but that can be dangerous if your script uses Bash-specific features like [[ ]] and has #!/bin/bash as its first line.
Moreover, I would use ENTRYPOINT here instead of CMD if you want this to run whenever the container starts.
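For example, a rough sketch of that suggestion against the Dockerfile in the question (whether plain sh is appropriate depends on what the script actually uses):
# run the startup script through sh so the file's execute bit doesn't matter
ENTRYPOINT ["sh", "./startup-production.sh"]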

Related

How do I make all environment variables available in the Dockerfile?

I have this Dockerfile
FROM node:14.17.1
ARG GITHUB_TOKEN
ARG REACT_APP_BASE_URL
ARG DATABASE_URL
ARG BASE_URL
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
ENV GITHUB_TOKEN=${GITHUB_TOKEN}
ENV REACT_APP_BASE_URL=${REACT_APP_BASE_URL}
ENV DATABASE_URL=${DATABASE_URL}
ENV BASE_URL=${BASE_URL}
ENV PORT 80
COPY . /usr/src/app
RUN npm install
RUN npm run build
EXPOSE 80
CMD ["npm", "start"]
But I don't like having to set each environment variable. Is it possible to make all of them available without setting them one by one?
There are two things to pay attention to before we continue:
As mentioned by @Lukman in the comments, a token is not a good thing to store in an image unless it's strictly for internal use; that's your call.
Even if we don't specify the environment variables one by one in the Dockerfile, we still need to define them somewhere else, as the program itself can't know which variables it really needs.
If you have no problem with the above, let's go on. Basically, what you need is to define the environment variables (here, ENV1 and ENV2 as examples) in a script, source that script in the container, and give the app a way to access those variables.
env.sh:
export ENV1=1
export ENV2=2
app.js:
#!/usr/bin/env node
var env1 = process.env.ENV1;
var env2 = process.env.ENV2;
console.log(env1);
console.log(env2);
entrypoint.sh:
#!/bin/bash
source /usr/src/app/env.sh
exec node /usr/src/app/app.js
Dockerfile:
FROM node:14.17.1
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN chmod -R 755 /usr/src/app
CMD ["/usr/src/app/entrypoint.sh"]
Execution:
$ docker build -t abc:1 .
$ docker run --rm abc:1
1
2
Explanation:
We change CMD or ENTRYPOINT in the Dockerfile to use the customized entrypoint.sh. In entrypoint.sh, we first source env.sh, which makes ENV1 and ENV2 visible to subprocesses of entrypoint.sh.
Then we use exec to replace the current process with node app.js, so PID 1 is now node app.js, and app.js can still see the environment variables defined in env.sh.
With the above, we no longer need to define the variables one by one in the Dockerfile, yet our app can still get the environment.
Here's a different (easy) way.
Start by making your file. Here I'm choosing to dump everything in my current environment; this is messy and not recommended, but it's a useful bit of code so I thought I'd add it.
env | sed 's/^/export /' > env.sh
Edit it so you only have what you need:
vi env.sh
Use the command below to mount files into the container. Change pwd to whichever folder you want to share. Using this carelessly may result in you sharing too many files.
sudo docker run -it -v `pwd`:`pwd` ubuntu
Assign appropriate file permissions. I'm using 777 which means anyone can read, write, execute - for demonstration purposes. But you only need execute privileges.
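For example, with the env.sh generated above:
chmod 777 env.sh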
Run this command and make sure you add the full stop.
. /LOCATION/env.sh
If you're confused about where your file is, just type pwd in the host console.
You can just add those commands where appropriate to your Dockerfile to automate the process. If I recall correctly, there is also a VOLUME instruction for Dockerfiles.
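If you do fold this into the image build instead of doing it by hand, a minimal sketch might look like the following (the base image, paths, and final command are illustrative, and it assumes env.sh sits next to the Dockerfile):
FROM ubuntu
WORKDIR /app
# bake the generated env.sh into the image
COPY env.sh .
# source it, then run whatever your real command is (your-app is a placeholder)
CMD ["bash", "-c", ". /app/env.sh && exec your-app"]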

Adding files to a Docker container based on docker-compose environment variables

I have a large set of test files (3.2 GB) that I only want to add to the container if an environment variable (DEBUG) is set. For testing locally, I set this in a docker-compose file.
So far, I've added the test data folder to a .dockerignore file and tried the solution mentioned here in my Dockerfile, without any success.
I've also tried running the cp command from within a run_app.sh which I call in my Dockerfile:
cp local/folder app/testdata
but I get cp: cannot stat 'local/folder': No such file or directory, I guess because it's looking inside the container for a folder that only exists on my local machine?
This is my Dockerfile:
RUN mkdir /app
WORKDIR /app
ADD . /app/
ARG DEBUG
RUN if [ "x$DEBUG" = "True" ] ; echo "Argument not provided" ; echo "Argument is $arg" ; fi
RUN pip install -r requirements.txt
USER nobody
ENV PORT 5000
EXPOSE ${PORT}
CMD /uus/run_app.sh
If it's really just for testing, and it's in a clearly named isolated directory like testdata, you can inject it using a bind mount.
Remove the ARG DEBUG and the build-time option to copy the content into the image. When you run the container, run it with:
docker run \
-v $PWD/local/folder:/app/testdata:ro \
...
This makes that host folder appear in that container directory, read-only so you don't accidentally overwrite the test data for later runs.
Note that this hides whatever was in the image on that path before; hence the "if it's in a separate directory, then..." disclaimer.
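Since the question mentions setting things in a docker-compose file for local testing, the equivalent bind mount there is roughly (the service name and build settings are illustrative):
services:
  app:
    build: .
    volumes:
      - ./local/folder:/app/testdata:ro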

Copying files with execute permissions in Docker Image

Seems like a basic issue, but I couldn't find any answers so far.
When using ADD/COPY in a Dockerfile and running the image on Linux, the default permission of the copied file in the image is 644, and the owner of the file appears to be 'root'.
However, when the image is run, a non-root user starts the container; that user cannot execute a file copied with 644 permissions, and if the file is executed at ENTRYPOINT, it fails to start with a permission denied error.
I read in one of the posts that COPY/ADD in Docker 1.17.0+ allows chown, but in my case I don't know who the non-root user starting the container will be, so I cannot set the ownership to that user.
I also saw another workaround: ADD/COPY the files to a different location and use RUN to copy them from the temp location to the actual folder, which is what I'm doing below. But this approach doesn't work, as the final image doesn't have the files in /opt/scm.
#Installing Bitbucket and setting variables
WORKDIR /tmp
ADD atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz .
COPY bbconfigupdater.sh .
#Copying Entrypoint script which will get executed when container starts
WORKDIR /tmp
COPY entrypoint.sh .
RUN ls -lrth /tmp
WORKDIR /opt/scm
RUN pwd && cp /tmp/bbconfigupdater.sh /opt/scm \
&& cp /tmp/entrypoint.sh /opt/scm \
&& cp -r /tmp/atlassian-bitbucket-${BITBUCKET_VERSION} /opt/scm \
&& chgrp -R 0 /opt/ \
&& chmod -R 755 /opt/ \
&& chgrp -R 0 /scm/bitbucket \
&& chmod -R 755 /scm/bitbucket \
&& ls -lrth /opt/scm && ls -lrth /scmdata
Any help is appreciated to figure out how I can get my entrypoint script copied to the desired path with execute permissions set.
The default file permission is whatever the permission of the file is in your build context, from where you copy it. If you control the source, then it's best to fix the permissions there to avoid a copy-on-write operation. Otherwise, if you cannot guarantee the system building the image will have the execute bit set on the files, a chmod after the copy operation will fix the permission. E.g.:
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
A better option with newer versions of Docker (which didn't exist when this answer was first posted) is to use the --chmod flag (the permissions must be specified in octal, at last check):
COPY --chmod=0755 entrypoint.sh .
You do not need to know who will run the container. The user inside the container is typically configured by the image creator (using USER) and doesn't depend on the user running the container from the docker host. When the user runs the container, they send a request to the docker API which does not track the calling user id.
The only time I've seen the host user matter is if you have a host volume and want to avoid permission issues. If that's your scenario, I often start the entrypoint as root, run a script called fix-perms to align the container uid with the host volume uid, and then run gosu to switch from root back to the container user.
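A rough sketch of that entrypoint pattern, assuming the image has gosu installed and a container user named app (both assumptions, not something from the question), and simply chowning the mounted path rather than remapping uids as a real fix-perms script would:
#!/bin/sh
# entrypoint runs as root so it can adjust ownership of the mounted volume
set -e
# illustrative mount point for the host volume
chown -R app:app /data
# drop privileges and run the real command as the unprivileged user
exec gosu app "$@"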
A --chmod flag was added to the ADD and COPY instructions in Docker CE 20.10. So you can now do:
COPY --chmod=0755 entrypoint.sh .
To be able to use it, you need to enable BuildKit:
# enable buildkit for docker
DOCKER_BUILDKIT=1
# enable buildkit for docker-compose
COMPOSE_DOCKER_CLI_BUILD=1
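For example, a one-off build with BuildKit enabled could look like this (the image name is illustrative):
DOCKER_BUILDKIT=1 docker build -t my-image .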
Note: It seems to not be documented at this time, see this issue.

No file found when using ENTRYPOINT

I am trying to use ENTRYPOINT, and whenever I do I get a no such file or directory error.
Dockerfile:
FROM ubuntu:18.04
COPY . /home
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh
WORKDIR /home
RUN chmod 777 /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["/bin/bash"]
I have tried giving it permissions, tried running it with an absolute path, also tried this, and tried it with both #!/bin/bash and #!/bin/sh, and in the end I still get the file not found error.
I am not sure what the problem is.
The question you asked:
I don't remember exactly why, but the file isn't being found because you're calling it docker-entrypoint.sh rather than ./docker-entrypoint.sh.
The question you'll ask soon:
That doesn't entirely fix your problem. You've added execute privileges to the copy of docker-entrypoint.sh in /usr/local/bin, but there's another copy of the file in /home that gets found first and doesn't have execute privileges. You'll get a permissions error when you try to use it. An easy workaround (depending on what you want to do) consists of a modified entrypoint:
ENTRYPOINT ["/bin/bash", "docker-entrypoint.sh"]
Extra details if you'll be using Docker a lot:
Being able to enter a container or image to examine its contents is invaluable. For Ubuntu-based images, write down the following line somewhere (replace bash with sh for basically every other Linux base image):
docker run -it --rm --entrypoint=bash my_image_name
This will open up a shell in that image and let you play around in the same environment the Dockerfile is running in and debug whatever is causing you problems.

Home symbol `~` not recognized in Dockerfile

In my Dockerfile, I want to copy a file from ~/.ssh on my host machine into the container, so I wrote it like this:
# create ssh folder and copy ssh keys from local into container
RUN mkdir -p /root/.ssh
COPY ~/.ssh/id_rsa /root/.ssh/
But when I run docker build -t foo to build it, it stops with an error:
Step 2 : RUN mkdir -p /root/.ssh
---> Using cache
---> db111747d125
Step 3 : COPY ~/.ssh/id_rsa /root/.ssh/
~/.ssh/id_rsa: no such file or directory
It seems the ~ symbol is not recognized in the Dockerfile. How can I resolve this issue?
In Docker, it is not possible to copy files from anywhere on the system into the image, since this would be considered a security risk. COPY paths are always interpreted relative to the build context, which is the directory you pass to the docker build command (typically the current directory).
This is described in the documentation: https://docs.docker.com/reference/builder/#copy
As a result, the ~ has no useful meaning, since it would try and direct you to a location which is not part of the context.
If you want to put your local id_rsa file into the image, you should put it into the build context first, e.g. copy it alongside the Dockerfile, and refer to it that way.
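For example, something along these lines (and be careful about baking private keys into images at all; anyone with the image can read them):
# on the host: copy the key into the build context, next to the Dockerfile
cp ~/.ssh/id_rsa ./id_rsa
# then in the Dockerfile, refer to it by its context-relative path:
#   COPY id_rsa /root/.ssh/id_rsa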
