Linux commands in Alpine Docker image: chmod +755 invalid mode

I am building a Docker image using the node:12 base image. I am trying to switch it to node:12-alpine due to the smaller image size. I have installed bash and shadow in Alpine to be able to run chmod commands.
I am running into an error with one of the RUN commands, RUN chmod +755. The error message is: chmod: invalid mode '+755'
Note that this works using the node:12 base image, so I think I just need to install another package in Alpine Linux? Thanks!
FROM node:12.8-alpine
# Create working directory
RUN mkdir -p /home/node/app
# Set working directory
WORKDIR /home/node/app
# Install bash and shadow for permission (chmod) commands
RUN apk add --no-cache bash && apk add shadow
# Add `/home/node/app/node_modules/.bin` to $PATH
ENV PATH /home/node/app/node_modules/.bin:$PATH
# Copy code
COPY --chown=node . /home/node/app
# Update umask
RUN chmod +755 /home/node/app/entrypoint.sh && \
echo 'umask 002' >> /home/node/.profile && \
echo 'umask 002' >> /home/node/.bashrc && \
npm install
ENTRYPOINT ["./entrypoint.sh"]
CMD [ "npm", "start" ]

The various Alpine-based Docker images use a minimal toolset called BusyBox, which tends to only implement the functionality required in standard utilities and no more. In particular, the POSIX.1 definition of chmod specifies (emphasis mine):
The mode operand shall be either a symbolic_mode expression or a non-negative octal integer.
So according to the standard, you can either use the +rwx form to add bits, or the octal 0755 form to specify a permission, but not combine the two.
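For example, with BusyBox chmod either standard form works on its own (path taken from the question); it is only the mixed form that is rejected:
# Symbolic mode: valid POSIX, accepted by BusyBox
RUN chmod a+rx /home/node/app/entrypoint.sh
# Mixed form: rejected with "invalid mode"
# RUN chmod +755 /home/node/app/entrypoint.sh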
In the context of a Docker image, you're usually dealing with a pretty fixed filesystem layout, and in any case you know what you want the permissions to be; you should be able to run
RUN chmod 0755 /home/node/app/entrypoint.sh
without installing any additional packages.
(Also note that shell dotfiles usually aren't read in Docker: neither RUN steps nor the container's main process run as a login or interactive shell, so the modifications to .profile and .bashrc have no effect. Typically you do want your application to be owned by root but executed by a different user, as an additional layer of security that prevents the application files from being unintentionally modified.)
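As a sketch of that ownership pattern, based on the question's Dockerfile: drop the --chown so files stay owned by root, and only run the process as node:
COPY . /home/node/app
RUN chmod 0755 /home/node/app/entrypoint.sh
# Files remain owned by root; the node user can read and
# execute them but cannot modify them
USER node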

Related

Setup different user permissions on files copied in Dockerfile

I have this Dockerfile setup:
FROM node:14.5-buster-slim AS base
WORKDIR /app
FROM base AS production
ENV NODE_ENV=production
RUN chown -R node:node /app
RUN chmod 755 /app
USER node
# ... other COPY steps
COPY ./scripts/startup-production.sh ./
COPY ./scripts/healthz.sh ./
CMD ["./startup-production.sh"]
The problem I'm facing is that I can't execute ./healthz.sh because it's only executable by the node user. When I commented out the two RUN commands and the USER command, I could execute the file just fine. But I want to grant the execute permission only to the node user, for security reasons.
I need ./healthz.sh to be externally executable by Kubernetes' liveness & readiness probes.
How can I make it so? Folder restructuring or stuff like that are fine with me.
In most cases, you probably want your code to be owned by root, but to be world-readable, and for scripts to be world-executable. The Dockerfile COPY directive copies in a file with its existing permissions from the host system (hidden in the list of bullet points at the end is a note that a file "is copied individually along with its metadata"). So the easiest way to approach this is to make sure the script has the right permissions on the host system:
# mode 0755 is readable and executable by everyone but only writable by owner
chmod 0755 healthz.sh
git commit -am 'make healthz script executable'
Then you can just COPY it in, without any special setup.
# Do not RUN chown or chmod; just
WORKDIR /app
COPY ./scripts/healthz.sh .
# Then when launching the container, specify
USER node
CMD ["./startup-production.sh"]
You should be able to verify this locally by running your container and manually invoking the health-check script:
docker run -d --name app the-image
# possibly with a `docker exec -u` option to specify a different user
docker exec app /app/healthz.sh && echo OK
The important thing to check is that the file is world-executable. You can also double-check this by looking at the built container:
docker run --rm the-image ls -l /app/healthz.sh
That should print out one line, starting with a permission string -rwxr-xr-x; the final r-x (the world-execute bit) is the important part. If you can't get the permissions right another way, you can also fix them up in your image build:
COPY ./scripts/healthz.sh .
# If you can't make the permissions on the original file right:
RUN chmod 0755 *.sh
You can modify the Dockerfile CMD to run the script through sh, like this: ["sh", "./startup-production.sh"]
This will interpret the script with sh, which can be dangerous if your script uses bash-specific features like [[ ]] while relying on #!/bin/bash as its first line.
Moreover, I would use ENTRYPOINT here instead of CMD if you want this to run whenever the container is up, as in the example below.
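For example, with the same script path as in the question:
ENTRYPOINT ["sh", "./startup-production.sh"]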

How to do configure,make and make install in docker build

Problem Statement
I am building a Docker image for my computational bioinformatics pipeline, which contains many tools that are called at different steps of the pipeline. In this process, I am trying to add one tool, the ViennaRNA Package, which is downloaded and compiled from source code. I have tried many ways to compile it during docker build (as shown below) but none of them works.
Failed attempts
Code-1 :
FROM jupyter/scipy-notebook
USER root
MAINTAINER Vivek Ruhela <vivekr@iiitd.ac.in>
# Copy the application folder inside the container
ADD . /test1
# Set the default directory where CMD will execute
WORKDIR /test1
# Set environment variable
ENV HOME /test1
# Install RNAFold
RUN wget https://www.tbi.univie.ac.at/RNA/download/sourcecode/2_4_x/ViennaRNA-2.4.14.tar.gz -P ~/Tools
RUN tar xvzf ~/Tools/ViennaRNA-2.4.14.tar.gz -C ~/Tools
WORKDIR "~/Tools/ViennaRNA-2.4.14/"
RUN ./configure
RUN make && make check && make install
Error : configure file not found
Code-2 :
FROM jupyter/scipy-notebook
USER root
MAINTAINER Vivek Ruhela <vivekr@iiitd.ac.in>
# Copy the application folder inside the container
ADD . /test1
# Set the default directory where CMD will execute
WORKDIR /test1
# Set environment variable
ENV HOME /test1
# Install RNAFold
RUN wget https://www.tbi.univie.ac.at/RNA/download/sourcecode/2_4_x/ViennaRNA-2.4.14.tar.gz -P ~/Tools
RUN tar xvzf ~/Tools/ViennaRNA-2.4.14.tar.gz -C ~/Tools
RUN bash ~/Tools/ViennaRNA-2.4.14/configure
WORKDIR "~/Tools/ViennaRNA-2.4.14/"
RUN make && make check && make install
Error : make: *** No targets specified and no makefile found. Stop.
I also tried another way, telling make the file location explicitly, e.g.
RUN make -C ~/Tools/ViennaRNA-2.4.14/
Still, this approach is not working.
Expected Procedure
I have installed this tool on my system many times using the standard procedure from the tool's documentation:
./configure
make
make check
make install
Similarly, for Docker, the following code should work:
WORKDIR ~/Tools/ViennaRNA-2.4.14/
RUN ./configure && make && make check && make install
But this code is not working: I don't see any effect of the WORKDIR. I have checked that configure creates the Makefile properly on my system, so it should create the Makefile in Docker as well.
Any suggestions on why this code is not working?
You are extracting all the files into the Tools folder, which is in the home directory; try this:
WORKDIR $HOME/Tools/ViennaRNA-2.4.14
RUN ./configure
RUN make && make check && make install
The problem is that WORKDIR ~/Tools/ViennaRNA-2.4.14/ is taken literally as ~/Tools/ViennaRNA-2.4.14/, which creates a directory named ~. Docker does not expand the tilde; use $HOME instead.
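Putting it together, a minimal sketch of the relevant steps (reusing the download URL and HOME layout from the question; Docker expands $HOME in WORKDIR, but never ~):
FROM jupyter/scipy-notebook
USER root
ENV HOME /test1
# Download and unpack the source tree under $HOME/Tools
RUN wget https://www.tbi.univie.ac.at/RNA/download/sourcecode/2_4_x/ViennaRNA-2.4.14.tar.gz -P $HOME/Tools
RUN tar xvzf $HOME/Tools/ViennaRNA-2.4.14.tar.gz -C $HOME/Tools
# $HOME is expanded here; a literal ~ would create a directory named ~
WORKDIR $HOME/Tools/ViennaRNA-2.4.14
RUN ./configure && make && make check && make install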

Copying files with execute permissions in Docker Image

Seems like a basic issue but I couldn't find any answers so far.
When using ADD / COPY in a Dockerfile and running the image on Linux, the default file permission of the file copied into the image is 644, and the owner of the file seems to be 'root'.
However, when running the image, a non-root user starts the container; that user cannot execute a file copied with 644 permissions, and if the file is executed as the ENTRYPOINT, the container fails to start with a permission-denied error.
I read in one of the posts that COPY/ADD in Docker 17.09+ allows chown, but in my case I don't know which non-root user will start the container, so I cannot set the ownership to that user.
I also saw another workaround: ADD/COPY files to a temporary location and use RUN to copy them from there to the actual folder, like I am doing below. But this approach doesn't work, as the final image doesn't have the files in /opt/scm.
#Installing Bitbucket and setting variables
WORKDIR /tmp
ADD atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz .
COPY bbconfigupdater.sh .
#Copying Entrypoint script which will get executed when container starts
WORKDIR /tmp
COPY entrypoint.sh .
RUN ls -lrth /tmp
WORKDIR /opt/scm
RUN pwd && cp /tmp/bbconfigupdater.sh /opt/scm \
&& cp /tmp/entrypoint.sh /opt/scm \
&& cp -r /tmp/atlassian-bitbucket-${BITBUCKET_VERSION} /opt/scm \
&& chgrp -R 0 /opt/ \
&& chmod -R 755 /opt/ \
&& chgrp -R 0 /scm/bitbucket \
&& chmod -R 755 /scm/bitbucket \
&& ls -lrth /opt/scm && ls -lrth /scmdata
Any help is appreciated to figure out how I can get my entrypoint script copied to the desired path with execute permissions set.
The default file permission is whatever the file permission is in your build context from where you copy the file. If you control the source, then it's best to fix the permissions there to avoid a copy-on-write operation. Otherwise, if you cannot guarantee the system building the image will have the execute bit set on the files, a chmod after the copy operation will fix the permission. E.g.
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
A better option with newer versions of docker (and which didn't exist when this answer was first posted) is to use the --chmod flag (the permissions must be specified in octal at last check):
COPY --chmod=0755 entrypoint.sh .
You do not need to know who will run the container. The user inside the container is typically configured by the image creator (using USER) and doesn't depend on the user running the container from the docker host. When the user runs the container, they send a request to the docker API which does not track the calling user id.
The only time I've seen the host user matter is if you have a host volume and want to avoid permission issues. If that's your scenario, I often start the entrypoint as root, run a script called fix-perms to align the container uid with the host volume uid, and then run gosu to switch from root back to the container user.
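A minimal sketch of that entrypoint, assuming gosu is installed in the image; fix-perms stands in for a helper script, and appuser and /data are illustrative names, not a standard API:
#!/bin/sh
# We start as root so we can adjust ownership
if [ "$(id -u)" = "0" ]; then
  # Hypothetical helper: align the container user's uid with the volume owner
  fix-perms -u appuser /data
  # Drop privileges and run the main command as the container user
  exec gosu appuser "$@"
fi
# Already running as non-root; just run the command
exec "$@"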
A --chmod flag was added to the ADD and COPY instructions in Docker CE 20.10. So you can now do:
COPY --chmod=0755 entrypoint.sh .
To be able to use it you need to enable BuildKit.
# enable BuildKit for docker
export DOCKER_BUILDKIT=1
# enable BuildKit for docker-compose
export COMPOSE_DOCKER_CLI_BUILD=1
Note: it seems not to be documented at this time; see this issue.
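Alternatively, you can enable BuildKit for a single build by setting the variable inline (the image tag here is illustrative):
DOCKER_BUILDKIT=1 docker build -t myimage .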

App relies on sourcing secrets.sh for env variables. How to accomplish this in my Dockerfile?

I'm working on creating a container to hold my running Django app. During development and manual deployment I've been setting environment variables by sourcing a secrets.sh file in my repo. This has worked fine until now that I'm trying to automate my server's configuration environment in a Dockerfile.
So far it looks like this:
FROM python:3.7-alpine
RUN pip install --upgrade pip
RUN pip install pipenv
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
WORKDIR /home/appuser/site
COPY . /home/appuser/site
RUN /bin/sh -c "source secrets.sh"
RUN env
I'd expect this to set the environment variables properly, but it doesn't. I've also tried adding the variables to my appuser's .bashrc, but this doesn't work either.
Am I missing something here? Is there another best practice for setting env variables so they're accessible by Django, without having to check them into the Dockerfile in my repo?
Each RUN step launches a totally new container with a totally new shell; only its filesystem is persisted afterwards. RUN commands that try to start processes or set environment variables are no-ops. (RUN export or RUN service start do absolutely nothing.)
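A quick illustration of why (a hypothetical Dockerfile fragment):
RUN export FOO=bar   # sets FOO only in this RUN step's shell
RUN echo "FOO=$FOO"  # prints an empty value; the export is gone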
In your setup you need the environment variables to be set at container startup time based on information that isn't available at build time. (You don't want to persist secrets in an image: they can be easily read out by anyone who gets the image later on.) The usual way to do this is with an entrypoint script; this could look like
#!/bin/sh
# If the secrets file exists, read it in.
if [ -f /secrets.sh ]; then
  # (Prefer POSIX "." to bash-specific "source".)
  . /secrets.sh
fi
# Now run the main container CMD, replacing this script.
exec "$#"
A typical Dockerfile built around this would look like:
FROM python:3.7-alpine
RUN pip install --upgrade pip
WORKDIR /app
# Install Python dependencies, as an early step to support
# Docker layer caching.
COPY requirements.txt ./
RUN pip install -r requirements.txt
# Install the main application.
COPY . ./
# Create a non-root user. It doesn't own the source files,
# and so can't modify the application.
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
# Startup-time metadata.
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["/app/app.py"]
And then when you go to run the container, you'd inject the secrets file:
docker run -p 8080:8080 -v $PWD/secrets-prod.sh:/secrets.sh myimage
(As a matter of style, I reserve ENTRYPOINT for this pattern and for single-binary FROM scratch containers, and always use CMD for whatever the container's main process is.)

Install packages from DVD repository in Docker image

I am trying to build a Docker image that needs to install some packages from a DVD iso but I cannot mount the iso inside the container.
My Dockerfile is:
FROM registry.access.redhat.com/rhscl/devtoolset-7-toolchain-rhel7:latest
USER root
WORKDIR /home
COPY tools.iso ./
COPY tools.repo /etc/yum.repos.d/
RUN mkdir /mnt/tools && \
mount -r ./tools.iso /mnt/tools && \
yum -y install make && \
umount /mnt/tools && \
rm tools.iso
CMD /bin/bash
When I run docker build it returns the following error:
mount: /home/tools.iso: failed to setup loop device: No such file or directory
I also tried adding the command modprobe loop before mounting the iso, but the logs say it returned code=1.
Is this the correct way to install packages from a DVD in Docker?
In general Docker containers can't access host devices and shouldn't mount additional filesystems. These restrictions are even tighter during a docker build sequence, because the various options that would let you circumvent it aren't available to you.
The most straightforward option is to write a wrapper script that does the mount and unmount for you, something like:
#!/bin/sh
if [ ! -d tools ]; then mkdir tools; fi
mount -r tools.iso tools
docker build "$@" .
umount tools
Then you can have a two-stage Docker image where the first stage has access to the entire DVD contents and runs its installer, and the second stage actually builds the image you want to run. That would look something like (totally hypothetically)
FROM registry.access.redhat.com/rhscl/devtoolset-7-toolchain-rhel7:latest AS install
COPY tools tools
RUN cd tools && ./install.sh /opt/whatever
FROM registry.access.redhat.com/rhscl/devtoolset-7-toolchain-rhel7:latest
COPY --from=install /opt/whatever /opt
EXPOSE 8888
CMD ["/opt/whatever/bin/whateverd", "--bind", "0.0.0.0:8888", "--foreground"]
The obvious problem with this is that the entire contents of the DVD will be sent across a networked channel from the host to itself as part of the docker build sequence, and then copied again during the COPY step; if it does get into the gigabyte range then this starts to get unwieldy. You can use a .dockerignore file to cause some of it to be hidden to speed this up a little.
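For example, a .dockerignore that excludes parts of the DVD tree the installer doesn't need (these paths are hypothetical):
# .dockerignore: trim the build context sent to the docker daemon
tools/docs/
tools/examples/
tools.iso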
Depending on what the software is, you should also consider whether it can run successfully in a Docker container (does it expect to be running multiple services with a fairly rigid communication pattern?); a virtual machine may prove to be a better deployment option, and "mount a DVD to a VM" is a much better-defined operation.
