Home symbol `~` not recognized in Dockerfile - docker

In my Dockerfile, I want to copy a file from ~/.ssh on my host machine into the container, so I wrote it like this:
# create ssh folder and copy ssh keys from local into container
RUN mkdir -p /root/.ssh
COPY ~/.ssh/id_rsa /root/.ssh/
But when I ran docker build -t foo to build it, the build stopped with an error:
Step 2 : RUN mkdir -p /root/.ssh
---> Using cache
---> db111747d125
Step 3 : COPY ~/.ssh/id_rsa /root/.ssh/
~/.ssh/id_rsa: no such file or directory
It seems the ~ symbol is not recognized by the Dockerfile. How can I resolve this issue?

In Docker, it is not possible to copy files from arbitrary locations on the host into the image, since this would be considered a security risk. COPY paths are always interpreted relative to the build context, which is the directory you pass to the docker build command (often the current directory).
This is described in the documentation: https://docs.docker.com/reference/builder/#copy
As a result, the ~ has no useful meaning, since it would point to a location that is not part of the context.
If you want to put your local id_rsa file into the image, you have to put it into the build context first, e.g. copy it alongside the Dockerfile, and refer to it from there.
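As a sketch, assuming you build from the directory containing the Dockerfile, you would first copy the key into the context on the host:
# on the host, in the build-context directory
cp ~/.ssh/id_rsa .
docker build -t foo .
and then refer to it by its context-relative path in the Dockerfile:
RUN mkdir -p /root/.ssh
COPY id_rsa /root/.ssh/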

Related

How to copy files and folders from the docker context to images?

I have a rocker/tidyverse:4.2.0 image which I'm using as a base to create an image for myself. I need some folders and files, but they're not showing up in the home folder. What am I doing wrong?
FROM rocker/tidyverse:4.2.0
RUN mkdir -p $HOME/rstudio/R_scripts
WORKDIR $HOME/rstudio/R_scripts
COPY ./R_scripts/* $HOME/rstudio/R_scripts/
COPY ./R_scripts/.Rprofile $HOME/rstudio/.Rprofile
RUN ls -l $HOME/rstudio
And this is how I run it.
docker run -it --rm -d -p 8787:8787 -e PASSWORD=rstudio --name rstudio-server -v /mnt/c/Users/test/sources:/home/rstudio/repos --net=host rstudio-server:4.2.0
When I check the home folder, I can't find the folders I copied. The R_scripts folder is in the same directory as the Dockerfile.
Docker images tend not to have "users" or "home directories" in the way you might think about them on a typical Linux system. This also means environment variables like $HOME often just aren't defined.
This means that when you try to set the current container directory
WORKDIR $HOME/rstudio/R_scripts
since $HOME is empty, the files just end up in a /rstudio directory in the root of the container filesystem. (And this might be okay!)
Style-wise, it's worth remembering that the right-hand side of COPY can be a relative path relative to the current WORKDIR, and that WORKDIR and COPY will create directories if they don't already exist. This means you don't usually need to RUN mkdir, and you don't usually need to repeat the full container path. Here I might write
FROM rocker/tidyverse:4.2.0
WORKDIR /rstudio/R_scripts # without $HOME, creates the directory
COPY ./R_scripts/* ./ # ./ is WORKDIR
COPY ./R_scripts/.Rprofile ../ # ../ is WORKDIR's parent
# RUN ls -l /rstudio # output hidden by default under the BuildKit backend
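To check where the files actually landed, you can list the directory in a throwaway container; the tag here assumes the rstudio-server:4.2.0 image from the question's docker run command:
docker run --rm rstudio-server:4.2.0 ls -l /rstudio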

Set up different user permissions on files copied in Dockerfile

I have this Dockerfile setup:
FROM node:14.5-buster-slim AS base
WORKDIR /app
FROM base AS production
ENV NODE_ENV=production
RUN chown -R node:node /app
RUN chmod 755 /app
USER node
... other copies
COPY ./scripts/startup-production.sh ./
COPY ./scripts/healthz.sh ./
CMD ["./startup-production.sh"]
The problem I'm facing is that I can't execute ./healthz.sh because it's only executable by the node user. When I commented out the two RUN commands and the USER command, I could execute the file just fine. But I want to restrict the permissions to the node user for security reasons.
I need ./healthz.sh to be externally executable by Kubernetes' liveness & readiness probes.
How can I make it so? Folder restructuring or stuff like that are fine with me.
In most cases, you probably want your code to be owned by root, but to be world-readable, and for scripts to be world-executable. The Dockerfile COPY directive will copy in a file with its existing permissions from the host system (hidden in the list of bullet points at the end is a note that a file "is copied individually along with its metadata"). So the easiest way to approach this is to make sure the script has the right permissions on the host system:
# mode 0755 is readable and executable by everyone but only writable by owner
chmod 0755 healthz.sh
git commit -am 'make healthz script executable'
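If the checkout is on a system where the mode won't stick (a Windows host, say), git can also record the executable bit directly:
git update-index --chmod=+x healthz.sh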
Then you can just COPY it in, without any special setup.
# Do not RUN chown or chmod; just
WORKDIR /app
COPY ./scripts/healthz.sh .
# Then when launching the container, specify
USER node
CMD ["./startup-production.sh"]
You should be able to verify this locally by running your container and manually invoking the health-check script:
docker run -d --name app the-image
# possibly with a `docker exec -u` option to specify a different user
docker exec app /app/healthz.sh && echo OK
The important thing to check is that the file is world-executable. You can also double-check this by looking at the built image:
docker run --rm the-image ls -l /app/healthz.sh
That should print out one line, starting with a permission string -rwxr-xr-x; the last three r-x are the important part. If you can't get the permissions right another way, you can also fix them up in your image build
COPY ./scripts/healthz.sh .
# If you can't make the permissions on the original file right:
RUN chmod 0755 *.sh
You need to modify the CMD in your Dockerfile like this: CMD ["sh", "./startup-production.sh"]
This will interpret the script with sh, which can be dangerous if your script uses bash-specific features like [[ ]] while declaring #!/bin/bash as its first line.
Moreover, I would use ENTRYPOINT here instead of CMD if you want this to run whenever the container starts.
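A minimal sketch of that variant, reusing the same script path:
ENTRYPOINT ["sh", "./startup-production.sh"]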

Docker build failed when copying in multi-stage build

I get an error when using the COPY --from=reference in my Dockerfile. I created a minimal example:
FROM alpine AS build
FROM scratch
COPY --from=build / /
This causes the following build output:
$ docker build .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM alpine AS build
---> b7b28af77ffe
Step 2/3 : FROM scratch
--->
Step 3/3 : COPY --from=build / /
failed to copy files: failed to copy directory: Error processing tar file(exit status 1): Container ID 165578 cannot be mapped to a host ID
The builds run fine in CI, but they fail on my laptop running Ubuntu 18.04. What could be causing this issue?
I've just had this issue. I wanted to copy the binaries of a standard node image to my image in a multi-stage build.
Worked fine locally. Didn't work in BitBucket Pipeline.
As mentioned by @BMitch, the issue was the use of userns.
With BitBucket, the userns setting is 100000:65536, which (as I understand it) means that the "safe" userIDs must be between 100000 and 165536.
The userID you have on your source files is outside of that range, but it doesn't mean it is userID 165578. Don't ask me why, but the userID is actually 165536 lower than the value reported, so 165578 - 100000 - 65536 = 42.
The solution I have is to change the user:group ownership for the source files to root:root, copy them to my image, and set the user:group ownership back (though as I'm typing this, I've not done that bit yet as I'm not 100% it is necessary).
ARG NODE_VERSION
FROM node:${NODE_VERSION}-stretch as node
# To get the files copied to the destination image in BitBucket, we need
# to set the files owner to root as BitBucket uses userns of 100000:65536.
RUN \
chown root:root -R /usr/local/bin && \
chown root:root -R /usr/local/lib/node_modules && \
chown root:root -R /opt
FROM .... # my image has a load of other things in it.
# Add node - you could also add --chown=<user>:<group> to the COPY commands if you want
COPY --from=node /usr/local/bin /usr/local/bin
COPY --from=node /usr/local/lib/node_modules /usr/local/lib/node_modules
COPY --from=node /opt /opt
That error is indicating that you have enabled userns on your Ubuntu docker host, but that there is no mapping for uid 165578. These mappings should be controlled by /etc/subuid.
Docker's userns documentation contains more examples of configuring this file.
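For reference, entries in /etc/subuid (and the matching /etc/subgid) take the form user:start:count; the dockremap user and range below are the common defaults shown in the Docker docs:
dockremap:100000:65536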
You can also modify the source image, finding any files owned by uid 165578 and changing them to be within your expected range.
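A rough sketch of that clean-up, where the-source-image is a placeholder and 165578 is the uid from the error message:
FROM the-source-image
# re-own anything with the unmappable uid so userns can map it
RUN find / -xdev -uid 165578 -exec chown root:root {} +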

Docker WORKDIR - on my machine or the container?

What context does the WORKDIR keyword in a Dockerfile refer to? Is it in the context I run docker build from or inside the container I am producing?
I often find myself putting RUN cd && ... in my Dockerfiles and am hoping there's another way; I feel like I'm missing something.
It is inside the container.
Taken from the Dockerfile reference: https://docs.docker.com/engine/reference/builder/#workdir
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.
So rather than adding RUN cd && ... you could do:
WORKDIR /path/to/dir
RUN command
All paths in a Dockerfile, except the first half of COPY and ADD instructions, refer to image filesystem paths. The source paths for COPY and ADD are relative paths (even if they start with /) relative to the build context (the directory at the end of the docker build command, frequently the directory containing the Dockerfile). Nothing in a Dockerfile can ever reference an absolute path on the host or content outside the build context tree.
The only difference between these two Dockerfiles is the directory the second command gets launched in.
RUN cd /dir && command1
RUN command2
WORKDIR /dir
RUN command1
RUN command2
WORKDIR sets the directory inside the image and hence allows you to avoid RUN cd calls.

Adding a directory path to your docker image

I am trying to add a directory to my docker image. I tried the methods below. During the build I don't see any errors, but once I run the container (I am using docker-compose) and get into it with docker exec -it 410e434a7304 /bin/sh, I don't see the directory in the path I am copying it into, nor do I see it as a volume when I do docker inspect.
Approach 1: Classic mkdir
# Define working directory
WORKDIR /root
RUN cd /var \
mdkir www \\ no www directory created
COPY <fileDirectory> /var/www/<fileDirectory>
Approach 2: Volume
FROM openjdk:8u171 as build
# Define working directory
WORKDIR /root
VOLUME["/var/www"]
COPY <fileDirectory> /var/www/<fileDirectory>
Your first approach is correct in principle; it's just that your RUN statement is faulty (a missing && and a misspelled mkdir). Try:
RUN cd /var && mkdir www
Also, please note the fundamental difference between RUN mkdir and VOLUME: the former simply creates a directory in your image, while the latter declares a mount point, chiefly intended for sharing directories between your container and the host it is running on.
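Putting that together, a fixed version of the first approach might look like this (keeping the question's <fileDirectory> placeholder):
WORKDIR /root
RUN cd /var && mkdir www
COPY <fileDirectory> /var/www/<fileDirectory>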
This is how I made it work:
# Define working directory
WORKDIR /root
COPY <fileDirectory> /root/<fileDirectory>
RUN cd /var && mkdir www && cp -R /root/<fileDirectory> /var/www
RUN rm -rf /root/email-media
I had to copy the directory from my host machine into the docker image's working directory /root, and from /root to the desired destination. Later I removed the directory from /root.
Not sure if that's the cleanest way; when I followed approach 1 with the right syntax suggested by @Fritz, it could never find the path created and threw an error.
After running the RUN layer, Docker removes the intermediate container (as below), and the COPY line then had no reference to the path created in the RUN line.
Step 16/22 : RUN cd /var && mkdir www && cp -R /root/<fileDirectory> /var/www
---> Running in a9c7df27116e
Removing intermediate container a9c7df27116e
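For what it's worth, COPY creates missing destination directories on its own, so a single instruction (same placeholder as above) may be all that's needed:
COPY <fileDirectory> /var/www/<fileDirectory>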
