I have a rocker/tidyverse:4.2.0 image which I'm using as the base to create an image for myself. I need some folders and files in it, but they're not showing up in the home folder. What am I doing wrong?
FROM rocker/tidyverse:4.2.0
RUN mkdir -p $HOME/rstudio/R_scripts
WORKDIR $HOME/rstudio/R_scripts
COPY ./R_scripts/* $HOME/rstudio/R_scripts/
COPY ./R_scripts/.Rprofile $HOME/rstudio/.Rprofile
RUN ls -l $HOME/rstudio
And this is how I run it.
docker run -it --rm -d -p 8787:8787 -e PASSWORD=rstudio --name rstudio-server -v /mnt/c/Users/test/sources:/home/rstudio/repos --net=host rstudio-server:4.2.0
When I check the home folder in the container, I can't find the folders I copied. The R_scripts folder is in the same folder that contains the Dockerfile.
Docker images tend not to have "users" or "home directories" in the way you might think about them on a typical Linux system. This also means environment variables like $HOME often just aren't defined.
This means that when you try to set the current container directory
WORKDIR $HOME/rstudio/R_scripts
since $HOME is empty, the files just end up in a /rstudio directory in the root of the container filesystem. (And this might be okay!)
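As a quick check against the image you already built (rstudio-server:4.2.0, the tag used in your docker run command), you can look at both locations; if this diagnosis is right, the copied files show up under /rstudio at the root of the filesystem, not under the home directory:
docker run --rm rstudio-server:4.2.0 ls /rstudio/R_scripts
docker run --rm rstudio-server:4.2.0 ls /home/rstudio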
Style-wise, it's worth remembering that the destination of COPY can be a relative path, interpreted relative to the current WORKDIR, and that WORKDIR and COPY will create directories if they don't already exist. This means you don't usually need to RUN mkdir, and you don't usually need to repeat the full container path. Here I might write:
FROM rocker/tidyverse:4.2.0
# Without $HOME; WORKDIR creates the directory if it doesn't exist
WORKDIR /rstudio/R_scripts
# ./ is the WORKDIR
COPY ./R_scripts/* ./
# ../ is the WORKDIR's parent
COPY ./R_scripts/.Rprofile ../
# RUN ls -l /rstudio # invisible using BuildKit backend by default
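Since that RUN ls output is hidden by default, after rebuilding you can get the same information by listing the directory at run time instead (the tag here is just an example):
docker build -t rstudio-server:4.2.0 .
docker run --rm rstudio-server:4.2.0 ls -la /rstudio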
Related
I have this Dockerfile setup:
FROM node:14.5-buster-slim AS base
WORKDIR /app
FROM base AS production
ENV NODE_ENV=production
RUN chown -R node:node /app
RUN chmod 755 /app
USER node
... other copies
COPY ./scripts/startup-production.sh ./
COPY ./scripts/healthz.sh ./
CMD ["./startup-production.sh"]
The problem I'm facing is that I can't execute ./healthz.sh because it's only executable by the node user. When I commented out the two RUN commands and the USER command, I could execute the file just fine. But I want to restrict the executable permissions to the node user for security reasons.
I need ./healthz.sh to be externally executable by Kubernetes' liveness and readiness probes.
How can I make it so? Folder restructuring or stuff like that are fine with me.
In most cases, you probably want your code to be owned by root, but to be world-readable, and for scripts to be world-executable. The Dockerfile COPY directive will copy in a file with its existing permissions from the host system (hidden in the list of bullet points at the end of its documentation is a note that a file "is copied individually along with its metadata"). So the easiest way to approach this is to make sure the script has the right permissions on the host system:
# mode 0755 is readable and executable by everyone but only writable by owner
chmod 0755 healthz.sh
git commit -am 'make healthz script executable'
Then you can just COPY it in, without any special setup.
# Do not RUN chown or chmod; just
WORKDIR /app
COPY ./scripts/healthz.sh .
# Then when launching the container, specify
USER node
CMD ["./startup-production.sh"]
You should be able to verify this locally by running your container and manually invoking the health-check script:
docker run -d --name app the-image
# possibly with a `docker exec -u` option to specify a different user
docker exec app /app/healthz.sh && echo OK
The important thing to check is that the file is world-executable. You can also double-check this by looking at the built container:
docker run --rm the-image ls -l /app/healthz.sh
That should print out one line starting with a permission string like -rwxr-xr-x; the final r-x (read and execute for everyone else) is the important part. If you can't get the permissions right another way, you can also fix them up in your image build:
COPY ./scripts/healthz.sh .
# If you can't make the permissions on the original file right:
RUN chmod 0755 *.sh
You need to modify the CMD command in your Dockerfile like this: CMD ["sh", "./startup-production.sh"]
This will run the script through sh, but it can be dangerous if your script uses bash-specific features like [[ ]] and relies on #!/bin/bash as its first line.
Moreover, I would use ENTRYPOINT here instead of CMD if you want this to run whenever the container starts up.
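A minimal sketch of the two variants suggested above, applied to the Dockerfile from the question:
# Run the startup script through sh explicitly
CMD ["sh", "./startup-production.sh"]

# Or, as an ENTRYPOINT, if it should always run when the container starts
ENTRYPOINT ["./startup-production.sh"]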
I've read tutorials about using docker:
docker run -it -p 9001:3000 -v $(pwd):/app simple-node-docker
but if I use:
docker run -it -p 9001:3000 simple-node-docker
it works too? Is -v no longer needed? Or is it picking up the WORKDIR line from the Dockerfile?
FROM node:9-slim
# WORKDIR specifies the directory our
# application's code will live within
WORKDIR /app
Other tutorials run mkdir ./app in the Dockerfile and others don't, so is WORKDIR enough for Docker to create the folder automatically if it does not exist?
There are two common ways to get application content into a Docker container. Many Node tutorials I've seen confusingly do both of them. You don't need docker run -v, provided you docker build your container when you make changes.
The first way is to copy a static copy of the application into the image. You'd do this via a Dockerfile, typically looking something like this:
FROM node
WORKDIR /app
# Install only dependencies now, to make rebuilds faster
COPY package.json yarn.lock ./
RUN yarn install
# NB: node_modules is in .dockerignore so this doesn't overwrite
# the previous step
COPY . ./
RUN yarn build
CMD ["yarn", "start"]
The resulting Docker image is self-contained: if you have just the image (maybe you docker pulled it from a repository) you can run it, as you note, without any special -v option. This path has the downside that you need to re-run docker build to recreate the image if you've made any changes.
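For example, someone with no copy of the source at all could run the built image directly (the registry name here is hypothetical; the image name comes from your example):
docker pull registry.example.com/simple-node-docker
docker run -p 9001:3000 registry.example.com/simple-node-docker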
The second way is to use docker run -v to inject the current source directory into the container. For example:
# --rm            clean up after we're done
# -p 3000:3000    publish a port
# -v $PWD:/app    mount current directory over /app
# -w /app         set default working directory
# node            image to run
# yarn start      command to run
docker run \
  --rm \
  -p 3000:3000 \
  -v "$PWD":/app \
  -w /app \
  node \
  yarn start
This path hides everything in the /app directory in the image and replaces it in the container with whatever you have in your current directory. This requires you to have a built functional copy of the application's source tree available, and so it supports things like live reloading; helpful for development, not what you want for Docker in production.
Like I say, I've seen a lot of tutorials do both things:
# First build an image, populating /app in that image
docker build -t myimage .
# Now run it, hiding whatever was in /app
docker run --rm -p3000:3000 -v$PWD:/app myimage
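If you want to see the "hiding" for yourself, a quick check is to list /app with and without the bind mount (using the myimage tag built above):
# /app as baked into the image at build time
docker run --rm myimage ls /app
# /app replaced by whatever is in your current directory
docker run --rm -v "$PWD":/app myimage ls /app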
You don't need the -v option, but you do need to manually rebuild things if your application changes.
$EDITOR src/file.js
yarn test
sudo docker build -t myimage .
sudo docker run --rm -p3000:3000 myimage
As I note here, the docker commands require root-equivalent permissions; but on the flip side, the final docker run command is very close to what you'd run "for real" (maybe via Docker Compose or Kubernetes, but without requiring a copy of the application source).
I am trying to add a directory to my docker image. I tried the methods below. During the build I don't see any errors, but once I run the container (I am using docker-compose) and get into it with docker exec -it 410e434a7304 /bin/sh, I don't see the directory in the path I am copying it into, nor do I see it as a volume when I do docker inspect.
Approach 1: Classic mkdir
# Define working directory
WORKDIR /root
RUN cd /var \
mdkir www \\ no www directory created
COPY <fileDirectory> /var/www/<fileDirectory>
Approach 2: Volume
FROM openjdk:8u171 as build
# Define working directory
WORKDIR /root
VOLUME["/var/www"]
COPY <fileDirectory> /var/www/<fileDirectory>
Your first approach is correct in principle, except that your RUN statement is faulty. Try:
RUN cd /var && mkdir www
Also, please note the fundamental difference between RUN mkdir and VOLUME: the former simply creates a directory inside your container's filesystem, while the latter is chiefly intended for mounting directories between your container and the host your container is running on.
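Putting that together, a minimal sketch of the corrected first approach (keeping the <fileDirectory> placeholder from the question; note that COPY will also create missing directories in the destination path on its own, so the mkdir is optional):
FROM openjdk:8u171 as build
# Define working directory
WORKDIR /root
# Create the target directory (or let COPY create it)
RUN cd /var && mkdir www
COPY <fileDirectory> /var/www/<fileDirectory>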
This is how I made it work:
# Define working directory
WORKDIR /root
COPY <fileDirectory> /root/<fileDirectory>
RUN cd /var && mkdir www && cp -R /root/<fileDirectory> /var/www
RUN rm -rf /root/email-media
I had to copy the directory from my host machine to the docker image's working directory /root, and from there to the desired destination. Later I removed the directory from /root.
Not sure if that's the cleanest way; when I followed approach 1 with the right syntax suggested by @Fritz, it could never find the path created and threw an error.
After running the RUN layer it would remove the intermediate container (as below), and in the COPY line it would not have the reference to the path created in the RUN line.
Step 16/22 : RUN cd /var && mkdir www && cp -R /root/<fileDirectory> /var/www
---> Running in a9c7df27116e
Removing intermediate container a9c7df27116e
I've been experiencing a bit of weird behavior regarding volumes. We have a container which contains a database and is expected to bind mount folders from the host which contain the data. I'm trying to create a child image which ships with test data, as it is just used for testing.
This requires that during the build step, some data is copied off the host machine, and then some scripts run which create additional files. I've noticed, however, that when I have a look at the running container, only the copied files exist, and the ones created by scripts do not. I've boiled the steps down to the following Dockerfile:
FROM ubuntu:xenial-20180112.1
VOLUME /test
COPY /test/copydir/copyfile.txt /test/copydir/copyfile.txt
RUN mkdir -p /test/mkdir && \
touch /test/mkdir/touch.txt
Note that when I bash into the running container and do an
ls -l /test
I only get the 'copydir' folder. If I run an ls in my dockerfile however, I see that both folders exist.
What's going on here?
edit:
For additional context, the following prints out that both directories exist:
FROM ubuntu:xenial-20180112.1
VOLUME /test
COPY /test/copydir/copyfile.txt /test/copydir/copyfile.txt
RUN mkdir -p /test/mkdir && \
touch /test/mkdir/touch.txt && \
ls -l /test
But the following only shows that copydir exists:
FROM ubuntu:xenial-20180112.1
VOLUME /test
COPY /test/copydir/copyfile.txt /test/copydir/copyfile.txt
RUN mkdir -p /test/mkdir && \
touch /test/mkdir/touch.txt
RUN ls -l /test
I don't have the full explanation of this, but the behaviour matches the note in the Dockerfile reference for VOLUME: if any build steps change the data within a volume after it has been declared, those changes will be discarded. So when you use RUN to create files under /test after the VOLUME /test line, the action runs (which is why the ls in the same step sees the files), but its result does not stay in the image.
Note that things like apt-get and yum installations from RUN still persist as normal, because they don't write into the declared volume path.
Try to change your Dockerfile to:
FROM ubuntu:xenial-20180112.1
RUN mkdir -p /test
COPY /test/copydir/copyfile.txt /test/copydir/copyfile.txt
RUN mkdir -p /test/mkdir && \
touch /test/mkdir/touch.txt
VOLUME /test
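With the VOLUME declaration moved to the end, a quick way to check that both directories now survive into the image (the tag name here is just an example):
docker build -t volume-test .
docker run --rm volume-test ls -l /test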
In a comment you said "The example I provided was cut down for clarity; in actuality the volume is defined by a parent image." That relates the problem to the fact that it is not possible to un-declare a volume entry in a derived image. If you can remove the volume declaration after the fact (e.g. using docker-copyedit), then your problem may go away. ;)
In my Dockerfile, I want to copy a file from ~/.ssh on my host machine into the container, so I wrote it like this:
# create ssh folder and copy ssh keys from local into container
RUN mkdir -p /root/.ssh
COPY ~/.ssh/id_rsa /root/.ssh/
But when I ran docker build -t foo to build it, it stopped with an error:
Step 2 : RUN mkdir -p /root/.ssh
---> Using cache
---> db111747d125
Step 3 : COPY ~/.ssh/id_rsa /root/.ssh/
~/.ssh/id_rsa: no such file or directory
It seems the ~ symbol is not recognized in the Dockerfile. How can I resolve this issue?
In Docker, it is not possible to copy files from anywhere on the system into the image, since this would be considered a security risk. COPY paths are always interpreted relative to the build context, which is the current directory where you run the docker build command.
This is described in the documentation: https://docs.docker.com/reference/builder/#copy
As a result, the ~ has no useful meaning, since it would point to a location which is not part of the context.
If you want to put your local id_rsa file into the image, you should put it into the context first, e.g. copy it alongside the Dockerfile, and refer to it that way.
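As a sketch of that workflow (file names as in the question; be aware that the key then becomes part of an image layer, which you may not want for a real private key):
# Copy the key into the build context, next to the Dockerfile
cp ~/.ssh/id_rsa .
docker build -t foo .
and in the Dockerfile refer to it by its context-relative path:
RUN mkdir -p /root/.ssh
COPY id_rsa /root/.ssh/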