Create a custom folder and assign user permission - docker

I'm trying to customize a Dockerfile, I just want to create a folder and assign the user (PID and GID) on the new folder.
Here is my full Dockerfile :
FROM linuxserver/nextcloud
COPY script /home/
RUN /home/script
The content of the script file :
#!/bin/sh
mkdir -p /data/local_data
chown -R abc:abc /data/local_data
I gave it execute permission: chmod +x script
At the moment it doesn't create the folder, and I see no error in the logs.
Command to run the container :
docker run -d \
--name=nextcloud \
-e PUID=1000 \
-e PGID=1000 \
-e TZ=Europe/Paris \
-p 443:443 \
-p 8080:80 \
-v /home/foouser/nextcloud:/config \
-v /home/foouser/data:/data \
--restart unless-stopped \
nextcloud_custom
Logs from build :
Step 1/3 : FROM linuxserver/nextcloud
---> d1af592649f2
Step 2/3 : COPY script /home/
---> 0b005872bd3b
Step 3/3 : RUN /home/script
---> Running in 9fbd3f9654df
Removing intermediate container 9fbd3f9654df
---> 91cc65981944
Successfully built 91cc65981944
Successfully tagged nextcloud_custom:latest

You can try to run the commands directly:
RUN mkdir -p /data/local_data && chown -R abc:abc /data/local_data
You may also try to change your shebang to:
#!/bin/bash
For debugging, you may try to set -x in your script as well.
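For example, a sketch of your script above with tracing enabled:
#!/bin/sh
set -x
mkdir -p /data/local_data
chown -R abc:abc /data/local_data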
EDIT:
I noticed this Removing intermediate container in your logs. The solution would be to use a volume with your docker run command:
-v /path/your/new/folder/HOST:/path/your/new/folder/container

You are trying to modify a folder which is specified as a VOLUME in your base image, but as per Docker documentation on Volumes:
Changing the volume from within the Dockerfile: If any build steps
change the data within the volume after it has been declared, those
changes will be discarded.
linuxserver/nextcloud does declare a volume /data, which you are trying to change afterwards; it's like doing:
VOLUME /data
...
RUN mkdir -p /data/local_data
The directory created will be discarded. You can, however, create your directory on container startup by modifying the entrypoint, so the directory is created when the container starts. Currently linuxserver/nextcloud uses /init as its entrypoint, so you can do:
Your script content which you then define as entrypoint:
#!/bin/sh
mkdir -p /data/local_data
chown -R abc:abc /data/local_data
# Call the base image entrypoint with parameters
/init "$@"
Dockerfile:
FROM linuxserver/nextcloud
# Copy the script and call it at entrypoint instead
COPY script /home/
ENTRYPOINT ["/home/script"]
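To check the result, a quick verification sketch (assuming the image is rebuilt as nextcloud_custom and the container is started with the same run command as above):
docker build -t nextcloud_custom .
docker exec nextcloud ls -ld /data/local_data
# the folder should now exist and be owned by abc:abc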

Related

Can we create a Docker image with multiple instances in it?

I want an image with elasticsearch and zipkin in it, but I don't want to download them from Docker Hub; instead I have downloaded their tar.gz files and am creating the images from those. I am able to run both of them individually but not simultaneously (with the docker run command).
Please see the Dockerfile below:
FROM openjdk:11
RUN groupadd -g 1000 elk-zipkin && useradd elk-zipkin -u 1000 -g 1000
RUN mkdir /usr/share/elasticsearch/
RUN mkdir /usr/share/zipkin
#RUN mkdir /usr/share/kibana
COPY /artifacts/elasticsearch-7.17.6.tar.gz /usr/share/elasticsearch
COPY artifacts/zipkin.jar /usr/share/zipkin
#COPY /artifacts/kibana-7.17.6.tar.gz /usr/share/kibana
COPY script.sh /usr/share/zipkin
WORKDIR /usr/share/elasticsearch
RUN tar xvf elasticsearch-7.17.6.tar.gz
#RUN tar xvf kibana-7.17.6.tar.gz
WORKDIR /usr/share/elasticsearch/elasticsearch-7.17.6
RUN set -ex && for path in data logs config config/scripts; do \
mkdir -p "$path"; \
chown -R elk-zipkin:elk-zipkin "$path"; \
done
USER elk-zipkin
ENV PATH=$PATH:/usr/share/elasticsearch/elasticsearch-7.17.6/bin
WORKDIR /usr/share/elasticsearch/elasticsearch-7.17.6/config
#RUN sed -i "s|#network.host: 192.168.0.1|network.host: 0.0.0.0|g" elasticsearch.yml
#RUN sed -i "s|#discovery.seed_hosts: ["host1", "host2"]|discovery.type: single-node|g" elasticsearch.yml
COPY /artifacts/elasticsearch.yml /usr/share/elasticsearch/elasticsearch-7.17.6/config
#CMD ["elasticsearch"]
#EXPOSE 9200 9300
#WORKDIR /usr/share/zipkin
#CMD ["java","-jar","zipkin.jar"]
#EXPOSE 9411
WORKDIR /usr/share/zipkin
CMD ["sh","script.sh"]
script.sh:
java -jar zipkin.jar elasticsearch
Run command for them:
for zipkin -
docker run -d --name=zipkin -p=9411:9411 --env=STORAGE_TYPE="elasticsearch" --env=ES_HOSTS="someurl" IMAGEID
for elasticsearch -
docker run -d --name=elasticsearch1 -p=9200:9200 -p=9300:9300 IMAGEID
I have tried to run both of the services, i.e. elasticsearch and zipkin, individually and simultaneously.
I am expecting that both should be in one image, and that a single docker run command should start both services.
Somehow I made it work: one can create a Dockerfile like the one mentioned in the question, and then add some sleep time to the script file to give the earlier service extra time to come up.
Example:
nohup elasticsearch &
sleep 10
nohup java -jar zipkin.jar
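A slightly more robust variant of the script (just a sketch; it polls Elasticsearch's HTTP port instead of relying on a fixed sleep, and assumes curl is available in the image):
#!/bin/sh
nohup elasticsearch &
# wait until Elasticsearch answers on its default port before starting zipkin
until curl -s http://localhost:9200 >/dev/null; do
  sleep 2
done
java -jar zipkin.jar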
Note: As per the comments and container basics, one should not run multiple services inside the same container.

Reference Shell Script Variable in Dockerfile

I have a config.sh:
IMAGE_NAME="back_end"
APP_PORT=80
PUBLIC_PORT=8080
and a build.sh:
#!/bin/bash
source config.sh
echo "Image name is: ${IMAGE_NAME}"
sudo docker build -t ${IMAGE_NAME} .
and a run.sh:
#!/bin/bash
source config.sh
# Expose ports and run
sudo docker run -it \
-p $PUBLIC_PORT:$APP_PORT \
--name $IMAGE_NAME $IMAGE_NAME
and finally, a Dockerfile:
...
CMD ["gunicorn", "-b", "0.0.0.0:${APP_PORT}", "main:app"]
I'd like to be able to reference the APP_PORT variable in my config.sh within the Dockerfile as shown above. However, what I have does not work and it complains: Error: ${APP_PORT} is not a valid port number. So it's not interpreting APP_PORT as a variable. Is there a way to reference the variables within config.sh from within the Dockerfile?
Thanks!
EDIT: New Files based on suggested solutions (still don't work)
I have a config.sh:
IMAGE_NAME="back_end"
APP_PORT=80
PUBLIC_PORT=8080
and a build.sh:
#!/bin/bash
source config.sh
echo "Image name is: ${IMAGE_NAME}"
sudo docker build --build-arg APP_PORT="${APP_PORT}" -t "${IMAGE_NAME}" .
and a run.sh:
#!/bin/bash
source config.sh
# Expose ports and run
sudo docker run -it \
-p $PUBLIC_PORT:$APP_PORT \
--name $IMAGE_NAME $IMAGE_NAME
and finally, a Dockerfile:
FROM python:buster
LABEL maintainer="..."
ARG APP_PORT
#ENV PORT $APP_PORT
ENV APP_PORT=${APP_PORT}
#RUN echo "$PORT"
# Install gunicorn & falcon
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt
# Add demo app
COPY ./app /app
COPY ./config.sh /app/config.sh
WORKDIR /app
RUN ls -a
CMD ["gunicorn", "-b", "0.0.0.0:${APP_PORT}", "main:app"]
run.sh still fails and reports: Error: '${APP_PORT} is not a valid port number.'
Define a variable in the Dockerfile as follows:
FROM python:buster
LABEL maintainer="..."
ARG APP_PORT
ENV APP_PORT=${APP_PORT}
# Install gunicorn & falcon
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt
# Add demo app
COPY ./app /app
COPY ./config.sh /app/config.sh
WORKDIR /app
CMD gunicorn -b 0.0.0.0:$APP_PORT main:app # NOTE! shell form, without the ["...", "..."] exec syntax
Pass it as a build arg, e.g. in your build.sh:
Note! Passing the build argument is only necessary when it is used while building the Docker image. Since you only use it in CMD, you can omit passing it when building the image.
#!/bin/bash
source config.sh
echo "Image name is: ${IMAGE_NAME}"
sudo docker build --build-arg APP_PORT="${APP_PORT}" -t "${IMAGE_NAME}" .
# sudo docker build --build-arg APP_PORT=80 -t back_end . -> You may omit using config.sh and directly define the value of variables
and pass the value of $APP_PORT in run.sh as well when starting the container:
#!/bin/bash
source config.sh
# Expose ports and run
sudo docker run -it \
-e APP_PORT=$APP_PORT \
-p $PUBLIC_PORT:$APP_PORT \
--name $IMAGE_NAME $IMAGE_NAME
You need a shell to replace environment variables and when your CMD is in exec form, there's no shell.
If you use the shell form, there is a shell and you can use environment variables.
CMD gunicorn -b 0.0.0.0:${APP_PORT} main:app
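If you want to keep the exec form, a workaround (a sketch) is to invoke the shell explicitly so the variable is still expanded:
CMD ["sh", "-c", "gunicorn -b 0.0.0.0:${APP_PORT} main:app"]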
Read here for more information on the two forms of the CMD statement: https://docs.docker.com/engine/reference/builder/#cmd

Server Tomcat using Docker on Windows PC

I made a webapp in Java and I would like to test the site locally using Docker.
The war file I created works perfectly, but to be served correctly it must be placed in this path:
/tomcat/webapps/ROOT/
For these reasons I decided to use this image:
https://hub.docker.com/_/tomcat
In particular I decided to use this Dockerfile:
https://github.com/docker-library/tomcat/blob/ec2d88f0a3b34292c1693e90bdf786e2545a157e/9.0/jre11-slim/Dockerfile
I added this code towards the end:
...
EXPOSE 8080
CMD ["cd /usr/local/tomcat/webapps/"]
CMD ["mv ROOT ROOT.old"]
CMD ["mkdir ROOT"]
COPY ./esercitazione.1.maven/ /usr/local/tomcat/webapps/ROOT/
CMD ["catalina.sh", "run"]
I used this code at the Windows 10 prompt:
D:
cd "D:\DATI\Docker-Tomcat-Win10"
docker build -t tomcat-9-java-11:v2.0 .
docker run -it --rm --name tomcat-9-java-11-container -p 8888:8080 tomcat-9-java-11:v2.0
When I enter this link on the browser:
http://192.168.99.103:8888/
I see this:
https://prnt.sc/n2vti1
I'm a beginner with both Docker and Tomcat and I need a little help.
Inside this path I put my unzipped .war file:
D:\DATI\Docker-Tomcat-Win10\esercitazione.1.maven
Thank you
%%%%%%%%%%%%%%%%%%%%%%%%%%
#Shree Tiwari
%%%%%%%%%%%%%%%%%%%%%%%%%%
First of all thank you for your help!
I deleted all the containers and all the images on Docker and used your Dockerfile (I only changed the name of the folder containing the .war files to be tested).
FROM tomcat:9-jre8
ENV JAVA_OPTS="-Xms512m -Xmx1024m -XX:MaxPermSize=256m -XX:MaxMetaspaceSize=128m"
WORKDIR /usr/local/tomcat/webapps/
RUN rm -rf /usr/local/tomcat/webapps/*
COPY ./webapps/*.war /usr/local/tomcat/webapps
EXPOSE 8080
CMD ["catalina.sh", "run"]
I placed the .war file at this address:
D:\DATI\Docker-Tomcat-Win10\webapps\esercitazione.1.maven.war
I opened the Windows prompt and I typed:
D:
cd "D:\DATI\Docker-Tomcat-Win10"
docker build -t tomcat:v1.0 .
docker run -it --rm --name tomcat-container -p 8888:8080 tomcat:v1.0
I entered this URL in the browser:
http://192.168.99.103:8888/esercitazione.1.maven/
then this other:
http://192.168.99.103:8888/
Unfortunately neither of them worked.
The only warning I encountered when building the image is this:
"SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. Files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories."
%%%%%%%%%%%%%%%%%%%%%%%%%%
#Miq
%%%%%%%%%%%%%%%%%%%%%%%%%%
First of all thank you for your help!
I also tested your code but it doesn't work.
Dockerfile:
FROM tomcat:9-jre11-slim
RUN mv webapps/ROOT webapps/ROOT.old && mkdir webapps/ROOT
COPY ./esercitazione.1.maven/ webapps/ROOT/
Code:
D:
cd "D:\DATI\Docker-Tomcat-Win10"
docker build -t tomcat:v2.0 .
docker run -it --rm --name tomcat-container tomcat:v2.0
Browser:
http://192.168.99.103:8080/
Other tests:
FROM tomcat:9-jre11-slim
ENV JAVA_OPTS="-Xms512m -Xmx1024m -XX:MaxPermSize=256m -XX:MaxMetaspaceSize=128m"
RUN mv webapps/ROOT webapps/ROOT.old && mkdir webapps/ROOT
WORKDIR /usr/local/tomcat/webapps/
RUN rm -rf /usr/local/tomcat/webapps/ROOT/*
COPY ./webapps/*.war /usr/local/tomcat/webapps/ROOT/
EXPOSE 8080
CMD ["catalina.sh", "run"]
docker ps -a
docker images
docker stop tomcat-container
docker rmi tomcat:v3.0
D:
cd "D:\DATI\Docker-Tomcat-Win10"
docker build -t tomcat:v3.0 .
docker run -d --name tomcat-container -p 8888:8080 tomcat:v3.0
http://192.168.99.103:8888/
%%%%%%%%%%%%%%%%%%%%%%%%%%
Other tests: (8 April 2019)
FROM tomcat:9.0.17-jre11-slim
LABEL Author="Nome Cognome"
EXPOSE 8080
RUN rm -fr /usr/local/tomcat/webapps/ROOT
COPY ./esercitazione.1.maven.war /usr/local/tomcat/webapps/ROOT.war
CMD ["catalina.sh", "run"]
>
docker build -t tomcat-eb:v.9.0.17 .
docker run -it --rm -p 8888:8080 tomcat-eb:v.9.0.17
>
I'm going here:
http://192.168.99.103:8888
and the browser sends me here:
https://192.168.99.103:8443
%%%%%%%%%%%%%%%%%%%%%%%%%%
Other tests: (I chose another image)
FROM tomee:8-jre-8.0.0-M2-webprofile
LABEL Author="Nome Cognome"
EXPOSE 8080
RUN rm -fr /usr/local/tomcat/webapps/ROOT
COPY ./esercitazione.1.maven.war /usr/local/tomcat/webapps/ROOT.war
CMD ["catalina.sh", "run"]
>
docker build -t tomcat-eb:v.9.0.17 .
docker run -it --rm -p 8888:8080 tomcat-eb:v.9.0.17
>
If I go here:
http://192.168.99.103:8888
I see the Tomcat home page, not my webapp.
Is this a problem without a solution?
The biggest problem I see here is that you use CMD instead of RUN in your Dockerfile. CMD defines the command that will be run when the container starts. With the Dockerfile you have now, only the last CMD is executed when you start your container, and all of those mkdirs, moves, etc. are never executed. As said, you need to use the RUN keyword to indicate commands that the build process should execute and commit as image layers. You probably also do not need the catalina.sh CMD, as it may come from the image you base on, but you need to check that on the doc page for the base image, take a peek at its Dockerfile, or use docker history imagename to see the layers and the commands used to create them.
I took a deeper look into this and the base image you use. In addition to the CMDs, your problem is that you change the workdir by running cd commands. catalina.sh lives in the $CATALINA_HOME dir, which is marked as the workdir in the base image. When you change the active directory by executing cd, it breaks the image at runtime.
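To illustrate the difference with your own commands (a sketch):
# RUN executes at build time and its result is committed as an image layer
RUN mv /usr/local/tomcat/webapps/ROOT /usr/local/tomcat/webapps/ROOT.old \
 && mkdir /usr/local/tomcat/webapps/ROOT
# CMD only records the command that runs when the container starts;
# only the last CMD in a Dockerfile is kept
CMD ["catalina.sh", "run"]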
I'd suggest you try with following dockerfile:
FROM tomcat:9-jre11-slim
RUN mv webapps/ROOT webapps/ROOT.old && mkdir webapps/ROOT
COPY ./esercitazione.1.maven/ webapps/ROOT/
There is no need to use EXPOSE and CMD, as they are defined in the base image. Also, the base image sets WORKDIR to $CATALINA_HOME, and that's where you will be when executing any following commands (treat WORKDIR as cd, but in Docker style).
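Build and run it the same way as before (the tag is just an example):
docker build -t tomcat:v2.0 .
docker run -it --rm --name tomcat-container -p 8888:8080 tomcat:v2.0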
Hope that helps.
First you need to compile your code to a war file; then you can use the Dockerfile below:
FROM tomcat:9-jre8
ENV JAVA_OPTS="-Xms512m -Xmx1024m -XX:MaxPermSize=256m -XX:MaxMetaspaceSize=128m"
WORKDIR /usr/local/tomcat/webapps/
RUN rm -rf /usr/local/tomcat/webapps/*
COPY ./esercitazione.1.maven/*.war /usr/local/tomcat/webapps
EXPOSE 8080
CMD ["catalina.sh", "run"]

A working how-to for extracting data from a non-root named volume, with permissions working on Linux and Windows

I'm trying a simple workflow without success, and it has taken me a lot of time to test many solutions on SO and GitHub. Permissions for a named volume, and more generally volume permissions in Docker, are a nightmare (link1, link2) imho.
So I am restarting from scratch, trying to create a simple proof of concept for my use case.
I want this general workflow:
a user on Windows and/or Linux builds the Dockerfile
the user runs the container (if possible not as root)
the container launches a crontab which runs a script writing to the data volume each minute
users (on Linux or Windows) get the results from the data volume (not as root) because permissions are correctly mapped
I use supercronic because it runs a crontab in the container without root permissions.
The Dockerfile :
FROM artemklevtsov/r-alpine:latest as baseImage
RUN mkdir -p /usr/local/src/myscript/
RUN mkdir -p /usr/local/src/myscript/result
COPY . /usr/local/src/myscript/
WORKDIR /usr/local/src/myscript/
RUN echo http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories
RUN apk --no-cache add busybox-suid curl
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.$
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=9aeb41e00cc7b71d30d33c57a2333f2c2581a201
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
CMD ["supercronic", "crontab"]
The crontab file :
* * * * * sh /usr/local/src/myscript/run.sh > /proc/1/fd/1 2>&1
The run.sh script
#!/bin/bash
name=$(date '+%Y-%m-%d-%s')
echo "some data for the file" >> ./result/fileName$name
The commands :
# create the volume for result, uid/gid option are not possible for windows
docker volume create --name myTestVolume
docker run --mount type=volume,source=myTestVolume,destination=/usr/local/src/myscript/result test
docker run --rm -v myTestVolume:/alpine_data -v $(pwd)/local_backup:/alpine_backup alpine:latest tar cvf /alpine_backup/scrap_data_"$(date '+%y-%m-%d')".tar /alpine_data
When I do this, the result folder local_backup and the files it contains have root:root permissions, so the user who launches this container cannot access the files.
Is there a solution that works, which permits Windows/Linux/Mac users who launch the same script to easily access the files in the volume without permission problems?
EDIT 1 :
The strategy first described here only works with a bind-mounted volume, not a named volume. We use an entrypoint.sh to chown the container's folders to the uid/gid given by docker run.
I copy-paste the modified Dockerfile:
FROM artemklevtsov/r-alpine:latest as baseImage
RUN mkdir -p /usr/local/src/myscript/
RUN mkdir -p /usr/local/src/myscript/result
COPY . /usr/local/src/myscript/
ENTRYPOINT [ "/usr/local/src/myscript/entrypoint.sh" ]
WORKDIR /usr/local/src/myscript/
RUN echo http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories
RUN apk --no-cache add busybox-suid curl su-exec
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.$
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=9aeb41e00cc7b71d30d33c57a2333f2c2581a201
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
CMD ["supercronic", "crontab"]
The entrypoint.sh
#!/bin/sh
set -e
addgroup -g $GID scrap && adduser -s /bin/sh -D -G scrap -u $UID scrap
if [ "$(whoami)" == "root" ]; then
chown -R scrap:scrap /usr/local/src/myscript/
chown --dereference scrap "/proc/$$/fd/1" "/proc/$$/fd/2" || :
exec su-exec scrap "$@"
fi
The procedure to build, launch and export:
docker build . --tag=test
docker run -e UID=1000 -e GID=1000 --mount type=volume,source=myTestVolume,destination=/usr/local/src/myscript/result test
docker run --rm -v myTestVolume:/alpine_data -v $(pwd)/local_backup:/alpine_backup alpine:latest tar cvf /alpine_backup/scrap_data_"$(date '+%y-%m-%d')".tar /alpine_data
EDIT 2 :
For Windows, using Docker Toolbox and a bind-mounted volume, I found the answer on SO. I use the c:/Users/MyUsers folder for binding; it's simpler.
docker run --name test -d -e UID=1000 -e GID=1000 --mount type=bind,source=/c/Users/myusers/localbackup,destination=/usr/local/src/myscript/result dockertest --name rflightscraps
Result of investigation:
crontab runs with the scrap user [OK]
UID/GID of the local user are mapped to the container user scrap [OK]
Exported data continues to be owned by root [NOT OK]
Windows / Linux [HALF OK]
If I use a bind volume and not a named volume, it works. But this is not the desired behaviour; how can I use the named volume with correct permissions on Win/Linux?
Let me divide the answer into two parts, the Linux part and the Docker part. You need to understand both in order to solve this problem.
Linux Part
It is easy to run cron jobs as a user other than root in Linux.
This can be achieved by creating a user in the Docker container with the same UID as the one on the host machine and copying the crontab file to /var/spool/cron/crontabs/user_name.
From man crontab
crontab is the program used to install, deinstall or list the
tables used to drive the cron(8) daemon in Vixie Cron. Each user can
have their own crontab, and though these are files in
/var/spool/cron/crontabs, they are not intended to be edited directly.
Since Linux identifies users by user id, inside Docker the UID will be bound to the newly created user, whereas on the host machine the same UID is bound to the host user.
So you don't have any permission issues, as the files are owned by host_user. Now you will have understood why I mentioned creating the user with the same UID as the one on the host machine.
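A quick way to see the mapping (a sketch; host_user, docker_user and the container name are placeholders):
# on the host
id -u host_user                                # e.g. 1000
# inside the running container
docker exec <container_name> id docker_user    # uid=1000(docker_user) ...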
Docker Part
Docker considers all the directories (or layers) to be part of a union file system. Whenever you build an image, each instruction creates a layer, and the layer is marked read-only. This is the reason Docker containers don't persist data. So you have to tell Docker explicitly that some directories need to persist data, by using the VOLUME keyword.
You can run containers without mentioning a volume explicitly. If you do so, the Docker daemon considers those directories part of the UFS and resets the permissions.
In order to preserve changes to a file/directory, including ownership, the respective directory should be declared as a VOLUME in the Dockerfile.
From UNION FILE SYSTEM
Indeed, when a container has booted, it is moved into memory, and the boot filesystem is unmounted to free up the RAM used by the initrd disk image. So far this looks pretty much like a typical Linux virtualization stack. Indeed, Docker next layers a root filesystem, rootfs, on top of the boot filesystem. This rootfs can be one or more operating systems (e.g., a Debian or Ubuntu filesystem).
Docker calls each of these filesystems images. Images can be layered on top of one another. The image below is called the parent image and you can traverse each layer until you reach the bottom of the image stack where the final image is called the base image. Finally, when a container is launched from an image, Docker mounts a read-write filesystem on top of any layers below. This is where whatever processes we want our Docker container to run will execute. When Docker first starts a container, the initial read-write layer is empty. As changes occur, they are applied to this layer; for example, if you want to change a file, then that file will be copied from the read-only layer below into the read-write layer. The read-only version of the file will still exist but is now hidden underneath the copy.
Example:
Let us assume that we have a user called host_user. The UID of host_user is 1000. Now we are going to create a user called docker_user in the Docker container and assign it UID 1000. Now whatever files are owned by docker_user in the Docker container are also owned by host_user, if those files are accessible to host_user from the host (i.e. through volumes).
Now you can share the bind-mounted directory with others without any permission issues. You can even give 777 permissions on the corresponding directory, which allows others to edit the data. Otherwise, you can leave 755 permissions, which allows others to copy but only the owner to edit the data.
I've declared the directory that should persist changes as a volume. This preserves all changes. Be careful: once you declare a directory as a volume, further changes made to that directory while building the image will be ignored, as those changes will be in separate layers. Hence, do all your changes in the directory and then declare it as a volume.
Here is the Docker file.
FROM alpine:latest
ARG ID=1000
#UID as arg so we can also pass custom user_id
ARG CRON_USER=docker_user
#same goes for username
COPY crontab /var/spool/cron/crontabs/$CRON_USER
RUN adduser -g "Custom Cron User" -DH -u $ID $CRON_USER && \
chmod 0600 /var/spool/cron/crontabs/$CRON_USER && \
mkdir /temp && \
chown -R $ID:$ID /temp && \
chmod 777 /temp
VOLUME /temp
#Specify the dir to be preserved as Volume else docker considers it as Union File System
ENTRYPOINT ["crond", "-f", "-l", "2"]
Here is the crontab
* * * * * /usr/bin/whoami >> /temp/cron.log
Building the image
docker build . -t test
Create new volume
docker volume create --name myTestVolume
Run with Data volume
docker run --rm --name test -d -v myTestVolume:/usr/local/src/myscript/result test:latest
Whenever you mount myTestVolume into another container, you can see that the data under /usr/local/src/myscript/result is owned by UID 1000 (if no user exists with that UID in that container) or by the username corresponding to that UID.
Run with Bind volume
docker run --rm --name test -d -v $PWD:/usr/local/src/myscript/result test:latest
When you do an ls -al /home/host_user/temp you will see that a file called cron.log has been created and is owned by host_user.
The same file will be owned by docker_user inside the Docker container when you do an ls -al /temp. The contents of cron.log will be docker_user.
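In other words (commands only, matching the description above):
ls -al /home/host_user/temp           # cron.log is owned by host_user
cat /home/host_user/temp/cron.log     # prints: docker_user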
So, your effective Dockerfile should be:
FROM artemklevtsov/r-alpine:latest as baseImage
ARG ID=1000
ARG CRON_USER=docker_user
RUN adduser -g "Custom Cron User" -DH -u $ID $CRON_USER && \
chmod 0600 /var/spool/cron/crontabs/$CRON_USER && \
echo http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories && \
apk --no-cache add busybox-suid curl && \
mkdir -p /usr/local/src/myscript/result && \
chown -R $ID:$ID /usr/local/src/myscript/result && \
chmod 777 /usr/local/src/myscript/result
COPY crontab /var/spool/cron/crontabs/$CRON_USER
COPY . /usr/local/src/myscript/
VOLUME /usr/local/src/myscript/result
#This preserves chown and chmod changes.
WORKDIR /usr/local/src/myscript/
ENTRYPOINT ["crond", "-f", "-l", "2"]
Now, whenever you attach a data/bind volume to /usr/local/src/myscript/result, it will be owned by the user having UID 1000, and this is persistent across all containers that mount the same volume, with their corresponding UID-1000 user as the file owner.
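For completeness, a build-and-run sketch for this Dockerfile (passing the host UID as the ID build arg; the tag name is just an example):
docker build --build-arg ID=$(id -u) -t scrap-cron .
docker volume create --name myTestVolume
docker run --rm -d -v myTestVolume:/usr/local/src/myscript/result scrap-cron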
Please note: I've given 777 permissions in order to share with everyone. You can skip that step in your Dockerfile based on your convenience.
References:
Crontab manual.
User identifier - Wiki.
User ID Definition.
About storage drivers.
UNION FILE SYSTEM.

Cannot execute RUN mkdir in a Dockerfile

This is an error message I get when building a Docker image:
Step 18 : RUN mkdir /var/www/app && chown luqo33:www-data /var/www/app
---> Running in 7b5854406120
mkdir: cannot create directory '/var/www/app': No such file or directory
This is a fragment of Dockerfile that causes the error:
FROM ubuntu:14.04
RUN groupadd -r luqo33 && useradd -r -g luqo33 luqo33
<installing nginx, fpm, php and a couple of other things>
RUN mkdir /var/www/app && chown luqo33:www-data /var/www/app
VOLUME /var/www/app
WORKDIR /var/www/app
mkdir: cannot create directory '/var/www/app': No such file or directory sounds so nonsensical - of course there is no such directory; I want to create it. What is wrong here?
The problem is that /var/www doesn't exist either, and mkdir isn't recursive by default -- it expects the immediate parent directory to exist.
Use:
mkdir -p /var/www/app
...or install a package that creates a /var/www prior to reaching this point in your Dockerfile.
When creating subdirectories hanging off a non-existent parent directory (or directories), you must pass the -p flag to mkdir. Please update your Dockerfile with:
RUN mkdir -p ...
I tested this and it's correct.
You can also simply use
WORKDIR /var/www/app
It will automatically create the folders if they don't exist.
Then switch back to the directory you need to be in.
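A minimal sketch of that approach, based on the fragment from the question:
FROM ubuntu:14.04
RUN groupadd -r luqo33 && useradd -r -g luqo33 luqo33
# WORKDIR creates /var/www/app (and /var/www) if it does not exist yet
WORKDIR /var/www/app
RUN chown luqo33:www-data /var/www/app
VOLUME /var/www/app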
Apart from the previous use cases, you can also use Docker Compose to create directories in case you want to make new dummy folders on docker-compose up:
volumes:
- .:/ftp/
- /ftp/node_modules
- /ftp/files
Also, if you want to create multiple directories and then change their owner, you can use:
RUN set -ex && bash -c 'mkdir -p /var/www/{app1,app2}' && bash -c 'chown luqo33:www-data /var/www/{app1,app2}'
Just this!
