I am trying to add a non-root user to my Docker container. Thanks to some great previous posts, I believe I managed to create a valid non-root user with the Dockerfile attached below.
I then attempted to create a .bashrc in that user's home directory through the same Dockerfile. The image builds without any problems; however, when I run the resulting image, the .bashrc is not there under $HOME_DIR. As a note, the creation of the user and the home directory works fine.
Could you please tell me why this doesn't work?
FROM ubuntu:20.04
ENV TZ=Asia/Tokyo
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
ENV DEBCONF_NOWARNINGS=yes
ARG USERNAME=test
ARG GROUPNAME=test
ARG UID=1004
ARG GID=1004
ARG PASSWORD=test
ARG HOME_DIR=/home/$USERNAME/
RUN groupadd -g $GID $GROUPNAME && \
useradd -m -s /bin/bash -u $UID -g $GID -G sudo $USERNAME && \
echo $USERNAME:$PASSWORD | chpasswd
USER $USERNAME
WORKDIR $HOME_DIR
####
## The following lines do not work for me
####
RUN echo test >> $HOME_DIR/.bashrc
EDIT: adding command used to run the image:
sudo docker run --name [container name] \
-v /home/test/Work:/home/test -i -t --shm-size 30G [image name]
When you build the image, you create a file called .bashrc in /home/test.
However, when you run the image, you map a directory on the host to /home/test.
When you do that, all the files that the image has in /home/test are hidden, replaced by the contents of the directory you map into the container.
You can verify that the file is in the image by running it without mapping the directory to /home/test; then you will see the .bashrc file in /home/test.
A solution could be to have the .bashrc file in the /home/test/Work directory on the host machine.
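One hedged way to apply that suggestion (the container and image names here are placeholders, and the paths are taken from the question): mount the host directory one level deeper, so the image's /home/test/.bashrc is no longer shadowed.

```shell
# Sketch: bind the host folder onto a subdirectory of the home directory
# instead of the whole /home/test, so the image's .bashrc stays visible.
sudo docker run --name mycontainer \
  -v /home/test/Work:/home/test/Work \
  -i -t --shm-size 30G myimage
```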
I have built a local UAA Docker image and tried to run it locally, but I am getting an error when I try to start the container.
I built the image with the command below, and the build succeeds:
docker build -t uaa-local --build-arg uaa_yml_name=local.yml .
When I try to run the local UAA Docker image, I get the error below. What am I doing wrong?
Content of the Dockerfile:
FROM openjdk:11-jre
ARG uaa_yml_name=local.yml
ENV UAA_CONFIG_PATH /uaa
ENV CATALINA_HOME /tomcat
ADD run.sh /tmp/
ADD conf/$uaa_yml_name /uaa/uaa.yml
RUN chmod +x /tmp/run.sh
RUN wget -q https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.57/bin/apache-tomcat-8.5.57.tar.gz
RUN tar zxf apache-tomcat-8.5.57.tar.gz
RUN rm apache-tomcat-8.5.57.tar.gz
RUN mkdir /tomcat
RUN mv apache-tomcat-8.5.57/* /tomcat
RUN rm -rf /tomcat/webapps/*
ADD dist/cloudfoundry-identity-uaa-74.22.0.war /tomcat/webapps/
RUN mv /tomcat/webapps/cloudfoundry-identity-uaa-74.22.0.war /tomcat/webapps/ROOT.war
RUN mkdir -p /tomcat/webapps/ROOT && cd /tomcat/webapps/ROOT && unzip ../ROOT.war
ADD conf/log4j2.properties /tomcat/webapps/ROOT/WEB-INF/classes/log4j2.properties
RUN rm -rf /tomcat/webapps/ROOT.war
EXPOSE 8080
CMD ["/tmp/run.sh"]
On further investigation, I think it is looking for the run.sh file in the /tmp/ folder, which is added on line 5 of the Dockerfile. But when I checked for the file in the /tmp/ folder, it was not there. Is that the cause, and how do I resolve it? I do have run.sh in my current folder.
I'm trying a simple workflow without success, and it has taken me a lot of time to test the many solutions on SO and GitHub. Permissions for named volumes, and volume permissions in Docker more generally, are a nightmare (link1, link2), in my opinion.
So I'm restarting from scratch, trying to create a simple proof of concept for my use case.
I want this general workflow:
a user on Windows and/or Linux builds the Dockerfile
the user runs the container (if possible, not as root)
the container launches a crontab which runs a script writing to the data volume each minute
users (on Linux or Windows) get the results from the data volume (not as root) because permissions are correctly mapped
I use supercronic because it runs a crontab in a container without root permissions.
The Dockerfile:
FROM artemklevtsov/r-alpine:latest as baseImage
RUN mkdir -p /usr/local/src/myscript/
RUN mkdir -p /usr/local/src/myscript/result
COPY . /usr/local/src/myscript/
WORKDIR /usr/local/src/myscript/
RUN echo http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories
RUN apk --no-cache add busybox-suid curl
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.$
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=9aeb41e00cc7b71d30d33c57a2333f2c2581a201
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
CMD ["supercronic", "crontab"]
The crontab file:
* * * * * sh /usr/local/src/myscript/run.sh > /proc/1/fd/1 2>&1
The run.sh script
#!/bin/bash
name=$(date '+%Y-%m-%d-%s')
echo "some data for the file" >> ./result/fileName$name
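As a side note on the filename scheme run.sh uses: %s in the date format is seconds since the epoch, so each run produces a unique name. A small local sketch (no Docker needed):

```shell
# Build the same timestamped name run.sh would, e.g. fileName2024-05-01-1714500000
name=$(date '+%Y-%m-%d-%s')
echo "fileName$name"
```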
The commands:
# create the volume for result, uid/gid option are not possible for windows
docker volume create --name myTestVolume
docker run --mount type=volume,source=myTestVolume,destination=/usr/local/src/myscript/result test
docker run --rm -v myTestVolume:/alpine_data -v $(pwd)/local_backup:/alpine_backup alpine:latest tar cvf /alpine_backup/scrap_data_"$(date '+%y-%m-%d')".tar /alpine_data
When I do this, the local_backup result folder and the files it contains have root:root permissions, so the user who launched the container cannot access the files.
Is there a solution that lets Windows/Linux/macOS users who launch the same script easily access the files in the volume without permission problems?
EDIT 1:
The strategy first described here only works with bind-mounted volumes, not named volumes. We use an entrypoint.sh to chown the container's folders to the uid/gid given to docker run.
Here is the modified Dockerfile:
FROM artemklevtsov/r-alpine:latest as baseImage
RUN mkdir -p /usr/local/src/myscript/
RUN mkdir -p /usr/local/src/myscript/result
COPY . /usr/local/src/myscript/
ENTRYPOINT [ "/usr/local/src/myscript/entrypoint.sh" ]
WORKDIR /usr/local/src/myscript/
RUN echo http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories
RUN apk --no-cache add busybox-suid curl su-exec
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.$
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=9aeb41e00cc7b71d30d33c57a2333f2c2581a201
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
CMD ["supercronic", "crontab"]
The entrypoint.sh
#!/bin/sh
set -e
addgroup -g $GID scrap && adduser -s /bin/sh -D -G scrap -u $UID scrap
if [ "$(whoami)" = "root" ]; then
chown -R scrap:scrap /usr/local/src/myscript/
chown --dereference scrap "/proc/$$/fd/1" "/proc/$$/fd/2" || :
# hand off to the CMD with all its arguments ("$@", not "$#")
exec su-exec scrap "$@"
fi
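One detail worth checking in entrypoint scripts like this: the final hand-off must use "$@" (all positional parameters), not "$#" (their count), which copy-pasting often mangles. A tiny illustration, using `set --` to simulate the arguments a container CMD would pass:

```shell
# Simulate the CMD arguments an entrypoint receives
set -- supercronic crontab
echo "count: $#"   # -> count: 2           ("$#" is only the number of args)
echo "args: $@"    # -> args: supercronic crontab  ("$@" is the args themselves)
```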
The procedure to build, launch, and export:
docker build . --tag=test
docker run -e UID=1000 -e GID=1000 --mount type=volume,source=myTestVolume,destination=/usr/local/src/myscript/result test
docker run --rm -v myTestVolume:/alpine_data -v $(pwd)/local_backup:/alpine_backup alpine:latest tar cvf /alpine_backup/scrap_data_"$(date '+%y-%m-%d')".tar /alpine_data
EDIT 2:
For Windows, using Docker Toolbox and a bind-mounted volume, I found the answer on SO. I use the c:/Users/MyUsers folder for binding; it's simpler.
docker run --name test -d -e UID=1000 -e GID=1000 --mount type=bind,source=/c/Users/myusers/localbackup,destination=/usr/local/src/myscript/result dockertest --name rflightscraps
Result of the investigation:
crontab runs with the scrap user [OK]
UID/GID of the local user are mapped to the container user scrap [OK]
exported data are still owned by root [NOT OK]
Windows / Linux [HALF OK]
If I use a bind-mounted volume instead of a named volume, it works. But this is not the desired behavior: how can I use a named volume with correct permissions on Windows/Linux?
Let me divide the answer into two parts, a Linux part and a Docker part. You need to understand both in order to solve this problem.
Linux Part
It is easy to run cron jobs as a user other than root in Linux.
This can be achieved by creating a user in the Docker container with the same UID as the host user and copying the crontab file to /var/spool/cron/crontabs/user_name.
From man crontab
crontab is the program used to install, deinstall or list the
tables used to drive the cron(8) daemon in Vixie Cron. Each user can
have their own crontab, and though these are files in
/var/spool/cron/crontabs, they are not intended to be edited directly.
Since Linux identifies users by user ID, inside Docker the UID will be bound to the newly created user, whereas on the host machine the same UID is bound to the host user.
So you don't have any permission issue, as the files are owned by host_user. Now you can see why I mentioned creating a user with the same UID as on the host machine.
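The point above can be checked locally (pure shell, no Docker; assumes GNU stat, as on most Linux systems): the kernel records a file's owner as a numeric UID, which is exactly what `id -u` prints for the current user.

```shell
# A freshly created file reports the same numeric owner as `id -u`
tmp=$(mktemp)
owner=$(stat -c %u "$tmp")   # numeric UID that owns the new file
[ "$owner" = "$(id -u)" ] && echo "owner matches"
rm -f "$tmp"
```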
Docker Part
Docker treats all directories (or layers) as a UNION FILE SYSTEM. Whenever you build an image, each instruction creates a layer, and the layer is marked read-only. This is the reason Docker containers don't persist data. So you have to explicitly tell Docker that some directories need to persist data, using the VOLUME keyword.
You can run containers without mentioning a volume explicitly. If you do so, the Docker daemon considers the directories part of the union file system and resets their permissions.
In order to preserve changes to a file or directory, including ownership, the respective directory should be declared as a VOLUME in the Dockerfile.
From UNION FILE SYSTEM
Indeed, when a container has booted, it is moved into memory, and the boot filesystem is unmounted to free up the RAM used by the initrd disk image. So far this looks pretty much like a typical Linux virtualization stack. Indeed, Docker next layers a root filesystem, rootfs, on top of the boot filesystem. This rootfs can be one or more operating systems (e.g., a Debian or Ubuntu filesystem).
Docker calls each of these filesystems images. Images can be layered on top of one another. The image below is called the parent image and you can traverse each layer until you reach the bottom of the image stack where the final image is called the base image. Finally, when a container is launched from an image, Docker mounts a read-write filesystem on top of any layers below. This is where whatever processes we want our Docker container to run will execute. When Docker first starts a container, the initial read-write layer is empty. As changes occur, they are applied to this layer; for example, if you want to change a file, then that file will be copied from the read-only layer below into the read-write layer. The read-only version of the file will still exist but is now hidden underneath the copy.
Example:
Let us assume that we have a user called host_user with UID 1000. Now we are going to create a user called docker_user in the Docker container and assign him UID 1000. Now whatever files are owned by docker_user in the Docker container are also owned by host_user, if those files are accessible by host_user from the host (i.e. through volumes).
Now you can share the bound directory with others without any permission issues. You can even give 777 permissions on the corresponding directory, which allows others to edit the data. Or you can leave 755 permissions, which allows others to copy and read but only the owner to edit.
I've declared the directory that should persist changes as a volume. This preserves all changes. Be careful: once you declare a directory as a volume, further changes made to it while building the image are ignored, as those changes will be in separate layers. Hence, make all your changes in the directory first, and then declare it as a volume.
Here is the Docker file.
FROM alpine:latest
ARG ID=1000
#UID as arg so we can also pass custom user_id
ARG CRON_USER=docker_user
#same goes for username
COPY crontab /var/spool/cron/crontabs/$CRON_USER
RUN adduser -g "Custom Cron User" -DH -u $ID $CRON_USER && \
chmod 0600 /var/spool/cron/crontabs/$CRON_USER && \
mkdir /temp && \
chown -R $ID:$ID /temp && \
chmod 777 /temp
VOLUME /temp
#Specify the dir to be preserved as Volume else docker considers it as Union File System
ENTRYPOINT ["crond", "-f", "-l", "2"]
Here is the crontab
* * * * * /usr/bin/whoami >> /temp/cron.log
Building the image
docker build . -t test
Create new volume
docker volume create --name myTestVolume
Run with Data volume
docker run --rm --name test -d -v myTestVolume:/usr/local/src/myscript/result test:latest
Whenever you mount myTestVolume into another container, you will see that the data under /usr/local/src/myscript/result is owned by UID 1000 if no user with that UID exists in that container, or by the username of the corresponding UID.
Run with Bind volume
docker run --rm --name test -d -v $PWD:/usr/local/src/myscript/result test:latest
When you do an ls -al /home/host_user/temp, you will see that a file called cron.log has been created and is owned by host_user.
The same file is owned by docker_user in the Docker container when you do an ls -al /temp. The contents of cron.log will be docker_user.
So, Your effective Dockerfile should be
FROM artemklevtsov/r-alpine:latest as baseImage
ARG ID=1000
ARG CRON_USER=docker_user
RUN adduser -g "Custom Cron User" -DH -u $ID $CRON_USER && \
chmod 0600 /var/spool/cron/crontabs/$CRON_USER && \
echo http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories && \
apk --no-cache add busybox-suid curl && \
mkdir -p /usr/local/src/myscript/result && \
chown -R $ID:$ID /usr/local/src/myscript/result && \
chmod 777 /usr/local/src/myscript/result
COPY crontab /var/spool/cron/crontabs/$CRON_USER
COPY . /usr/local/src/myscript/
VOLUME /usr/local/src/myscript/result
#This preserves chown and chmod changes.
WORKDIR /usr/local/src/myscript/
ENTRYPOINT ["crond", "-f", "-l", "2"]
Now whenever you attach a data/bind volume to /usr/local/src/myscript/result, it will be owned by the user having UID 1000, and the same is persistent across all containers that mount the same volume, with their corresponding UID-1000 user as the file owner.
Please note: I've given 777 permissions in order to share with everyone. You can skip that step in your Dockerfile based on your convenience.
References:
Crontab manual.
User identifier - Wiki.
User ID Definition.
About storage drivers.
UNION FILE SYSTEM.
I'm trying to create a multi-stage build in Docker which simply runs a non-root crontab that writes to a volume accessible from outside the container. I have two permission problems: with external access to the volume, and with cron:
the first build stage in the Dockerfile creates a non-root user image with an entrypoint and su-exec, useful to fix permissions on the volume;
the second stage in the same Dockerfile uses the first image to run a crond process which normally writes to the /backup folder.
The docker-compose.yml file to build the dockerfile:
version: '3.4'
services:
scrap_service:
build: .
container_name: "flight_scrap"
volumes:
- /home/rey/Volumes/mongo/backup:/backup
In the first stage of the Dockerfile (1), I try to adapt the answer given by Denis Bertovic to an Alpine image:
############################################################
# STAGE 1
############################################################
# Create first stage image
FROM gliderlabs/alpine:edge as baseStage
RUN echo http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories
RUN apk add --update && apk add -f gnupg ca-certificates curl dpkg su-exec shadow
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
# ADD NON ROOT USER, i hard fix value to 1000, my current id
RUN addgroup scrapy \
&& adduser -h /home/scrapy -u 1000 -S -G scrapy scrapy
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
My docker-entrypoint.sh to fix permissions is:
#!/usr/bin/env bash
chown -R scrapy .
# hand off to the CMD with all its arguments ("$@", not "$#")
exec su-exec scrapy "$@"
The second stage (2) runs the cron service, writing into the /backup folder mounted as a volume:
############################################################
# STAGE 2
############################################################
FROM baseStage
MAINTAINER rey
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apk add busybox-suid
RUN apk add -f tini bash build-base curl
# CREATE FUTURE VOLUME FOLDER WRITEABLE BY SCRAPY USER
RUN mkdir /backup && chown scrapy:scrapy /backup
# INIT NON ROOT USER CRON CRONTAB
COPY crontab /var/spool/cron/crontabs/scrapy
RUN chmod 0600 /var/spool/cron/crontabs/scrapy
RUN chown scrapy:scrapy /var/spool/cron/crontabs/scrapy
RUN touch /var/log/cron.log
RUN chown scrapy:scrapy /var/log/cron.log
# Switch to user SCRAPY already created in stage 1
WORKDIR /home/scrapy
USER scrapy
# SET TIMEZONE https://serverfault.com/questions/683605/docker-container-time-timezone-will-not-reflect-changes
VOLUME /backup
ENTRYPOINT ["/sbin/tini"]
CMD ["crond", "-f", "-l", "8", "-L", "/var/log/cron.log"]
The crontab file which normally create a test file into /backup volume folder:
* * * * * touch /backup/testCRON
DEBUG phase:
Logging into my image with bash, it seems the image correctly runs as the scrapy user:
uid=1000(scrapy) gid=1000(scrapy) groups=1000(scrapy)
The crontab -e command also gives the correct information.
But the first error: cron doesn't run correctly; when I cat /var/log/cron.log I get a permission denied error:
crond: crond (busybox 1.27.2) started, log level 8
crond: root: Permission denied
crond: root: Permission denied
I also have a second error when I try to write directly into the /backup folder using the command touch /backup/testFile. The /backup volume folder is still only accessible with root permissions, and I don't know why.
crond or cron should be run as root, as described in this answer.
Check out aptible/supercronic instead, a crontab-compatible job runner designed specifically to run in containers. It will accommodate any user you have created.
When I try to run a GUI app, like xclock for example, I get the error:
Error: Can't open display:
I'm trying to use Docker to run a ROS container, and I need to see the GUI applications that run inside of it.
I did this once using just a Vagrant VM and was able to use X11 to get it done.
So far I've tried putting approaches #1 and #2 into a Dockerfile based on the info here:
http://wiki.ros.org/docker/Tutorials/GUI
Then I tried copying most of the Dockerfile here:
https://hub.docker.com/r/mjenz/ros-indigo-gui/~/dockerfile/
Here's my current docker file:
# Set the base image to use to ros:kinetic
FROM ros:kinetic
# Set the file maintainer (your name - the file's author)
MAINTAINER me
# Set ENV for x11 display
ENV DISPLAY $DISPLAY
ENV QT_X11_NO_MITSHM 1
# Install an x11 app like xclock to test this
RUN apt-get update
RUN apt-get install x11-apps --assume-yes
# Stuff I copied to make a ros user
ARG uid=1000
ARG gid=1000
RUN export uid=${uid} gid=${gid} && \
groupadd -g ${gid} ros && \
useradd -m -u ${uid} -g ros -s /bin/bash ros && \
passwd -d ros && \
usermod -aG sudo ros
USER ros
WORKDIR /home/ros
# Sourcing this before .bashrc runs breaks ROS completions
RUN echo "\nsource /opt/ros/kinetic/setup.bash" >> /home/ros/.bashrc
# Copy entrypoint script into the image, this currently echos hello world
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
My personal preference is to inject the display variable and share the unix socket or X windows with something like:
docker run -it --rm -e DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v /etc/localtime:/etc/localtime:ro \
my-gui-image
Sharing localtime just allows the timezone to match up as well; I've been using this for email apps.
The other option is to spin up a VNC server, run your app on that server, and then connect to the container with a VNC client. I'm less of a fan of that one, since you end up with two processes running inside the container, making signal handling and logs a challenge. It does have the advantage that the app is better isolated, so if it is hacked, it doesn't have access to your X display.
I have the following in my Dockerfile. (There is much more, but I have pasted the relevant part here.)
RUN useradd jenkins
USER jenkins
# Maven settings
RUN mkdir ~/.m2
COPY settings.xml ~/.m2/settings.xml
The docker build goes through fine, and when I run the docker image, I see NO errors.
But I do not see a .m2 directory created at /home/jenkins/.m2 in the host filesystem.
I also tried replacing ~ with /home/jenkins, and still I do not see .m2 being created.
What am I doing wrong?
Thanks
I tried something similar and got:
Step 4 : RUN mkdir ~/.m2
 ---> Running in 9216915b2463
mkdir: cannot create directory '/home/jenkins/.m2': No such file or directory
Your useradd is not enough to create /home/jenkins.
Here is what I do for my user gg:
RUN useradd -d /home/gg -m -s /bin/bash gg
RUN echo gg:gg | chpasswd
RUN echo 'gg ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers.d/gg
RUN chmod 0440 /etc/sudoers.d/gg
USER gg
ENV HOME /home/gg
WORKDIR /home/gg
This creates the home directory of the user gg.
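Applied back to the original question, a minimal sketch of the fix (untested; paths taken from the question, and COPY --chown assumes a reasonably recent Docker):

```dockerfile
# -m creates /home/jenkins, so ~/.m2 has somewhere to live
RUN useradd -m -s /bin/bash jenkins
USER jenkins
WORKDIR /home/jenkins
# COPY does not expand ~, so use the absolute path;
# --chown makes the created .m2 directory owned by jenkins
COPY --chown=jenkins:jenkins settings.xml /home/jenkins/.m2/settings.xml
```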