Docker container volumes are not working as expected

I am trying to run the following command:
docker run -it --rm -v $(pwd):/var/www/html --user node node:12.13.1-alpine ash
Expected result
The files inside the container (i.e. /var/www/html) should be owned by the node user.
Actual result
The files inside the container show the same owner as on the host.
Also, I can't create a directory inside the container.
It is working for my other colleagues. So, any help in this would be much appreciated.
Many thanks,
Alwin
Note:
Docker version 19.03.7, build 7141c199a2
I have added the necessary permissions to the docker command so that it doesn't need sudo to run.

Running docker run with --user does not change the ownership or permissions of existing files. From the Docker reference:
The developer can set a default user to run the first process with the Dockerfile USER instruction. When starting a container, the operator can override the USER instruction by passing the -u option.
It only overrides the user running Node.js inside the container. During the mount, the original permissions and owner of /var/www/html are unchanged. Verify this with ls -n and check whether the UID of the folder's owner is the same when mounted inside Docker; make sure that UID matches the node user you specified.
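For example, a quick check could look like this (paths and image tag taken from the question; the exact output depends on your host):
# On the host: show the numeric owner of the project directory
ls -nd "$(pwd)"
# Inside the container: show the mounted directory's owner and the node user's UID
docker run --rm -v "$(pwd)":/var/www/html node:12.13.1-alpine \
    sh -c 'ls -nd /var/www/html && id node'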
I don't know how it works on your colleagues' computers, though. That's why it's important to use the UID/GID instead of just a username: the same username in the container can have a different UID than on the host.
EDIT: I checked that the node image you use contains a node user with UID 1000. The first user created on a Linux host usually also has UID 1000. So if /var/www/html is owned by UID 1000, it will work. But UID 1000 can belong to different usernames in Docker and on the host. Because you specified --user node, which translates to UID 1000 inside the container (since the username node exists there), it won't work if /var/www/html is owned by a different UID on your host, which is probably your case.
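If that is your case, one common workaround (a sketch, not the only option) is to pass your host UID/GID directly instead of a username, so the container process matches the owner of the mounted files:
docker run -it --rm \
    -v "$(pwd)":/var/www/html \
    --user "$(id -u):$(id -g)" \
    node:12.13.1-alpine ash
Note that with a numeric UID that has no entry in the container's /etc/passwd, things like whoami and $HOME lookups may fail, but file access will work.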

You have to add a USER instruction to the Dockerfile before building:
# Run the app as the normal (non-root) node user
USER node
Now when you run the docker image, it will run as the normal node user.
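To verify the effective user after building, you can override the command with id (myapp is just a placeholder tag for this example):
docker build -t myapp .
docker run --rm myapp id
# should print something like: uid=1000(node) gid=1000(node)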

Related

Root User Docker

I understand that it's considered a bad security practice to run Docker images as root, but I have a specific situation that I wanted to pass by the community to see if anyone can help.
We are currently using a pipeline on an Amazon Linux 2 instance with a single user called ec2-user. Unfortunately, a lot of the scripts we're using for our pipeline have hard-coded paths baked in (notably /home/ec2-user/) ... which may or may not reference the $HOME variable.
I've been talking to one of the engineers that is building a Docker image for our pipeline and suggested that he creates a new user entirely so root user isn't running our pipeline.
For example:
# add clip user
RUN groupadd -r clip && useradd -r -g clip clip
# disable root
RUN chsh -s /usr/sbin/nologin root
# set environment variables
ENV HOME /home/clip
ENV DEBIAN_FRONTEND noninteractive
However, the engineer mentioned that the clip user inside the container will have some uid that may or may not exist in the host machine. For example, if the clip user had uid 1001 in the container, but 1001 was john in the host, all the files created as the clip user inside the container would be owned by john on the outside.
Further, he is more concerned about the situation where the clip user has a uid in the container that doesn’t exist in the host’s passwd. In that case files created by the clip user in the container would be owned by a bare unassociated uid on the host.
If we decided to pass in ids from the host as the user/group to run the image, the kernel would be fine with it (it's the same kernel as the host's), and when all is said and done, files created inside the container would be owned by the user/group you pass in. However, the container wouldn't know who that user/group is, so it would just use the raw ids, and stuff like $HOME or whoami wouldn't work.
With that said, we're curious if anyone else has experienced these problems and if anyone has found solutions?
Everything you say is totally normal. The container has its own /etc/passwd file, so a given numeric user ID might map to a different user name (or to none at all) on the host and in the container. Beyond some cosmetic issues around debug shells, it usually doesn't matter whether the current numeric uid is actually present in the container's /etc/passwd, and there's no reason a container uid would need to be mapped in the host's /etc/passwd.
Note that there are a couple of ways to directly assume another user ID in Docker, either using the docker run -u option or the Dockerfile USER directive. The RUN chsh command you propose doesn't really do anything and doesn't prevent becoming root inside a container.
clip user inside the container will have some uid that may or may not exist in the host machine.
True, totally normal.
For example, if the clip user had uid 1001 in the container, but 1001 was john in the host, all the files created as the clip user inside the container would be owned by john on the outside.
This is partially true, but only in the case where you've explicitly mapped a host directory into the container with a docker run -v option. Otherwise, the host user with uid 1001 won't be able to navigate to the /var/lib/docker/... directory that actually contains the container files, so it doesn't matter that they could hypothetically write them.
The more usual case around this is to explicitly supply a host uid so that the container process can save its state in a mapped host directory. Pass a numeric uid to the docker run -u option; there's no particular need for that uid to exist in the container's /etc/passwd.
docker run \
-u $(id -u) \
-v "$PWD/data:/data" \
...
the container wouldn’t know who that user/group are, so it’ll just use the raw ids, and stuff like $HOME or whoami won’t work.
Unless your application explicitly calls these things, they won't usually matter. "Home directory" is a pretty poorly defined concept in a Docker container since it's usually a wrapper around a single process.
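If your application does need a usable $HOME while running as an arbitrary uid, one workaround (a sketch) is to point HOME at a writable directory explicitly:
docker run \
    -u "$(id -u)" \
    -e HOME=/tmp \
    -v "$PWD/data:/data" \
    ...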

Google Cloud docker image file getting deleted

I am running a docker image for Jupyter and TensorBoard. The data seems to get deleted every time the VM instance is stopped. Is there a way to stop this from happening? I couldn't find anything on the web that would let me do this.
TL;DR: You are not persisting your data.
Docker containers do not persist data out of the box; you need to explicitly tell Docker to persist any data created inside the container when the container is deleted.
You can read more at Use volumes page at Docker documentation.
If you want to persist data, you need to follow these steps:
Create a local folder inside the VM where you want to persist data (this command should be executed on the GCE instance):
mkdir -p /opt/data/jupyterdata
Set the correct ownership of the folder to the user id used inside your container. For example, let's imagine that your container lspvic/tensorboard-notebook runs the application as the user tensorflow with UID 1500. You then need to set the ownership of your folder to UID 1500:
chown 1500:1500 /opt/data/jupyterdata -R
Modify your docker run command to mount the local directory as a volume inside the container. For example, let's imagine that inside your container you want to save the files at /var/lib/jupyter (this is an example); you then need to modify the docker run command as follows:
docker run -it --rm -p 8888:8888 \
-v /opt/data/jupyterdata:/var/lib/jupyter:Z \
lspvic/tensorboard-notebook
NOTE: the :Z option is needed to avoid SELinux issues
With these steps, data saved in /var/lib/jupyter inside the container will be stored in /opt/data/jupyterdata on the VM, so there is no more data loss.
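You can sanity-check the mapping before relying on it, here with a throwaway busybox container (any small image would do; 1500 is the example UID from step 2):
# Write a file from inside a container...
docker run --rm --user 1500:1500 \
    -v /opt/data/jupyterdata:/test:Z \
    busybox touch /test/testfile
# ...and confirm it survived on the VM after the container exited
ls -l /opt/data/jupyterdata/testfile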

Understanding Docker user/uid creation

Even after going through a lot of material and SO answers, I'm still not clear on docker uid/user usage and implementation.
I understand the below points:
An instance of an image is called a container.
uid/gid are maintained by the underlying kernel, not by the container.
The kernel understands uid/gid numbers, not usernames/groupnames; a name is just a human-readable alias.
All containers are processes maintained by docker daemon and will be visible as process in host machine (ps -ef)
root (id = 0) is the default user within a container and this can be changed either by USER instruction in Dockerfile or by passing -u flag in docker run
With all of the above said, when I have the below command in my Dockerfile, I presume that a new user (my-user) will be created with an incremented uid.
RUN addgroup my-group && adduser -D my-user -G my-group
What happens if I run the same image multiple times i.e multiple containers? Will the same uid be assigned to all processes?
What happens if I add the same command above to another image and run that image as a container? Will I get a new uid or the same uid as the previous one?
How does the uid increment happen in the container in relation to the host machine?
Any pointers would be helpful.
Absent user namespace remapping, there are only two things that matter:
What the numeric user ID is; and
What's in the /etc/passwd file.
Remember that each container and the host have separate filesystems, so each of these things could have separate /etc/passwd files.
What happens if I run the same image multiple times i.e multiple containers? Will the same uid be assigned to all processes?
Yes, because each container gets a copy of the same /etc/passwd file from the image.
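You can see this directly by asking several containers from the same image for the user's UID (my-image is a placeholder for an image built with the adduser line above):
docker run --rm my-image id -u my-user
docker run --rm my-image id -u my-user
# both commands print the same number (e.g. 1000), because both
# containers read the same /etc/passwd baked into the image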
What happens if I add same above command in another image and run that image as container? - will I get new uid or same uid as the previous one?
It depends on what adduser actually does; it could be the same or different.
How the uid increment happens in Container in relation with the host machine.
They're completely and totally independent.
Also remember that you can docker push/docker pull a built image to run it on a different host. That will bring the image's /etc/passwd file along with it, but the host environment could be totally different. Correspondingly, it's not a best practice to try to match some specific host's uid mapping in a Dockerfile, because it will be wrong if you try to run the same image anywhere else.
When you add a user in a RUN statement, it does not create a user on the host. If you do not specify a user with the USER statement in your Dockerfile or the -u flag when starting the container (assuming the parent Dockerfiles also do not include a USER statement), the container process on the host will simply run as root, provided you started the docker daemon as root.
So if you create a user using RUN addgroup my-group && adduser -D my-user -G my-group, it simply creates a user inside the container, i.e. the user is local to the container. Each instance (container) of that image you run will have the same uid for that user inside the container. Note: that user will not exist on the host.
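A simple way to convince yourself that the user is container-local (my-image again being a placeholder for an image built with that RUN line):
# Inside the container the user exists...
docker run --rm my-image grep my-user /etc/passwd
# ...but on the host it does not
grep my-user /etc/passwd || echo "no such user on the host"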
If you want to run the container process on the host as another user (one that exists on the host), then you have 3 options:
Add a USER statement in the Dockerfile
Use the -u flag while running the container
You can use docker's user namespace feature
I highly recommend understanding the user namespace and mappings by reading this documentation: Isolate containers with a user namespace
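For completeness: user namespace remapping is a daemon-level setting. Per that documentation, a minimal /etc/docker/daemon.json using the default mapping looks like this (the daemon must be restarted afterwards):
{
  "userns-remap": "default"
}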

Making docker container write files that the host machine can delete

I have a docker-based build environment - in order to build my project, I run a docker container with the --volume parameter, so it can access my project directory and build it.
The problem is that the files created by the container cannot be deleted by the host machine. The only workaround I currently have is to start an interactive container with the directory mounted and delete it.
Bottom line question: is it possible to make docker write files to the mounted area with permissions such that the host can later delete them?
This has less to do with Docker and more to do with basic Unix file permissions. Your docker containers are running as root, which means any files created by the container are owned by root on your host. You fix this the way you fix any other file permission problem: by (a) ensuring that the files/directories are created with your user id, or (b) ensuring that the permissions allow you to delete the files even if they're not owned by you, or (c) using elevated privileges (e.g., sudo rm ...) to delete the files.
Depending on what you're doing, option (a) may be easy. If you can run the container as a non-root user, e.g.:
docker run -u $UID -v $HOME/output:/some/container/path ...
...then everything will Just Work, because the files will be created with your userid.
If the container must run as root initially, you may be able to take care of root actions in your ENTRYPOINT or CMD script, and then switch to another uid to run the main application. To do this, you would need to pass your user id into the container (e.g., as an environment variable), and then later use something like runuser to switch to the new userid:
exec runuser -u $TARGET_UID /some/command
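Putting that together, a minimal entrypoint sketch might look like the following (TARGET_UID is assumed to be passed in, e.g. with docker run -e TARGET_UID=$(id -u); the chowned path is illustrative):
#!/bin/sh
# entrypoint.sh: do root-only setup, then drop privileges
set -e
# root-level preparation, e.g. fixing ownership of the mounted output directory
chown -R "$TARGET_UID" /some/container/path
# hand off to the real command as the unprivileged uid
exec runuser -u "$TARGET_UID" -- "$@"
Note that runuser expects a user the container's /etc/passwd can resolve; tools like gosu or su-exec also accept raw numeric uids.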
If neither of the above is an option, then sudo rm -rf mydirectory should work just as well as spinning up an interactive container.
If you only need the build artifacts in order to put them into the docker image in a later stage, it is probably worth using a multi-stage build.
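For instance, a minimal multi-stage sketch (image names, paths, and build commands here are illustrative, not taken from the question):
# Stage 1: build inside a throwaway image; artifacts never touch the host
FROM node:12.13.1-alpine AS build
WORKDIR /src
COPY . .
RUN npm ci && npm run build
# Stage 2: copy only the artifacts into the final runtime image
FROM node:12.13.1-alpine
COPY --from=build /src/dist /app
CMD ["node", "/app/index.js"]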

How to give docker container write/chmod permissions on mapped volume?

I have a synology NAS which has docker support and wanted to run some docker containers (I'm pretty new to Docker) on it. For example pocketmine-pm (but I believe I have the write issue also with other containers).
I created a volume on the host and mapped this in the container settings. (And in the synology docker settings for the volume mapping I did not click on "read only").
According to the Dockerfile, a new user 'pocketmine' is created inside the container and this user is used to start the server. The user seems to have user ID 1000 (the first UID assigned to new Linux users). The container also uses an entrypoint.sh script to start the server.
Initially the container was not able to write files to the mapped directory. I had to SSH into the host and chown the directory to UID 1000:
sudo chown 1000:1000 /volXy/docker/pocketminemp -R
After that the archive could be downloaded and extracted.
Unfortunately I was not able to connect to the server from my iOS device. The server is listed as 'online' but the connection fails without any specific message. I then checked the logs of the container and saw the following entries (not sure if this really prevents the connection but I will give it a try):
[*] Everything done! Run ./start.sh to start PocketMine-MP
chown: changing ownership of '/pocketmine/entrypoint.sh': Operation not permitted
chown: changing ownership of '/pocketmine/server.properties.original': Operation not permitted
Loading pocketmine.yml...
Apparently the container cannot chown a file it was previously able to download.
Does anybody know what can be done to fix this? Do I need to chmod the mapped volume, and why did I need to chown the directory to UID 1000 (a user that doesn't really exist on the host)? Isn't there a more elegant way to fix the permissions?
When you run the container, you should be able to use the --user="uid:gid" flag to specify the user you wish to run the container as.
Source: https://docs.docker.com/engine/reference/run/#user
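For this particular setup, since the pocketmine user in the image has UID 1000 and the volume is already chowned to 1000:1000, a sketch of a matching run command would be (the image name is a placeholder; substitute the one you actually use, and /pocketmine matches the paths in the log above):
docker run --user 1000:1000 \
    -v /volXy/docker/pocketminemp:/pocketmine \
    your-pocketmine-image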
