Run containers as other user - docker

The running containers are occupying space as the root user. My requirement is that the containers should run as some_user.
I checked online but could not get clarity; each forum says something different. What is the correct method?
PS: I am running Docker on a server and restarting the Docker daemon would be a big deal, so a method that can be specified directly at run time would be ideal.

I found a workaround for this problem.
Docker's data is stored in /var/lib/docker/devicemapper/devicemapper/data,
and that path lives on the root user's volume.
The solution is to move the entire /var directory to another physical location and point the old path at it:
telinit 1                  # drop to single-user mode so /var is not in use
cp -pR /var /home/var      # copy /var (preserving permissions) to the new location
mv /var /var.old
ln -s /home/var /var       # replace /var with a symlink to the copy
telinit 2                  # back to multi-user mode
sudo rm -rf /var.old       # remove the old copy once everything works
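Separately, if the requirement is only that the container's process runs as some_user rather than root (as opposed to moving Docker's storage off the root volume, which the workaround above addresses), the --user flag on docker run does that without restarting the Docker daemon. A minimal sketch, assuming some_user exists on the host, the image can run unprivileged, and the mounted path is illustrative:
# run the container's process with some_user's UID/GID instead of root
docker run --rm \
  --user "$(id -u some_user):$(id -g some_user)" \
  -v /home/some_user/appdata:/data \
  alpine:3 sh -c 'id && touch /data/owned-by-some_user'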

Related

Allow Docker Container & Host User To Write on Bind Mounted Host Directory

Any help from any source is appreciated.
The server has a Docker container with Alpine, nginx, and PHP. This container can write to a bind-mounted host directory only when I run "chown -R nobody directory" on the host directory (nobody is a user in the container).
I am using VSCode's "Remote - SSH" extension to connect to the server as user ubuntu. VSCode can edit files in that same host directory (the one used for the bind mount) only when I run "chown -R ubuntu directory".
Problem: if I set "ubuntu" as the owner, the container can't write (PHP does the writing); if I set "nobody" as the owner, VSCode over SSH can't write. I am looking for a way to allow both to write without switching the directory's owner back and forth, or something similarly convenient.
Image used: https://hub.docker.com/r/trafex/php-nginx
What I tried:
In the container, I added user "nobody" to group "ubuntu". On the host, the mounted directory was set with "sudo chown -R ubuntu:ubuntu directory"; user "ubuntu" was already a member of group "ubuntu".
VSCode could edit, the container could not. (Edit: IT WORKED once I changed the directory's group permissions to allow writing.)
Edit: the container was created without a Dockerfile and has since run and accumulated important changes, so I may not be able to solve this via a Dockerfile or entrypoint.sh. Can it be done by running commands inside the container, without recreating it? The container can be stopped.
Edit: In Triet Doan's answer, one option is to modify the UID and GID of an already-created user in the container. Would doing this for the user and group "nobody" cause any problems inside the container? I ask because many setup commands have already been executed inside the container, files on the mounted directory have already been edited by PHP, and the container has been running for days.
Edit: I found that Alpine has no usermod or groupmod.
This article covers the problem very nicely; I will just summarize the main ideas here.
The easiest way to tackle this permission problem is to change the UID and GID in the container to the same UID and GID used on the host machine.
In your case, we try to get the UID and GID of user ubuntu and use them in the container.
The author suggests 3 ways:
1. Create a new user with the same UID and GID of the host machine in entrypoint.sh.
Here’s the Dockerfile version for Ubuntu base image.
FROM ubuntu:latest
RUN apt-get update && apt-get -y install gosu
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
The entrypoint.sh was created as follows:
#!/bin/bash
USER_ID=${LOCAL_UID:-9001}
GROUP_ID=${LOCAL_GID:-9001}
echo "Starting with UID: $USER_ID, GID: $GROUP_ID"
useradd -u $USER_ID -o -m user
groupmod -g $GROUP_ID user
export HOME=/home/user
exec /usr/sbin/gosu user "$@"
Simply build the image with the docker build command.
docker build -t ubuntu-test1 .
The LOCAL_UID and LOCAL_GID can be passed to the container in the docker run command.
$ docker run -it --name ubuntu-test -e LOCAL_UID=$(id -u $USER) -e LOCAL_GID=$(id -g $USER) ubuntu-test1 /bin/bash
Starting with UID: 1001, GID: 1001
user@1291224a8029:/$ id
uid=1001(user) gid=1001(user) groups=1001(user)
We can see that the UID and GID in the container are the same as those in the host.
2. Mount the host machine’s /etc/passwd and /etc/group to a container
This is also a fine approach and simpler at a glance. One drawback of this approach is that a user newly created in the container can't access the bind-mounted files and directories, because its UID and GID differ from the host machine's.
One must be careful to have /etc/passwd and /etc/group with read-only access, otherwise the container might access and overwrite the host machine’s /etc/passwd and /etc/group. Therefore, the author doesn't recommend this way.
$ docker run -it --name ubuntu-test --mount type=bind,source=/etc/passwd,target=/etc/passwd,readonly --mount type=bind,source=/etc/group,target=/etc/group,readonly -u $(id -u $USER):$(id -g $USER) ubuntu /bin/bash
ether@903ad03490f3:/$ id
uid=1001(user) gid=1001(user) groups=1001(user)
3. Modify UID and GID with the same UID and GID of the host machine
This is mostly the same approach as No. 1, but it modifies the UID and GID of a user that has already been created in the container. Assuming the user was created in the Dockerfile, just call these commands in either the Dockerfile or entrypoint.sh.
If your user and group name were "test", you can use the usermod and groupmod commands to modify the UID and GID in the container. The UID and GID taken from the host machine as environment variables will be applied to this "test" user.
usermod -u $USER_ID -o -m -d <path-to-new-home> test
groupmod -g $GROUP_ID test
Problem: if I set "ubuntu" as owner, container can't write (using php to write), if I set "nobody" as owner, VSCode SSH can't write. I am finding a way to allow both to write without changing directory owner user again and again, or similar ease.
First, I'd recommend the container image should create a new username for the files inside the container, rather than reusing nobody since that user may also be used for other OS tasks that shouldn't have any special access.
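For example, something along these lines during the image build, using Alpine's busybox adduser/addgroup syntax (the name app and the IDs are arbitrary, and the image's nginx/php-fpm configuration would likely need to be pointed at that user as well):
# in the Dockerfile: create a dedicated user for the app's files instead of reusing nobody
RUN addgroup -g 1000 app \
 && adduser -u 1000 -G app -D -H -s /sbin/nologin app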
Next, as Triet suggests, an entrypoint that adjusts the container's user/group to match the volume is preferred. My own version of these scripts can be found in this base image that includes a fix-perms script that makes the user id and group id of the container user match the id's of a mounted volume. In particular, the following lines of that script where $opt_u is the container username, $opt_g is the container group name, and $1 is the volume mount location:
# update the uid
if [ -n "$opt_u" ]; then
  OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
  NEW_UID=$(stat -c "%u" "$1")
  if [ "$OLD_UID" != "$NEW_UID" ]; then
    echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
    usermod -u "$NEW_UID" -o "$opt_u"
    if [ -n "$opt_r" ]; then
      find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
    fi
  fi
fi
# update the gid
if [ -n "$opt_g" ]; then
  OLD_GID=$(getent group "${opt_g}" | cut -f3 -d:)
  NEW_GID=$(stat -c "%g" "$1")
  if [ "$OLD_GID" != "$NEW_GID" ]; then
    echo "Changing GID of $opt_g from $OLD_GID to $NEW_GID"
    groupmod -g "$NEW_GID" -o "$opt_g"
    if [ -n "$opt_r" ]; then
      find / -xdev -group "$OLD_GID" -exec chgrp -h "$opt_g" {} \;
    fi
  fi
fi
Then I start the container as root, and the container runs the fix-perms script from the entrypoint, followed by a command similar to:
exec gosu ${container_user} ${orig_command}
This replaces the entrypoint that's running as root with the application running as the specified user. I've got more examples of this in:
DockerCon presentation
Similar SO questions
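A minimal sketch of such an entrypoint, assuming an image that ships the fix-perms script and gosu, with a container user named app and the volume mounted at /data (all of these names are illustrative; the -u/-g/-r flags correspond to the $opt_u/$opt_g/$opt_r variables quoted above):
#!/bin/sh
set -e
# align app's uid/gid with the owner of the mounted /data volume
fix-perms -r -u app -g app /data
# drop from root to app and hand off to the container's command
exec gosu app "$@"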
What I tried: In Container, I added user "nobody" to group "ubuntu".
On host, directory (used as mount) was set "sudo chown -R
ubuntu:ubuntu directory", user "ubuntu" was already added to group
"ubuntu". VSCode did edit, container was unable to edit.
I'd avoid this and create a new user. The nobody user is designed to be as unprivileged as possible, so there could be unintended consequences from giving it more access.
Edit: the container already created without Dockerfile also ran and
maybe edited with important changes, so maybe I can't use Dockerfile
or entrypoint.sh way to solve problem. Can It be achieved through
running commands inside container or without creating container again?
This container can be stopped.
This is a pretty big code smell in containers. They should be designed to be ephemeral. If you can't easily replace them, you're missing the ability to upgrade to a newer image, and creating a lot of state drift that you'll eventually need to cleanup. Your changes that should be preserved need to be in a volume. If there are other changes that would be lost when the container is deleted, they will be visible in docker diff and I'd recommend fixing this now rather than increasing the size of the technical debt.
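For example, to see what has drifted and rescue anything important before rebuilding (the container name my_app and the paths are illustrative):
mkdir -p ./rescued
docker diff my_app                                 # files added (A), changed (C), or deleted (D) vs. the image
docker cp my_app:/etc/nginx/nginx.conf ./rescued/  # pull out anything worth keeping
# fold those changes into the image build or a volume, then recreate the container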
Edit: I am wondering, in Triet Doan's answer, an option is to modify
UID and GID of already created user in the container, will doing this
for the user and group "nobody" can cause any problems inside
container, I am wondering because probably many commands for settings
already executed inside container, files are already edited by php on
mounted directory & container is running for days
I would build a newer image that doesn't depend on this username. Within the container, if there's data you need to preserve, it should be in a volume.
Edit: I found that alpine has no usermod & groupmod.
I use the following in the entrypoint script to install it on the fly, but the shadow package should be included in the image you build rather than doing this on the fly for every new container:
if ! type usermod >/dev/null 2>&1 || \
   ! type groupmod >/dev/null 2>&1; then
  if type apk >/dev/null 2>&1; then
    echo "Warning: installing shadow, this should be included in your image"
    apk add --no-cache shadow
  else
    echo "Commands usermod and groupmod are required."
    exit 1
  fi
fi

Why is docker not completely deleting my file?

I am trying to build using:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1 AS builder
COPY pythonnet/src/ pythonnet/src
WORKDIR /pythonnet/src/runtime
RUN dotnet build -f netstandard2.0 -p:DefineConstants=\"MONO_LINUX\;XPLAT\;PYTHON3\;PYTHON37\;UCS4\;NETSTANDARD\" Python.Runtime.15.csproj
# copy myApp csproj and restore
COPY src/myApp/*.csproj /src/myApp/
WORKDIR /src/myApp
RUN dotnet restore
# now copy everything else as separate docker step
# (copy to staging folder, remove csproj, and copy down - so we don't overwrite project above)
WORKDIR /
COPY src/myApp/ ./staging/src/myApp
RUN rm ./staging/src/myApp/*.csproj \
&& cp -r ./staging/* ./ \
&& rm -rf ./staging
This was working fine, and in Windows 10 still does, but in CentOS 7 I get:
Step 10/40 : RUN rm ./staging/src/myApp/*.csproj && cp -r ./staging/* ./ && rm -rf ./staging
---> Running in 6b17ae0fae89
cp: cannot stat './staging/src/myApp/myApp.csproj': No such file or directory
Using ls instead of cp throws a similar file not found error, so it looks like Docker still knows about myApp.csproj but cannot see it since it has been removed.
Is there a way around this? I have tried using rsync but similar problems.
I simply worked around the issue by tacking ; exit 0 onto the offending lines. Not great, but it does the job.
EDIT: This worked for me because I cannot upgrade the version of CentOS. If you can, check out Alexander Block's answer.
I don't know specifically how to solve this problem as there's a lot of context in the filesystem that you haven't (and probably can't) share with us.
My suggestion on a strategy is that you:
comment out all lines from the failing one 'til the end of the Dockerfile
build the partial image
docker run --rm -it [image] bash to get a shell in the partial image
poke around and figure out what's going wrong
repeat 1-4 until things work as expected
It's not as fun as a perfectly insightful answer of course but this is a relentlessly effective algorithm even if it's tedious and annoying.
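Concretely, each iteration of that loop looks something like this (the image tag is illustrative):
docker build -t myapp-debug .           # Dockerfile with the lines after the failure commented out
docker run --rm -it myapp-debug bash    # get a shell in the partial image
ls -l /staging/src/myApp/               # is the .csproj really there, and what is it?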
EDIT
My wild guess is that, somehow, the Linux machine doesn't have the file where it's expected, so it doesn't get copied into the image at all, and that's why the docker build can't find it. But there's no way to know without debugging the build process.
cp -r will stop and fail with that cannot stat <file> message whenever the source is a symbolic link and the target of the link does not exist. It will not copy links to non-existent files.
So my guess is that after you run COPY src/myApp/ ./staging/src/myApp, the file ./staging/src/myApp/myApp.csproj is a symbolic link to a non-existent file. Why the following rm ./staging/src/myApp/*.csproj doesn't remove it and stays silent about that, I don't know.
To help demonstrate my theory, see below showing cp failing on a symlink on Centos 7.
[547] $ docker run --rm -it centos:7
Unable to find image 'centos:7' locally
7: Pulling from library/centos
524b0c1e57f8: Pull complete
Digest: sha256:e9ce0b76f29f942502facd849f3e468232492b259b9d9f076f71b392293f1582
Status: Downloaded newer image for centos:7
[root@a47b77cf2800 /]# ln -s /tmp/foo /tmp/bar
[root@a47b77cf2800 /]# ls -l /tmp/foo
ls: cannot access /tmp/foo: No such file or directory
[root@a47b77cf2800 /]# ls -l /tmp/bar
lrwxrwxrwx 1 root root 8 Jul 6 05:44 /tmp/bar -> /tmp/foo
[root@a47b77cf2800 /]# cp /tmp/foo /tmp/1
cp: cannot stat '/tmp/foo': No such file or directory
[root@a47b77cf2800 /]# cp /tmp/bar /tmp/2
cp: cannot stat '/tmp/bar': No such file or directory
Notice how cp reports that it cannot stat either the missing target or the symlink pointing to it. It's the exact symptom you are seeing.
If you just want to get past this, you can try tar instead of cp or rsync.
Instead of
cp -r ./staging/* ./
use this instead:
tar -C ./staging -cf - . | tar -xf -
tar will happily copy symlinks that don't exist.
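A quick way to see the difference in behaviour (scratch directories, purely illustrative):
mkdir -p src dst
ln -s /does/not/exist src/broken-link   # dangling symlink in the source tree
tar -C src -cf - . | tar -C dst -xf -   # tar copies the link itself and does not complain
ls -l dst/broken-link                   # the link is copied, still pointing at the missing target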
You've very likely encountered a kernel bug that has long since been fixed in more recent kernels. According to https://de.wikipedia.org/wiki/CentOS, CentOS 7 is based on Linux kernel 3.10, which is quite old and does not have good Docker support with regard to the overlay storage backend.
CentOS tried to backport the needed fixes and features into 3.10, but seems not to have fully succeeded when it comes to overlay support. There are multiple (slightly different) issues regarding this, which you can find by searching for "CentOS 7 overlay driver" on the internet. All of them have in common that removing files from parent overlay layers does not work as expected.
For me it looks like rm calls on files return success, even though the files are not fully removed. Directory listings (e.g. by ls or shell expansion as in your case) then still list the file, while accessing the file then fails (no matter if read, write or deletion of the file).
I assume that what you've seen is just another incarnation of these issues. You should either switch to CentOS 8 or upgrade your Kernel (which is not officially supported by CentOS as far as I understand). Or even more radical, switch to a distribution which is used more often in combination with Docker and generally offers more recent Kernels, e.g. Debian or Ubuntu.
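To check whether a given host is likely affected, something along these lines helps (the output will of course differ per machine):
uname -r                                # stock CentOS 7 reports a 3.10.x kernel
docker info | grep -i "storage driver"  # the issues above concern the overlay/overlay2 driver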

How to get files generated by docker run to host

I have run docker run to generate a file
sudo docker run -i --mount type=bind,src=/home/mathed/Simulation/custom_desman/1/Strains/Simulation2/Assembly,target=/home/mathed/Simulation/custom_desman/1/Strains/Simulation2/Assembly 990210oliver/mycc.docker:v1 MyCC.py /home/mathed/Simulation/custom_desman/1/Strains/Simulation2/Assembly/final_contigs_c10K.fa
This is the message I've got after executing.
20181029_0753
4mer
1_rename.py /home/mathed/Simulation/custom_desman/1/Strains/Simulation2/Assembly/final_contigs_c10K.fa 1000
Seqs >= 1000 : 32551
Minimum contig lengh for first stage clustering: 1236
run Prodigal.
/opt/prodigal.linux -i My.fa -a gene.aa -d gene.nuc -f gbk -o output -s potential_genes.txt
run fetchMG.
run UCLUST.
Get Feature.
2_GetFeatures_4mer.py for fisrt stage clustering
2_GetFeatures_4mer.py for second stage clustering
3_GetMatrix.py 1236 for fisrt stage clustering
22896 contigs entering first stage clustering
Clustering...
1_bhsne.py 20
2_ap.py /opt/ap 500 0
Cluster Correction.
to Split and Merge.
1_ClusterCorrection_Split.py 40 2
2_ClusterCorrection_Merge.py 40
Get contig by cluster.
20181029_0811
I now want to get the files generated by MyCC.py onto the host.
After reading Copying files from Docker container to host, I tried,
sudo docker cp 642ef90103be:/opt /home/mathed/data
But I got an error message
mkdir /home/mathed/data/opt: permission denied
Is there a way to get the generated files into the directory /home/mathed/data?
Thank you.
I assume your destination path does not exist.
The docker cp documentation states that in that case:
SRC_PATH specifies a directory
DEST_PATH does not exist
DEST_PATH is created as a directory and the contents of the source directory are copied into this directory
Thus it is trying to create a directory for DEST_PATH, and docker must have the rights to do so.
Depending on the owner of the closest existing parent of DEST_PATH, you may have to either (a sketch follows this list):
create the directory first, so that docker does not have to, and give it the correct ownership and permissions (it looks like docker has no rights to create it), e.g. sudo chown {user} {folder} plus chmod +x {folder},
change the rights on the existing parent directory (chown and chmod again), or
switch to a path where docker is allowed to write.
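A minimal sketch of the first option, reusing the container ID and path from the question (adjust the user name to your own):
sudo mkdir -p /home/mathed/data                      # create the destination yourself
sudo chown mathed: /home/mathed/data                 # give your user ownership of it
sudo docker cp 642ef90103be:/opt /home/mathed/data   # the copy now lands in a directory you control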

how to change root dir of docker on ubuntu 18.04 LTS? (docker change location of volumes)

I installed Ubuntu 18.04 LTS and ticked the option to install Docker (17.06.2-ce) at the same time.
I tested by starting the hello-world (sudo docker run hello-world) :
[...]
Hello from Docker!
This message shows that your installation appears to be working correctly.
[...]
I mounted a software RAID on the folder named /raid and made a folder /docker-data in it.
I tried to change the root dir of my docker to put it in /raid/docker-data/ by following the few tutorials I found online... in vain.
These solutions don't work either:
/etc/default/docker: I can't find this file.
As in the second solution: docker can't find its folder.
Docker Root Dir: /var/snap/docker/common/var-lib-docker
Has anyone managed to do this feat in recent months?
(this is my 3rd installation of ubuntu and I just broke it...)
Apparently on Ubuntu 18.04 LTS, docker 17.06.2-ce needs to work with snap; I'm going to dig in that direction. I will try to post an answer later...
The common solution is to move the data and create a symlink:
systemctl stop docker
mv /var/lib/docker /raid/docker-data
ln -s /raid/docker-data /var/lib/docker
systemctl start docker
You can also tell docker about the new location with a setting in /etc/docker/daemon.json. If you don't have this file, you could create one with the contents:
{
"data-root": "/raid/docker-data"
}
I would recommend the first solution since you will find many 3rd party tools expect docker to be located in /var/lib/docker.
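If you prefer the daemon.json route anyway, the full sequence would look roughly like this (assuming a non-snap docker install, with the path from this question; merge by hand if daemon.json already has other settings):
sudo systemctl stop docker
sudo rsync -aP /var/lib/docker/ /raid/docker-data/   # copy the existing data across
echo '{ "data-root": "/raid/docker-data" }' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker
docker info | grep "Docker Root Dir"                 # should now report /raid/docker-data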
Sorry for the late response.
To come back to my problem, after having looked at it more closely:
I am on Ubuntu 18.04; I can add or remove the docker service only via snap install (or remove) docker.
Part of docker installs in /var/snap/,
so I transposed your solution like this:
mv /var/snap/ /raid/snap
ln -s /raid/snap /var/snap
and finally I installed docker via snap install docker.
Testing with sudo docker info, this error message appears:
cannot perform operation: mount --rbind /var/snap /tmp/snap.rootfs_RRAjdq//var/snap: Permission denied
(snap.rootfs_* because the suffix changes on every command launch)
and yet the installation went fine, and docker is indeed apparently on /raid/snap.
I'm coming back to give the solution that allowed me to solve this problem.
cannot perform operation: mount --rbind /var/snap /tmp/snap.rootfs_RRAjdq//var/snap: Permission denied
I know why : https://bugs.launchpad.net/snapcraft/+bug/1620771 :
When /home is a symlink snaps don't work.
When /home is a real directory snaps work, see output below
In my case :
When /raid/snap is a symlink snaps don't work.
When /var/snap is a real directory snaps work.
I removed docker. I had to reinstall snapd because I had been modifying its files directly (the wrong way).
from there, I stopped the snapd service:
sudo mv /var/snap/ /raid/snap
sudo mount --rbind /raid/snap /var/snap
I started the snapd service.
sudo snap install docker
sudo docker info <= to test
sudo docker run hello-world <= to test
I made the mount permanent in fstab:
/raid/snap /var/snap none bind
I restarted the OS: it worked, at least in my case. (All along, I kept checking that the docker files were indeed landing on the raid...)
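Alternatively, to test the fstab entry without a full reboot, something like this should work (assuming nothing else is holding /var/snap busy):
sudo systemctl stop snapd.service snapd.socket   # stop snapd so /var/snap is not in use
sudo umount /var/snap                            # drop the manual bind mount from earlier
sudo mount /var/snap                             # remount via the new /etc/fstab entry
sudo systemctl start snapd.service snapd.socket
sudo docker info                                 # verify docker still works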
Change Docker root storage (data path):
run this command to find docker data path:
$ sudo docker info | grep -i root
default path:
root@user-testing-HP-ProBook-4540s:/etc/docker# docker info | grep -i root
Root Dir: /var/lib/docker/aufs
WARNING: No swap limit support
Docker Root Dir: /var/lib/docker
First, stop docker:
sudo service docker stop
Copy the current data path to the new path:
sudo rsync -aqxP /var/lib/docker /data/docker-data/
Add the following to the /etc/docker/daemon.json file
(if the file is not there, create it with vim or your favourite editor: sudo vim /etc/docker/daemon.json):
{
"data-root": "/data/docker-data/docker"
}
Confirm with the cat command:
cat /etc/docker/daemon.json
output will be like this:
root@user-testing-HP-ProBook-4540s:/home/user/Downloads# cat /etc/docker/daemon.json
{
"data-root": "/data/docker-data/docker"
}
root@user-testing-HP-ProBook-4540s:/home/user/Downloads#
start docker:
sudo service docker start
check the root (data path) path now:
$ sudo docker info | grep -i root
Output will be like this:
root@user-testing-HP-ProBook-4540s:/home/user/Downloads# sudo docker info | grep -i root
Root Dir: /data/docker-data/docker/aufs
WARNING: No swap limit support
Docker Root Dir: /data/docker-data/docker
root@user-testing-HP-ProBook-4540s:/home/user/Downloads#

How do you run an Openshift Docker container as something besides root?

I'm currently running OpenShift, but I am running into a problem when I try to build/deploy my custom Docker container. The container works properly on my local machine, but once it gets built in OpenShift and I try to deploy it, I get the error message below. I believe the problem is that I am trying to run commands inside the container as root.
(13)Permission denied: AH00058: Error retrieving pid file /run/httpd/httpd.pid
My Dockerfile looks like this:
FROM centos:7
MAINTAINER me <me@me>
RUN yum update -y
RUN yum install -y git https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y ansible && yum clean all -y
RUN git clone https://github.com/dockerFileBootstrap.git
RUN ansible-playbook "-e edit_url=andrewgarfield edit_alias=emmastone site_url=testing.com" dockerAnsible/dockerFileBootstrap.yml
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
COPY supervisord.conf /usr/etc/supervisord.conf
RUN rm -rf supervisord.conf
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 80 443
#CMD ["/usr/bin/supervisord"]
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
I've run into similar problems multiple times, where it says things like Permission Denied on the file /supervisord.log or something similar.
How can I set it up so that my container doesn't run all of the commands as root? That seems to be causing all of the problems I am having.
OpenShift has a strict security policy regarding custom Docker builds.
Have a look at the OpenShift Application Platform,
in particular point 4 of the FAQ section, quoted here:
4. Why doesn't my Docker image run on OpenShift?
Security! Origin runs with the following security policy by default:
Containers run as a non-root unique user that is separate from other system users
They cannot access host resources, run privileged, or become root
They are given CPU and memory limits defined by the system administrator
Any persistent storage they access will be under a unique SELinux label, which prevents others from seeing their content
These settings are per project, so containers in different projects cannot see each other by default
Regular users can run Docker, source, and custom builds
By default, Docker builds can (and often do) run as root. You can control who can create Docker builds through the builds/docker and builds/custom policy resource.
Regular users and project admins cannot change their security quotas.
Many Docker containers expect to run as root (and therefore edit all the contents of the filesystem). The Image Author's guide gives recommendations on making your image more secure by default:
Don't run as root
Make directories you want to write to group-writable and owned by group id 0
Set the net-bind capability on your executables if they need to bind to ports <1024
Otherwise, you can see the security documentation for descriptions on how to relax these restrictions.
I hope it helps.
Although you don't have access to root, your OpenShift container, by default, is a member of the root group. You can change some dir/file permissions to avoid the Permission Denied errors.
If you're using a Dockerfile to deploy an image to OpenShift, you can add the following RUN command to your Dockerfile:
RUN chgrp -R 0 /run && chmod -R g=u /run
This will change the group for everything in the /run directory to the root group and then set the group permission on all files to be equivalent to the owner (group equals user) of the file. Essentially, any user in the root group has the same permissions as the owner for every file.
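Put together in a Dockerfile, the non-root pattern might look roughly like this (the user name, UID, and /opt/app path are illustrative, not taken from the question's image):
FROM centos:7
COPY app/ /opt/app/
# create a fixed non-root user in the root group, and give group 0 the same
# permissions as the owner on the paths the app needs to write to
RUN useradd -u 1001 -r -g 0 -s /sbin/nologin appuser \
 && chgrp -R 0 /opt/app \
 && chmod -R g=u /opt/app
USER 1001
CMD ["/opt/app/run.sh"]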
You can run the container as any user, including root (rather than OpenShift's default built-in account UID, e.g. 1000030000), by issuing these two commands in sequence with the oc CLI tools:
oc login -u system:admin -n default
oc adm policy add-scc-to-user anyuid -z default -n projectname
where projectname is the name of the project in which your docker image runs.

Resources