Setting the User UID in a Bitnami Docker Container - docker

I am running a number of Bitnami Docker containers, which all use UID 1001 inside the container. However, these containers need to write files to a mounted host directory as a user with UID 1010.
Is there a way to achieve this, apart from rewriting all the Dockerfiles involved and rebuilding all these images?
Using Docker Compose 1.25.5 and Docker 19.03.8 on Ubuntu 20.04. The user 1001 in the container also happens to have no name:
I have no name!@32f6e5ad9cbd:/$ id
uid=1001 gid=0(root) groups=0(root)
I have no name!@32f6e5ad9cbd:/$ whoami
whoami: cannot find name for user ID 1001
$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin

Try with user: 1010:0.
If you use the root group (GID 0), you shouldn't have issues with permissions:
$ id
uid=1010 gid=0(root) groups=0(root)
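In docker-compose this could look roughly as follows (a minimal sketch; the service name, image, and host path are placeholders, not taken from the question):

version: "3.7"
services:
  myapp:
    image: bitnami/redis:latest          # placeholder Bitnami image
    user: "1010:0"                       # run as host UID 1010 with the root group (GID 0)
    volumes:
      - /path/on/host:/bitnami/redis/data   # bind mount; make it writable by UID 1010 on the host

With plain docker run, the same effect comes from passing --user 1010:0.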

Related

How to run crictl command as non-root user

How to run crictl as non-root user.
My docker commands work as a non-root user because my user is added to the docker group.
id
uid=1002(kube) gid=100(users) groups=100(users),10(wheel),1001(dockerroot),1002(docker)
I am running the dockerd daemon, which uses containerd and runc as its runtime.
I installed the crictl binary and pointed it at the existing dockershim socket with the config file below.
cat /etc/crictl.yaml
runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
timeout: 2
debug: false
pull-image-on-create: false
crictl works fine with sudo, but without sudo it fails like this:
[user@hostname ~]$ crictl ps
FATA[0002] connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded
I also tried changing the group of dockershim.sock from 'root' to 'docker', as was done for docker.sock, but got the same result.
srwxr-xr-x 1 root docker 0 Jan 2 23:36 /var/run/dockershim.sock
srw-rw---- 1 root docker 0 Jan 2 23:33 /var/run/docker.sock
sudo usermod -aG docker $USER
or see the Docker post-installation steps.
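A minimal sketch of that approach, assuming the docker group already exists (group membership only takes effect in a new login shell, or via newgrp):

sudo usermod -aG docker "$USER"   # add the current user to the docker group
newgrp docker                     # pick up the new group without logging out
id                                # 'docker' should now appear in the groups list
crictl ps                         # works without sudo only if the socket's group may read/write it

Note that the dockershim.sock shown above is srwxr-xr-x, so its group cannot write to it; the socket would also need group write permission (like docker.sock's srw-rw----) for group membership alone to be enough.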

How to specify userid and groupid for volume mount point on Docker host

I have been frustrated by this issue for a while because this has been asked multiple times here, such as in How to deal with persistent storage (e.g. databases) in Docker and What is the (best) way to manage permissions for Docker shared volumes?, but the answers do not address the issue at all.
The first "answer" says to just use named volumes instead of traditional bind mounts. That solves nothing because when the named volume is mounted on the host, for instance at the default location /var/lib/docker/volumes/<volume name>/_data, then that mount point will have the uid/gid of the mount point inside the container.
The other "answer" given, before docker had named volumes, was to use a data-only container. This exhibits the same exact problem.
The reason this is a huge problem for me is that I have many embedded machines on which I want to run the docker host, and the user may have a different uid/gid on each of these. Therefore I cannot hardcode a uid/gid in a Dockerfile for the mount points for my persistent volumes, to achieve matching ids.
Here's an example of the problem: Say my user is foo on the host, with uid 1001 and gid 1001, and the user writing files to the volume inside the container has uid 1002. When I run the container, docker will chown 1002:1002 the mount point dir on the host, and write files with this uid, which I can't even read/write with my user foo.
Visually (all these operations on the host):
$ docker volume create --driver local --opt type=volume --opt device=/home/<my_host_user>/logs --opt o=bind logs
logs
$ docker volume inspect logs
[
{
"CreatedAt": "2020-08-26T16:26:08+01:00",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/logs/_data",
"Name": "logs",
"Options": {
"device": "/home/<myhostuser>/logs",
"o": "bind",
"type": "volume"
},
"Scope": "local"
}
]
$ pwd
/home/foo
$ mkdir logs && ls -ld logs
drwxr-xr-x 2 foo foo 4096 Aug 26 17:24 logs
Then running the container:
$ docker run --rm --name <cont_name> -it --net="host" --mount src=logs,target=/home/<container_user>/logs <my docker image>
And now the mount point:
$ ls -ld logs
drwxr-xr-x 2 1002 1002 4096 Aug 26 17:30 logs
$ ls -l logs/
total 4
-rw-r----- 1 1002 1002 0 Aug 26 17:30 log
-rw-r----- 1 1002 1002 2967 Aug 26 17:27 log.1
As you can see, the logs written to the volume have a uid/gid which doesn't correspond to something that exists on the host and which I can't access without root/sudo.
Now then, is there ANY way that docker can be told to map uid/gids in the container to uid/gids on the host, or even simpler to just use the specified uid/gid for the host mount point?
my env:
Ubuntu 22.04
Docker version 20.10.17, build 100c701
Create the mount point path with suitable permissions.
# Dockerfile
RUN mkdir --parents "$volumeDir" && chown --recursive "$userName":"$userGroup" "$volumeDir"
Next, create the container and mount the volume:
# terminal
docker run --name=containerName --interactive \
  --user="$userName:$userGroup" \
  --mount="source=volumeName,target=$volumeDir,readonly=false" \
  imageName /bin/bash
You will get suitable permissions.
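For completeness, a hedged sketch of how those variables could be supplied as build arguments (the names mirror the answer; the values and image tag are placeholders, and the user/group must already exist in the image or be created in an earlier RUN step):

# Dockerfile (declared before the RUN line above)
ARG userName=appuser
ARG userGroup=appgroup
ARG volumeDir=/data

# terminal
docker build \
  --build-arg userName=appuser \
  --build-arg userGroup=appgroup \
  --build-arg volumeDir=/data \
  --tag imageName .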

Why is File Mounted on Container via docker-compose not Accessible?

In my docker-compose file, I try to mount a file from the host into the docker container.
The docker-compose file I have looks something like this:
version: "2"
services:
  myservice:
    image: images/previmage:1.0.0
    volumes:
      - /opt/files/aaa.conf:/aaa.conf
After the service is started, I look at the contents at the root of the container using docker from the host:
sudo docker container exec myservice_1 ls /
The ls output shows that aaa.conf appears to be there, but its permissions are not what I expect:
drwxr-xr-x. 2 root root 6 Apr 11 2018 opt
-?????????? ? ? ? ? ? aaa.conf
ls: cannot access /aaa.conf: Permission denied
Similarly, if I try other commands like 'cat aaa.conf', I get Permission denied.
I understand that permissions for the file need to be set on the host side.
On the host I set the permissions to 755 and then 777, but I still get Permission denied.
Is this the expected behavior?
Edit [running on AWS/EC2]
sudo docker container exec myservice_1 cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
sudo docker container exec myservice_1 id -u
33016
I had the same problem; it's caused by SELinux (check this post).
Disable SELinux for a specific container
You can disable SELinux for a specific container by adding --security-opt label:disable to your docker run command:
docker container run --security-opt label:disable myservice_1
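Since the question mounts the file with docker-compose, the equivalent there would be roughly the following (a sketch reusing the compose file from the question; security_opt entries are passed through to the container runtime):

version: "2"
services:
  myservice:
    image: images/previmage:1.0.0
    security_opt:
      - label:disable        # disable SELinux labeling for this container
    volumes:
      - /opt/files/aaa.conf:/aaa.conf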
Adding SELinux Rule (Recommended)
According to this post, you can also use this command to enable access to the files:
chcon -Rt svirt_sandbox_file_t /path/to/volume
Completely disable SELinux!
Not recommended, but also works:
su -c "setenforce 0"

Permission denied to access /var/run/docker.sock mounted in an OpenShift container

Objective
Know how to troubleshoot, and what knowledge is required to troubleshoot, permission issues when a Docker container accesses host files.
Problem
Access to /var/run/docker.sock mounted inside an OpenShift container via hostPath causes permission denied. The issue does not happen if the same container is deployed to K8s 1.9.x, hence it is an OpenShift-specific issue.
[ec2-user@ip-10-0-4-62 ~]$ ls -laZ /var/run/docker.sock
srw-rw----. root docker system_u:object_r:container_var_run_t:s0 /var/run/docker.sock
[ec2-user@ip-10-0-4-62 ~]$ docker exec 9d0c6763d855 ls -laZ /var/run/docker.sock
srw-rw----. 1 root 1002 system_u:object_r:container_var_run_t:s0 0 Jan 16 09:54 /var/run/docker.sock
https://bugzilla.redhat.com/show_bug.cgi?id=1244634 says the svirt_sandbox_file_t SELinux label is required for RHEL, so I changed the label.
$ chcon -Rt container_runtime_t docker.sock
[ec2-user@ip-10-0-4-62 ~]$ ls -aZ /var/run/docker.sock
srw-rw----. root docker system_u:object_r:svirt_sandbox_file_t:s0 /var/run/docker.sock
I redeployed the container, but still get permission denied.
$ docker exec -it 9d0c6763d855 curl -ivs --unix-socket /var/run/docker.sock http://localhost/version
* Trying /var/run/docker.sock...
* Immediate connect fail for /var/run/docker.sock: Permission denied
* Closing connection 0
OpenShift does not allow hostPath by default, so that was addressed with:
oc adm policy add-scc-to-user privileged system:serviceaccount:{{ DATADOG_NAMESPACE }}:{{ DATADOG_SERVICE_ACCOUNT }}
I suppose SELinux, an OpenShift SCC, or some other container/Docker permission is causing this, but I need a clue how to find the cause.
OpenShift requires special permissions in order to allow pods to use volumes on nodes.
Do the following:
Create a standard security-context YAML file:
kind: SecurityContextConstraints
apiVersion: v1
metadata:
  name: scc-hostpath
allowPrivilegedContainer: true
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
- my-admin-user
groups:
- my-admin-group
oc create -f scc-hostpath.yaml
Add the "allowHostDirVolumePlugin" privilege to this security-context:
oc patch scc scc-hostpath -p '{"allowHostDirVolumePlugin": true}'
Associate the pod's service account with the above security context
oc adm policy add-scc-to-user scc-hostpath system:serviceaccount:<service_account_name>
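With the SCC in place, the pod can mount the socket via hostPath roughly like this (a sketch; the pod name and image are placeholders, and the service account is the one granted the SCC above):

apiVersion: v1
kind: Pod
metadata:
  name: docker-sock-client
spec:
  serviceAccountName: <service_account_name>
  containers:
    - name: client
      image: <image>
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock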

Riak container does not start when its data volume is mounted

The following command works perfectly and the riak service starts as expected:
docker run --name=riak -d -p 8087:8087 -p 8098:8098 -v $(pwd)/schemas:/etc/riak/schema basho/riak-ts
The local schemas directory is mounted successfully and the SQL file in it is read by Riak. However, if I try to mount Riak's data or log directories, the riak service does not start and times out after 15 seconds:
docker run --name=riak -d -p 8087:8087 -p 8098:8098 -v $(pwd)/logs:/var/log/riak -v $(pwd)/schemas:/etc/riak/schema basho/riak-ts
Output of docker logs riak:
+ /usr/sbin/riak start
riak failed to start within 15 seconds,
see the output of 'riak console' for more information.
If you want to wait longer, set the environment variable
WAIT_FOR_ERLANG to the number of seconds to wait.
Why does riak not start when its log or data directories are mounted to local directories?
This issue is with the ownership of the mounted log folder. The folder's user and group are expected to be riak, as follows:
root@20e489124b9a:/var/log# ls -l
drwxr-xr-x 2 riak riak 4096 Jul 19 10:00 riak
but with volumes you are getting:
root@3546d261a465:/var/log# ls -l
drwxr-xr-x 2 root root 4096 Jul 19 09:58 riak
One way to solve this is to give the host directories riak's user and group ownership before starting the container. I looked up the UID/GID in /etc/passwd inside the Docker image, and they were:
riak:x:102:105:Riak user,,,:/var/lib/riak:/bin/bash
Now change the ownership of the host directories before starting the container:
sudo chown 102:105 logs/
sudo chown 102:105 data/
This should solve it. At least for now. Details here.
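For reference, a hedged way to look those IDs up directly from the image, assuming its entrypoint can be overridden and grep is present:

docker run --rm --entrypoint grep basho/riak-ts '^riak:' /etc/passwd
# riak:x:102:105:Riak user,,,:/var/lib/riak:/bin/bash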
