The AWS CLI v2 documentation offers an option and guide for installing and configuring the CLI via Docker. The guide is straightforward to follow, and the container works fine, the key items being:
mounting the local .aws directory to provide credentials to the container
mounting $PWD for any I/O work required
I'm using it for s3 and realized that any files I copy to my local drive from s3 show as owned by root.
>docker run --rm -v "$HOME/.aws:/root/.aws:rw" -v "$PWD:/aws:rw" amazon/aws-cli s3 cp s3://xxx/hello .
download: s3://xxx/hello to ./hello
>ls -l
total 0
-rw-r--r-- 1 root root 0 Oct 2 09:43 hello
This makes sense, as the process is running as root in the container, but isn't ideal. There isn't any other user in the container, so I can't just run "as" kirk.
>docker run --rm -u kirk -v "$HOME/.aws:/root/.aws:rw" -v "$PWD:/aws:rw" amazon/aws-cli s3 cp s3://xxx/hello .
docker: Error response from daemon: unable to find user kirk: no matching entries in passwd file.
Is there a way to mount the volume "as" a user or by delegating user access to the container? I don't care (& not sure I can control) the user inside the container, but I would like the process to run in the context of a user on the host system. What's the right approach here?
You can run a container as a user that doesn't exist inside the image using numerical values for -u ${UID}:${GID}. For example:
docker run --rm \
-u 1000:1000 \
-e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
-e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
-v ${PWD}:/aws:rw \
amazon/aws-cli s3 cp s3://devops-example/lolz.gif .
... will copy the file as UID 1000 GID 1000.
Note: this uses the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables to pass credentials instead of mounting the credentials file. The full list of supported environment variables is in the AWS CLI documentation.
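If you also want the downloaded files owned by whoever invoked the command, you can derive the ids at runtime. A minimal sketch, assuming bash on Linux and that the two credential variables are already exported in the calling shell (the bucket/object is just the example name from above):
docker run --rm \
-u "$(id -u):$(id -g)" \
-e AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY \
-v "$PWD:/aws:rw" \
amazon/aws-cli s3 cp s3://devops-example/lolz.gif .
Passing -e VAR with no value copies that variable from the host environment into the container, so the secrets never appear on the command line.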
Related
I need to compile some programs using files in a docker container. Once compiled, the container is no longer used.
Therefore I always use the following command. docker run --rm -v my_file:docker_file my_images my_command
But I find that there are always some problems.
For example, take a simple C language program that outputs "hello, world" as an example.
docker run -it --rm -v /home/cuiyujie/workspace/workGem5/gem5/hello.c:/home/cuiyujie/workspace/workGem5/gem5/hello.c -v /home/cuiyujie/workspace/workGem5/gem5/build:/home/cuiyujie/workspace/workGem5/gem5/build gerrie/gem5-bare-env
After entering the container, execute gcc hello.c -o hello, cp hello build.
I found outside the container that the hello file belongs to root.
-rwxr-xr-x 1 root root 16696 Feb 23 10:23 hello*
I don't have permission to delete it. What should I do so that it gets the ownership and permissions of the host user?
If you run your container as your own UID, files created in the host volumes will be owned by your UID. That comes with the disclaimer that your container needs to be designed to run as a user other than root (e.g. not need access to files owned by root inside the container). Here's an example of running as your uid/gid with full access to your home directory, using bash on Linux (the $(id -u) expansion may not work in other environments):
docker container run \
-u "$(id -u):$(id -g)" -w "$(pwd)" \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
-v "$HOME:$HOME" \
<your_image>
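Applied to the compile example from the question, a sketch might look like the following (the gcc image is just an illustration; substitute your own build image):
docker run --rm \
-u "$(id -u):$(id -g)" \
-v "$PWD:/src" -w /src \
gcc:12 \
gcc hello.c -o hello
Afterwards, ls -l hello on the host shows your own user as the owner instead of root.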
You can use chown to change the ownership of a file. You'll need permission to run it with sudo though.
$ sudo chown $USER hello
If you also want to change the group of the file to your primary group, you can put a . after the user:
$ sudo chown $USER. hello
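The same thing with the more widely used colon syntax, setting the group explicitly (id -gn prints your primary group name):
$ sudo chown "$USER:$(id -gn)" hello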
I'm trying to let a docker container access a letsencrypt certificate from the host file system.
I do not want to run the docker container as root, but rather as a user with very specific access rights.
Neither do I want to change the permissions of the certificate.
All I want, is for the given user, to have access to read the certificate inside the docker container.
The certificate has the following setup:
-rw-r----- 1 root cert-group
The user who's going to run the docker container, is in the cert-group:
uid=113(myuser) gid=117(myuser) groups=117(myuser),999(cert-group),998(docker)
This works as long as we're on the host - I am able to read the file as expected with the user "myuser".
Now I want to do this within a docker container with the certificate mounted as a volume.
I have done multiple test cases, but none with any luck.
A simple docker-compose file for testing:
version: '3.7'
services:
  test:
    image: alpine:latest
    volumes:
      - /etc/ssl/letsencrypt/cert.pem:/cert.pem:ro
    command: >
      sh -c 'ls -l / && cat /etc/passwd && cat /etc/group && cat /cert.pem'
    user: "113:117"
    restart: "no"
This outputs a lot, but most important is:
test_1 | -rw-r----- 1 root ping 3998 Jul 15 09:51 cert.pem
test_1 | cat: can't open '/cert.pem': Permission denied
test_1 | ping:x:999:
Here I assume that "ping" is a built-in group in the docker alpine image; however, I'm getting some mixed information about how this relates to groups on the host.
From this article https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf my takeaway is that there's a single kernel handling all permissions (the host's), and therefore, if the same uid and gid are used, the permissions would carry over from the host. However, even though the running user is 113:117, which on the host is part of group 999, it still doesn't give me access to read the file.
Next I found this article https://medium.com/@nielssj/docker-volumes-and-file-system-permissions-772c1aee23ca where especially this bullet point caught my attention:
The container OS enforces file permissions on all operations made in the container runtime according to its own configuration. For example, if a user A exists in both host and container, adding user A to group B on the host will not allow user A to write to a directory owned by group B inside the container unless group B is created inside the container as well and user A is added to it.
This made me think that maybe a custom Dockerfile was needed to add the user inside docker and make the user part of group 999 (which is known as ping, as stated earlier):
FROM alpine:latest
RUN adduser -S --uid 113 -G ping myuser
USER myuser
Running this gives me the exact same result, now with myuser appended to passwd though:
test_1 | myuser:x:113:999:Linux User,,,:/home/myuser:/sbin/nologin
This is just a couple of things that I've tried.
Another is syncing /etc/passwd and /etc/group into the container with volumes, as suggested in some other blog:
volumes:
  - /etc/passwd:/etc/passwd
  - /etc/group:/etc/group
This makes it visually look correct inside the container, but it doesn't change the end result - still permission denied.
Any help or pointers in the right direction would be really appreciated since I'm running out of ideas.
Docker containers do not know the uid/gid of the user running the container on the host. All requests to run containers go through the docker socket, and then to the docker engine that is often running as root, and no uid/gid's are passed in those API calls. The docker engine is just running the container as the user specified in the Dockerfile or as part of the container create command (in this case, from the docker-compose.yml).
Once inside the container, the mapping from uid/gid to names is done with the /etc/passwd and /etc/group file that is inside the container. Importantly, at the filesystem level, uid/gid values are not being mapped between the container and the host (with the exception of user namespaces, but if implemented properly, that would only make this problem worse). And all filesystem operations happen at the uid/gid level, not based on names. So when you do a host volume mount, the uid/gid's are passed directly through.
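You can see that passthrough with any small image: create a file in a bind mount while running as your own ids and check its owner on the host. A quick sketch, assuming bash on Linux and the alpine image:
$ docker run --rm -u "$(id -u):$(id -g)" -v "$PWD:/work" alpine touch /work/probe
$ ls -l probe   # owned by your host user, not root, because the uid passed straight through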
The issue you are encountering here is how you are telling the container to pick the uid/gid to run the container processes. By specifying user: "113:117" you have told the container to not only specify the uid (113), but also the gid (117) of the process. When that's done, none of the secondary groups from /etc/group are assigned to the user. To get those secondary groups assigned, you want to only specify the uid, user: "113", which will then lookup the group assignments from the /etc/passwd and /etc/group file inside the container. E.g.:
user: "113"
Unfortunately, the lookup for group membership is done by docker before any volumes are mounted, so you have the following scenario.
First, create an image with an example user assigned to a few groups:
$ cat df.users
FROM alpine:latest
RUN addgroup -g 4242 group1 \
&& addgroup -g 8888 group2 \
&& adduser -u 1000 -D -H test \
&& addgroup test group1 \
&& addgroup test group2
$ docker build -t test-users -f df.users .
...
Next, run that image, comparing the id on the host to the id inside the container:
$ id
uid=1000(bmitch) gid=1000(bmitch) groups=1000(bmitch),24(cdrom),25(floppy),...
$ docker run -it --rm -u bmitch -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro test-users:latest id
docker: Error response from daemon: unable to find user bmitch: no matching entries in passwd file.
Whoops, docker doesn't see the entry from /etc/passwd; let's try with the test user we created in the image:
$ docker run -it --rm -u test -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro test-users:latest id
uid=1000(bmitch) gid=1000(bmitch) groups=4242,8888
That works, and assigns the groups from the /etc/group file in the image, not the one we mounted. We can also see that uid works too:
$ docker run -it --rm -u 1000 -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro test-users:latest id
uid=1000(bmitch) gid=1000(bmitch) groups=4242,8888
As soon as we specify the gid, the secondary groups are gone:
$ docker run -it --rm -u 1000:1000 -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro test-users:latest id
uid=1000(bmitch) gid=1000(bmitch)
And if we run without overriding the /etc/passwd and /etc/group file, we can see the correct permissions:
$ docker run -it --rm -u test test-users:latest id
uid=1000(test) gid=1000(test) groups=4242(group1),8888(group2)
Likely the best option is to add a container user with the group membership matching the uid/gid values from the host. For host volumes, I've also solved this problem with a base image that dynamically adjusts the user or group inside the container to match the uid/gid of the file mounted in a volume. This is done as root, and then gosu is used to drop permissions back to the user. You can see that at sudo-bmitch/docker-base on github, specifically the fix-perms script that I would run as part of an entrypoint.
Also, be aware that mounting the /etc/passwd and /etc/group can break file permissions of other files within the container filesystem, and this user may have access inside that container that is not appropriate (e.g. your gid may map to a privileged group inside the container, such as the ping group seen above, giving the ability to modify files or run commands that a normal user wouldn't have access to). This is why I tend to adjust the container user/group rather than completely replace these files.
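For the certificate example above, the "container user with matching group membership" option could look roughly like this; a sketch that reuses the uid 113 and gid 999 from the question, and otherwise untested:
FROM alpine:latest
# 999 is the host gid of cert-group, 113 the host uid of myuser
RUN addgroup -g 999 cert-group \
    && adduser -D -u 113 -G cert-group myuser
USER myuser
And a compose file that does not override user:, so the group membership baked into the image is kept:
version: '3.7'
services:
  test:
    build: .
    volumes:
      - /etc/ssl/letsencrypt/cert.pem:/cert.pem:ro
    command: cat /cert.pem
    restart: "no"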
Actually your solution is not wrong. I did the same with a few differences.
This is my Dockerfile:
FROM alpine:latest
RUN addgroup -S cert-group -g 117 \
&& adduser -S --uid 113 -G cert-group myuser
USER myuser
And my docker-compose.yml:
version: '3.7'
services:
  test:
    build:
      dockerfile: ./Dockerfile
      context: .
    command: >
      sh -c 'ls -l / && cat /etc/passwd && cat /etc/group && cat /cert.pem'
    volumes:
      - "/tmp/test.txt:/cert.pem:ro"
    restart: "no"
My '/tmp/test.txt' is assigned to 113:117.
IMHO, the problem is that your docker-compose.yml doesn't use your image. You should remove the image: line and add build: instead.
I ran into the same issue today and, luckily, the solution below helped me.
"Add :Z to your volume mounts"
Reference: https://github.com/moby/moby/issues/41202
Note: unfortunately this issue only occurs on CentOS; I didn't face any problem with Ubuntu.
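For reference, the relabel flag goes on the volume definition itself; it only matters on SELinux-enforcing hosts (CentOS, Fedora, RHEL), and lowercase z shares the label between containers while uppercase Z makes it private to one container. A sketch using the compose file from the question:
volumes:
  - /etc/ssl/letsencrypt/cert.pem:/cert.pem:ro,Z
The docker run equivalent would be -v /etc/ssl/letsencrypt/cert.pem:/cert.pem:ro,Z.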
I am reading through this bit of the Jenkins Docker README and there seems to be a section that contradicts itself from my current understanding.
https://github.com/jenkinsci/docker/blob/master/README.md
It seems to me that it says to NOT use a bind mount, and then says that using a bind mount is highly recommended?
NOTE: Avoid using a bind mount from a folder on the host machine into /var/jenkins_home, as this might result in file permission issues (the user used inside the container might not have rights to the folder on the host machine). If you really need to bind mount jenkins_home, ensure that the directory on the host is accessible by the jenkins user inside the container (jenkins user - uid 1000) or use -u some_other_user parameter with docker run.
docker run -d -v jenkins_home:/var/jenkins_home -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts this will run Jenkins in detached mode with port forwarding and volume added. You can access logs with command 'docker logs CONTAINER_ID' in order to check first login token. ID of container will be returned from output of command above.
Backing up data
If you bind mount in a volume - you can simply back up that directory (which is jenkins_home) at any time. This is highly recommended. Treat the jenkins_home directory as you would a database - in Docker you would generally put a database on a volume.
Do you use bind mounts? Would you recommend them? Why or why not? The documentation seems to be ambiguous.
As commented, the syntax used is for a volume:
docker run -d -v jenkins_home:/var/jenkins_home --name jenkins ...
That defines a Docker volume named jenkins_home, which will be created in:
/var/lib/docker/volumes/jenkins_home.
The idea being that you can easily back up said volume:
$ mkdir ~/backup
$ docker run --rm --volumes-from jenkins -v ~/backup:/backup ubuntu bash -c "cd /var/jenkins_home && tar cvf /backup/jenkins_home.tar ."
And reload it to another Docker instance.
This differs from bind mounts, which here do involve building a new Docker image, in order to be able to mount a local folder owned by your local user (instead of the default user defined in the official Jenkins image: 1000:1000):
FROM jenkins/jenkins:lts-jdk11
USER root
ENV JENKINS_HOME /var/lib/jenkins
ENV COPY_REFERENCE_FILE_LOG=/var/lib/jenkins/copy_reference_file.log
RUN groupmod -g <yourGid> jenkins
RUN usermod -u <yourUid> jenkins
RUN mkdir "${JENKINS_HOME}"
RUN usermod -d "${JENKINS_HOME}" jenkins
RUN chown jenkins:jenkins "${JENKINS_HOME}"
VOLUME /var/lib/jenkins
USER jenkins
Note that you have to declare a new volume (here /var/lib/jenkins), because, as seen in jenkinsci/docker issue 112, the official /var/jenkins_home path is already declared as a VOLUME in the official Jenkins image, and you cannot chown or chmod it.
The advantage of that approach would be to see the content of Jenkins home without having to use Docker.
You would run it with:
docker run -d -p 8080:8080 -p 50000:50000 \
--mount type=bind,source=/my/local/host/jenkins_home_dev1,target=/var/lib/jenkins \
--name myjenkins \
myjenkins:lts-jdk11-2.190.3
sleep 3
docker logs --follow --tail 10 myjenkins
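Whichever ids you bake into the image, the bind-mounted host directory has to exist and be owned by those same ids before the first run, otherwise you are back to the permission problems from the README note. A sketch, with the same placeholders as the Dockerfile above:
sudo mkdir -p /my/local/host/jenkins_home_dev1
sudo chown <yourUid>:<yourGid> /my/local/host/jenkins_home_dev1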
I'm starting containers from my docker image like this:
$ docker run -it --rm --user=999:998 my-image:latest bash
where the uid and gid are for a system user called sdp:
$ id sdp
uid=999(sdp) gid=998(sdp) groups=998(sdp),999(docker)
but: container says "no"...
groups: cannot find name for group ID 998
I have no name!@75490c598f4c:/home/myfolder$ whoami
whoami: cannot find name for user ID 999
what am I doing wrong?
Note that I need to run containers based on this image on multiple systems and cannot guarantee that the uid:gid of the user will be the same across systems which is why I need to specify it on the command line rather than in the Dockerfile.
Thanks in advance.
This sort of error will happen when the uid/gid does not exist in the /etc/passwd or /etc/group file inside the container. There are various ways to work around that. One is to directly map these files from your host into the container with something like:
$ docker run -it --rm --user=999:998 \
-v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro \
my-image:latest bash
I'm not a fan of that solution since files inside the container filesystem may now have the wrong ownership, leading to potential security holes and errors.
Typically, the reason people want to change the uid/gid inside the container is because they are mounting files from the host into the container as a host volume and want permissions to be seamless across the two. In that case, my solution is to start the container as root and use an entrypoint that calls a script like:
if [ -n "$opt_u" ]; then
OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
NEW_UID=$(stat -c "%u" "$1")
if [ "$OLD_UID" != "$NEW_UID" ]; then
echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
usermod -u "$NEW_UID" -o "$opt_u"
if [ -n "$opt_r" ]; then
find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
fi
fi
fi
The above is from a fix-perms script that I include in my base image. What's happening there is the uid of the user inside the container is compared to the uid of the file or directory that is mounted into the container (as a volume). When those id's do not match, the user inside the container is modified to have the same uid as the volume, and any files inside the container with the old uid are updated. The last step of my entrypoint is to call something like:
exec gosu app_user "$@"
Which is a bit like an su command to run the "CMD" value as the app_user, but with some exec logic that replaces pid 1 with the "CMD" process to better handle signals. I then run it with a command like:
$ docker run -it --rm --user=0:0 -v /host/vol:/container/vol \
-e RUN_AS=app_user --entrypoint /entrypoint.sh \
my-image:latest bash
Have a look at the base image repo I've linked to, including the example with nginx that shows how these pieces fit together, and avoids the need to run containers in production as root (assuming production has known uid/gid's that can be baked into the image, or that you do not mount host volumes in production).
It's strange to me that there's no built-in command-line option to simply run a container with the "same" user as the host so that file permissions don't get messed up in the mounted directories. As mentioned by OP, the -u $(id -u):$(id -g) approach gives a "cannot find name for group ID" error.
I'm a docker newb, but here's the approach I've been using in case it helps others:
# See edit below before using this.
docker run --rm -it -v /foo:/bar ubuntu:20.04 sh -c "useradd -m -s /bin/bash $USER && usermod -a -G sudo $USER && su - $USER"
I.e. add a user (useradd) with a matching name, make it sudo (usermod), then open a terminal with that user (su -).
Edit: I've just found that this causes an E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied) error when trying to use apt. Using sudo gives the error -su: sudo: command not found because sudo isn't installed by default on the image I'm using. So the command becomes even more hacky and requires running an apt update and apt install sudo at launch:
docker run --rm -it -v /foo:/bar ubuntu:20.04 sh -c "useradd -m -s /bin/bash $USER && usermod -a -G sudo $USER && apt update && apt install sudo && passwd -d $USER && su - $USER"
Not ideal! I'd have hoped there was a much more simple way of doing this (using command-line options, not creating a new image), but I haven't found one.
1) Make sure that the user 999 has the right privileges on the working directory; you need to try something like this in your Dockerfile:
FROM
RUN mkdir /home/999-user-dir && \
chown -R 999:998 /home/999-user-dir
WORKDIR /home/999-user-dir
USER 999
Try to spin up the container using this image without the user argument and see if that works.
2) Another reason could be a permission issue with the files below; make sure your group 998 has read permission on these files:
-rw-r--r-- 1 root root 690 Jan 2 06:27 /etc/passwd
-rw-r--r-- 1 root root 372 Jan 2 06:27 /etc/group
Thanks
So, on your host you probably see your user and group:
$ cat /etc/passwd
sdp:x:999:998::...
But inside the container, you will not see them in /etc/passwd.
This is the expected behavior, the host and the container are completely separated as long as you don't mount the /etc/passwd file inside the container (and you shouldn't do it from security perspective).
Now, if you specified a default user inside your Dockerfile, the --user option overrides the USER instruction, so you are left without a username inside your container; but please note that specifying the uid:gid option means that the container has the permissions of the user with the same uid value on the host.
Now for your request not to specify a user in the Dockerfile - that shouldn't be a problem. You can set it at runtime like you did, as long as that uid matches an existing user uid on the host.
If you have to run some of the containers in privileged mode - please consider using user namespace.
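User namespaces are enabled on the daemon rather than per container; every container then runs with remapped ids unless started with --userns=host. A minimal sketch of the daemon-side change (assumes you can edit /etc/docker/daemon.json and restart the daemon):
# /etc/docker/daemon.json
{
  "userns-remap": "default"
}
# then restart the engine
sudo systemctl restart docker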
In a container, I am trying to start mysqld.
I was able to create an image and push it to the registry, but when I try to start it, the /var/lib/mysql volume can't be initialized: I try to do a chown mysql on it, and it is not allowed.
I checked docker specific solutions but for now I couldn't make any work.
Is there a way to set the right permissions on a bind-mounted folder from bluemix? Or is the option --volumes-from supported, I can't seem to make it work.
The only solution I can see right now is running mysqld as root, but I would rather not.
Try with bind-mount
I created a volume on Bluemix using cf ic volume create database.
I try to run mysql_install_db in my db container to initialize its content:
docker run --name init_vol -v database:/var/lib/mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> mysql_install_db --user=mysql
mysql_install_db is supposed to populate /var/lib/mysql and set the rights to the owner given in the --user option, but I get:
chown: changing ownership of '/var/lib/mysql': Permission denied.
I also tried the above in different ways, using sudo or a script. I tried with mysql_install_db --user=root, which does setup my folder correctly, except it is owned by the root user, and I would rather keep mysql running as the mysql user.
Try with volumes-from data container
I create a data container with a volume /var/lib/mysql
docker run --name db_data -v /var/lib/mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> mysql_install_db --user=mysql
I run my db container with the option --volumes-from
docker run --name db_srv --volumes-from=db_data registry.ng.bluemix.net/<namespace>/<image>:<tag> sh -c 'mysqld_safe & tail -f /var/log/mysql.err'
docker inspect db_srv shows:
[{ "BluemixApp": null, "Config": {
...,
"WorkingDir": "",
... } ... }]
cf ic logs db_srv shows:
150731 15:25:11 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
150731 15:25:11 [Note] /usr/sbin/mysqld (mysqld 5.5.44-0ubuntu0.14.04.1-log) starting as process 377 ..
/usr/sbin/mysqld: File './mysql-bin.index' not found (Errcode: 13)
150731 15:25:11 [ERROR] Aborting
which is due to --volumes-from not being supported, and to the data created in the first run not persisting into the second.
In IBM Containers, the user namespace is enabled for the docker engine. The "Permission denied" issue appears to be because the NFS is not allowing the mapped user from the container to perform the operation.
On my local setup, on the docker host, I mounted an NFS share (exported with the no_root_squash option) and attached the volume to the container using the -v option. When the container is spawned from docker with the user namespace disabled, I am able to change the ownership of the bind mount inside the container. But with a user-namespace-enabled docker, I am getting
chown: changing ownership of '/mnt/volmnt': Operation not permitted
The volume created by cf (cf ic volume create ...) is an NFS mount; to verify, just run mount -t nfs4 from the container.
When the user namespace is enabled for the docker engine, the effective root inside the container is a non-root user outside the container process, and NFS is not allowing that mapped non-root user to perform the chown operation on the volume inside the container.
Here is the work-around you may want to try.
In the Dockerfile
1.1 Create the user mysql with UID 1010, or any free ID, before the MySQL installation.
Other containers or new containers can then access the mysql data files on the volume with UID 1010:
RUN groupadd --gid 1010 mysql
RUN useradd --uid 1010 --gid 1010 -m --shell /bin/bash mysql
1.2 Install MySQL but do not initialize the database:
RUN apt-get update && apt-get install -y mysql-server && rm -rf /var/lib/mysql && rm -rf /var/lib/apt/lists/*
In the entry point script
2.1 Create the mysql data directory under the bind mount as the user mysql and then link it as /var/lib/mysql.
Suppose the volume is mounted at /mnt/db inside the container (ice run -v <volume name>:/mnt/db --publish 3306... or cf ic run --volume <volume name>:/mnt/db ...).
Define mountpath env var
MOUNTPATH="/mnt/db"
Add mysql to group "root"
adduser mysql root
Set permission for mounted volume so that root group members can create directory and files
chmod 775 $MOUNTPATH
Create mysql directory under Volume
su -c "mkdir -p /mnt/db/mysql" mysql
su -c "chmod 700 /mnt/db/mysql" mysql
Link the directory to /var/lib/mysql
ln -sf /mnt/db/mysql /var/lib/mysql
chown -h mysql:mysql /var/lib/mysql
Remove mysql from group root
deluser mysql root
chmod 755 $MOUNTPATH
2.2 The first time, initialize the database as the user mysql:
su -c "mysql_install_db --datadir=/var/lib/mysql" mysql
2.3 Start the mysql server as user mysql
su -c "/usr/bin/mysqld_safe" mysql
You have multiple questions here. I will try to address some. Perhaps that will get you a step further in the right direction.
--volumes-from is not supported yet in IBM Containers. You can get around that by using the same --volume (-v) option on the first and subsequent containers, instead of using -v on the first container creation command and --volumes-from on the subsequent ones.
--user option is not supported also by IBM Containers.
I see your syntax for using --user (I suppose on localhost docker) is not correct. All options for the docker run command must come before the image name. Anything after the image name is considered a command to run inside the container. In this case "--user=mysql" will be considered as a command that the system will attempt to run and fail.
The last error message you shared shows that some file is not found in the working dir, which causes the app to abort. You may work around that by using, as the command to run in the container, a script that changes dir to the right location.
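As a sketch of that last suggestion (the script name and path are illustrative only): bake a small wrapper into the image that changes to the data directory before starting the server, and use it as the container command:
#!/bin/sh
# /start-db.sh - illustrative wrapper
cd /var/lib/mysql || exit 1
exec mysqld_safe
Then, following the -v advice above instead of --volumes-from:
cf ic run --volume database:/var/lib/mysql --name db_srv registry.ng.bluemix.net/<namespace>/<image>:<tag> /start-db.sh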