I'm following https://www.redhat.com/sysadmin/podman-inside-container, targeting rootless Podman running a rootless Podman container. podman info works, but shows this warning:
WARN[0000] "/" is not a shared mount, this could cause issues or missing mounts with rootless containers
Running mount --make-rshared / as root solves that. My question is: how can I prepare my image so that I don't need root privileges every time I run it?
I tried RUN mount --make-rshared / in the Dockerfile. That failed with "Permission denied", which confuses me because permission is not denied for everything else, like yum, useradd, and file access.
I also tried putting the command in an init script, saving it in /etc/init.d and adding it with chkconfig. It doesn't seem to be executed when I run the image, even though I added all runlevels (123456). I can execute it directly, but of course only as root.
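What I'm considering next (untested sketch; the script name and path are just placeholders) is running the command from an entrypoint script instead, wired into the Dockerfile with COPY entrypoint.sh /usr/local/bin/entrypoint.sh and ENTRYPOINT ["/usr/local/bin/entrypoint.sh"], so it runs at container start rather than at build time:
#!/bin/sh
# entrypoint.sh: try to make / a shared mount before handing off to the real command.
# This presumably still needs mount privileges at run time (e.g. CAP_SYS_ADMIN
# inside the user namespace), so it may not fully remove the need for privileges.
mount --make-rshared / || echo "make-rshared failed, continuing anyway" >&2
exec "$@"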
I'm rather new to Docker/Podman, and there are some other open questions for me, like the missing shell commands login and runlevel. The important goal, though, is to get rid of the warning message.
Related
When running docker run -v /sys:/sys:rw $NAME $COMMAND, the container tries to write to a file in /sys, but this causes a "Permission denied" error.
The container user is root. On the host, root has write permission to the files.
The issue is fixed with --privileged, but that option has serious security drawbacks.
How can a container have write access to sysfs without using --privileged?
I tried adding all the security capabilities with --cap-add, but it wasn't helpful.
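For reference, the capability attempt looked roughly like this ($NAME and $COMMAND as above; it made no difference):
# Grant every capability instead of full --privileged mode -- still not enough,
# because /sys itself stays mounted read-only inside the container.
docker run --cap-add=ALL -v /sys:/sys:rw $NAME $COMMAND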
Docker mounts the /sys filesystem read-only. On the Docker forum it was said that another container runtime, Sysbox, can help, but I didn't try it.
https://forums.docker.com/t/unable-to-mount-sys-inside-the-container-with-rw-access-in-unprivileged-mode/97043
I'm trying to mount an NFS (version 3) share, which exists on the host, into a Docker container, but it's not working.
Example:
docker run -v /mnt/test001/:/mnt/test001/ hello-world
This mount is an NFS share with squashed root, which I suspect is the reason for the problem. Docker seems to run as root (that's what ps -ef shows me, anyway).
Error message states:
dockerd[...]: time="..." level=error msg="Handler for POST
/.../containers/.../start returned error: error while creating mount
source path '/mnt/.../': mkdir /mnt/.../: permission denied"
What is required here to allow this mount to occur? Removing the root squash seems unsafe and like overkill. Is it possible to allow my local root to have read rights while still squashing root?
EDIT: The permissions were drwxrwx---.
The problem was that world/other didn't have read rights.
As a result, the subdirectory I wanted to mount could not be read by my local root, because that root got squashed: it needed world/other rights, and there were none.
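In other words, the fix is on the NFS server side, roughly along these lines (same path as in my example; the exact mode bits depend on your setup):
# Give "other" read+execute on the export so the squashed (anonymous) client
# root can traverse and read the directory.
chmod o+rx /mnt/test001
ls -ld /mnt/test001   # e.g. drwxrwxr-x instead of drwxrwx---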
docker cp is behaving bizarrely for a certain folder in my container. It's only copying certain files and folders and omitting the rest.
I have a container called "script-server" running an nginx reverse proxy. The relevant configuration files are stored in /etc/nginx, which I want to pull out and mount so that I can reconfigure easily from the host and restart the container.
Here's the container's folder:
I then issued the docker cp command from the host to try to pull these files out:
docker cp script-server:/etc/nginx ./etc
This is what I got out onto the host system:
I've read the "docker cp" docs and found nothing useful. I tried the -L option to follow the symlink, but got the same result. I'm assuming what's happening is that Docker starts to copy everything, runs into the symlink (which obviously doesn't transfer properly to the host), and then exits. I removed the symlink and it worked and copied all the files.
The underlying problem, however, is that I don't always want to manually bash in, delete or temporarily move symlinks, copy, and then restore all the symlinks. The other issue is that I can't trust the docker cp command to properly transfer all files. What if I need to install a symlink (I will later)?
The weird part is that I've copied from folders before in which the symlinks didn't translate 1:1, and everything transferred to the host, just with broken symlinks, which was fine.
Has anyone else experienced anything similar? Is this desired behaviour and am I missing something? Any ways to fix it?
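One workaround I'm considering (untested sketch; same container name and paths as above) is streaming the directory out with tar instead of relying on docker cp, since tar keeps symlinks as symlinks:
# Create the target directory on the host, then pipe a tar of /etc/nginx out of the container.
mkdir -p ./etc
docker exec script-server tar -C /etc -cf - nginx | tar -C ./etc -xf -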
In case it matters, the host is ubuntu:19.10 and the Docker container is running off of nginx:alpine.
Thanks
If I use:
FROM jenkins/jenkins:lts
RUN ls -l /var/jenkins_home/; touch /var/jenkins_home/isthisworking; echo "================================"; ls -l /var/jenkins_home;
I actually see isthisworking in my final ls -l command during the building of the image. It is upon running the container that this file gets removed. Why?
Use 'USER jenkins' if you want to modify ssh resources for that user
You're trying to reach an IP on a network that your Docker container isn't part of. Your host machine and your Docker containers are on two separate networks.
I think I know what is going on. Here is the Dockerfile I was using to figure out what was going on (copied from above):
FROM jenkins/jenkins:lts
RUN ls -l /var/jenkins_home/; touch /var/jenkins_home/isthisworking; echo "================================"; ls -l /var/jenkins_home;
As mentioned, with this Dockerfile the file isthisworking is visible at build time, but when I run the container it is no longer there.
So I went to jenkins/jenkins:lts github page and looked at their Dockerfile. I saw this on line 26:
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME $JENKINS_HOME
Here, $JENKINS_HOME is /var/jenkins_home/. So, as a Docker noob, I asked myself: what is VOLUME? (I know what it is from the command line, but not inside a Dockerfile.) With some googling, I found this and this, which basically say:
The docker run command initializes the newly created volume with any
data that exists at the specified location within the base image.
Since that location is a Docker VOLUME at that point in the Dockerfile, no matter what files are copied into it or how, at container runtime that location will be reinitialized to how the base image has it "defined."
To have the file persist when the container is run, make the modification/addition to the directory before declaring it a VOLUME.
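A minimal sketch of that ordering with a throwaway image (illustrative image name and paths; behavior as described in the docs quoted above, and worth re-checking with your builder version):
FROM debian:stable-slim
RUN mkdir -p /data && touch /data/kept-file        # created BEFORE the VOLUME line: survives
VOLUME /data
RUN touch /data/discarded-file                     # created AFTER the VOLUME line: gone at run time
In the jenkins/jenkins case the VOLUME declaration comes from the base image, so anything a derived Dockerfile writes to /var/jenkins_home necessarily happens after that declaration and is lost when the container starts.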
I have a docker-based build environment - in order to build my project, I run a docker container with the --volume parameter, so it can access my project directory and build it.
The problem is that the files created by the container cannot be deleted from the host machine. The only workaround I currently have is to start an interactive container with the directory mounted and delete the files from there.
Bottom line question: is it possible to make Docker write files to the mounted area with permissions such that the host can later delete them?
This has less to do with Docker and more to do with basic Unix file permissions. Your Docker containers are running as root, which means any files created by the container are owned by root on your host. You fix this the way you fix any other file permission problem: by (a) ensuring that the files/directories are created with your user id, (b) ensuring that the permissions allow you to delete the files even if they're not owned by you, or (c) using elevated privileges (e.g., sudo rm ...) to delete the files.
Depending on what you're doing, option (a) may be easy. If you can run the container as a non-root user, e.g.:
docker run -u $UID -v $HOME/output:/some/container/path ...
...then everything will Just Work, because the files will be created with your userid.
If the container must run as root initially, you may be able to take care of the root-only actions in your ENTRYPOINT or CMD script, and then switch to another uid to run the main application. To do this, you would need to pass your user id into the container (e.g., as an environment variable) and then later use something like runuser to switch to the new uid:
exec runuser -u $TARGET_UID /some/command
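A rough entrypoint sketch of that approach (untested; TARGET_UID, the user name, and /some/command are placeholders, with the uid passed in via something like docker run -e TARGET_UID=$(id -u) ...):
#!/bin/sh
set -e
# ... root-only setup goes here (package installs, chown of work directories, etc.) ...
# runuser looks the user up in /etc/passwd, so create a user with the host's uid
# first (useradd here; an Alpine image would use adduser instead).
useradd --uid "$TARGET_UID" --no-create-home builder
exec runuser -u builder -- /some/command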
If neither of the above is an option, then sudo rm -rf mydirectory should work just as well as spinning up an interactive container.
If you only need the build artifacts in order to put them into the Docker image in the next stage, then it is probably worth using a multi-stage build.
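A hedged sketch of what that looks like (image names, paths, and the build command are all illustrative):
# Stage 1: build inside a full toolchain image; nothing here touches the host.
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/myapp .

# Stage 2: copy only the finished artifact into a small runtime image, so no
# root-owned files ever land in a host-mounted directory.
FROM debian:stable-slim
COPY --from=builder /out/myapp /usr/local/bin/myapp
CMD ["myapp"]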