Docker container write permissions

I have some Docker containers running on my machine, one of them being container_1.
I am able to access container_1's CLI using:
ant#ant~/D/m/l/db> docker exec -it container_1 bash
daemon#1997cc093b24:/$
This drops me into container_1's CLI, but with no write permissions. The following commands give a permission denied error:
ant#ant~/D/m/l/db> docker exec -it container_1 touch test.txt
bash: test.txt: Permission denied
ant#ant~/D/m/l/db> docker exec -it container_1 bash
daemon#1997cc093b24:/$ touch test.txt
bash: test.txt: Permission denied
I also tried the --privileged option, but the problem persisted:
ant#ant~/D/m/l/db> docker --privileged=true exec -it container_1 touch test.txt
bash: test.txt: Permission denied
So I have two questions:
How do permissions in Docker work?
Is this kind of modification to a Docker filesystem recommended? If not, why not?
I have recently started using Docker, so please tolerate the amateur question. Thanks in advance :)

Docker runs commands as a Linux user, which is bound by Linux filesystem permissions. So the answer to this question depends on:
The uid you are running commands as (this defaults to root, but can be overridden with a USER instruction in your image's Dockerfile, on the docker run CLI, or within your docker-compose.yml file).
The location where your command runs, since you are using a relative path. This defaults to /, but again can be overridden in various ways, most often with the WORKDIR instruction in the Dockerfile.
The directory and file permissions at that location.
Use ls -al inside the container to see the current permissions, and id to see the current uid. With docker exec you can pass a flag (-u) to change the current user. To change permissions, use chmod to change the permission bits themselves, chown to change the user ownership, and chgrp to change the group ownership, e.g.:
docker exec -u root container_1 chmod 777 .
Running as the root user, that command allows any user to read or write the current folder inside the container.
This assumes you haven't enabled any other security with SELinux or AppArmor.
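To see where you stand before changing anything, a quick diagnostic pass along the lines above (container name taken from the question; output varies per image):
docker exec container_1 id          # which uid/gid do commands run as?
docker exec container_1 pwd         # where does a relative path like test.txt resolve?
docker exec container_1 ls -ald /   # who owns that directory, and with what mode?
# Retry the write as an explicit uid without modifying the image:
docker exec -u 1000:1000 container_1 touch /tmp/test.txt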

Disable root login into the docker container

I am working on hardening our Docker images, of which I admittedly have a somewhat weak understanding. That being said, the current step I am on is preventing the user from running the container as root. To me, that says "when a user runs 'docker exec -it my-container bash', he shall be an unprivileged user" (correct me if I'm wrong).
When I start up my container via docker-compose, the start script needs to run as root, since it deals with importing certs and mounted files (created externally and seen through a volume mount). After that is done, I would like the user to be 'appuser' for any future access. This question seems to match pretty well what I'm looking for, but I am using docker-compose, not docker run: How to disable the root access of a docker container?
This seems relevant, as the startup command differs from, say, Tomcat's. We are running a Spring Boot application that we start with a simple 'java -jar jarFile', and the image is built using Maven's dockerfile-maven-plugin. With that being said, should I be changing the user to an unprivileged one before running that, or still after?
I believe changing the user inside the Dockerfile instead of the start script would do this... but then the start script would not run as root, and would blow up on the calls that require root. I had messed with ENTRYPOINT as well, but may have been doing it wrong there. Similarly, using "user:" in the yml file made the start.sh script run as that user instead of root, so that wasn't working either.
Dockerfile:
FROM parent/image:latest
ENV APP_HOME /apphome
ENV APP_USER appuser
ENV APP_GROUP appgroup
# Folder containing our application, i.e. jar file, resources, and scripts.
# This comes from unpacking our maven dependency
ADD target/classes/app ${APP_HOME}/
# Primarily just our start script, but some others
ADD target/classes/scripts /scripts/
# Need to create a folder that will be used at runtime
RUN mkdir -p ${APP_HOME}/data && \
    chmod +x /scripts/*.sh && \
    chmod +x ${APP_HOME}/*.*
# Create unprivileged user
RUN groupadd -r ${APP_GROUP} && \
    useradd -g ${APP_GROUP} -d ${APP_HOME} -s /sbin/nologin -c "Unprivileged User" ${APP_USER} && \
    chown -R ${APP_USER}:${APP_GROUP} ${APP_HOME}
WORKDIR $APP_HOME
EXPOSE 8443
CMD /scripts/start.sh
start.sh script:
#!/bin/bash
# setup SSL, modify java command, etc
# run our java application
java -jar "boot.jar"
# Switch users to always be unprivileged from here on out?
# Whatever "hardening" wants... Should this be before starting our application?
exec su -s "/bin/bash" $APP_USER
app.yml file:
version: '3.3'
services:
  app:
    image: app_image:latest
    labels:
      c2core.docker.compose.display-name: My Application
      c2core.docker.compose.profiles: a_profile
    volumes:
      - "data_mount:/apphome/data"
      - "cert_mount:/certs"
    hostname: some-hostname
    domainname: some-domain
    ports:
      - "8243:8443"
    environment:
      - some_env_vars
    depends_on:
      - another-app
    networks:
      a_network:
        aliases:
          - some-network
networks:
  a_network:
    driver: bridge
volumes:
  data_mount:
  cert_mount:
docker-compose shell script:
docker-compose -f app.yml -f another-app.yml "$@"
What I would expect is that anyone trying to access the container internally will be doing so as appuser and not root. The goal is to prevent someone from messing with things they shouldn't (i.e. docker itself).
What is happening is that the script changes users after the app has started (proven via an echo command), but the change doesn't seem to be maintained. If I exec into the container, I'm still root.
As David mentions, once someone has access to the docker socket (either via the API or the docker CLI), that typically means they have root access to your host. It's trivial to use that access to run a privileged container with host namespaces and volume mounts that let the attacker do just about anything.
When you need to initialize a container with steps that run as root, I do recommend gosu over something like su, since su was not designed for containers and leaves a process running as root as the parent of your app. Make sure that you exec the call to gosu, and that will eliminate anything running as root. However, the user you start the container as is also the default user for docker exec, and since you need to start as root, an exec will run as root unless you override it with a -u flag.
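For illustration, a minimal entrypoint along those lines (the init script path and the appuser name are hypothetical; gosu must be installed in the image):
#!/bin/sh
set -e
/scripts/import-certs.sh     # hypothetical root-only initialization
exec gosu appuser "$@"       # exec leaves no process running as root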
There are additional steps you can take to lock down docker in general:
Use user namespaces. These are defined on the entire daemon, and require that you destroy all containers and pull images again, since the uid mapping affects the storage of image layers. The user namespace offsets the uids used by docker so that root inside the container is not root on the host, while inside the container you can still bind to low-numbered ports and run administrative activities. (See the daemon.json snippet after this list.)
Consider authz plugins. Open Policy Agent and Twistlock are two that I know of, though I don't know if either would allow you to restrict the user of a docker exec command. They likely require that you give users a certificate to connect to docker rather than direct access to the docker socket, since the socket doesn't include any user details in the API requests it receives.
Consider rootless docker. This is still experimental, but since docker is not running as root, it has no access back to the host to perform root activities, mitigating many of the issues seen when containers are run as root.
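For the first of those steps, user namespaces are enabled daemon-wide, e.g. by adding the following to /etc/docker/daemon.json and restarting the daemon:
{
  "userns-remap": "default"
}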
You intrinsically can't prevent root-level access to your container.
Anyone who can run any Docker command at all can always run any of these three commands:
# Get a shell, as root, in a running container
docker exec -it -u 0 container_name /bin/sh
# Launch a new container, running a root shell, on some image
docker run --rm -it -u 0 --entrypoint /bin/sh image_name
# Get an interactive shell with unrestricted root access to the host
# filesystem (cd /host/var/lib/docker)
docker run --rm -it -v /:/host busybox /bin/sh
It is generally considered best practice to run your container as a non-root user, either with a USER directive in the Dockerfile or by running something like gosu in an entrypoint script, like what you show. You can't prevent root access, though, in the face of a privileged user who's sufficiently interested in getting it.
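As a minimal sketch of the USER-directive approach (base image and username here are arbitrary):
FROM alpine:3.19
RUN adduser -D appuser
USER appuser
CMD ["sh"]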
When docker is normally run from a single host, you can take some steps:
Make sure it is not run from another host by looking for a secret in a directory mounted from the accepted host.
Change the .bashrc of the users on the host so that they start the docker container as soon as they log in. When your users need to do other things on the host, give them an account without docker access and let them sudo to a special user with docker access (or use a startdocker script with a setuid flag).
Start the docker container with a script that you made and hardened, something like the startserver below.
#!/bin/bash
settings() {
  # Add mount dirs. The homedir in the docker will be different from the one on the host.
  mountdirs="-v /mirrored_home:/home -v /etc/dockercheck:/etc/dockercheck:ro"
  usroptions="--user $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro"
  usroptions="${usroptions} -v /etc/shadow:/etc/shadow:ro -v /etc/group:/etc/group:ro"
}
# call function that fills special variables
settings
image="my_image:latest"
docker run -ti --rm ${usroptions} ${mountdirs} -w $HOME --entrypoint=/bin/bash "${image}"
Adding a variable --env HOSTSERVER=${host} won't help with hardening; on another server one can simply add --env HOSTSERVER=servername_that_will_be_checked.
When the user logs in to the host, startserver will be called and the container started. After the call to startserver, add exit to the .bashrc, as in the snippet below.
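Concretely, the tail of such a user's .bashrc on the host might look like this (the install path of the script is hypothetical):
/usr/local/bin/startserver   # drop the user straight into the container
exit                         # close the host shell once the container exits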
Not sure if this works, but you can try it. Allow sudo access for a user/group with a limited execution command, where the sudo configuration only allows executing docker-cli. Create a shell script named docker-cli whose content runs the docker command, e.g. docker "$@". In this script, check the arguments and force the user to provide the --user or -u switch when executing docker's exec or attach commands, and also validate that the user doesn't pass -u root. E.g.:
sudo docker-cli exec -it containerid sh (fails)
sudo docker-cli exec -u root ... (fails)
sudo docker-cli exec -u mysql ... (passes)
You can even limit which docker commands a user can run inside this shell script; a sketch of such a wrapper follows.
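A minimal sketch of such a docker-cli wrapper, covering just the enforcement logic (hardening the script itself, e.g. against PATH tricks, is left out):
#!/bin/bash
# docker-cli: only allow exec/attach with an explicit non-root user.
if [[ "$1" == "exec" || "$1" == "attach" ]]; then
  user=""
  args=("$@")
  for ((i = 1; i < ${#args[@]}; i++)); do
    case "${args[$i]}" in
      -u|--user)     user="${args[$((i + 1))]}" ;;
      -u=*|--user=*) user="${args[$i]#*=}" ;;
    esac
  done
  if [[ -z "$user" || "$user" == "root" || "$user" == "0" ]]; then
    echo "docker-cli: exec/attach requires -u <non-root user>" >&2
    exit 1
  fi
fi
exec docker "$@"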

Bind mounts created using rootless docker have a weird uid on the host machine. How can I delete these folders?

I have the following docker-compose.yml file which creates a bind mount located in $HOME/test on the host system:
version: '3.8'
services:
  pg:
    image: postgres:13
    volumes:
      - $HOME/test:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=pass
      - PGUSER=postgres
I bring up the container and inspect the permissions of the bind mount directory:
$ docker-compose up -d
$ ls -l ~
drwx------ 19 4688518 usertest 4096 Mar 11 17:06 test
The folder ~/test is created with a different uid in order to prevent accidental manipulation of this folder outside of the container. But what if I really do want to manipulate it? For example, if I try to delete the folder, I get a permission denied error as expected:
$ rm ~/test -rf
rm: cannot remove '/home/usertest/test': Permission denied
I suspect that I need to change uids using the newuidmap command somehow, but I'm not sure how to go about that.
How can I delete these folders?
But what if I really do want to manipulate it?
Using Docker, you can:
Run a command (such as rm or sh) in the container as a specific user with the same UID, for example:
# Run shell session using your user with docker-compose
# You can then easily manipulate data
docker-compose exec -u 4688518 pg sh
# Run command directly with docker
# Docker container name may vary depending on your situation
# Use docker ps to see real container name
docker exec -it -u 4688518 stack_pg_1 rm -rf /var/lib/postgresql/data
Similar to the previous option, you can run a new container with:
# Will run sh by default
docker run -it -u 4688518 -v $HOME/test:/tmp/test busybox
# You can directly delete data with
docker run -it -u 4688518 -v $HOME/test:/tmp/test busybox rm -rf /tmp/test/*
This may be suitable if your pg container is stopped or deleted. The Docker image itself does not need to be the same as the one run by Docker Compose; you only need to specify the proper user UID.
Note: you may not be able to delete the folder using rm -rf /tmp/test, as user 4688518 may not have write permission on the /tmp folder itself, hence the use of /tmp/test/*.
Use any of the above, but as the root user, such as with -u 0 or -u root.
Without using Docker, you can effectively run the sudo command as suggested by the other answer, or even temporarily change permissions on said folder and then change them back. However, from experience, when manipulating Docker-related data it's easier and less error-prone to use Docker itself.
Dealing with user ids in Docker is tricky business, because Docker containers share the same kernel with the host operating system (at least on Linux). Consequently, any files that the container creates in the bind mount with a given uid will have the same uid on the host system.
Whenever the uid used by the container (let's say it's 2222) is different from your own uid (or you don't have write access to files owned by 2222), you won't be able to delete the folder. The easy workaround is to run sudo rm -rf ~/test.
Edit: If the user does not have admin rights, you can still give them rights to modify the generated files like so.
# Create a directory that the users can write in.
mkdir workspace
# Keep the container uid (2222) as the owner, and set the group to one your users share (3333).
sudo chown -R 2222:3333 workspace
# Give group write access.
sudo chmod -R g+w workspace
# Make sure that all users that should have write access are in group 3333.
Then you can run the container using
docker run --rm -u `id -u`:3333 -v `pwd`/workspace:/workspace \
-w /workspace alpine:latest touch myfile
which creates myfile in the workspace folder with the right permissions so your users can delete the file again.

Permission denied when trying to copy a file into a container

I'm following the steps here to set up a distributed test with JMeter, but when copying my local JMeter test into the master container I got a permission denied error, specifically:
sh: 2: /jmeter/apache-jmeter-3.3/bin/: Permission denied
I'm not clear on what you're trying to do.
If you're trying to copy a file from your host to a Docker container, why not just mount the file/directory into the container at runtime using --mount or -v? For example: docker run -v <local path>:<dst path on docker container> <ImageName>
Edit: This works between multiple containers as well. You can use shared volumes to share storage between two or more containers. Read more here: https://docs.docker.com/storage/volumes/
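If you do need a one-off copy rather than a mount, docker cp also works on a running container (container name master as in the tutorial; the test plan filename here is hypothetical):
docker cp test-plan.jmx master:/jmeter/test-plan.jmx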
Execute the following commands:
docker exec -t master chmod +x /jmeter/apache-jmeter-3.3/bin/jmeter.sh
docker exec -t slave01 chmod +x /jmeter/apache-jmeter-3.3/bin/jmeter.sh
etc.
This will make the jmeter.sh script executable via the chmod command.
Also be aware that, according to JMeter Best Practices, you should always be using the latest version of JMeter, so consider upgrading to JMeter 5.1 (or whatever the latest version is at the JMeter Downloads page) at the next available opportunity.

Docker volume and host permissions

When I run a Docker image, for example like
docker run -v /home/n1/workspace:/root/workspace -it rust:latest bash
and I create a directory in the container like
mkdir /root/workspace/test
it's owned by root on my host machine, which means I have to change the permissions every time after I shut down the container to be able to work with that directory.
Is there a way to tell Docker to handle directories and files from my (host) machine's point of view, under a certain user?
You need to run your application with the same uid inside the container as you use on the host to get file ownership to match. My own solution for this is to start the container as root, adjust the uid of the user inside the container to match the volume mount, and then switch to that user to run the app. Scripts for this can be found in this repo: https://github.com/sudo-bmitch/docker-base
In that repo, the fix-perms script handles the change of uid/gid inside the container, and the entrypoint script has an exec gosu $username "$@" that runs the app as the selected user.
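A minimal sketch of that pattern, independent of the linked scripts (assumes gosu is installed in the image, a user appuser already exists, and /root/workspace is the mount point from the question):
#!/bin/sh
set -e
# Align the container user's uid/gid with whoever owns the mounted volume,
# then drop privileges and run the app.
uid=$(stat -c '%u' /root/workspace)
gid=$(stat -c '%g' /root/workspace)
groupmod -o -g "$gid" appuser
usermod -o -u "$uid" appuser
exec gosu appuser "$@"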
Sure, that's because Docker uses root as the default user. You should create a user in your Docker container and switch to that user before making the folder; then you will get it without root permissions on your host machine.
Dockerfile
FROM rust:latest
...
RUN useradd -ms /bin/bash myuser
USER myuser

Permission denied inside Docker container

I have a running container, gigantic_booth, and I want to create the directory /etc/test:
# docker exec -it gigantic_booth /bin/bash
$ mkdir /etc/test
mkdir: cannot create directory '/etc/test': Permission denied
And the sudo command is not found. I don't want to create this directory at image build time, but once the container is started.
How can I do?
Thanks :)
I'm using the jenkins image, and I have just read that it has root access disabled for security reasons: https://github.com/jenkinsci/docker#installing-more-tools
I have re-built the image with this Dockerfile:
FROM jenkins
USER root
and now it works properly. It is not as secure, though.
Or just use docker exec -u root instead of rebuilding the image.
