Rootless docker image - docker

I tried running the latest builds of debian and alpine, but they seem to run as the root user.
I expected echo $USER not to return root. If it returns empty, I then verify with whoami; if that also returns root, we have logged into the docker container as root, which can lead to a vulnerability.

The usual way to deal with this is to set a non-root user in your Dockerfile (you can do docker run --user, but that can be confusing to programs since e.g. there won't be a home directory set up).
FROM ubuntu
RUN useradd --create-home appuser
WORKDIR /home/appuser
USER appuser
More details, and some other things you can do to secure your container: https://pythonspeed.com/articles/root-capabilities-docker-security/

According to this Stack Overflow answer, you need to pass the parameter --user <user> in order to log in as a non-root user.
Example: docker run -it --user nobody alpine

Related

Disable root login into the docker container [duplicate]

I am working on hardening our Docker images, something I admittedly have only a weak understanding of. With that being said, the current step I am on is preventing the user from running the container as root. To me, that says "when a user runs 'docker exec -it my-container bash', he shall be an unprivileged user" (correct me if I'm wrong).
When I start up my container via docker-compose, the start script that is run needs to run as root, since it deals with importing certs and mounted files (created externally and seen through a volume mount). After that is done, I would like the user to be 'appuser' for any future access. This question seems to match pretty well what I'm looking for, but I am using docker-compose, not docker run: How to disable the root access of a docker container?
This seems to be relevant, as the startup command differs from let's say tomcat. We are running a Spring Boot application that we start up with a simple 'java -jar jarFile', and the image is built using maven's dockerfile-maven-plugin. With that being said, should I be changing the user to an unprivileged user before running that, or still after?
I believe changing the user inside of the Dockerfile instead of the start script will do this... but then it will not run the start script as root, thus blowing up on calls that require root. I had messed with using ENTRYPOINT as well, but could have been doing it wrong there. Similarly, using "user:" in the yml file seemed to make the start.sh script run as that user instead of root, so that wasn't working.
Dockerfile:
FROM parent/image:latest
ENV APP_HOME /apphome
ENV APP_USER appuser
ENV APP_GROUP appgroup
# Folder containing our application, i.e. jar file, resources, and scripts.
# This comes from unpacking our maven dependency
ADD target/classes/app ${APP_HOME}/
# Primarily just our start script, but some others
ADD target/classes/scripts /scripts/
# Need to create a folder that will be used at runtime
RUN mkdir -p ${APP_HOME}/data && \
chmod +x /scripts/*.sh && \
chmod +x ${APP_HOME}/*.*
# Create unprivileged user
RUN groupadd -r ${APP_GROUP} && \
useradd -g ${APP_GROUP} -d ${APP_HOME} -s /sbin/nologin -c "Unprivileged User" ${APP_USER} && \
chown -R ${APP_USER}:${APP_GROUP} ${APP_HOME}
WORKDIR $APP_HOME
EXPOSE 8443
CMD /scripts/start.sh
start.sh script:
#!/bin/bash
# setup SSL, modify java command, etc
# run our java application
java -jar "boot.jar"
# Switch users to always be unprivileged from here on out?
# Whatever "hardening" wants... Should this be before starting our application?
exec su -s "/bin/bash" $APP_USER
app.yml file:
version: '3.3'
services:
  app:
    image: app_image:latest
    labels:
      c2core.docker.compose.display-name: My Application
      c2core.docker.compose.profiles: a_profile
    volumes:
      - "data_mount:/apphome/data"
      - "cert_mount:/certs"
    hostname: some-hostname
    domainname: some-domain
    ports:
      - "8243:8443"
    environment:
      - some_env_vars
    depends_on:
      - another-app
    networks:
      a_network:
        aliases:
          - some-network
networks:
  a_network:
    driver: bridge
volumes:
  data_mount:
  cert_mount:
docker-compose shell script:
docker-compose -f app.yml -f another-app.yml "$@"
What I would expect is that anyone trying to access the container internally will be doing so as appuser and not root. The goal is to prevent someone from messing with things they shouldn't (i.e. docker itself).
What is happening is that the script will change users after the app has started (proven via an echo command), but it doesn't seem to be maintained. If I exec into it, I'm still root.
As David mentions, once someone has access to the docker socket (either via API or with the docker CLI), that typically means they have root access to your host. It's trivial to use that access to run a privileged container with host namespaces and volume mounts that let the attacker do just about anything.
When you need to initialize a container with steps that run as root, I do recommend gosu over something like su, since su was not designed for containers and leaves a root-owned process running as the parent pid. Make sure that you exec the call to gosu and that will eliminate anything running as root. However, the user you start the container as is the same as the user used for docker exec, and since you need to start as root, your exec will run as root unless you override it with a -u flag.
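For illustration, a minimal entrypoint sketch of that pattern; the user/group names and paths are taken from the Dockerfile and compose file above, and gosu is assumed to be installed in the image:
#!/bin/sh
# entrypoint.sh (sketch) - starts as root, then drops privileges
set -e
# root-only initialization, e.g. fix ownership of the mounted certs and data
chown -R appuser:appgroup /certs /apphome/data
# exec + gosu hands the main process to the unprivileged user; nothing stays running as root
exec gosu appuser "$@"
The Dockerfile would then point ENTRYPOINT at this script and keep the java -jar start as CMD; as noted above, docker exec without a -u flag will still enter as root.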
There are additional steps you can take to lock down docker in general:
Use user namespaces. These are defined for the entire daemon and require that you destroy all containers and pull images again, since the uid mapping affects the storage of image layers. The user namespace offsets the UIDs used by docker so that root inside the container is not root on the host, while inside the container you can still bind to low-numbered ports and run administrative activities (a daemon.json sketch follows this list).
Consider authz plugins. Open Policy Agent and Twistlock are two that I know of, though I don't know if either would allow you to restrict the user of a docker exec command. They likely require that you give users a certificate to connect to docker rather than giving them direct access to the docker socket, since the socket doesn't include any user details in the API requests it receives.
Consider rootless docker. This is still experimental, but since docker is not running as root, it has no access back to the host to perform root activities, mitigating many of the issues seen when containers are run as root.
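For the user-namespace option, a minimal /etc/docker/daemon.json sketch (it assumes the default dockremap mapping; the daemon must be restarted and images re-pulled afterwards):
{
  "userns-remap": "default"
}
With "default", Docker creates a dockremap user and takes the uid/gid offsets from /etc/subuid and /etc/subgid.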
You intrinsically can't prevent root-level access to your container.
Anyone who can run any Docker command at all can always run any of these three commands:
# Get a shell, as root, in a running container
docker exec -it -u 0 container_name /bin/sh
# Launch a new container, running a root shell, on some image
docker run --rm -it -u 0 --entrypoint /bin/sh image_name
# Get an interactive shell with unrestricted root access to the host
# filesystem (cd /host/var/lib/docker)
docker run --rm -it -v /:/host busybox /bin/sh
It is generally considered best practice to run your container as a non-root user, either with a USER directive in the Dockerfile or running something like gosu in an entrypoint script, like what you show. You can't prevent root access, though, in the face of a privileged user who's sufficiently interested in getting it.
When the container is normally run from a single host, there are some steps you can take:
Make sure it is not run from another host by looking for a secret in a directory mounted from the accepted host.
Change the .bashrc of the users on the host, so that they start running the container as soon as they log in. When your users need to do other things on the host, give them an account without docker access and let them sudo to a special user with docker access (or use a startdocker script with a setuid flag); a sudoers sketch for that appears at the end of this answer.
Start the container with a script that you wrote and hardened, something like the startserver script below.
#!/bin/bash
settings() {
  # Add mount dirs. The homedir in the docker will be different from the one on the host.
  mountdirs="-v /mirrored_home:/home -v /etc/dockercheck:/etc/dockercheck:ro"
  usroptions="--user $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro"
  usroptions="${usroptions} -v /etc/shadow:/etc/shadow:ro -v /etc/group:/etc/group:ro"
}
# call function that fills special variables
settings
image="my_image:latest"
docker run -ti --rm ${usroptions} ${mountdirs} -w $HOME --entrypoint=/bin/bash "${image}"
Adding a variable like --env HOSTSERVER=${host} won't help with hardening; on another server one can simply add --env HOSTSERVER=servername_that_will_be_checked.
When the user logs in to the host, startserver will be called and the container started. After the call to startserver, add exit to the .bashrc.
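For the sudo-to-a-special-user variant mentioned in the steps above, a sketch of the sudoers rule (the account names and script path are placeholders, not taken from the question):
# /etc/sudoers.d/startserver (edit with visudo)
# alice may only run the hardened start script, and only as dockeruser
alice ALL=(dockeruser) NOPASSWD: /usr/local/bin/startserver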
Not sure if this works, but you can try: allow sudo access for the user/group with a limited set of commands, where the sudo configuration only allows executing docker-cli. Create a shell script named docker-cli whose content runs the docker command, e.g. docker "$@". In this script, check the arguments and force the user to provide the --user or -u switch when running docker's exec or attach commands, and also validate that the user doesn't pass -u root (a rough sketch of such a wrapper follows the examples below). E.g.
sudo docker-cli exec -it containerid sh (failed)
sudo docker-cli exec -u root ... (failed)
sudo docker-cli exec -u mysql ... (Passed)
You can even limit which docker subcommands a user can run inside this shell script.
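A rough sketch of such a docker-cli wrapper, under the assumptions above (it only catches the plain -u/--user spellings; a real implementation would need stricter argument parsing):
#!/bin/bash
# docker-cli (sketch) - force a non-root --user on exec/attach, then pass everything to docker
subcmd="$1"
if [ "$subcmd" = "exec" ] || [ "$subcmd" = "attach" ]; then
  case "$*" in
    *"-u root"*|*"--user root"*|*"-u 0"*|*"--user 0"*)
      echo "running as root inside the container is not allowed" >&2
      exit 1 ;;
  esac
  case "$*" in
    *"-u "*|*"--user "*) ;;   # a user was supplied, allow it
    *) echo "please pass --user <name> with $subcmd" >&2; exit 1 ;;
  esac
fi
exec /usr/bin/docker "$@"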

Starting a docker container inside docker with non-root permission [duplicate]

I have this Dockerfile:
FROM chekote/gulp:latest
USER root
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y sudo libltdl-dev
ARG dockerUser='my-user-name';
ARG group='docker';
# create group if it does not exist
RUN if ! grep -q -E "^$group:" /etc/group; then groupadd $group; fi
# create user if not exists
RUN if ! grep -q -E "^$dockerUser:" /etc/passwd; then useradd -c 'Docker image creator' -m -s '/bin/bash' -g $group $dockerUser; fi
# add user to the group (if it was present and not created at the line above)
RUN usermod -a -G ${group} ${dockerUser}
# set default user that runs the container
USER ${dockerUser}
That I build this way:
docker build --tag my-gulp:latest .
and finally run via this script:
#!/bin/bash
image="my-gulp:latest";
workDir='/home/gulp/project';
docker run -it --rm \
-v $(pwd):${workDir} \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
${image} /bin/bash
That logs me into the docker container properly, but when I want to list images
docker images
or try to pull an image
docker pull hello-world:latest
I get this error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.38/images/json: dial unix /var/run/docker.sock: connect: permission denied
How to create docker image from chekote/gulp:latest so I can use docker inside it without the error?
Or maybe the error is because of wrong docker run command?
A quick way to avoid that: add your user to the docker group.
sudo gpasswd -a $USER docker
Then set the proper permissions.
sudo setfacl -m "user:$USER:rw" /var/run/docker.sock
Should be good from there.
The permission matching happens only on the numeric user ID and group ID. If the socket file is mode 0660 and owned by user ID 0 and group ID 32, and you're calling it as a user with user ID 1000 and group IDs 1000 and 16, it doesn't matter if one /etc/group file names gid 32 docker and the other names gid 16 docker as well; the numeric gids are different and you can't access the file. Also, since the actual numeric gid of the Docker group will vary across systems, this isn't something you can bake into the Dockerfile.
Many Docker images just run as root; if they do, they can access a bind-mounted Docker socket file regardless of its permissions.
If you run as a non-root user, you can use the docker run --group-add option to add a (numeric) gid to the effective user; it doesn't specifically need to be mentioned in the /etc/group file. On a Linux host you might run:
docker run --group-add $(stat -c '%g' /var/run/docker.sock) ...
You wouldn't usually install sudo in a Dockerfile (it doesn't work well for non-interactive programs, you usually don't do a whole lot in interactive shells because of the ephemeral nature of containers, and you can always docker exec -u 0 to get a root shell) though installing some non-root user is often considered a best practice. You could reduce the Dockerfile to
FROM node:8
RUN apt-get update
# Trying to use the host's `docker` binary may not work well
RUN apt-get install -y docker.io
# Install the single node tool you need
RUN npm install -g gulp
# Get your non-root user
RUN adduser --disabled-password --gecos '' myusername
# Normal Dockerfile bits
WORKDIR ...
COPY ...
RUN gulp
USER myusername
CMD ["npm", "run", "start"]
(That Docker base image has a couple of things that don't really match Docker best practices, and doesn't seem to be updated routinely; I'd just use the standard node image as a base and add the one build tool you need on top of it.)
Open a terminal and type this command:
sudo chmod 666 /var/run/docker.sock
Let me know the results...
You need the --privileged flag with your docker run command.
By the way, you can just use the docker-in-docker image from Docker for this kind of use case.
https://asciinema.org/a/24707
https://hub.docker.com/_/docker/
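For reference, a minimal sketch of trying the official docker:dind image (the container name is arbitrary; the inner daemon does need --privileged):
# start a Docker daemon inside a container
docker run --privileged --name some-docker -d docker:dind
# the dind image ships the docker CLI, so you can exec in and talk to the inner daemon
docker exec -it some-docker docker info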
The error has nothing to do with the docker pull or docker images subcommand; rather, you need to call the docker command as a user with write access to the docker socket (for example, by being root, using sudo, or by being in the docker group).

How to execute something as a root user (initially) for an otherwise non-root user container?

I want to sync the host machine's user/group with the container's, to enable developers to edit the files both inside and outside the container. There are some ideas like this: Handling Permissions with Docker Volumes, which creates a new user.
I would like to try a similar approach, but instead of creating a new user, I would like to modify the existing user using usermod:
usermod -d /${tmp} docker # point home at a temp dir first, so `usermod -u` doesn't modify file ownership automatically
usermod -u "${HOST_USER_ID}" docker
groupmod -g "${HOST_GROUP_ID}" docker
usermod -d ${HOME} docker
This idea seems to work, but when the container is run as docker user (which is what I want), usermod complains that "this user has a process running and so it can't change the user id".
If I add sudo, it will change the user id, but it will break on the next sudo with the following exception: sudo: unknown uid 1000: who are you?, as a consequence of side-stepping the above problem.
sudo usermod -d /${tmp} docker
sudo usermod -u "${HOST_USER_ID}" docker
sudo groupmod -g "${HOST_GROUP_ID}" docker # `sudo: unknown uid 1000: who are you?`
sudo usermod -d ${HOME} docker # `sudo: unknown uid 1000: who are you?`
Is it possible to run something as root when the container is started, along with a bootstrap script as a normal user? It seems like the Dockerfile's CMD can't execute two commands; nor can I combine multiple commands into one script, since I need to run as two users - or can I? I know I can create a different image, but I am wondering if there are cleaner alternatives.
You can start your container as root, allow the ENTRYPOINT script to perform any changes you want, and then switch to an unprivileged user when you execute the container CMD. E.g., use an ENTRYPOINT script something like this:
#!/bin/sh
usermod -d /${tmp} docker
usermod -u "${HOST_USER_ID}" docker
groupmod -g "${HOST_GROUP_ID}" docker
exec runuser -u docker -- "$@"
If you don't have the runuser command, you can get similar behavior using su.
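A minimal Dockerfile sketch of that wiring, assuming the docker user already exists in the base image (the base image and CMD here are placeholders):
FROM ubuntu
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# no USER directive: the entrypoint must start as root so usermod/groupmod can work
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
# whatever the container should actually run, executed as the docker user via the entrypoint
CMD ["bash"]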

Why does my Docker container not work when the container UID is missing on the host?

I have created a docker image for running a wildfly server. In the docker image I define a separate user called 'imixs'. This user is running the service (wildfly server) inside the container.
As I have seen in several examples I created my service user with UID 1000.
RUN groupadd -r imixs -g 1000 && useradd -u 1000 -r -g imixs -m -d /home/imixs -s /sbin/nologin -c "imixs user" imixs && \
chmod 755 /opt
Now I have the situation that, on a host where this UID is missing, the container cannot run correctly, because the service (wildfly) complains about missing write permissions.
Is it recommended to start the container in this case with the option -u to run the container with a different user from the host system?
docker run ... -u myuser ....
How can I tell docker-compose to run the containers with a different user from the host system?
Is it in general recommended to create images with users other than root?
My Dockerfile can be seen here: https://hub.docker.com/r/imixs/wildfly/~/dockerfile/
I think I must withdraw my own question. I solved this behavior after upgrading from docker version 1.6 to the current release 1.13.0 and upgrading docker-compose from 1.5 to 1.10.0.
Maybe there was something wrong in general with my old installation and reinstalling fixed this problem.
In any case I can confirm that there is no read/write permission issue if the container runs a service with a non-privileged user that is not defined on the host system.
To answer my own questions:
Yes, it is recommended to run services in a container as a non-privileged user other than root (uid 0)
I still do not know whether docker-compose can change the user used to run a specific container
No, it is not recommended to create images that run as the root user
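On the second point, for what it's worth, Compose does accept a per-service user: key (the same override the docker run -u flag provides). A minimal, untested sketch for the image from the question:
version: '3.3'
services:
  wildfly:
    image: imixs/wildfly
    # run the service as uid 1000, the imixs user created in the image
    user: "1000"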

Changing the user's uid in a pre-built docker container (jenkins)

I am new to docker, so if this is a fairly obvious process that I am missing, I do apologize for the dumb question up front.
I am setting up a continuous integration server using the jenkins docker image. I did a docker pull jenkins, and created a user jenkins to allow me to mount the /var/jenkins_home in the container to my host's /var/jenkins_home (also owned by jenkins:jenkins user).
The problem is that the container seems to define the jenkins user with uid 102, but my host has the jenkins user as 1002, so when I run it I get:
docker run --name jenkins -u jenkins -p 8080 -v /var/jenkins_home:/var/jenkins_home jenkins
/usr/local/bin/jenkins.sh: line 25: /var/jenkins_home/copy_reference_file.log: Permission denied
I would simply make the uid for the host's jenkins user be 102 in /etc/passwd, but that uid is already taken by sshd. I think the solution is to change the container to use uid 1002 instead, but I am not sure how.
Edit
Actually, user 102 on the host is messagebus, not sshd.
Please take a look at the docker file I just uploaded:
https://github.com/bdruemen/jenkins-docker-uid-from-volume/blob/master/Dockerfile .
Here the UID is extracted from a mounted volume (host directory), with
stat -c '%u' <VOLUME-PATH>
Then the UID of the container user is changed to the same value with
usermod -u <UID> <USERNAME>
This has to be done as root, but then root privileges are dropped with
gosu <USERNAME> <COMMAND>
Everything is done in the ENTRYPOINT, so the real UID is unknown until you run
docker run -d -v <HOST-DIRECTORY>:<VOLUME-PATH> ...
Note that after changing the UID, there might be some other files no longer accessible for the process in the container, so you might need a
chown -R <USERNAME> <SOME-PATH>
before the gosu command.
You can also change the GID, see my answer here
Jenkins in docker with access to host docker
and maybe you want to change both to increase security.
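Pieced together, that ENTRYPOINT might look roughly like this (the jenkins user name and /var/jenkins_home path follow the question; gosu is assumed to be present in the image):
#!/bin/bash
# entrypoint sketch: adopt the UID of the mounted volume, then drop root
set -e
volume=/var/jenkins_home
# numeric owner of the bind-mounted host directory
uid=$(stat -c '%u' "$volume")
# re-map the in-container jenkins user to that UID
usermod -u "$uid" jenkins
# if other files in the image are still owned by the old UID, re-own them here,
# e.g. chown -R jenkins <SOME-PATH>
# drop root privileges and run the real command
exec gosu jenkins "$@"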
You can simply change the UID in /etc/passwd, assuming that no other user has UID 1002.
You will then need to change the ownership of /var/jenkins_home on your host to UID 1002:
chown -R jenkins /var/jenkins_home
In fact, you don't even need a jenkins user on the host to do this; you can simply run:
chown -R 1002 /var/jenkins_home
This will work even if there is no user with UID 1002 available locally.
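As a sketch, the same change can be baked into a derived image instead (uid 1002 comes from the question; the base image tag is assumed):
FROM jenkins
USER root
# give the in-container jenkins user the host's UID and fix up ownership
RUN usermod -u 1002 jenkins && chown -R jenkins /var/jenkins_home
USER jenkins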
Another solution is to build your own docker image, based on the Jenkins image, that has an ENTRYPOINT script that looks something like:
#!/bin/sh
chown -R jenkins /var/jenkins_home
exec "$@"
This will (recursively) chown /var/jenkins_home inside the container to whatever UID is used by the jenkins user (this assumes that your Docker container is starting as root, which is true unless there was a USER directive in the history of the image).
Update
You can create a new image, based on (FROM ...) the jenkins image, with a Dockerfile that performs the necessary edits to the /etc/passwd file. But that seems like a lot of work for not much gain. It's not clear why you're creating a jenkins user on the host or whether you actually need access to the jenkins home directory on the host.
If all you're doing is providing data persistence, consider using a data volume container and --volumes-from rather than a host volume, because this will isolate the data volume from your host so that UID conflicts don't cause confusion.
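A short sketch of that data-volume-container pattern (the container names here are arbitrary):
# a named container whose only job is to own the jenkins_home volume
docker create -v /var/jenkins_home --name jenkins-data jenkins /bin/true
# run Jenkins against that volume instead of a host directory
docker run -d --name jenkins -p 8080:8080 --volumes-from jenkins-data jenkins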
I had the same error; I turned SELinux off (on CentOS) and it works.
Otherwise, it would be better to tune SELinux with the semanage commands.
The ideal is to change the UID used by jenkins in your Dockerfile to the same UID used on the host (remember that this should be done for non-root users; if the service runs as root, create a new user and configure the service inside the container to run as that user).
Assuming the user's UID on the host is 1003 and the user is called jenkins (use the id command to get the user and group ids).
Add to your Dockerfile
# Modifies the user's UID and GID
RUN groupmod -g 1003 jenkins && usermod -u 1003 -g 1003 jenkins
# I use a group (docker) on my host to organize the privileges;
# if that's your case, add the user to this group inside the container.
RUN groupadd -g 998 docker && usermod -aG docker jenkins
