Run a command as root with docker-compose?

While in the process of painstakingly sculpting a Dockerfile and docker-compose.yml, what is THE RIGHT WAY to run a root shell in work-in-progress containers (without actually starting their services!) in order to debug issues? I need to be able to run the shell as root, because only root has full access to the files containing the information that I need to examine.
I can modify Dockerfile and docker-compose.yml in order to achieve this goal; as I wrote above, I am in the process of sculpting those anyway.
The problem however is that about the only way I can think of is putting USER root in Dockerfile or user: root in docker-compose.yml, but those SimplyHaveNoEffect™ in the docker-compose run <service> bash scenario. whoami in the shell thus started says neo4j instead of root, no matter what I try.
I might add sudo to the image, which doesn't have sudo, but this should be considered last resort. Also using docker directly instead of docker-compose is less than preferable.

All of the commands that launch shells in containers (including, for example, docker-compose run) have a --user option, so you can specify an arbitrary user for your debugging shell.
docker-compose run -u root <service> bash
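If the container is already up and running, the exec subcommand accepts the same flag (substitute your own service name):
docker-compose exec -u root <service> bash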
If you're in the process of debugging your image build, note that each build step produces an image, and you can run a debugging shell on that image. (For example, examine the step before a RUN step to see what the filesystem looks like before it executes, or after to see its results.)
$ docker build .
...
Step 7/9 : RUN ...
---> Using cache
---> 55c91a5dca05
...
$ docker run --rm -it -u root 55c91a5dca05 bash
In both of these cases the command (bash) overrides the CMD in the Dockerfile. If you have an ENTRYPOINT wrapper script, it will still run, but the standard exec "$@" command at its end will launch your debugging shell. If you've set your default command as ENTRYPOINT, change it to CMD to better support this use case (and also the wrapper entrypoint pattern, should you need it).
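As a minimal sketch of that wrapper-entrypoint pattern (file and command names here are illustrative, not taken from the question):
#!/bin/sh
# entrypoint.sh: do one-time startup work here (render configs, fix
# permissions, wait for dependencies), then hand control to whatever
# command was requested: the CMD by default, or bash for debugging.
exec "$@"
and in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["my-service"]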
If you really can't change the Dockerfile, you can override the ENTRYPOINT too, but it's a little awkward.
docker run --rm -it -u root --entrypoint ls myimage -al /app

You can also use it this way:
version: '3'
services:
  jenkins:
    user: root
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - /jenkins:/var/jenkins_home
You can refer to How to configure docker-compose.yml to up a container as root.

Related

Disable root login into the docker container [duplicate]

I am working on hardening our Docker images, of which I admittedly have a somewhat weak understanding. With that being said, the current step I am on is preventing the user from running the container as root. To me, that says "when a user runs 'docker exec -it my-container bash', he shall be an unprivileged user" (correct me if I'm wrong).
When I start up my container via docker-compose, the start script that is run needs to run as root since it deals with importing certs and mounted files (created externally and seen through a volume mount). After that is done, I would like the user to be 'appuser' for any future access. This question seems to match pretty well what I'm looking for, but I am using docker-compose, not docker run: How to disable the root access of a docker container?
This seems to be relevant, as the startup command differs from, let's say, Tomcat's. We are running a Spring Boot application that we start up with a simple 'java -jar jarFile', and the image is built using Maven's dockerfile-maven-plugin. With that being said, should I be changing the user to an unprivileged one before running that, or still after?
I believe changing the user inside of the Dockerfile instead of the start script will do this... but then it will not run the start script as root, thus blowing up on calls that require root. I had messed with using ENTRYPOINT as well, but could have been doing it wrong there. Similarly, using "user:" in the yml file seemed to make the start.sh script run as that user instead of root, so that wasn't working.
Dockerfile:
FROM parent/image:latest
ENV APP_HOME /apphome
ENV APP_USER appuser
ENV APP_GROUP appgroup
# Folder containing our application, i.e. jar file, resources, and scripts.
# This comes from unpacking our maven dependency
ADD target/classes/app ${APP_HOME}/
# Primarily just our start script, but some others
ADD target/classes/scripts /scripts/
# Need to create a folder that will be used at runtime
RUN mkdir -p ${APP_HOME}/data && \
    chmod +x /scripts/*.sh && \
    chmod +x ${APP_HOME}/*.*
# Create unprivileged user
RUN groupadd -r ${APP_GROUP} && \
    useradd -g ${APP_GROUP} -d ${APP_HOME} -s /sbin/nologin -c "Unprivileged User" ${APP_USER} && \
    chown -R ${APP_USER}:${APP_GROUP} ${APP_HOME}
WORKDIR $APP_HOME
EXPOSE 8443
CMD /opt/scripts/start.sh
start.sh script:
#!/bin/bash
# setup SSL, modify java command, etc
# run our java application
java -jar "boot.jar"
# Switch users to always be unprivileged from here on out?
# Whatever "hardening" wants... Should this be before starting our application?
exec su -s "/bin/bash" $APP_USER
app.yml file:
version: '3.3'
services:
  app:
    image: app_image:latest
    labels:
      c2core.docker.compose.display-name: My Application
      c2core.docker.compose.profiles: a_profile
    volumes:
      - "data_mount:/apphome/data"
      - "cert_mount:/certs"
    hostname: some-hostname
    domainname: some-domain
    ports:
      - "8243:8443"
    environment:
      - some_env_vars
    depends_on:
      - another-app
    networks:
      a_network:
        aliases:
          - some-network
networks:
  a_network:
    driver: bridge
volumes:
  data_mount:
  cert_mount:
docker-compose shell script:
docker-compose -f app.yml -f another-app.yml "$@"
What I would expect is that anyone trying to access the container internally will be doing so as appuser and not root. The goal is to prevent someone from messing with things they shouldn't (i.e. docker itself).
What is happening is that the script will change users after the app has started (proven via an echo command), but it doesn't seem to be maintained. If I exec into it, I'm still root.
As David mentions, once someone has access to the docker socket (either via API or with the docker CLI), that typically means they have root access to your host. It's trivial to use that access to run a privileged container with host namespaces and volume mounts that let the attacker do just about anything.
When you need to initialize a container with steps that run as root, I do recommend gosu over something like su since su was not designed for containers and will leave a process running as the root pid. Make sure that you exec the call to gosu and that will eliminate anything running as root. However, the user you start the container as is the same as the user used for docker exec, and since you need to start as root, your exec will run as root unless you override it with a -u flag.
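A minimal sketch of that pattern, assuming gosu is already installed in the image and reusing the appuser name from the question (the cert-import line is a placeholder for whatever root-only setup you need):
#!/bin/sh
# entrypoint sketch: the container starts as root, does the root-only
# setup, then drops privileges before launching the real process.
set -e
/scripts/import-certs.sh     # placeholder for the root-only cert/volume work
exec gosu appuser "$@"       # exec + gosu: nothing is left running as root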
There are additional steps you can take to lock down docker in general:
Use user namespaces. These are defined on the entire daemon, and require that you destroy all containers and pull images again, since the uid mapping affects the storage of image layers. The user namespace offsets the uids used by docker so that root inside the container is not root on the host, while inside the container you can still bind to low-numbered ports and run administrative activities (see the daemon.json sketch below).
Consider authz plugins. Open Policy Agent and Twistlock are two that I know of, though I don't know if either would allow you to restrict the user of a docker exec command. They likely require that you give users a certificate to connect to docker rather than giving them direct access to the docker socket, since the socket doesn't have any user details included in the API requests it receives.
Consider rootless docker. This is still experimental, but since docker is not running as root, it has no access back to the host to perform root activities, mitigating many of the issues seen when containers are run as root.
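For the user-namespace option above, turning it on is a single daemon setting (a sketch of /etc/docker/daemon.json; it requires a daemon restart and, as noted, re-pulled images):
{
  "userns-remap": "default"
}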
You intrinsically can't prevent root-level access to your container.
Anyone who can run any Docker command at all can always run any of these three commands:
# Get a shell, as root, in a running container
docker exec -it -u 0 container_name /bin/sh
# Launch a new container, running a root shell, on some image
docker run --rm -it -u 0 --entrypoint /bin/sh image_name
# Get an interactive shell with unrestricted root access to the host
# filesystem (cd /host/var/lib/docker)
docker run --rm -it -v /:/host busybox /bin/sh
It is generally considered best practice to run your container as a non-root user, either with a USER directive in the Dockerfile or running something like gosu in an entrypoint script, like what you show. You can't prevent root access, though, in the face of a privileged user who's sufficiently interested in getting it.
When Docker is normally run from a single host, there are some steps you can take:
Make sure it is not run from another host by looking for a secret in a directory mounted from the accepted host.
Change the .bashrc of the users on the host so that they start running the container as soon as they log in. When your users need to do other things on the host, give them an account without docker access and let them sudo to a special user with docker access (or use a startdocker script with a setuid flag).
Start the container with a script that you made and hardened, something like the startserver script below.
#!/bin/bash
settings() {
  # Add mount dirs. The homedir in the docker will be different from the one on the host.
  mountdirs="-v /mirrored_home:/home -v /etc/dockercheck:/etc/dockercheck:ro"
  usroptions="--user $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro"
  usroptions="${usroptions} -v /etc/shadow:/etc/shadow:ro -v /etc/group:/etc/group:ro"
}
# call function that fills special variables
settings
image="my_image:latest"
docker run -ti --rm ${usroptions} ${mountdirs} -w $HOME --entrypoint=/bin/bash "${image}"
Adding a variable --env HOSTSERVER=${host} won't help with hardening; on another server someone can simply add --env HOSTSERVER=servername_that_will_be_checked.
When the user logs in to the host, startserver will be called and the container started. After the call to startserver, add exit to the .bashrc.
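Put together, that login flow might look like this (a sketch; the script path is illustrative):
# appended to the user's ~/.bashrc on the host
/usr/local/bin/startserver   # runs the hardened docker run wrapper shown above
exit                         # drop the host shell once the container session ends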
Not sure if this works, but you can try: allow sudo access for a user/group with a limited set of commands, where the sudo configuration only allows executing a docker-cli wrapper. Create a shell script named docker-cli whose content runs the docker command, e.g. docker "$@". In this script, check the arguments and force the user to provide the --user or -u switch when executing docker's exec or attach commands. Also validate that the user doesn't pass -u root. E.g.
sudo docker-cli exec -it containerid sh (failed)
sudo docker-cli exec -u root ... (failed)
sudo docker-cli exec -u mysql ... (Passed)
You can even limit the docker commands a user can run inside this shell script.
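A rough sketch of such a docker-cli wrapper (the checks are deliberately simple and only illustrate the policy described above; harden the argument parsing before real use):
#!/bin/bash
# docker-cli: sudo-able wrapper that forwards vetted commands to docker
cmd="$1"
if [ "$cmd" = "exec" ] || [ "$cmd" = "attach" ]; then
  case "$*" in
    *"-u root"*|*"--user root"*|*"-u 0"*|*"--user 0"*)
      echo "running as root inside the container is not allowed" >&2
      exit 1 ;;
  esac
  case "$*" in
    *"-u "*|*"--user "*) ;;   # a non-root user was supplied, let it through
    *)
      echo "you must pick a non-root user with -u/--user" >&2
      exit 1 ;;
  esac
fi
exec docker "$@"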

Docker-Compose script not running

I'm pretty new to Docker and especially to docker-compose and I'm running into an issue I can't seem to fix.
I have a docker-compose.yml file that looks like
version: '3.7'
services:
  backup:
    build:
      context: .
      dockerfile: Dockerfile
    command: sh -c "while :;do sleep 5; done"
    tty: true
    stdin_open: true
    volumes:
      - ./data:/app/data
and I have a file called start.sh that simply looks like
python3 -u ./upload_to_s3.py > log/upload_to_s3.f9beb4d9.out 2>&1 &
When I run docker-compose exec backup /bin/sh I can get a shell inside the container, and I can run ./start.sh and it will run my processes, which I can verify through a simple ps aux. However, when I run
docker-compose exec backup sh start.sh
it doesn't seem to run at all.
I try to verify by getting back into the container and running ps aux and, in fact, the python script is not running.
What's going on? Why can't I seem to run my start.sh file using docker-compose?
EDIT: I've also tried to run this using docker-compose run --rm --detach --entrypoint="sh" backup -c "/app/start.sh" and I get the exact same issue.
The script you show starts a background process. But if that's run in the context of a docker exec debugging shell, as soon as the docker exec command completes, any background processes that are still running will get terminated.
I might run this in a temporary container instead of a docker exec session. The important thing is to run this as a foreground process instead of launching a background job. For example:
docker-compose run backup \
  ./upload_to_s3.py
docker-compose run will inherit many of the settings from the backup container, like its image: and volumes: mounts, but you get to specify the command: to run at the command line. This also saves you the trouble of keeping a meaningless container alive so that you can docker exec into it later; just run a new container for these one-off tasks.
(Note, the specific invocation I've shown here assumes that the Python script is marked executable, with chmod +x if required; that it begins with a "shebang" line like #!/usr/bin/env python3; and that the image sets an environment variable ENV PYTHONUNBUFFERED=1.)
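If any of those assumptions don't hold yet, the missing pieces are small; a hypothetical Dockerfile for the backup service could look like this (the base image and file names are assumptions, except /app, which matches the volume mount in the question):
FROM python:3.9-slim            # assumed base image
WORKDIR /app
ENV PYTHONUNBUFFERED=1          # print the script's output immediately instead of buffering it
COPY upload_to_s3.py ./
RUN chmod +x upload_to_s3.py    # only needed if the file isn't already executable
# upload_to_s3.py itself should start with:  #!/usr/bin/env python3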

Apache/Nifi 1.12.1 Docker Image Issue

I have a Dockerfile based on apache/nifi:1.12.1 and want to expand it like this:
FROM apache/nifi:1.12.1
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
The thing is that the folder isn't created when I build the image on Linux distros like Ubuntu and CentOS. The build succeeds, and I run it with docker run -it -d --rm --name nifi nifi-test, but when I enter the container through docker exec there's no flow dir.
The strange thing is that the flow dir is created normally when I build the image on Windows with Docker Desktop. I can't understand why this is happening.
I've tried things such as USER nifi or RUN chown ... but still...
For your convenience, this is the base image:
https://github.com/apache/nifi/blob/rel/nifi-1.12.1/nifi-docker/dockerhub/Dockerfile
Thanks in advance.
By taking a look at the Dockerfile provided, you can see that it declares a VOLUME covering the conf directory. You can confirm this by running
docker image inspect apache/nifi:1.12.1
and checking the declared volumes.
As a result, when you execute the RUN command to create a folder under the conf directory it succeeds, BUT when you run the container the volumes are mounted, and they overwrite everything under the mountpoint /opt/nifi/nifi-current/conf, in your case the flow directory.
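If you want to see exactly which paths the image declares as volumes, you can filter the inspect output (the --format expression prints just the Volumes section of the image config):
docker image inspect apache/nifi:1.12.1 --format '{{json .Config.Volumes}}'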
You can test this by editing your Dockerfile
FROM apache/nifi:1.12.1
# this will be overridden by volumes
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
# this will be available in the container environment
RUN mkdir -p /opt/nifi/nifi-current/flow
To tackle this you could clone the Dockerfile of the image you use as the base (the one in FROM), remove the VOLUME directive manually, then build it and use that as the base image in your FROM.
Alternatively, you could avoid adding directories under the mount points specified in the base Dockerfile.

How to `docker-compose run` outside the working directory

Suppose I have the following dockerfile:
FROM node:alpine
WORKDIR /src/mydir
Now, suppose I want to run docker-compose from the src/ folder as opposed to src/mydir as happens by default.
I tried the following:
docker-compose run my_container ../ my-task
However the above failed.
Any guidance is much appreciated!
You want to use the --workdir (or -w) option of the docker-compose run command.
See the official documentation of the command here: https://docs.docker.com/compose/reference/run/
For instance, given your above example:
docker-compose run -w /src my_container my-task
Note that the option goes before the service name; anything after the service name is treated as the command to run.
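If you'd rather make /src the default than pass a flag on every run, Compose also accepts a working_dir key in the service definition (a sketch reusing the service name from the question):
services:
  my_container:
    working_dir: /src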

Check the File in Exited Container

I have an issue invoking the script to start the container. I think I'd better first find a way to tell if the script is actually located in the right place. But neither docker exec nor docker attach seems to allow me to get into an exited container.
I also tried docker run -it --volumes-from [exited_container_id] ubuntu. I thought I might be able to see the file system in ubuntu but I cannot find the mounting point. Is there any way for me to login to an exited container and see the files that I ADDed?
You can check whether the script is located in the right place by adding a RUN ls -l / line in your Dockerfile and building the image:
FROM frolvlad/alpine-oraclejdk8:slim
ADD build/libs/zuul*.jar /app.jar
ADD src/main/script/startup.sh /startup.sh
RUN ls -lah /
EXPOSE 8080 8999
ENTRYPOINT ["/startup.sh"]
Then just build the Dockerfile
docker build -t myapp .
You should see the result of that ls in the output of the build.
