Reach host's folders from docker container - docker

I have a folder on the host machine at /files/username/, where username is a variable.
And this is my Dockerfile's CMD directive:
CMD ./entrypoint.sh
I want to read the contents of the /files/username/ folder in entrypoint.sh. I can pass username as an environment variable like this:
$ docker run -e username="User 1" ...
Is it possible to reach the host machine's folders inside entrypoint.sh this way?

There are two ways to do it.
Share the main folder
docker run -v /files:/files -e username="User 1"
This way your entrypoint script will be able to work with any user's folder.
Share only user folder
docker run -v /files/user:/files/user -e username="User 1"
This would give access only to that particular user's folder.
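Inside the container, entrypoint.sh can then combine the bind mount with the environment variable. A minimal sketch, assuming the first variant (-v /files:/files) and a POSIX shell; the loop body is only an illustration:
#!/bin/sh
# The username env var comes from `docker run -e username=...`,
# the /files path from the `-v /files:/files` bind mount.
ls -la "/files/${username}"
for f in "/files/${username}"/*; do
    echo "processing: $f"
done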

Related

Disable root login into the docker container [duplicate]

I am working on hardening our Docker images, a topic I still have a fairly weak understanding of. With that being said, the current step I am on is preventing the user from running the container as root. To me, that says "when a user runs 'docker exec -it my-container bash', he shall be an unprivileged user" (correct me if I'm wrong).
When I start up my container via docker-compose, the start script needs to run as root since it deals with importing certs and mounted files (created externally and seen through a volume mount). After that is done, I would like the user to be 'appuser' for any future access. This question seems to match pretty well what I'm looking for, but I am using docker-compose, not docker run: How to disable the root access of a docker container?
This seems to be relevant, as the startup command differs from, say, Tomcat's. We are running a Spring Boot application that we start up with a simple 'java -jar jarFile', and the image is built using Maven's dockerfile-maven-plugin. With that being said, should I be changing the user to an unprivileged user before running that, or still after?
I believe changing the user inside of the Dockerfile instead of the start script will do this... but then it will not run the start script as root, thus blowing up on calls that require root. I had messed with using ENTRYPOINT as well, but could have been doing it wrong there. Similarly, using "user:" in the yml file seemed to make the start.sh script run as that user instead of root, so that wasn't working.
Dockerfile:
FROM parent/image:latest
ENV APP_HOME /apphome
ENV APP_USER appuser
ENV APP_GROUP appgroup
# Folder containing our application, i.e. jar file, resources, and scripts.
# This comes from unpacking our maven dependency
ADD target/classes/app ${APP_HOME}/
# Primarily just our start script, but some others
ADD target/classes/scripts /scripts/
# Need to create a folder that will be used at runtime
RUN mkdir -p ${APP_HOME}/data && \
chmod +x /scripts/*.sh && \
chmod +x ${APP_HOME}/*.*
# Create unprivileged user
RUN groupadd -r ${APP_GROUP} && \
useradd -g ${APP_GROUP} -d ${APP_HOME} -s /sbin/nologin -c "Unprivileged User" ${APP_USER} && \
chown -R ${APP_USER}:${APP_GROUP} ${APP_HOME}
WORKDIR $APP_HOME
EXPOSE 8443
CMD /scripts/start.sh
start.sh script:
#!/bin/bash
# setup SSL, modify java command, etc
# run our java application
java -jar "boot.jar"
# Switch users to always be unprivileged from here on out?
# Whatever "hardening" wants... Should this be before starting our application?
exec su -s "/bin/bash" $APP_USER
app.yml file:
version: '3.3'
services:
  app:
    image: app_image:latest
    labels:
      c2core.docker.compose.display-name: My Application
      c2core.docker.compose.profiles: a_profile
    volumes:
      - "data_mount:/apphome/data"
      - "cert_mount:/certs"
    hostname: some-hostname
    domainname: some-domain
    ports:
      - "8243:8443"
    environment:
      - some_env_vars
    depends_on:
      - another-app
    networks:
      a_network:
        aliases:
          - some-network
networks:
  a_network:
    driver: bridge
volumes:
  data_mount:
  cert_mount:
docker-compose shell script:
docker-compose -f app.yml -f another-app.yml "$@"
What I would expect is that anyone trying to access the container internally will be doing so as appuser and not root. The goal is to prevent someone from messing with things they shouldn't (i.e. docker itself).
What is happening is that the script will change users after the app has started (proven via an echo command), but it doesn't seem to be maintained. If I exec into it, I'm still root.
As David mentions, once someone has access to the docker socket (either via API or with the docker CLI), that typically means they have root access to your host. It's trivial to use that access to run a privileged container with host namespaces and volume mounts that let the attacker do just about anything.
When you need to initialize a container with steps that run as root, I do recommend gosu over something like su since su was not designed for containers and will leave a process running as the root pid. Make sure that you exec the call to gosu and that will eliminate anything running as root. However, the user you start the container as is the same as the user used for docker exec, and since you need to start as root, your exec will run as root unless you override it with a -u flag.
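As an illustration only, a minimal entrypoint sketch along those lines; it assumes gosu is installed in the image, APP_USER is set, and the cert import is just a placeholder for your own root-only setup:
#!/bin/sh
# Root-only setup first (placeholder for importing the mounted certs).
cp /certs/*.crt /usr/local/share/ca-certificates/ 2>/dev/null || true
update-ca-certificates
# Drop privileges and replace pid 1 so nothing keeps running as root;
# "$@" is the container command, e.g. java -jar boot.jar.
exec gosu "${APP_USER}" "$@"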
There are additional steps you can take to lock down docker in general:
Use user namespaces. These are defined on the entire daemon and require that you destroy all containers and pull images again, since the uid mapping affects the storage of image layers (a daemon configuration sketch follows this list). The user namespace offsets the uids used by docker so that root inside the container is not root on the host, while inside the container you can still bind to low-numbered ports and run administrative activities.
Consider authz plugins. Open Policy Agent and Twistlock are two that I know of, though I don't know if either would allow you to restrict the user of a docker exec command. They likely require that you give users a certificate to connect to docker rather than giving them direct access to the docker socket, since the socket doesn't include any user details in the API requests it receives.
Consider rootless docker. This is still experimental, but since docker is not running as root, it has no access back to the host to perform root activities, mitigating many of the issues seen when containers are run as root.
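For the user-namespace step above, a hedged sketch of the daemon-level configuration (with the "default" mapping, dockerd creates and uses the dockremap user; the exact subordinate uid/gid ranges come from /etc/subuid and /etc/subgid):
# /etc/docker/daemon.json
{
  "userns-remap": "default"
}
# restart the daemon afterwards, e.g.
sudo systemctl restart docker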
You intrinsically can't prevent root-level access to your container.
Anyone who can run any Docker command at all can always run any of these three commands:
# Get a shell, as root, in a running container
docker exec -it -u 0 container_name /bin/sh
# Launch a new container, running a root shell, on some image
docker run --rm -it -u 0 --entrypoint /bin/sh image_name
# Get an interactive shell with unrestricted root access to the host
# filesystem (cd /host/var/lib/docker)
docker run --rm -it -v /:/host busybox /bin/sh
It is generally considered best practice to run your container as a non-root user, either with a USER directive in the Dockerfile or running something like gosu in an entrypoint script, like what you show. You can't prevent root access, though, in the face of a privileged user who's sufficiently interested in getting it.
When docker is normally run from a single host, you can take some steps:
Make sure it is not run from another host by looking for a secret in a directory mounted from the accepted host.
Change the .bashrc of the users on the host so that they start the docker container as soon as they log in. When your users need to do other things on the host, give them an account without docker access and let them sudo to a special user with docker access (or use a startdocker script with a setuid flag).
Start the container with a script that you wrote and hardened yourself, something like startserver.
#!/bin/bash
settings() {
# Add mount dirs. The homedir in the docker will be different from the one on the host.
mountdirs="-v /mirrored_home:/home -v /etc/dockercheck:/etc/dockercheck:ro"
usroptions="--user $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro"
usroptions="${usroptions} -v/etc/shadow:/etc/shadow:ro -v /etc/group:/etc/group:ro"
}
# call function that fills special variables
settings
image="my_image:latest"
docker run -ti --rm ${usroptions} ${mountdirs} -w $HOME --entrypoint=/bin/bash "${image}"
Adding a variable --env HOSTSERVER=${host} won't help with hardening; on another server one can simply add --env HOSTSERVER=servername_that_will_be_checked.
When the user logs in to the host, startserver will be called and the container started. After the call to startserver, add exit to the .bashrc, as sketched below.
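A hedged sketch of that .bashrc tail for such a docker-only login account (the /usr/local/bin path for startserver is an assumption):
# end of the docker-only user's ~/.bashrc on the host
/usr/local/bin/startserver   # starts the hardened docker run wrapper above
exit                         # close the shell once the container exits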
Not sure if this works, but you can try. Allow sudo access for the user/group with a limited set of commands: configure sudo to only allow executing docker-cli. Create a shell script named docker-cli whose content runs the docker command, e.g. docker "$@". In this script, check the arguments and force the user to provide the --user or -u switch when executing docker's exec or attach commands. Also validate that the user doesn't pass -u root. E.g.
sudo docker-cli exec -it containerid sh (failed)
sudo docker-cli exec -u root ... (failed)
sudo docker-cli exec -u mysql ... (Passed)
You can even limit which docker commands a user can run inside this shell script.
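A rough sketch of such a docker-cli wrapper; the argument checks are deliberately simplistic (they would miss forms like --user=root) and are only meant to illustrate the idea:
#!/bin/bash
# docker-cli: pass through to docker, but require a non-root -u/--user
# switch on exec and attach.
if [ "$1" = "exec" ] || [ "$1" = "attach" ]; then
    case " $* " in
        *" -u root "*|*" --user root "*|*" -u 0 "*|*" --user 0 "*)
            echo "exec/attach as root is not allowed" >&2
            exit 1 ;;
    esac
    case " $* " in
        *" -u "*|*" --user "*) ;;                  # a user switch was supplied
        *) echo "please supply -u <user>" >&2; exit 1 ;;
    esac
fi
exec docker "$@"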

Docker bind-mount not working as expected within AWS EC2 Instance

I have created the following Dockerfile to run a Spring Boot app, myapp, within an EC2 instance.
# Use an official java runtime as a parent image
FROM openjdk:8-jre-alpine
# Add a user to run our application so that it doesn't need to run as root
RUN adduser -D -s /bin/sh myapp
# Set the current working directory to /home/myapp
WORKDIR /home/myapp
#copy the app to be deployed in the container
ADD target/myapp.jar myapp.jar
#create a file entrypoint-dos.sh and put the project entrypoint.sh content in it
ADD entrypoint.sh entrypoint-dos.sh
#Get rid of windows characters and put the result in a new entrypoint.sh in the container
RUN sed -e 's/\r$//' entrypoint-dos.sh > entrypoint.sh
#set the file as an executable and set myapp as the owner
RUN chmod 755 entrypoint.sh && chown myapp:myapp entrypoint.sh
#set the user to use when running the image to myapp
USER myapp
# Make port 9010 available to the world outside this container
EXPOSE 9010
ENTRYPOINT ["./entrypoint.sh"]
Because I need to access myapp's logs from the EC2 host machine, I want to bind-mount a host folder onto the logs folder inside the "myapp" container at /home/myapp/logs.
This is the command that I use to run the image on the EC2 instance:
docker run -p 8090:9010 --name myapp myapp:latest -v home/ec2-user/myapp:/home/myapp/logs
The container starts without any issues, but the mount is not achieved as noticed in the following docker inspect extract:
...
"Mounts": [],
...
I have tried the following actions but ended up with the same result:
--mount type=bind instead of -v
use volumes instead of bind-mount
I have even tried the --privileged option
In the Dockerfile, I tried to use USER root instead of myapp
I believe this has nothing to do with the EC2 machine but with my container, since running other containers with bind mounts on the same host works like a charm.
I am pretty sure I am messing something up in my Dockerfile.
What am I doing wrong in that Dockerfile?
Or what am I missing?
Here is the entrypoint.sh if needed:
#!/bin/sh
echo "The app is starting ..."
exec java ${JAVA_OPTS} -Djava.security.egd=file:/dev/./urandom -jar -Dspring.profiles.active=${SPRING_ACTIVE_PROFILES} "${HOME}/myapp.jar" "$@"
I think the issue might be the order of the options on the command line. Docker expects the last two arguments to be the image id/name and (optionally) a command/args to run as pid 1.
https://docs.docker.com/engine/reference/run/
The basic docker run command takes this form:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
You have the mount options (-v in the example you provided) after the image name (myapp:latest). I'm not sure, but perhaps the -v ... is being interpreted as arguments to be passed to your entrypoint script (which are being ignored) and docker run isn't seeing it as a mount option.
Also, the source of the mount here (home/ec2-user/myapp) doesn't start with a leading forward slash (/), which, I believe, will make it relative to where the docker run command is executed from. You should make sure the source path starts with a forward slash (i.e. /home/ec2-user/myapp) so that you're sure it will always mount the directory you expect. I.e. -v /home/ec2-user...
Have you tried this order:
docker run -p 8090:9010 --name myapp -v /home/ec2-user/myapp:/home/myapp/logs myapp:latest
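Once the container is running, a quick way to confirm the bind mount was actually registered (myapp is the container name from the question):
docker inspect -f '{{ json .Mounts }}' myapp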

Docker volume and host permissions

When I run a docker image for example like
docker run -v /home/n1/workspace:/root/workspace -it rust:latest bash
and I create a directory in the container like
mkdir /root/workspace/test
It is owned by root on my host machine, which means I have to change the permissions every time after I stop the container in order to work with that directory.
Is there a way to tell Docker to create directories and files on the host machine as a certain user?
You need to run your application as the same uid inside the container as you do on the host to get file ownership to match. My own solution for this is to start the container as root, adjust the uid of the user inside the container to match the volume mount, and then su to the user to run the app. Scripts for this can be found in this repo: https://github.com/sudo-bmitch/docker-base
In that repo, the fix-perms script handles the change of uid/gid inside the container, and the entrypoint script has an exec gosu $username "$@" that runs the app as the selected user.
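If the application doesn't need a matching entry in the container's /etc/passwd, a simpler sketch for the rust example from the question is to pass your host uid/gid directly; note that some images misbehave when the uid has no passwd entry:
docker run -u "$(id -u):$(id -g)" -v /home/n1/workspace:/root/workspace -it rust:latest bash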
Sure, because Docker uses root as the default user. You should create a user in your Docker container, switch to that user, and then create the folder; then the files will not be owned by root on your host machine.
Dockerfile
FROM rust:latest
...
RUN useradd -ms /bin/bash myuser
USER myuser

How to map a local folder as the volume in the docker container or image?

I am wondering if I can map the volume in docker to another folder on my Linux host. The reason I want to do this is that, if I don't misunderstand, the default mapping folder is under /var/lib/docker/... and I don't have access to that folder. So I am thinking about changing that to a host folder I have access to (for example /tmp/) when I create the image. I'm able to modify the Dockerfile if this can be done before creating the image. Or must this be done after creating the image or after creating the container?
I found this article, which helped me use a local directory as the volume in docker:
https://docs.docker.com/engine/userguide/containers/dockervolumes/
Command I use while creating a new container:
docker run -d -P --name randomname -v /tmp/localfolder:/volumepath imageName
Docker doesn't have any tools I know of to map named or container volumes back to the host, though they are just subdirectories under /var/lib/docker, so writing your own tool wouldn't be impossible, but you'd need root access to run it. Note that with access to docker on the host, there are likely a lot of ways to gain root privileges on the host. Creating a hard link to the target folder should be all that's needed if both source and target are on the same file system.
The docker way to access the named volume would be to create a disposable container to access your files. You can even create an additional host volume to export the data. E.g.
docker run -it --rm \
-v test:/source -v `pwd`/data:/target \
busybox /bin/sh -c "tar -cC /source . | tar -xC /target"
Where "test" is the named volume you want to export/copy. You may need to also run a chown -R $uid /target in a container to change everything to your uid on the host.

Why is an environment variable lost when I switch to a different user?

1st container sets the PATH for the user docker
FROM ubuntu:15.10
USER root
RUN groupadd -r docker && useradd -r -g docker docker
USER docker
ENV PATH /hello-world:$PATH
2nd container
FROM step_1
USER root
RUN echo $PATH
When I go into the second container and switch to the docker user, the PATH variable is reset. If I do not switch users in the second container, the variable stays set.
Why is this happening? How do I keep the PATH variable set for all users?
Commands log:
docker build -t step_1 step_1/
docker build -t step_2 step_2/
docker run -it step_2 bash
root@0784c73a84e2:/# echo $PATH
/hello-world:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
su docker
echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
You set the $PATH variable for multiple accounts. Why do you want to use multiple system users in a Docker container? I'm not sure what you're trying to achieve, but I think this would go against the concept of single-purpose containers.
If you only intend to execute some of the commands as a privileged user during the build process, you don't have to switch users or use sudo. Every command in a Dockerfile is executed as root unless specified otherwise.
FROM ubuntu:15.10
USER root
This doesn't do anything; you're already root inside the container.
Per the docs, ENV variables set do persist between images:
The environment variables set using ENV will persist when a container is run from the resulting image. You can view the values using docker inspect...
Ignoring all the above, I can't replicate this issue; it works fine for me. Can you paste your full Dockerfiles and the commands you're running to build, etc.?
