I am using a CentOS 6.9 system on a high-performance computing platform and I want to use Docker as a non-root user. Is there a way to build Docker from source so that it doesn't need root privileges?
This shouldn't be possible as it would be a major security concern.
When docker is installed on a machine, users with docker access (not necessarily root) can start containers. In particular, they can start containers in privileged mode, giving the container access to all host devices.
More importantly, a user with access to docker can mount directories owned exclusively by root on the host. Since, by default, the root user inside the container has access to root-owned directories mounted into the container, this allows any Docker container started by a non-root user to read and modify critical host files.
Therefore, letting a non-root user install Docker and start containers should not be allowed, as it can compromise the whole machine.
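As a quick illustration of why, a minimal sketch (assuming a user in the docker group): bind-mounting the host root into a container is enough to read files that only root may see on the host.

# Container root can read host files that are root-only on the host
docker run --rm -v /:/host alpine cat /host/etc/shadow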
Check this explicit comment from one of the docker maintainers.
Update to yamenk's answer:
There is now an official rootless mode for Docker: Run the Docker daemon as a non-root user
Here's an explanation of how it works from one of Docker's engineers:
Experimenting with Rootless Docker
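For reference, a minimal sketch of enabling it (assuming Docker 20.10+ with the rootless extras and newuidmap/newgidmap installed; the socket path follows the rootless defaults):

# Set up and start a per-user daemon, then point the client at its socket
dockerd-rootless-setuptool.sh install
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker run --rm hello-world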
Related
I've been reading about security issues with building docker images within a docker container by mounting the docker socket.
In my case, I am accessing Docker via an API, docker-py.
Now I am wondering, are there security issues with building images using docker-py on a plain ubuntu host (not in a docker container) since it also communicates on the docker socket?
I'm also confused as to why there would be security differences between running Docker from the command line and through this SDK, since both go through the socket.
Any help is appreciated.
There is no difference, if you have access to the socket, you can send a request to run a container with access matching that of the dockerd engine. That engine is typically running directly on the host as root, so you can use the API to get root access directly on the host.
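To make this concrete, here is an illustrative request sent straight to the socket; it lists containers exactly like docker ps or docker-py's client.containers.list() would:

# Anyone who can write to the socket wields the daemon's (typically root) authority
curl --unix-socket /var/run/docker.sock http://localhost/containers/json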
Methods to lock this down include running the dockerd daemon inside of a container; however, that container is typically privileged, which is itself not secure: an attacker who gains root inside that container can use the privileged access to gain root on the host.
The best options I've seen include running the engine rootless; then an escape from the container only gets you access to the user the daemon runs as. However, realize rootless has its drawbacks, including needing to pre-configure the host to support it, and networking and filesystem configuration being done at the user level, which has functionality and performance implications. The second good option is to run the build without a container runtime at all, but this has its own drawbacks, like not having a 1-for-1 replacement of the Dockerfile RUN syntax, so your image is built mainly from the equivalent of COPY steps plus commands run on the host outside of any container.
Motivation
Running DDEV for a diverse team of developers (front-end / back-end) on various operating systems (Windows, macOS and Linux) can become time-consuming, even frustrating at times.
Hoping to simplify the initial setup, I started working on an automated VS Code Remote Container setup.
I want to run DDEV in a VS Code Remote Container.
To complicate things, the container should reside on a remote host.
This is the current state of the setup: caillou/vs-code-ddev-remote-container#9ea3066
Steps Taken
I took the following steps:
Set up VS Code to talk to a remote Docker installation over SSH. You just need to add the following to VS Code's settings.json: "docker.host": "ssh://username@host".
Install Docker and create a user with UID 1000 on said host.
Add docker-cli, docker-compose, and ddev to the Dockerfile, cf. Dockerfile#L18-L20.
Mount the Docker socket in the container and use the remote user with UID 1000. In the example, this user is called node: devcontainer.json
What Works
Once I launch the VS Code Remote Container extension, an image is built using the Dockerfile, and a container is run using the parameters defined in devcontainer.json.
I can open a terminal window and run sudo docker ps. This lists the container I am in, and its siblings.
My Problem
DDEV needs to create docker containers.
DDEV cannot be run as root.
On the host, the user with UID 1000 has the privilege to run Docker.
Within the container, the user with UID 1000 does not have the privilege to run Docker.
The Question
Is there a way to give an unprivileged user access to Docker within Docker?
On my computer (Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-119-generic x86_64), Docker version 18.03.0-ce, build 0520e24), if user A launches a container (docker run), user B may stop it (docker stop).
How can I configure Docker so that a user may only stop the containers they started?
There is no way to do this in Docker, because anyone with Docker access effectively has root. Docker containers by nature run as root: a user could deploy a container in which they have root access, give themselves access to the host environment through volume mounts, and, bam, they have root access on the host. Since every Docker user is thus the same root-level principal, Docker keeps no per-user ownership of containers.
If you want multiple users to have Docker environments, and they should be segregated, you will either have to roll your own solution (one that avoids giving command-line access to the Docker machine or to the containers), or you will need multiple machines (one per user) to which they can be segregated.
Here is a good article on docker root escalation: http://container-solutions.com/docker-security-admin-controls-2/ (mirror).
By exposing the Docker CLI to a user, either by adding them to the docker group or by making them an admin user, they can create, remove, and kill containers. They can even change the configuration of the Docker daemon.
If CLI access is not required, you can change this by denying access to all users except the admin user.
User management can be handled with solutions such as Portainer or Docker UCP.
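If you do use the docker group for this, a minimal sketch of granting and revoking membership (user names are placeholders):

# Grant alice daemon access; revoke bob's
sudo gpasswd -a alice docker
sudo gpasswd -d bob docker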
I created a customized Docker image based on Ubuntu 14.04 with the sensu-client package inside.
Everything went fine, but now I'm wondering how I can trigger the checks to run on the host machine.
For example, I want to be able to check the processes that are running on the host machine and not only the ones running inside the container.
Thanks
It depends on what checks you want to run. A lot of system-level checks work fine if you run the sensu container with the --net=host and --privileged flags.
--net=host not only lets you see the same hostname and IP as the host system; all TCP connections and interface metrics will also match between container and host.
--privileged gives the container full access to system metrics like HDD, memory, and CPU.
The tricky thing is checking external process metrics, as Docker isolates them even from a privileged container, but you can share the host's root filesystem as a Docker volume (-v /:/host) and patch the check to use chroot, or use /host/proc instead of /proc.
Long story short, some checks will just work; for others you need to patch or develop your own way. But Sensu in Docker is one possible approach.
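Putting those flags together, a hedged sketch of such a run (the image name is illustrative):

# Host networking, privileged mode, and a read-only view of the host filesystem
docker run -d --net=host --privileged -v /:/host:ro sensu/sensu-client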
An unprivileged Docker container cannot check processes outside of its own container, because Docker uses kernel namespaces to isolate it from all other processes running on the host. This is by design: docker security documentation
If you would like to run a super privileged docker container that has this namespace disabled you can run:
docker run -it --rm --privileged --pid=host alpine /bin/sh
Doing so removes an important security layer that docker provides and should be avoided if possible. Once in the container, try running ps auxf and you will see all processes on the host.
I don't think this is possible right now.
If the processes on the host are themselves running inside Docker, you can mount the socket and get their status from the sensu container.
Add a sensu-client to the host machine? You might want to split it out so you have granularity between problems in the containers vs. problems with your hosts.
Else, you would have to set up some way to report from the inside, either using something low-level (system calls etc.) or setting up something from outside to catch the calls and report status back.
HTHs
Most, if not all, sensu plugins hardcode the path to the proc files. One option is to mount the host's proc files at a different path inside the Docker container and modify the sensu plugins to support this other location.
This is my base docker container that supports modifying the sensu plugins proc file location.
https://github.com/sstarcher/docker-sensu
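For illustration, a minimal sketch of that approach (the target path and image name are assumptions; the patched plugins would then read /host_proc instead of /proc):

# Expose the host's proc files at an alternate, read-only path
docker run -d -v /proc:/host_proc:ro my-sensu-image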
I'm running Jenkins inside a Docker container. I wonder if it's ok for the Jenkins container to also be a Docker host? What I'm thinking about is to start a new docker container for each integration test build from inside Jenkins (to start databases, message brokers etc). The containers should thus be shutdown after the integration tests are completed. Is there a reason to avoid running docker containers from inside another docker container in this way?
Running Docker inside Docker (a.k.a. dind), while possible, should be avoided, if at all possible. (Source provided below.) Instead, you want to set up a way for your main container to produce and communicate with sibling containers.
Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers.
Petazzoni lists two reasons why dind is troublesome:
It does not cooperate well with Linux Security Modules (LSM).
It creates a mismatch in file systems that creates problems for the containers created inside parent containers.
From that blog post, he describes the following alternative:
[The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
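For a Jenkins container specifically, a hedged sketch of such an invocation (the volume name is an assumption, and the image must also contain a Docker CLI to talk to the mounted socket):

# Jenkins with the host's socket mounted, so builds start sibling containers
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v jenkins_home:/var/jenkins_home \
  -p 8080:8080 \
  jenkins/jenkins:lts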
I answered a similar question before on how to run a Docker container inside Docker.
To run docker inside docker is definitely possible. The main thing is that you run the outer container with extra privileges (starting with --privileged=true) and then install docker in that container.
Check this blog post for more info: Docker-in-Docker.
One potential use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
However, Docker inside Docker is not the recommended approach to solve this type of problem. Instead, the recommended approach is to create "sibling" containers, as described in this post.
So, running Docker inside Docker was long considered by many a good solution to this type of problem. Now, the trend is to use "sibling" containers instead. See the answer by @predmijat on this page for more info.
It's OK to run Docker-in-Docker (DinD) and in fact Docker (the company) has an official DinD image for this.
The caveat however is that it requires a privileged container, which depending on your security needs may not be a viable alternative.
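For completeness, a minimal sketch with that official image (the container name is illustrative; give the inner daemon a moment to start before using it):

# Start a privileged DinD daemon, then launch a container inside it
docker run -d --privileged --name dind docker:dind
docker exec dind docker run --rm hello-world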
The alternative solution of running Docker using sibling containers (aka Docker-out-of-Docker or DooD) does not require a privileged container, but has a few drawbacks that stem from launching containers from a context different from the one in which they run (i.e., you launch a container from within a container, yet it runs at the host's level, not inside the parent container).
I wrote a blog describing the pros/cons of DinD vs DooD here.
Having said this, Nestybox (a startup I just founded) is working on a solution that runs true Docker-in-Docker securely (without using privileged containers). You can check it out at www.nestybox.com.
Yes, we can run Docker in Docker. We need to attach the Unix socket /var/run/docker.sock, on which the Docker daemon listens by default, as a volume to the parent container using -v /var/run/docker.sock:/var/run/docker.sock.
Sometimes, permission issues may arise on the Docker daemon socket, which you can work around with sudo chmod 757 /var/run/docker.sock (note that loosening the socket's permissions like this is itself insecure).
It also requires running Docker in privileged mode, so the commands would be:
sudo chmod 757 /var/run/docker.sock
docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -it ...
I was trying my best to run containers within containers, just like you, for the past few days, and wasted many hours. So far, most people advised me to do things like using Docker's DinD image, which is not applicable in my case as I need the main container to be Ubuntu OS, or to run some privileged command and map the daemon socket into the container (which never worked for me).
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works best. It's also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for such applications. It also has the most flexible options. The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend you try it without bothering with all the tedious setup other people suggest. They have many pre-built solutions that address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created Docker containers, and they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container function exactly like a virtual machine, securely. You can literally SSH into your Ubuntu main container without the ability to access anything on the main machine. From your main container you can create all kinds of containers, just like a normal local system does. The built-in systemd is very important for setting up Docker conveniently inside the container.
One simple, common command to run a container with the Sysbox runtime:
docker run --runtime=sysbox-runc -it any_image
If you think thats what you are looking for, you can find out more at their github:
https://github.com/nestybox/sysbox
Quick link to instructions on how to deploy a simple Sysbox runtime environment container: https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md
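As a concrete example, a hedged sketch using one of their systemd-enabled images (the image tag is an assumption based on the images they publish):

# A system container with systemd and Docker preinstalled inside
docker run --runtime=sysbox-runc -it --rm nestybox/ubuntu-bionic-systemd-docker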