Can a docker instance cause harm to the host?

Is the docker host completely protected from anything the docker instance can do?
As long as you don't expose a volume to the docker instance, are there other ways it can actually connect to the host and 'hack' it?
For example, say I allow customers to run code on a server that I run. I want to understand the potential security implications of allowing a customer to run arbitrary code inside a docker instance.

All processes inside Docker are isolated from the host machine. By default they cannot see or interfere with other processes; this is guaranteed by the process namespaces Docker uses.
As long as you don't mount crucial things (for example docker.sock) into the container, there are no security risks associated with running a container, or even with allowing code execution inside the container.
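For illustration, this is the kind of mount that warning is about; a container started this way can drive the host's Docker daemon and, through it, the host itself (untrusted/image is a placeholder):
# DON'T DO THIS
# the container can now talk to the host's Docker daemon, e.g. start privileged containers or mount the host filesystem
docker run -v /var/run/docker.sock:/var/run/docker.sock untrusted/image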
For a list of security features in docker, check Docker security.

The kernel is shared between the host and the docker container. This is less separation than, say, a VM provides.
Running any untrusted container is NOT SECURE. There are kernel vulnerabilities that can be abused, and ways to break out of containers.
That's why it is best practice, for example, to either not use the root user inside containers or to run containers in a separate user namespace.
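As a rough sketch of the second suggestion, user-namespace remapping can be enabled on the Docker daemon so that root inside a container maps to an unprivileged UID range on the host (the "default" value tells Docker to create and use its standard dockremap user):
# /etc/docker/daemon.json
{
  "userns-remap": "default"
}
# then restart the daemon, e.g.
sudo systemctl restart docker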

Related

Docker container - how non-root user secured?

There are many sites which preach that we should not run docker containers as root users.
For example:
By default, Docker gives root permission to the processes within your
containers, which means they have full administrative access to your
container and host environments.
I do not understand how a container can access the host environment and cause security vulnerabilities if I do not do any volume/port mapping.
Can someone give an example of such security risk?
By default, docker tries to provide very strong isolation between containers and the host. If you do need a root user (you can't always avoid it), docker offers a security mechanism that maps the root user from the container to a high virtual UID on the host, which amounts to nothing if someone manages to escape.
Leaving root inside the container gives the "attacker" the option to install any additional packages they wish and to look at the other containers/resources the container has access to (for instance, they can try to nmap around from the container) .. well .. they are, after all, root inside the container.
As an example of such a security risk, there was one "big one" called Dirty COW.
Hope I pushed you in the right direction for further research.
docker and other containerization technologies build on the namespaces feature of the Linux kernel to confine and isolate processes, limiting their view of available resources such as the filesystem, other processes, or the network.
By default docker applies very strong isolation to processes, limiting their access to a minimum. This leads many people to believe that running any untrusted process/docker image within a container is perfectly safe - it is not!
Because despite the strong isolation of such processes, they still run directly on the kernel of the host system. And when they run as root within the container (and not in a user namespace) they are actually root on the host, too. The only thing then preventing a malicious container from completely taking over the host system is that it has no direct/straightforward access to critical resources. If, though, it somehow gets hold of a handle to a resource outside of its namespaces, that handle may be used to break out of the isolation.
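A quick way to see this on a host that does not use user namespaces (the image and container name are just placeholders):
# the process runs as root inside the container ...
docker run -d --rm --name uid-demo alpine sleep 60
# ... and, seen from the host, that same process is also owned by root
ps -o user,pid,cmd -C sleep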
It is easy for an incautious user to unintentionally provide such a handle to outside resources to a container. For example:
# DON'T DO THIS
# the user intends to share the mapping of user names to ids with the container
docker run -v /etc/passwd:/etc/passwd untrusted/image
With the process inside the container running as root, the container would not only be able to read all users in /etc/passwd but also to edit that file, since it also has root access on the host. In this case - should you really need something like this - best practice would be to bind-mount /etc/passwd read-only.
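If you really do need to share the file, a read-only bind mount (note the :ro suffix) at least keeps the container from modifying it:
# safer: the container can read /etc/passwd but cannot change it
docker run -v /etc/passwd:/etc/passwd:ro untrusted/image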
Another example: some applications require extended access to system capabilities, which means loosening some of the strict isolation docker applies by default, e.g.:
# DON'T DO THIS
docker run --privileged --cap-add=ALL untrusted/image
This would remove most of the limitations and most notably would allow the container to load new kernel modules, i.e., inject and execute arbitrary code into the kernel, which is obviously bad.
But besides handing access to external resources to a container by mistake, there is also the possibility of bugs in the Linux kernel that can be exploited to escape the container's isolation - and these are much easier to exploit when the process already has root privileges inside the container.
Therefore, best practice with docker is to limit the access of containers as much as possible:
drop/do not add any capabilities/privileges that are not required
bind-mount only files/directories that are really required
use a non-root user when possible
Although starting containers as root is the default in docker, most applications do not actually require being started as root. So why give root privileges when they are not really required?
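Putting those points together, a least-privilege invocation might look roughly like this (the image name, UID/GID and paths are placeholders; adjust them to what the application actually needs):
# run as an unprivileged user, drop all capabilities, keep the root filesystem read-only,
# and bind-mount only what is needed, read-only
docker run \
  --user 1000:1000 \
  --cap-drop=ALL \
  --read-only \
  -v /srv/app/config:/config:ro \
  untrusted/image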

Is it safe to run docker in docker on Openshift?

I built a Docker image on a server that can run CI/CD for Jenkins. Because some builds use Docker, I installed Docker inside my image, and in order to allow the inner Docker to run, I had to give it --privileged.
All works well, but I would like to run Docker-in-Docker on Openshift (or Kubernetes). The problem is getting the --privileged permission.
Is running a privileged container on Openshift dangerous, and if so, why and how much?
A privileged container can reboot the host, replace the host's kernel, access arbitrary host devices (like the raw disk device), and reconfigure the host's network stack, among other things. I'd consider it extremely dangerous, and not really any safer than running a process as root on the host.
I'd suggest that using --privileged at all is probably a mistake. If you really need a process to administer the host, you should run it directly (as root) on the host and not inside an isolation layer that blocks the things it's trying to do. There are some limited escalated-privilege things that are useful, but if, e.g., your container needs to mlock(2), you should --cap-add IPC_LOCK to grant the specific privilege you need, instead of opening up the whole world.
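For the mlock(2) case mentioned above, that would look something like this (my/locking-app is a placeholder image):
# grant only the single capability the process needs, instead of --privileged
docker run --cap-add IPC_LOCK my/locking-app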
(My understanding is still that trying to run Docker inside Docker is generally considered a mistake and using the host's Docker daemon is preferable. Of course, this also gives unlimited control over the host...)
In short, the answer is no, it's not safe. Docker-in-Docker in particular is far from safe due to potential memory and file-system corruption, and even mounting the host's docker socket is unsafe in virtually any environment, as it effectively gives the build pipeline root privileges. This is why tools like Buildah and Kaniko were made, as well as build images like S2I.
Buildah in particular is Red Hat's own tool for building inside containers, but as of now I believe it still can't run completely privilege-less.
Additionally, on Openshift 4, you cannot run Docker-in-Docker at all since the runtime was changed to CRI-O.
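For reference, a daemonless build with Buildah looks roughly like this (the image name and registry are placeholders, and as noted above it may still need some privileges depending on the storage setup):
# build from a Dockerfile in the current directory, then push without a Docker daemon
buildah bud -t myapp:latest .
buildah push myapp:latest docker://registry.example.com/myapp:latest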

Running multiple docker containers in same host

I am new to docker and I have a question regarding it. Based on my understanding, Docker helps create a container for the application we want to deploy, along with the application's dependencies.
My question is: if I have a web application inside a docker container, is it possible to run multiple containers inside a single host? If yes, how will I make sure requests are directed to the right app?
Will there be any change in performance depending on the number of cores of the host?
Is it possible to run multiple containers inside a single host?
Yes, you can run many.
If yes, how will I direct requests to the right container?
You have many options, the simplest is just to run the container with port forwarding (which is built in to docker), but you could also run a load balancer or proxy on the host.
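A minimal sketch of the port-forwarding option (mywebapp is a placeholder image that listens on port 80 inside the container):
# two containers from the same image, reachable on different host ports
docker run -d --name web1 -p 8080:80 mywebapp
docker run -d --name web2 -p 8081:80 mywebapp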
Will there be any change in performance depending on the number of cores of the host?
There can be, of course. It depends on whether or not you're already reaching a performance bottleneck of some sort before adding another container. All the containers are making use of the same hardware.

How do I do docker clustering or hot copy a docker container?

Is it possible to hot-copy a docker container? Or is there some sort of clustering with docker for HA purposes?
Can someone simplify this?
How to scale Docker containers in production
Docker containers are not designed to be VMs and are not really meant for hot copies. Instead, you should define your container such that it has a well-known start state. If the container goes down, the replacement should start from that well-known start state. If you need to keep track of state that the container generates at run time, this has to be done externally to docker.
One option is to use volumes to mount the state (files) onto the host filesystem. Then use NFS, RAID, or any other means to share that filesystem with other physical nodes. You can then mount the same files into a second docker container on a second host, giving it the same state.
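A rough sketch of that volume approach, assuming /mnt/shared is a directory already shared between the hosts (for example over NFS) and my-stateful-app is a placeholder image:
# run on host A; the same command on a second host mounts the same state
docker run -d -v /mnt/shared/appstate:/var/lib/app my-stateful-app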
Depending on what you are running in your containers, you can also handle state sharing inside your containers, for example using MongoDB replica sets. To reiterate, though: containers are not as of yet designed to be migrated with runtime state.
There is a variety of technologies around Docker that could help, depending on what you need HA-wise.
If you simply wish to start a stateless service container on a different host, you need a network overlay, such as Weave.
If you wish to replicate data across hosts for something like database failover, you need a storage solution, such as Flocker.
If you want to run multiple services, have load balancing, and not have to care which host each container runs on as long as X instances are up, then Kubernetes is the kind of tool you need.
It is possible to make many Docker-related tools work together; we have a few stories on our blog already.
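To make the Kubernetes option a little more concrete, a minimal sketch (myorg/web is a placeholder image and a working cluster is assumed); Kubernetes then keeps three instances running and load-balances traffic across them:
kubectl create deployment web --image=myorg/web
kubectl scale deployment web --replicas=3
kubectl expose deployment web --port=80 --type=LoadBalancer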

Is it possible to restrict access to lxc containers?

I would like to run a docker or LXC container but restrict access to the container itself. Specifically, is it possible to prevent even root (root on the host) from accessing the container?
By access, I mean SSHing into the container, tcpdumping the tx/rx packets to and from the container, profiling the application, etc.
Thanks!
It is not possible to effectively restrict a privileged user on the host from inspecting or accessing the container. If that were the case, it's hard to imagine how it would be possible for the root user to even start the container in the first place.
In general, it's useful to remember that containerization is used to confine processes to a restricted space: it's used to keep a process from getting out to the host, not to prevent other processes from getting in.
