If a user has access to the terminal in a Docker container, can they do anything to destroy the hard drive it's on? - docker

If a user had access to a root Ubuntu terminal in a docker container, can they do anything to destroy the hard drive or SSD it is on?
Link: gitlab.com/pwnsquad/term

Docker by default gives root access to containers.
A container can damage your host system only if you bypass Docker's container isolation mechanisms; otherwise, the only damage can be done to the container itself, not the host.
The simplest ways to break the isolation mechanisms are the following:
using Docker's bind mounts, where you map a host path into a container path. In this case that path can be completely wiped from inside the container. Avoid bind mounts (use volumes) or mount in read-only (ro) mode to prevent this.
using networking: --network=host in particular gives the container access to all of the host's active network services, probably making the host vulnerable to attacks on them. In this case you can connect to services bound only locally (to 127.0.0.1/localhost), which do not expect remote connections and may therefore be less protected.
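A minimal sketch of the safer mount forms described above (the image name and host path are placeholders):

```shell
# Risky: the container can delete or overwrite anything under /srv/data.
docker run -v /srv/data:/data some/image

# Safer: the same bind mount, but read-only -- writes from inside the
# container fail.
docker run -v /srv/data:/data:ro some/image

# Safest: a named volume, which docker manages separately from host paths.
docker volume create app-data
docker run -v app-data:/data some/image
```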


Docker container - how is a non-root user secured?

There are many sites which preach that we should not run docker containers as root users.
For example:
By default, Docker gives root permission to the processes within your containers, which means they have full administrative access to your container and host environments.
I do not understand how a container can access the host environment and cause security vulnerabilities if I do not do volume/port mapping.
Can someone give an example of such security risk?
By default, docker tries to enforce very strong isolation between containers and the host. If you need a root user inside the container (sometimes you can't avoid it), docker offers a security mechanism, user namespaces, which maps root in the container to a random unprivileged high UID on the host, which amounts to nothing if someone manages to escape.
Leaving root inside the container gives an "attacker" the option to install whatever additional packages they wish, and to explore other containers/resources the container has access to (for instance, they can try to nmap around from the container) .. well .. they are, after all, root inside the container.
As an example of such a security risk, there was a big one called Dirty COW.
Hope I pushed you in the right direction for further research.
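The root-to-high-UID remapping mentioned above is the daemon's user-namespace feature; a sketch of enabling it, assuming the default dockerd config path and that /etc/subuid and /etc/subgid define subordinate ranges:

```shell
# Enable user-namespace remapping so that UID 0 inside containers maps
# to an unprivileged subordinate UID range on the host.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker

# The process is still root inside the container...
docker run --rm alpine id -u        # -> 0
# ...but on the host it runs under a high, unprivileged UID,
# visible in the host's process table (ps -eo user,comm).
```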
docker and other containerization technologies build on the namespaces feature of the Linux kernel to confine and isolate processes, limiting their view of available resources such as the filesystem, access to other processes, or the network.
By default docker uses a really strong isolation of processes, limiting their access to a minimum. This leads many people to believe that running any untrusted process/docker image within a container is perfectly safe - it is not!
Because despite the strong isolation, such processes still run directly on the kernel of the host system. And when they run as root within the container (and not using a user namespace) they are actually root on the host, too. The only thing then preventing a malicious container from completely taking over the host system is that it has no direct/straightforward access to critical resources. If, though, it somehow gets hold of a handle to a resource outside of its namespaces, that handle may be used to break out of the isolation.
It is easy for an incautious user to unintentionally provide such a handle to outside resources to a container. For example:
# DON'T DO THIS
# the user intends to share the mapping of user names to ids with the container
docker run -v /etc/passwd:/etc/passwd untrusted/image
With the process within the container running as root, the container would not only be able to read all users in /etc/passwd but also to edit that file, since it also has root access on the host. In this case - should you really need something like this - best practice would be to bind-mount /etc/passwd read-only.
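The read-only variant of that same mount is a small change (the image name is a placeholder):

```shell
# Share the uid/name mapping, but prevent the container from editing it:
# the :ro suffix makes the bind mount read-only inside the container.
docker run -v /etc/passwd:/etc/passwd:ro untrusted/image
```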
Another example: some applications require extended access to system capabilities, which requires loosening some of the strict isolation docker applies by default, e.g.:
# DON'T DO THIS
docker run --privileged --cap-add=ALL untrusted/image
This would remove most of the limitations and most notably would allow the container to load new kernel modules, i.e., inject and execute arbitrary code into the kernel, which is obviously bad.
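The inverse, hardened form drops everything and re-adds only what the application actually needs; here NET_BIND_SERVICE is an illustrative choice and the image name is a placeholder:

```shell
# Start from zero capabilities, then grant only the ability to bind
# ports below 1024 -- module loading, raw device access, mount,
# ptrace, and the rest stay off the table.
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE some/image
```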
But besides giving access to external resources by mistake there also exists the possibility of bugs in the Linux kernel that could be exploited to escape the isolation of the container - which are much easier to use when the process already has root privileges inside the container.
Therefore, best practice with docker is to limit the access of containers as much as possible:
drop/do not add any capabilities/privileges that are not required
bind-mount only files/directories that are really required
use a non-root user when possible
Although starting containers as root is the default in docker, most applications do not actually require being started as root. So why give root privileges when they are not really required?
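Dropping root can be done at run time or baked into the image at build time; a sketch with placeholder names:

```shell
# At run time: override the image's default user with an unprivileged
# uid:gid pair.
docker run --user 1000:1000 some/image

# At build time, in the Dockerfile: create a dedicated user and switch
# to it, so the image never defaults to root:
#   RUN adduser --system --no-create-home appuser
#   USER appuser
```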

Can I build a docker container based on the host file system?

I want to use docker for its network isolation, but that's all.
More specifically, I want to run two programs and only allow network access to a certain port on the one program if the connection is relayed through the second program. The one program is a VNC server and the second program is a Websocket relay with a custom authentication scheme.
So, I'm thinking about putting them both in a container and using docker port mappings to control their network access.
Can I setup docker so that I use the host's file system directly? I'd like to do things like access an .Xauthority file and create UNIX domain sockets (the VNC server does this). I know that I could mount the host filesystem in the container, but it'd be simpler to just use it directly as the container's filesystem. I think.
Is this possible? Easy?
No, every container is based on an image that packages the filesystem layers. The filesystem namespace cannot be disabled in docker (unlike the network, pid, and other namespaces you can set to "host").
For your requirements, if you do not want to use host volume mounts, and do not want to package the application in an image, then you would be better off learning network namespaces in the Linux kernel which docker uses to implement container isolation. The ip netns command is a good place to start.
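A minimal sketch of doing that isolation by hand with network namespaces (requires root; the namespace and interface names are placeholders, and Xvnc stands in for whatever VNC server is in use):

```shell
# Create an isolated network namespace and a veth pair linking it
# to the host.
sudo ip netns add vncns
sudo ip link add veth-host type veth peer name veth-vnc
sudo ip link set veth-vnc netns vncns

# Address both ends; only traffic over this link reaches the namespace.
sudo ip addr add 10.0.0.1/24 dev veth-host
sudo ip link set veth-host up
sudo ip netns exec vncns ip addr add 10.0.0.2/24 dev veth-vnc
sudo ip netns exec vncns ip link set veth-vnc up
sudo ip netns exec vncns ip link set lo up

# Run the server inside the namespace: its network is confined, but it
# still sees the host filesystem (no mount namespace), which matches
# the question's goal.
sudo ip netns exec vncns Xvnc :1
```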

Set host -> IP mapping from INSIDE DOCKER CONTAINER

I want to set a hosts-file-like entry for a specific domain name FROM MY APPLICATION INSIDE a docker container. As in, I want the app in the container to resolve x.com to 192.168.1.3, automatically, without any outside input or configuration. I realize this is unconventional compared to canonical docker use-cases, so don't @ me about it :)
I want my code to, on a certain branch, use a different hostname:ip mapping for a specific domain. And I want it to do it automatically and without intervention from the host machine, docker daemon, or end-user executing the container. Ideally this mapping would occur at the container infrastructure level, rather than some kind of modification to the application code which would perform this mapping itself.
How should I be doing this?
Why is this hard?
The /etc/hosts file in a docker container is bind-mounted and managed by the docker daemon, and is not meant to be modified from inside the container.
DNS for a docker container is linked to the DNS of the underlying host in a number of ways, and it's not smart to mess with it too much.
Requirements:
Inside the container, domain x.com resolves to non-standard, pre-configured IP address.
The container is a standard Docker container running on a host of indeterminate configuration.
Constraints:
I can't pass the configuration as a runtime flag (e.g. --add-host).
I can't expose the mapping externally (e.g. set a DNS entry on the host machine).
I can't modify the underlying host, or count on it being configured a certain way.
Open questions:
Is it possible to set DNS entries from inside the container and override host DNS for the container only? If so, what's a good lightweight, low-management tool for this (e.g. dnsmasq, CoreDNS)?
Is there some magic by which I can impersonate the /etc/hosts file, or add a pre-processed file before it in the resolution chain?
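If the container's /etc/hosts turns out to be writable by root inside the container (on many Docker setups the daemon bind-mounts it read-write), one self-contained approach is an entrypoint helper in the image; `add_mapping` and the branch check are hypothetical names for illustration:

```shell
#!/bin/sh
# Append the pinned mapping exactly once; safe to call on every start.
add_mapping() {
    hosts_file="$1"
    grep -q 'x\.com' "$hosts_file" 2>/dev/null \
        || printf '192.168.1.3 x.com\n' >> "$hosts_file"
}

# In the real entrypoint this would be followed by:
#   [ "$APP_BRANCH" = "staging" ] && add_mapping /etc/hosts
#   exec "$@"
```

Note the changes live only in the container's copy of the file and are reset when the daemon recreates it, which fits the "container infrastructure level, no host intervention" requirement.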

Can a docker instance cause harm to the host?

Is the docker host completely protected from anything the docker instance can do?
As long as you don't expose a volume to the docker instance, are there other ways it can actually connect into the host and 'hack' it?
For example, say I allow customers to run code inside of a server that I run. I want to understand the potential security implications of allowing a customer to run arbitrary code inside of an docker instance.
All processes inside Docker are isolated from the host machine. They cannot, by default, see or interfere with other processes. This is guaranteed by the process namespaces used by docker.
As long as you don't mount crucial stuff (for example, docker.sock) into the container, there are no security risks associated with running a container, even while allowing code execution inside it.
For a list of security features in docker, check Docker security.
The kernel is shared between the host and the docker container. This is less separation than, say, a VM has.
Running any untrusted container is NOT SECURE. There are kernel vulnerabilities that can be abused, and ways to break out of containers.
That's why it's a best practice, for example, to either not use the root user in containers or to have a separate user namespace for containers.
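The shared-kernel point is easy to see from the host: without user namespaces, a root process in a container appears as root in the host's process table (illustration only; image and container name are placeholders):

```shell
# Start a throwaway container whose main process runs as root.
docker run --rm -d --name demo alpine sleep 300

# From the HOST: the same process is visible, owned by root.
ps -eo user,comm | grep '[s]leep'

# Clean up.
docker rm -f demo
```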

Is it possible to restrict access to lxc containers?

I would like to run a docker or LXC container but restrict access to the container itself. Specifically, is it possible to prevent even the root (root on the host) from accessing the container?
By access, I mean SSHing into the container, tcpdumping the tx/rx packets to and from the container, profiling the application, etc.
Thanks!
It is not possible to effectively restrict a privileged user on the host from inspecting or accessing the container. If that were the case, it's hard to imagine how it would be possible for the root user to even start the container in the first place.
In general, it's useful to remember that containerization is used to confine processes to a restricted space: it's used to keep a process from getting out to the host, not to prevent other processes from getting in.
