Get id of user on host, from a docker container

From inside a container, I would like to get the id of a user on the host machine (what the command id -u username would output, from the host).
Is there a way to accomplish this?
I thought I could mount /etc/passwd in the container and grep inside, but unfortunately the users are not listed in this file on our server (possibly related to the LDAP authentication mechanism?).
Thanks

I ended up solving this by mounting the host folder /home into my container and getting the id of the owner of the user's home dir /home/<user>.
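A minimal sketch of that workaround, assuming the host's /home is bind-mounted read-only (the mount point name is illustrative):
# print the numeric UID of the owner of the user's home directory on the host
docker run --rm -v /home:/host-home:ro busybox stat -c '%u' /host-home/<user>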

There's no way to get information about host users from inside a container. A design goal of Docker is that the host and containers are isolated from each other. A container has no concept of a host user; from the Docker daemon's point of view, it doesn't even really know which user requested that a container be launched.
(This is doubly true if your host authentication system is something more complicated like an LDAP setup: a container simply may not have the tools or credentials required to query it, and the isolation means there's no way to somehow delegate to the host.)
If a principal goal of your application is to interact with host users, or the host filesystem, or you otherwise actively don't want Docker's isolation features, it's better to run your program outside of Docker.

Related

Running rootless docker containers with different users

I've just recently started exploring rootless docker and there are some things that I don't fully grasp. Below is my understanding of the concept and some questions; please correct me if something's wrong.
With rootless, the daemon and containers can be run as non-root users to mitigate potential vulnerabilities: e.g., if someone were to gain access to a container running as root, then he could also have root if he got outside of the container (and into the host system). So, if someone were to gain access to a rootless container, then he'd only be able to act as the non-root user running the container.
I want to run multiple containers that don't need any network between them, so I'm thinking that it would probably make sense to not run them as the same user, but is that correct? Also, in such a scenario, would I need to install and run the daemon multiple times for each user?
What about the user inside the container? I've tried out pihole/pihole and the default user inside the container is root (id: 0). Is that now ok, as the container is otherwise rootless? I've tried setting it to a different user by using user: "1005:1005" (in docker-compose.yml), but then the container is not able to start as it's missing permissions to do some tasks.
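For reference, a rough CLI equivalent of that compose-level override (the image tag is an assumption, and pihole normally needs ports and volumes as well):
# run the container process as UID/GID 1005 instead of the image's default root user
docker run --user 1005:1005 pihole/pihole:latest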

Protecting Docker daemon

Am I understanding correctly that the docs discuss how to protect the Docker daemon when commands are issued (docker run,...) with a remote machine as the target? When controlling docker locally this does not concern me.
Running Docker swarm does not require this step either, as the security between the nodes is handled by Docker automatically. For example, using Portainer in a swarm with multiple agents does not require extra security steps, due to the overlay network in a swarm being encrypted by default.
Basically, when my target machine will always be localhost there are no extra security steps to be taken, correct?
Remember that anyone who can run any Docker command can almost trivially get unrestricted root-level access on the host:
docker run -it -v /:/host busybox sh
# vi /host/etc/passwd
So yes, if you're using a remote Docker daemon, you must run through every step in that document, correctly, or your system will get rooted.
If you're using a local Docker daemon and you haven't enabled the extremely dangerous -H option, then security is entirely controlled by Unix permissions on the /var/run/docker.sock special file. It's common for that socket to be owned by a docker group, and to add local users to that group; again, anyone who can run docker ps can also trivially edit the host's /etc/sudoers file and grant themselves whatever permissions they want.
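As a quick way to see who falls inside that trust boundary on a given host:
# the daemon socket is typically owned root:docker with mode 660
ls -l /var/run/docker.sock
# every member of this group can effectively become root on the host
getent group docker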
So: accessing docker.sock implies trust with unrestricted root on the host. If you're passing the socket into a Docker container that you're trusting to launch other containers, you're implicitly also trusting it to not mount system directories off the host when it does. If you're trying to launch containers in response to network requests, you need to be insanely careful about argument handling lest a shell-injection attack compromise your system; you are almost always better off finding some other way to run your workload.
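The risky pattern being described looks like this (the image name is a placeholder):
# handing the daemon socket to a container: whatever runs inside it can now
# start privileged containers, bind-mount / from the host, and so on
docker run -v /var/run/docker.sock:/var/run/docker.sock some-image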
In short, just running Docker isn't a free pass on security concerns. A lot of common practices, while convenient, are actually quite insecure. A quick Web search for "Docker cryptojacking" will very quickly turn up the consequences.

If a user had access to the terminal in a docker container, can they do anything to destroy the hard drive it's on?

If a user had access to a root Ubuntu terminal in a docker container, can they do anything to destroy the hard drive or SSD it is on?
Link: gitlab.com/pwnsquad/term
Docker by default gives root access to containers.
A container can damage your host system only if you bypass Docker's container isolation mechanisms; otherwise the only damage can be done to the container itself, not the host.
The simplest ways to break the isolation mechanisms are the following:
using Docker's bind mounts, where you map a host path into a container path. In this case that path may be completely wiped from inside the container. Avoid bind mounts (use volumes) or mount in read-only (ro) mode to avoid that (see the sketch after this list).
using networking, especially network=host, which gives the container access to all of the host's active network services and thus probably makes the host vulnerable to attacks on them. In this case you can connect to services that are bound locally (to 127.0.0.1/localhost) and thus do not expect remote connections, which could make them less protected.
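A hedged sketch of both patterns (image names and host paths are placeholders):
# bind mount: the container sees /srv/data from the host; the :ro suffix
# makes it read-only from inside the container
docker run -v /srv/data:/data:ro some-image
# host networking: the container shares the host's network namespace,
# including services bound only to 127.0.0.1
docker run --network=host some-image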

Can a docker instance cause harm to the host?

Is the docker host completely protected for anything the docker instance can do?
As long as you don't expose a volume to the docker instance, are there other ways it can actually connect into the host and 'hack' it?
For example, say I allow customers to run code inside of a server that I run. I want to understand the potential security implications of allowing a customer to run arbitrary code inside of a docker instance.
All processes inside Docker are isolated from the host machine. They cannot, by default, see or interfere with other processes. This is guaranteed by the process namespaces used by Docker.
As long as you don't mount crucial stuff (example: docker.sock) onto the container, there are no security risks associated with running a container, and even with allowing code execution inside the container.
For a list of security features in docker, check Docker security.
The kernel is shared between the host and the docker container. This is less separation than, let's say, a VM has.
Running any untrusted container is NOT SECURE. There are kernel vulnerabilities that can be abused, and ways to break out of containers.
That's why it's a best practice, for example, to either not use the root user in containers or to have a separate user namespace for containers.
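Two hedged sketches of those mitigations (the UID/GID and image name are illustrative):
# run the container process as a non-root UID/GID even if the image defaults to root
docker run --user 1000:1000 some-image
# or remap container root to an unprivileged host user daemon-wide by adding
#   { "userns-remap": "default" }
# to /etc/docker/daemon.json and restarting the Docker daemon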

Is it possible to restrict access to lxc containers?

I would like to run a docker or LXC container but restrict access to the container itself. Specifically, is it possible to prevent even the root (root on the host) from accessing the container?
From access, I mean SSH into the container, tcpdump the tx/rx packets to the container, profiling the application, etc.
Thanks!
It is not possible to effectively restrict a privileged user on the host from inspecting or accessing the container. If that were the case, it's hard to imagine how it would be possible for the root user to even start the container in the first place.
In general, it's useful to remember that containerization is used to confine processes to a restricted space: it's used to keep a process from getting out to the host, not to prevent other processes from getting in.
