SSH to a user inside a Docker container - docker

I would like to know if it is possible to use SSH inside a container in order to access a local user on the same container:
ssh user@localhost
I used ssh-keygen to generate a new key for root and for user. I also copied root's public key into user's authorized_keys file, but this isn't working.
I also changed the SSH key permissions.
Thanks in advance

You can get command line access to the docker container from the machine it's running on by using
docker exec -it CONTAINER_ID /bin/bash
You can get the container id with docker ps
Once inside the container you should be able to change users with su - username
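Putting it together, a minimal sketch (the container name mycontainer and the local user appuser are placeholders for your own names):
docker ps                                # find the container ID or name
docker exec -it mycontainer /bin/bash    # open a root shell inside the container
su - appuser                             # switch to the local user inside the container
This avoids running an SSH server inside the container at all.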

Related

Switch to Docker Root User

I am working on Docker, and before I execute any command on the Docker CLI, I need to switch to the root user using the command
sudo su - root
Can anyone please tell me why we need to switch to the root user to perform any operation on the Docker Engine?
You don't need to switch to root for Docker CLI commands; it is common to add your user to the docker group instead:
sudo groupadd docker
sudo usermod -aG docker $USER
see: https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user
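After that, log out and back in (or run newgrp docker) so the new group membership takes effect, and verify that plain docker commands work, as described on the linked page:
newgrp docker            # or log out and log back in
docker run hello-world   # should now work without sudo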
The reason why Docker runs as root:
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.
Using docker commands, you can trivially get root-level access to any part of the host filesystem. The most basic example is
docker run --rm -v /:/host busybox cat /host/etc/shadow
which will get you a file of encrypted passwords that you can crack offline at your leisure; but if I wanted to actually take over the machine I'd just write my own line into /host/etc/passwd and /host/etc/shadow creating an alternate uid-0 user with no password and go to town.
Docker doesn't really have any way to limit what docker commands you can run or what files or volumes you can mount. So if you can run any docker command at all, you have unrestricted root access to the host. Putting it behind sudo is appropriate.
The other important corollary to this is that using the dockerd -H option to make the Docker socket network-accessible is asking for your system to get remotely rooted. Google "Docker cryptojacking" for some more details and prominent real-life examples.

ssh from container to remote host without openssh-client setup

I have a scenario where Host H1 is running a docker container C1 and Host H2 (within the same network) is running a docker container C2. SSH between H1 and H2 is setup with public-key authentication. My use case is to be able to run a script on C2 by invoking a command from C1. I'm able to achieve this by setting up ssh on C1 (openssh-client), which involves copying the private key from H1 into the .ssh directory on C1, assigning it appropriate permissions and then running ssh -t H2 docker exec C2 sh <script_name>.
Is there a way to achieve this without setting up the ssh client in C1?
I tried creating the same user U in C1 as on H1 (the one that owns the private key), with the same group ID and user ID, and then tried SSHing from C1 after logging in as that user, but that didn't work.
I'm not sure whether copying the private key from the running host into a container image is in line with best practices for Docker/VMs.
Ok, based on the question's comments, I'd suggest the following.
First, you definitely need some private/public key pair that the container can use, in one or the other way. Without this, SSH obviously won't work.
However, instead of copying the private key into the container, you could mount the SSH agent socket referenced by your SSH_AUTH_SOCK environment variable from your host machine into the container where the SSH client is installed. If your host machine is authorized to connect to your target, the container will then be, too. Minimal example:
docker run --rm -it -v $SSH_AUTH_SOCK:/ssh-agent -e "SSH_AUTH_SOCK=/ssh-agent" --entrypoint sh panubo/sshd -c "ssh -o StrictHostKeyChecking=no [REMOTE_MACHINE_IP]"
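This assumes an SSH agent is already running on H1 with the key loaded. A minimal sketch of that setup (the key path is an assumption; use whatever key H1 uses for H2):
eval "$(ssh-agent -s)"    # start an agent on H1 if one isn't already running
ssh-add ~/.ssh/id_rsa     # load the private key; $SSH_AUTH_SOCK now points at the agent socket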

SSH keys keep getting deleted from Google Compute Engine VM

Background:
I am running a Google Compute Engine VM, called host.
There is a Docker container running on the machine called container.
I connect to the VM using an account called user@gmail.com.
I need to connect through ssh from the container to the host, without being prompted for the user password.
Problem:
Minutes after successfully connecting from the container to the host, user's ~/.ssh/authorized_keys gets "modified" by some process from Google itself. As far as I understand, this process appends the SSH keys needed to connect to the VM. In my case, though, the process seems to overwrite the key that I generated from the container.
Setup:
I connect to host using Google Compute Engine GUI, pressing on the SSH button.
Then I follow the steps described in this answer on AskUbuntu.
I set the password for user on host:
user@host:~$ sudo passwd user
I set PasswordAuthentication to yes in sshd_config, and I restart sshd:
user@host:~$ sudo nano /etc/ssh/sshd_config
user@host:~$ sudo systemctl restart sshd
I enter in the Docker container using bash, I generate the key, and I copy it on the host:
user@host:~$ docker exec -it container /bin/bash
(base) root@container-id:# ssh-keygen
(base) root@container-id:# ssh-copy-id user@host
The key is successfully copied to the host, the host is added to the known_hosts file, and I am able to connect from the container to the host without being prompted for the password (as I gave it during the ssh-copy-id execution).
Now, if I detach from the host, let some time pass, and attach again, I find that user's ~/.ssh/authorized_keys file contains some keys generated by Google, but there is no trace of my key (the one that allows the container to connect to the host).
What puzzles me more than everything is that we consistently used this process before and we never had such problem. Some accounts on this same host have still keys from containers that no longer exist!
Does anyone have any idea about this behavior? Do you know of any solution that lets me keep the key for as long as it is needed?
It looks like the accounts daemon is doing this. You can refer to this discussion thread for more details.
You might find the OS Login API an easier management option. Once enabled, you can use a single gcloud command or API call to add SSH keys.
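A rough sketch of that flow with gcloud (the instance name and key path are placeholders):
gcloud compute instances add-metadata INSTANCE_NAME --metadata enable-oslogin=TRUE
gcloud compute os-login ssh-keys add --key-file=/root/.ssh/id_rsa.pub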
In case anyone has trouble with this even AFTER adding SSH keys to the GCE metadata:
Make sure your username is in the SSH key description section!
For example, if your SSH key is
ssh-rsa AAAA...zzzz
and your login is ubuntu, make sure you actually enter
ssh-rsa AAAA...zzzz ubuntu
since it appears Google copies the key to the authorized_keys of the user specified inside the key.
In case anyone is still looking for a solution to this, I solved the issue by storing the SSH keys in Compute Engine metadata: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
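For reference, a sketch of adding a key through instance metadata with gcloud (the instance name and key file are placeholders; note the USERNAME:KEY format):
# keys.txt contains a line like:  user:ssh-rsa AAAA...zzzz user
gcloud compute instances add-metadata INSTANCE_NAME --metadata-from-file ssh-keys=keys.txt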

How can I see which user launched a Docker container?

I can view the list of running containers with docker ps or, equivalently, docker container ls (added in Docker 1.13). However, it doesn't display the user who launched each Docker container. How can I see which user launched a Docker container? Ideally I would like the list of running containers along with the user who launched each of them.
You can try this:
docker inspect $(docker ps -q) --format '{{.Config.User}} {{.Name}}'
Edit: Container name added to output
There's no built in way to do this.
You can check the user that the application inside the container is configured to run as by inspecting the container for the .Config.User field, and if it's blank the default is uid 0 (root). But this doesn't tell you who ran the docker command that started the container. User bob with access to docker can run a container as any uid (this is the docker run -u 1234 some-image option to run as uid 1234). Most images that haven't been hardened will default to running as root no matter the user that starts the container.
To understand why, realize that docker is a client/server app, and the server can receive connections in different ways. By default, this server is running as root, and users can submit requests with any configuration. These requests may be over a unix socket, you could sudo to root to connect to that socket, you could expose the API to the network (not recommended), or you may have another layer of tooling on top of docker (e.g. Kubernetes with the docker-shim). The big issue in that list is the difference between the network requests vs a unix socket, because network requests don't tell you who's running on the remote host, and if it did, you'd be trusting that remote client to provide accurate information. And since the API is documented, anyone with a curl command could submit a request claiming to be a different user.
In short, every user with access to the docker API is an anonymized root user on your host.
The closest you can get is to either place something in front of docker that authenticates users and populates something like a label. Or trust users to populate that label and be honest (because there's nothing in docker validating these settings).
$ docker run -l "user=$(id -u)" -d --rm --name test-label busybox tail -f /dev/null
...
$ docker container inspect test-label --format '{{ .Config.Labels.user }}'
1000
Beyond that, if you have a deployed container, sometimes you can infer the user by looking through the configuration and finding volume mappings back to that user's home directory. That gives you a strong likelihood, but again, not a guarantee since any user can set any volume.
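For example, a quick sketch of dumping a container's mounts to look for a home-directory bind (the container name is a placeholder):
docker container inspect some-container --format '{{ json .Mounts }}'
A Source like /home/alice/... is a strong hint but, as noted above, not a guarantee.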
I found a solution. It is not perfect, but it works for me.
I start all my containers with an environment variable ($CONTAINER_OWNER in my case) which includes the user. Then, I can list the containers with the environment variable.
Start container with environment variable
docker run -e CONTAINER_OWNER=$(whoami) MY_CONTAINER
Start docker compose with environment variable
echo "CONTAINER_OWNER=$(whoami)" > deployment.env # Create env file
docker-compose --env-file deployment.env up
List containers with the environment variable
for container_id in $(docker container ls -q); do
  echo $container_id $(docker exec $container_id bash -c 'echo "$CONTAINER_OWNER"')
done
As far as I know, docker inspect will show only the configuration that the container started with.
Because an entrypoint (or any init script) might change the user, those changes will not be reflected in the docker inspect output.
In order to work around this, you can overwrite the default entrypoint set by the image with --entrypoint="" and specify a command like whoami or id after it.
You asked specifically to see all the running containers and the launching user, so this solution is only partial: it gives you the user in case it doesn't appear with the docker inspect command:
docker run --entrypoint "" <image-name> whoami
Maybe somebody will proceed from this point to a full solution (:
Read more about entrypoint "" here.
If you are used to the ps command, run ps on the Docker host and grep for part of the process running inside your container. For example, if you have a Tomcat container running, you may run the following command to get details on which user started the container.
ps -u | grep tomcat
This is possible because containers are nothing but processes managed by Docker. However, this will only work on a single host. Docker provides alternatives to get container details, as mentioned in other answers.
This command will print the UID and GID:
docker exec <CONTAINER_ID> id
ps -aux | less
Find the process's name (the one running inside the container) in the last column, and you will see the user who ran it in the first column.

Can one docker user hide data from another?

Alice and Bob are both members of the docker group on the same host. Alice wants to run some long-running calculations in a docker container, then copy the results to her home folder. Bob is very nosy, and Alice doesn't want him to be able to read the data that her calculation is using.
Is there anything that the system administrator can do to keep Bob out of Alice's docker containers?
Here's how I think Alice should get data in and out of her container, based on named volumes and the docker cp command, as described in this question and this one.
$ pwd
/home/alice
$ date > input1.txt
$ docker volume create sandbox1
sandbox1
$ docker run --name run1 -v sandbox1:/data alpine echo OK
OK
$ docker cp input1.txt run1:/data/input1.txt
$ docker run --rm -v sandbox1:/data alpine sh -c "cp /data/input1.txt /data/output1.txt && date >> /data/output1.txt"
$ docker cp run1:/data/output1.txt output1.txt
$ cat output1.txt
Thu Oct 5 16:35:30 PDT 2017
Thu Oct 5 23:36:32 UTC 2017
$ docker container rm run1
run1
$ docker volume rm sandbox1
sandbox1
$
I create an input file, input1.txt and a named volume, sandbox1. Then I start a container named run1 just so I can copy files into the named volume. That container just prints an "OK" message and quits. I copy the input file, then run the main calculation. In this example, it copies the input to the output and adds a second timestamp to it.
After the calculation finishes, I copy the output file, then remove the container and the named volume.
Is there any way to stop Bob from loading his own container that mounts the named volume and shows him Alice's data? I've set up Docker to use a user namespace, so Alice and Bob don't have root access to the host, but I can't see how to make Alice and Bob use different user namespaces.
Alice and Bob have been granted virtual root access to the host by being in the docker group.
The docker group grants them access to the Docker API via a socket file. There is no facility in Docker at the moment to differentiate between users of the Docker API. The Docker daemon runs as root and by virtue of what the Docker API allows, Alice and Bob will be able to work around any barriers that you did try to put in place.
User Namespaces
User namespace isolation stops users inside a container from breaking out of the container as a privileged or different user; in effect, the container process now runs as an unprivileged user.
An example would be
Alice is given ssh access to container A running in namespace_a.
Bob is given ssh access to container B in namespace_b.
Because the users are now confined to their containers, they won't be able to modify each other's files on the host. Say both containers mapped the same host volume: files without world read/write/execute permissions would still be safe from the other container. As they have no control over the daemon, they can't do anything to break out.
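For completeness, a minimal sketch of turning on user-namespace remapping on the daemon (this remaps container root to an unprivileged host range; as discussed below, it does not restrict the API itself):
$ cat /etc/docker/daemon.json
{
  "userns-remap": "default"
}
$ sudo systemctl restart docker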
Docker Daemon
The namespace doesn't secure the Docker daemon and API itself, which is still a privileged process. The first way around a user namespace is to set the host namespace on the command line:
docker run --privileged --userns=host busybox fdisk -l
The docker exec, docker cp and docker export commands will give someone with access to the Docker API the contents of any created containers.
Restricting Docker Access
It is possible to restrict access to the API, but then you can't have users with shell access in the docker group.
Allow a limited set of docker commands via sudo, or provide sudo access to scripts that hard-code the docker parameters:
#!/bin/sh
docker run --userns=whom image command
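A hedged sketch of the matching sudoers rule, assuming the wrapper above is saved as /usr/local/bin/run-image.sh (the path and username are placeholders):
# /etc/sudoers.d/docker-wrapper
alice ALL=(root) NOPASSWD: /usr/local/bin/run-image.sh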
For automated systems, access can be provided via an additional shim API with appropriate access controls in front of the Docker API that then passes on the "controlled" request to Docker. dockerode or docker-py can be easily plugged into a REST service and interface with Docker.
