ssh from container to remote host without openssh-client setup - docker

I have a scenario where Host H1 is running a docker container C1 and Host H2 (within the same network) is running a docker container C2. SSH between H1 and H2 is set up with public-key authentication. My use case is to be able to run a script on C2 by invoking a command from C1. I'm able to achieve this by setting up ssh on C1 (openssh-client), which involves copying the private key from H1 into the .ssh directory on C1, assigning it appropriate permissions and then running ssh -t H2 docker exec C2 sh <script_name>.
Is there a way to achieve this without setting up the ssh client in C1?
I tried creating the same user U in C1 as on H1 that owns the private key, with the same group ID and user ID, and then tried ssh'ing from C1 after logging in as that user, but that didn't work.
I'm not sure if copying the private key from the running host into a container image is in line with best practices for Docker/VMs.

Ok, based on the question's comments, I'd suggest the following.
First, you definitely need some private/public key pair that the container can use, in one or the other way. Without this, SSH obviously won't work.
However, instead of copying the private key into the container, you could mount the SSH agent socket that your SSH_AUTH_SOCK environment variable points to from your host machine into the container where the SSH client is installed. If your host machine is authorized to connect to your target, the container will then be, too. Minimal example:
docker run --rm -it -v $SSH_AUTH_SOCK:/ssh-agent -e "SSH_AUTH_SOCK=/ssh-agent" --entrypoint sh panubo/sshd -c "ssh -o StrictHostKeyChecking=no [REMOTE_MACHINE_IP]"
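Note that this assumes an SSH agent is actually running on your host and already holds a key that the remote accepts. A quick way to check and prepare this on the host (the key path is just an example):
eval "$(ssh-agent -s)"   # start an agent if none is running
ssh-add ~/.ssh/id_rsa    # load the key the remote accepts
ssh-add -l               # verify the key is listed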

Related

SSH a user inside docker container

I would like to know if it is possible to use ssh inside a container in order to access a local user (on the same container):
ssh user@localhost
I used ssh-keygen to generate a new key for root and for user. I also copied root's public key into user's authorized_keys file, but this isn't working.
I also changed the SSH key permissions.
Thanks in advance
You can get command line access to the docker container from the machine it's running on by using
docker exec -it CONTAINER_ID /bin/bash
You can get the container id with docker ps
Once inside the container you should be able to change users with su - username
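So instead of ssh user@localhost, the whole thing can usually be done in one step from the host; a sketch, with CONTAINER_ID and username as placeholders:
docker exec -it -u username CONTAINER_ID /bin/bash
# or enter as the default user and switch afterwards:
docker exec -it CONTAINER_ID su - username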

How to run shell script on Host from jenkins docker container?

I know my issue is already discussed in How to run shell script on host from docker container? but I think my issue is a little bit more complicated.
First, let me explain my situation. I'm running Jenkins 2.x in a docker container on a CentOS VM (the host). In Jenkins I created a job which checks out 3 files from SVN (2 shell scripts and 1 .jar file). These files are downloaded into the Jenkins workspace in the Jenkins docker container and also on the host in a mounted directory like this:
volumes:
- ${DATA_HOME}/jenkins/data:/var/jenkins_home
One of these scripts is executed by the Jenkins job, and it in turn executes the other script. The second script checks out an SVN directory and does much more stuff.
So I want a new mounted volume so that all results of the executed second script end up on the host. Connecting to the host over SSH and executing the script there seems fine, but how can I do that?
I hope I could explain my issue understandably.
I will answer the part about "connecting to the host over SSH and executing the script".
Pass the host machine IP to your run command:
docker run --name redis --env pass=pass_my --add-host="hostmachine:192.168.1.23" -dit redis
Now,
docker exec -it redis ash
and run this command. This will do SSH from the container to the host:
ssh user_name@hostmachine 'ls; bash /home/user_name/Desktop/test.sh; docker run --name db -dit db; docker ps'
If you want something without a password, then set up an SSH key in the container, or you can also try
sshpass -p $pass ssh user_name@hostmachine 'ls; /home/user_name/Desktop/test.sh; docker run --name db -dit db; docker ps'
Or, if you want to run a script that is inside the container, you can do that too; just pipe the script to ssh:
sshpass -p $pass ssh user_name@hostmachine < ./ab.sh
Note: $pass is the host's password taken from an ENV variable, and hostmachine is the host entry we set with --add-host in the run command.
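For the key-based option mentioned above, a rough sketch run inside the container could look like this (the key path is an assumption, and ssh-copy-id may need to be installed in the image):
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""      # generate a key without a passphrase
ssh-copy-id user_name@hostmachine             # asks for the host password once
ssh user_name@hostmachine 'bash /home/user_name/Desktop/test.sh'   # now passwordless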
Based on the comments on this answer:
We can simply install any SSH plugin (SSH) or (Publish over SSH) and
it will work after providing username/password.
The only thing to watch out for is that hostname resolution does not work, so we will need to provide an IP address.
As pointed out, this is not the best approach, but sometimes when migrating from older systems we need to move one step at a time, and this is the easiest step to take.

How to add known_hosts for passwordless ssh within a docker container in docker-compose.yml?

I want to have passwordless ssh between two docker containers. How do I add a known_hosts entry for that using the docker-compose.yml file?
I want to run Ansible in a docker environment. To deploy and run an RPM on the deployment node, I need passwordless ssh from container1 to container2. For that I have to add the target container's host key to the known_hosts of the other container.
How do I do this?
I don't know of any solution using docker-compose.yml alone. The solution I propose involves creating a Dockerfile and executing (from a shell script used as CMD):
ssh-keyscan -t rsa whateverdomain >> ~/.ssh/known_hosts
Maybe you can scan /etc/hosts or pass the hostname as an ENV variable.
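A minimal sketch of such a shell script used as the container's entrypoint, assuming the target hostname is passed in via a TARGET_HOST environment variable (names are placeholders):
#!/bin/sh
# entrypoint.sh - populate known_hosts before starting the main process
mkdir -p ~/.ssh
ssh-keyscan -t rsa "${TARGET_HOST:-container2}" >> ~/.ssh/known_hosts
exec "$@"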
Try to mount it from the host into the container:
--volume local/path/to/known_hosts:/etc/ssh/ssh_known_hosts
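Since the question asks for docker-compose.yml, the same bind mount could be expressed roughly like this (service name and local path are placeholders):
services:
  container1:
    volumes:
      - ./known_hosts:/etc/ssh/ssh_known_hosts:ro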
In case it doesn't work, take a look at some similar cases related to SSH keys in docker, like this
and this

How to mount private SSH key to Docker for Windows container?

Good day everyone.
I have following dev environment:
Win 10 host
Docker Desktop for Windows latest
php5.6 image running in container via docker-compose
How can I mount my private SSH key to this container? Or is there any possibility to tunnel Pageant from host machine to container?
All I want is to run Capifony deploy procedures in my container.
You could use a volume with -v /c/Users/<user>/.ssh/id_rsa:/<home dir>/.ssh/id_rsa:ro.
Here <home dir> is the ~ of the container user, e.g. /root, /, or /home/<user>. The :ro makes the mount read-only, so your key will not be overwritten by accident.
The permissions on the key mapped into the container will be too broad, but piping the key into ssh-add bypasses this:
cat ~/.ssh/id_rsa | ssh-add -
Depending on your container, ssh-agent may not be already running:
eval `ssh-agent`
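Putting it together for docker-compose (as used in the question), the read-only mount could be declared roughly like this; the service name and the container's home directory are assumptions:
services:
  php:
    volumes:
      - /c/Users/<user>/.ssh/id_rsa:/root/.ssh/id_rsa:ro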

How to run docker-compose on remote host?

I have a compose file locally. How do I run the bundle of containers on a remote host, like docker-compose up -d with DOCKER_HOST=<some ip>?
After the release of Docker 18.09.0 and the (as of now) upcoming docker-compose v1.23.1 release, this will get a whole lot easier. The mentioned Docker release added support for the SSH protocol in the DOCKER_HOST environment variable and the -H argument to docker ... commands respectively. The next docker-compose release will incorporate this feature as well.
First of all, you'll need SSH access to the target machine (which you'll probably need with any approach).
Then, either:
# Re-direct to remote environment.
export DOCKER_HOST="ssh://my-user@remote-host"
# Run your docker-compose commands.
docker-compose pull
docker-compose down
docker-compose up
# All docker-compose commands here will be run on remote-host.
# Switch back to your local environment.
unset DOCKER_HOST
Or, if you prefer, all in one go for one command only:
docker-compose -H "ssh://my-user@remote-host" up
One great thing about this is that all your local environment variables that you might use in your docker-compose.yml file for configuration are available without having to transfer them over to remote-host in some way.
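For instance, a tag variable referenced in your docker-compose.yml is resolved from your local shell even though the containers are started on remote-host (TAG and the image reference are made-up examples):
export TAG=1.2.3    # referenced as image: myapp:${TAG} in docker-compose.yml
docker-compose -H "ssh://my-user@remote-host" up -d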
If you don't need to run the docker containers on your local machine but on the remote machine instead, you can change this in your docker settings.
On the local machine:
You can control the remote host with the -H parameter:
docker -H tcp://remote:2375 pull ubuntu
To use it with docker-compose, you should add this parameter to /etc/default/docker.
On the remote machine
You should make the daemon listen on an external address, not only on the Unix socket.
See Bind Docker to another host/port or a Unix socket for more details.
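As an example, on older Debian/Ubuntu setups that still read /etc/default/docker, the daemon options could look roughly like this (exposing the daemon over plain TCP without TLS is insecure, so only do this on a trusted network):
DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375"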
If you need to run containers on multiple remote hosts, you should configure Docker Swarm.
You can now use docker contexts for this:
docker context create dev --docker "host=ssh://user@remotemachine"
docker-compose --context dev up -d
More info here: https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/
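If you prefer, you can also make the context the default instead of passing --context every time; a sketch, assuming a docker-compose version with context support (1.26+):
docker context use dev
docker-compose up -d
docker context use default   # switch back to the local daemon afterwards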
From the compose documentation
Compose CLI environment variables
DOCKER_HOST
Sets the URL of the docker daemon. As with the Docker client, defaults to unix:///var/run/docker.sock.
so that we can do
export DOCKER_HOST=tcp://192.168.1.2:2375
docker-compose up
Yet another possibility I discovered recently is controlling a remote Docker Unix socket via an SSH tunnel (credits to https://medium.com/@dperny/forwarding-the-docker-socket-over-ssh-e6567cfab160 where I learned about this approach).
Prerequisite
You are able to SSH into the target machine. Passwordless, key-based access is preferred for security and convenience; you can learn how to set this up e.g. here: https://askubuntu.com/questions/46930/how-can-i-set-up-password-less-ssh-login
Besides, some sources mention that forwarding Unix sockets via SSH tunnels is only available starting from OpenSSH v6.7 (run ssh -V to check); I did not try this out on older versions though.
SSH Tunnel
Now, create a new SSH tunnel between a local location and the Docker Unix socket on the remote machine:
ssh -nNT -L $(pwd)/docker.sock:/var/run/docker.sock user@someremote
Alternatively, it is also possible to bind to a local port instead of a file location. Make sure the port is open for connections and not already in use.
ssh -nNT -L localhost:2377:/var/run/docker.sock user@someremote
Re-direct Docker Client
Leave the terminal open and open a second one. In there, make your Docker client talk to the newly created tunnel-socket instead of your local Unix Docker socket.
If you bound to a file location:
export DOCKER_HOST=unix://$(pwd)/docker.sock
If you bound to a local port (example port as used above):
export DOCKER_HOST=localhost:2377
Now, run some Docker commands like docker ps or start a container, pull an image etc. Everything will happen on the remote machine as long as the SSH tunnel is active. In order to run local Docker commands again:
Close the tunnel by hitting Ctrl+C in the first terminal.
If you bound to a file location: Remove the temporary tunnel socket again. Otherwise you will not be able to open the same one again later: rm -f "$(pwd)"/docker.sock
Make your Docker client talk to your local Unix socket again (which is the default if unset): unset DOCKER_HOST
The great thing about this is that you save the hassle of copying docker-compose.yml files and other resources around or setting environment variables on a remote machine (which is difficult).
Non-interactive SSH Tunnel
If you want to use this in a scripting context where an interactive terminal is not possible, there is a way to open and close the SSH tunnel in the background using the SSH ControlMaster and ControlPath options:
# constants
TEMP_DIR="$(mktemp -d -t someprefix_XXXXXX)"
REMOTE_USER=some_user
REMOTE_HOST=some.host
control_socket="${TEMP_DIR}"/control.sock
local_temp_docker_socket="${TEMP_DIR}"/docker.sock
remote_docker_socket="/var/run/docker.sock"
# open the SSH tunnel in the background - this will not fork
# into the background before the tunnel is established and fail otherwise
ssh -f -n -M -N -T \
-o ExitOnForwardFailure=yes \
-S "${control_socket}" \
-L "${local_temp_docker_socket}":"${remote_docker_socket}" \
"${REMOTE_USER}"#"${REMOTE_HOST}"
# re-direct local Docker engine to the remote socket
export DOCKER_HOST="unix://${local_temp_docker_socket}"
# do some business on remote host
docker ps -a
# close the tunnel and clean up
ssh -S "${control_socket}" -O exit "${REMOTE_HOST}"
rm -f "${local_temp_docker_socket}" "${control_socket}"
unset DOCKER_HOST
# do business on localhost again
Given that you are able to log in on the remote machine, another approach to running docker-compose commands on that machine is to use SSH.
Copy your docker-compose.yml file over to the remote host via scp, run the docker-compose commands over SSH, and finally clean up by removing the file again.
This could look as follows:
scp ./docker-compose.yml SomeUser@RemoteHost:/tmp/docker-compose.yml
ssh SomeUser@RemoteHost "docker-compose -f /tmp/docker-compose.yml up"
ssh SomeUser@RemoteHost "rm -f /tmp/docker-compose.yml"
You could even make it shorter and omit the sending and removing of the docker-compose.yml file by using the -f - option to docker-compose which will expect the docker-compose.yml file to be piped from stdin. Just pipe its content to the SSH command:
cat docker-compose.yml | ssh SomeUser@RemoteHost "docker-compose -f - up"
If you use environment variable substitution in your docker-compose.yml file, the above-mentioned command will not replace them with your local values on the remote host and your commands might fail due to the variables being unset. To overcome this, the envsubst utility can be used to replace the variables with your local values in memory before piping the content to the SSH command:
envsubst < docker-compose.yml | ssh SomeUser@RemoteHost "docker-compose -f - up"
