Docker-machine: How to exec an ssh command correctly?

I am running Docker on my OS X local host. I created a dev machine, and I am trying to run a command over SSH on my dev machine:
~$ docker-machine ssh dev -- ssh -o 'StrictHostKeyChecking no' \
-i /Users/yves/.docker/machine/machines/dev/id_rsa \
-N -L 5000:localhost:5000 root@harbor.dufour16.net &
I get:
[1] 28171
~$ exit status 255
Then I don't get any prompt back. I need to use CTRL-C, and I get:
[1]+ Exit 1 docker-machine ssh dev -- ssh -o 'StrictHostKeyChecking
no' -i /Users/yves/.docker/machine/machines/dev/id_rsa
-N -L 5000:localhost:5000 root@harbor.dufour16.net
Is there a way to execute it correctly? (Using boot2docker this was easier, as the command to be executed was quoted.) Thanks for any feedback.

You should be able to just use quotes, i.e.:
docker-machine ssh dev "ssh -o 'StrictHostKeyChecking no' \
-i /Users/yves/.docker/machine/machines/dev/id_rsa \
-N -L 5000:localhost:5000 root@harbor.dufour16.net &"
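If it helps, you can verify the forwarder afterwards with another quoted command on the machine; a sketch (assuming ps and grep are available on the dev VM):
docker-machine ssh dev "ps aux | grep '[s]sh -o'"
The bracketed [s] keeps grep from matching its own command line.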

Related

How to handle prompt in Docker Exec

I am trying to execute the following line:
docker exec --user www-data nextcloud_docker php /var/www/html/occ db:convert-filecache-bigint
which returns a prompt:
This can take up to hours, depending on the number of files in your instance!
Continue with the conversion (y/n)? [n]
Unfortunately, the docker exec command ends (returns to the shell) before the prompt can be answered, so I am not able to start the occ command.
How can I solve this?
Thanks.
You can try setting the -i flag on the docker command and piping a 'y' into it, like this:
echo y | docker exec -i --user www-data nextcloud_docker php /var/www/html/occ db:convert-filecache-bigint
or you can run the command fully interactively with the -it flags, like this:
docker exec -it --user www-data nextcloud_docker php /var/www/html/occ db:convert-filecache-bigint
occ has a -n (--no-interaction) switch.
I run it from cron, including the update. I have these lines in /home/update-nextcloud-inside-container.sh inside my container:
#!/bin/bash
date
# temporarily give www-data a login shell so that su -c works
sed -i 's~www-data:/var/www:/usr/sbin/nologin~www-data:/var/www:/bin/bash~g' /etc/passwd
su -c "cd /var/www/nextcloud; php /var/www/nextcloud/updater/updater.phar --no-interaction" www-data
su -c "cd /var/www/nextcloud; ./occ db:add-missing-indices -n" www-data
su -c "cd /var/www/nextcloud; ./occ db:convert-filecache-bigint -n" www-data
# restore the nologin shell for www-data
sed -i 's~www-data:/var/www:/bin/bash~www-data:/var/www:/usr/sbin/nologin~g' /etc/passwd
and the host cron launches a script with these lines:
ActiveContainer=$(/home/myusername/bin/lsdocker.sh | grep next)
docker exec -i ${ActiveContainer} /home/update-nextcloud-inside-container.sh
I see now that I am missing taking the instance offline while running the filecache conversion; I'll have to add that (see the sketch below).
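A minimal sketch of that addition, assuming the standard occ maintenance:mode switch, would wrap the conversion like this:
su -c "cd /var/www/nextcloud; ./occ maintenance:mode --on" www-data
su -c "cd /var/www/nextcloud; ./occ db:convert-filecache-bigint -n" www-data
su -c "cd /var/www/nextcloud; ./occ maintenance:mode --off" www-data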
Edit: (lsdocker.sh is a script that uses docker ps to list just the active container names)
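For reference, a minimal version of such a script might look like this (a sketch, not the poster's actual file):
#!/bin/bash
# print only the names of running containers
docker ps --format '{{.Names}}'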

SSH Permission denied (publickey,password) - container docker ubuntu 18.04

I've installed Docker on my Windows 10 machine and I'm using WSL1 to create a Dockerfile and to build and run containers, but I cannot connect via SSH: I get Permission denied (publickey,password).
My dockerfile is:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
My docker ps output is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b41411ef7a8a eg_sshd "/usr/sbin/sshd -D" 4 minutes ago Up 4 minutes 0.0.0.0:32768->22/tcp test_sshd
The ssh port is this:
➜ root$ docker port test_sshd 22
0.0.0.0:32768
When I try to connect via ssh I get "Permission denied":
➜ root$ ssh root@0.0.0.0 -p 32768
root@0.0.0.0: Permission denied (publickey,password).
The ssh service is up:
➜ root$ docker exec b41411ef7a8a service ssh status
* sshd is running
What am I doing wrong? I don't have any idea.
The problem is in this line:
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
because the original line is:
#PermitRootLogin prohibit-password
So the sed substitution matches, but the option remains commented out. No doubt you know how to fix this, but just in case: the solution is to include the # in the matching part:
RUN sed -Ei 's/#(PermitRootLogin).+/\1 yes/' /etc/ssh/sshd_config
By the way, you usually do not need an SSH server in a container to get inside it. It is possible to open a shell inside a container with docker exec -it <container> sh or (for Kubernetes) kubectl exec -it <pod_name> sh.
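To double-check the rendered option after a rebuild, something like this should work (reusing the container name from the question):
docker exec test_sshd grep PermitRootLogin /etc/ssh/sshd_config
# should print: PermitRootLogin yes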

Running docker commands on remote machine through ssh

I have two machines:
Ubuntu workstation running docker
Macbook with Mac OS
I want to be able to run docker commands from MacOS through ssh on my Ubuntu workstation.
Docker works fine when running commands on Ubuntu.
SSH works fine (key-based, with the identity saved).
I've tried creating a context:
docker context create ubuntu --docker "host=ssh://myuser#192.168.1.100"
docker context use ubuntu
docker run -it alpine sh
and I get:
docker: Cannot connect to the Docker daemon at http://docker. Is the docker daemon running?.
The same error appears when I try:
docker -H ssh://myuser#192.168.1.100 run -it alpine sh
None of the solutions I've found seems to help.
PS: 192.168.1.100 is only for the question; when running the commands I use the real IP, which is correct and not colliding with anything. Direct SSH works perfectly.
For your case you can use docker-machine:
Install:
base=https://github.com/docker/machine/releases/download/v0.16.0 &&
curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker-machine &&
sudo mv /tmp/docker-machine /usr/local/bin/docker-machine &&
chmod +x /usr/local/bin/docker-machine
Run/create:
docker-machine create \
--driver generic \
--generic-ip-address=put_here_ip_of_remote_docker \
--generic-ssh-key ~/.ssh/id_rsa \
vm_123
Check:
docker-machine ls
docker-machine ip vm_123
docker-machine inspect vm_123
Use:
docker-machine ssh vm_123
docker run -it alpine sh
exit
exit
eval $(docker-machine env -u)
Extra tips:
You can also make vm_123 the active docker machine with this command:
eval $(docker-machine env vm_123)
docker run -it alpine sh
exit
eval $(docker-machine env -u)
and unset vm_123 as the active machine with this command:
eval $(docker-machine env -u)
https://docs.docker.com/machine/drivers/generic/
https://docs.docker.com/machine/examples/aws/
https://docs.docker.com/machine/install-machine/
https://docs.docker.com/machine/reference/ssh/
Are you sure that the IP of your Ubuntu machine is 192.168.1.1? I think that is your router's IP :)
Can you post ip a from your Ubuntu, please?

Enabling ssh at docker build time

Docker version 17.11.0-ce, build 1caf76c
I need to run Ansible during docker build to build and deploy some Java projects to WildFly, so that when I run the docker image everything is already set up. However, Ansible needs SSH to localhost. So far I have been unable to make it work. I've tried different docker images and have now ended up with phusion (https://github.com/phusion/baseimage-docker#login_ssh). What I have at the moment:
FROM phusion/baseimage
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd -d &
RUN ssh -tt root@127.0.0.1
CMD ["/bin/bash"]
But I still get
Step 11/12 : RUN ssh -tt root@127.0.0.1
---> Running in cf83f9906e55
ssh: connect to host 127.0.0.1 port 22: Connection refused
The command '/bin/sh -c ssh -tt root@127.0.0.1' returned a non-zero code: 255
Any suggestions what could be wrong? Is it even possible to achieve that?
RUN /usr/sbin/sshd -d &
That runs a process in the background using a shell. As soon as the shell that started the background command returns, the RUN step exits and the container used for it terminates. The only thing saved from a RUN is the change to the filesystem; you do not keep running processes, environment variables, or shell state.
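A hypothetical two-step fragment makes that visible (assuming pgrep is available in the image):
RUN sleep 300 &
RUN pgrep sleep
The second RUN executes in a fresh container: pgrep finds no sleep process, returns a non-zero status, and the build fails right there.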
Something like this may work, but you may also need a sleep command to give sshd time to finish starting.
RUN /usr/sbin/sshd -d & \
ssh -tt root@127.0.0.1
I'd personally look for another way to do this without sshd during the build. This feels very kludgy and error prone.
There are multiple problems in that Dockerfile.
First of all, you can't run a background process in one RUN statement and expect that process to still exist in another RUN. Each statement of a Dockerfile runs in a different container, so processes don't persist between them.
Another issue is that 127.0.0.1 is not in known_hosts.
And finally, you must give sshd some time to start.
Here is a working Dockerfile:
FROM phusion/baseimage
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN printf "Host 127.0.0.1\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd & sleep 5 && ssh -tt root@127.0.0.1 'ls -al'
CMD ["/bin/bash"]
Anyway, I would rather find another solution than provisioning your image with Ansible in the Dockerfile. Check out ansible-container.

Why are exited docker containers not getting removed?

File name: dockerHandler.sh
#!/bin/bash
set -e
to=$1
shift
cont=$(docker run -d "$@")
code=$(timeout "$to" docker wait "$cont" || true)
docker kill $cont &> /dev/null
docker rm $cont
echo -n 'status: '
if [ -z "$code" ]; then
echo timeout
else
echo exited: $code
fi
echo output:
# pipe to sed simply for pretty nice indentation
docker logs $cont | sed 's/^/\t/'
docker rm $cont &> /dev/null
But whenever I check the container status after running the docker image, it gives a list of exited containers.
command: docker ps -as
Hence, to delete those exited containers, I manually run the command below:
docker rm $(docker ps -a -f status=exited -q)
You should add the flag --rm to your docker command:
From the docker help:
➜ ~ docker run --help | grep rm
--rm Automatically remove the container when it exits
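Applied to the script in the question, the run line would become something like this (note: on a recent Docker version --rm can be combined with -d; the container is then removed as soon as it exits, so the later docker logs and docker rm calls would have to go as well):
cont=$(docker run --rm -d "$@")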
I removed these lines:
docker kill $cont &> /dev/null
docker rm $cont
docker logs $cont | sed 's/^/\t/'
and used gtimeout instead of timeout on the Mac; it works fine.
To install gtimeout on a Mac, install CoreUtils:
brew install coreutils
Then, in line 8 of dockerHandler.sh, change timeout to gtimeout.
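With that change, the timeout line of the script becomes:
code=$(gtimeout "$to" docker wait "$cont" || true)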
