How do I ssh into the VM for Minikube? - docker

What is the username/password/keys to ssh into the Minikube VM?

You can use the Minikube binary for this, minikube ssh.

Minikube uses boot2docker as its base image, so the default SSH login to the VM ends up being docker:tcuser.
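For example, logging in directly with that password (a minimal sketch, assuming the VM is reachable at the address minikube ip reports):
ssh docker@$(minikube ip)    # password: tcuser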

I too wanted to log in without the Minikube command. I found that it drops the SSH key it generates into ~/.minikube/machines/<machine name>/id_rsa.
My machine was named the default "minikube", and therefore I could do:
ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip)

For Windows Hyper-V the answer was:
open "Hyper-V Manager"
right-click the "minikube" VM and connect to it
log in as user "root"
There was no password; that got me in.

minikube ssh -v 7
It will show verbose output in which you can see the full SSH command:
/usr/bin/ssh -F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none docker@127.0.0.1 -o IdentitiesOnly=yes -i ~/.minikube/machines/minikube/id_rsa -p 56290

The key files mentioned above are AuthOptions, which can be configured in the config.json file:
$HOME\.minikube\machines\minikube\config.json
Generally, the SSH user is: docker.
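As a quick way to see what is actually configured there, a hedged sketch (assuming the default machine name "minikube" and a macOS/Linux path; the exact field names vary by driver) is to grep the SSH-related entries out of config.json:
grep -E '"SSH(User|Port|KeyPath)"' ~/.minikube/machines/minikube/config.json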
If you want to SSH into your Minikube node/VM, use SSH keys. You can use a Windows client application like WinSCP to configure the keys for your VM. If the keys are not in the expected format (.ppk), use another client called PuTTYgen to convert them into the expected format.
After you're done, log in using WinSCP, and it will let you SSH into the desired VM using the configured keys.

docker/tcuser is the username/password to access it, and it's also a straightforward way in.
If you just want to manage the control plane, then minikube ssh is a quick way to log in.

Getting the user and password for Minikube on Mac:
cat ~/.minikube/machines/minikube/config.json
Logging in over SSH:
ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip)

Give the user name as docker and the password as tcuser, then press Enter.

minikube ssh -v 7
works for me. This gets me into the Minikube VM over SSH as the docker user.
ssh docker@{IP Address}
doesn't work for me.

Related

How to export docker process results to a local file - while connecting to the host through ssh?

Considering I have a Docker container (with Postgres) running, I can dump the data with pg_dump using:
sudo docker exec <DOCKERNAME> pg_dump --data-only --table=some_table some_db
I could further send this to a file by adding > export.sql
sudo docker exec <DOCKERNAME> pg_dump --data-only --table=some_table some_db > export.sql
Finally, this works fine in an (interactive) SSH session.
However, when using SSH the file is stored on the remote host instead of on my local system; I want the file locally rather than on the remote. I know I can send a command directly over SSH and have the output exported to the local host:
ssh -p 226 USER@HOST 'command' > local.sql
For example:
ssh -p 226 USER@HOST 'echo test' > local.sql
However, when I try to combine both commands I get an error:
ssh -p 226 USER@HOST 'sudo docker exec <DOCKERNAME> pg_dump --data-only --table=some_table some_db' > local.sql
sudo: no tty present and no askpass program specified
And if I dare remove sudo (which would be silly) I get: sh: docker: command not found. How do I solve this? How can I export the pg dump directly to my local PC? With a simple command? Or at least without first creating a copy of the file on the remote system?
I'd avoid sudo or docker exec for this setup.
First, make sure that your database container has a port published to the host. In Docker Compose, for example:
version: '3.8'
services:
  db:
    image: postgres
    ports:
      - '127.0.0.1:11111:5432'
The second port number must be the ordinary PostgreSQL port 5432; the first port number can be anything that doesn't conflict; the 127.0.0.1 setting makes the published port only accessible on the local system.
Then, when you connect to the remote system, you can use ssh to set up a port forward:
ssh -L 22222:localhost:11111 -N me@remote.example.com
ssh -L sets up a port forward from your local system to the remote system; ssh -N says to not run a command, just do the port forward.
Now on your local system, you can run psql and other similar client tools. Locally, localhost:22222 connects to the ssh tunnel; that forwards to localhost:11111 on the remote system; and that forwards to port 5432 in the container.
pg_dump --host=localhost --port=22222 --data-only --table=some_table some_db > export.sql
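If you want to do this in one shot from a script, a minimal sketch under the same assumptions (port 11111 published on the remote host, local port 22222 free) backgrounds the tunnel with -f, runs the dump, then tears the tunnel down:
ssh -f -N -L 22222:localhost:11111 me@remote.example.com    # tunnel keeps running in the background
pg_dump --host=localhost --port=22222 --data-only --table=some_table some_db > export.sql
pkill -f 'ssh -f -N -L 22222:localhost:11111'               # stop the background tunnel again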
If you have the option of directly connecting to the database host, you could remove 127.0.0.1 from the ports: setting, and then pg_dump --host=remote.example.com --port=11111, without the ssh tunnel. (But I'm guessing it's there for a reason.)
You could forward socket connections over ssh then connect to the container from your host if you have docker installed:
ssh -n -N -T -L ${PWD}/docker.sock:/var/run/docker.sock user@host &
docker -H unix://${PWD}/docker.sock exec ...
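As a hedged illustration of that approach (user@host and the <DOCKERNAME> placeholder are taken from the question; this assumes the remote user can reach /var/run/docker.sock without sudo), the forwarded socket lets you run the dump from your local machine and redirect the output locally:
ssh -n -N -T -L ${PWD}/docker.sock:/var/run/docker.sock user@host &    # give the forward a moment to come up
docker -H unix://${PWD}/docker.sock exec <DOCKERNAME> pg_dump --data-only --table=some_table some_db > export.sql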

Simultaneous Docker and VirtualBox

Is there any way to run Docker alongside VirtualBox or VMware Workstation? As I understand it, the Docker installer on Windows requires Hyper-V, which needs to be disabled for VirtualBox or Workstation.
Install the latest Docker for Windows and VirtualBox. When Docker for Windows asks about enabling Hyper-V, cancel it; VirtualBox doesn't work with Hyper-V enabled, but you can use docker-machine with the VirtualBox driver to run Docker containers. I had some errors starting the Docker machine in VirtualBox, so here's the full guide.
Go to Command Prompt and issue
docker-machine create -d virtualbox --virtualbox-ui-type "gui" default
You can watch boot2docker booting while docker-machine still doesn't see the IP. After 5 minutes it will time out with the error
"Error creating machine: Error in driver during machine creation: Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded".
The next command is docker-machine stop default.
Download and start Process Explorer; we will try to see the command that produces the error.
Issue docker-machine start default in the console while Process Explorer is open. Quickly search for the docker-machine process and click the ssh.exe child it spawned.
Then right-click it, select Properties, and copy the Command line text (hint: HOME, SHIFT+END, CTRL+C).
Mine looked like:
C:\WINDOWS\System32\OpenSSH\ssh.exe -F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none docker@127.0.0.1 -o IdentitiesOnly=yes -i C:\Users\zvelj\.docker\machine\machines\default\id_rsa -p 55011 "exit 0"
Add the verbose flag -v before "exit 0" so we can see what the error is. In my case it was "WARNING: UNPROTECTED PRIVATE KEY FILE!".
To fix that, find the private key file, right-click it, select Properties, go to the Security tab in the dialog, click the Advanced button, change the owner to your account, disable inheritance (remove all inherited entries), and add a new permission entry for your account with full control.
Now issue docker-machine stop default and docker-machine start default. Then just follow the instructions in the console.
docker-machine regenerate-certs default
@FOR /f "tokens=*" %i IN ('docker-machine env') DO @%i
To verify run docker-machine ls and look for * in ACTIVE column.
To change it back to headless mode you need to change a line in "C:\Users\{{YOUR USERNAME}}\.docker\machine\machines\default\config.json":
"UIType": "gui", to "UIType": "headless",
You will have to set the shell variables with
@FOR /f "tokens=*" %i IN ('docker-machine env') DO @%i
to enable docker commands in new consoles. It's probably best to create a bash script which does that automatically and offers menu choices for starting/stopping containers.
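A minimal sketch of such a helper script (an assumption on my part, not part of the original answer: docker-machine is on your PATH, the machine is named "default", and you run this from a bash shell such as Git Bash; source it with ". ./dm-env.sh" so the exported variables persist in your shell):
#!/usr/bin/env bash
# dm-env.sh - start the "default" docker-machine VM and point this shell at it
docker-machine start default 2>/dev/null || true   # ignore the error if it is already running
eval "$(docker-machine env default)"               # exports DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH
docker-machine ls                                  # the active machine shows * in the ACTIVE column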
I think the only solution might be to use Docker Toolbox (which uses VirtualBox) instead of Docker for Windows ...
https://github.com/docker/for-win/issues/6
Quote from this issue:
I'm closing this issue. While we understand the background for the request, we currently have no concrete plans to offer other virtualization backends for Docker for Windows for the reasons outlined above. We continue having Toolbox and docker-machine updates for non-Hyper-V users.

SSH tunneling to remote server with docker

I am trying to write a Dockerfile to access a remote MySQL database using SSH tunneling.
I tried with the following RUN command:
ssh -f -N username@hostname -L [local port]:[database host]:[remote port] StrictHostKeyChecking=no
and I am getting this error:
"Host key verification failed" ERROR
Assuming that the Docker container does not have access to any SSH data (i.e.: there is no ~/.ssh/known_hosts), you have two ways to handle this:
Use ssh-keyscan -t rsa server.example.com > ~/.ssh/my_known_hosts from within the container to add the remote host
Or copy the relevant line from an existing my_known_hosts, or simply COPY the whole file into the container.
Either of these approaches should do it.
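A minimal sketch of the first option as plain shell commands (server.example.com, username, and the MySQL ports are placeholders; this writes to the default known_hosts so no extra UserKnownHostsFile option is needed):
mkdir -p ~/.ssh
ssh-keyscan -t rsa server.example.com >> ~/.ssh/known_hosts          # trust the remote host key up front
ssh -f -N -L 3306:database-host:3306 username@server.example.com     # then the tunnel opens without prompting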

How to run docker-compose on remote host?

I have a compose file locally. How do I run the bundle of containers on a remote host, like docker-compose up -d with DOCKER_HOST=<some ip>?
After the release of Docker 18.09.0 and the (as of now) upcoming docker-compose v1.23.1 release, this will get a whole lot easier. The mentioned Docker release added support for the ssh protocol to the DOCKER_HOST environment variable and the -H argument to docker commands, respectively. The next docker-compose release will incorporate this feature as well.
First of all, you'll need SSH access to the target machine (which you'll probably need with any approach).
Then, either:
# Re-direct to remote environment.
export DOCKER_HOST="ssh://my-user@remote-host"
# Run your docker-compose commands.
docker-compose pull
docker-compose down
docker-compose up
# All docker-compose commands here will be run on remote-host.
# Switch back to your local environment.
unset DOCKER_HOST
Or, if you prefer, all in one go for one command only:
docker-compose -H "ssh://my-user@remote-host" up
One great thing about this is that all your local environment variables that you might use in your docker-compose.yml file for configuration are available without having to transfer them over to remote-host in some way.
If you don't need to run the Docker containers on your local machine but on the remote machine instead, you can change this in your Docker settings.
On the local machine:
You can control the remote host with the -H parameter:
docker -H tcp://remote:2375 pull ubuntu
To use it with docker-compose, you should add this parameter in /etc/default/docker.
On the remote machine:
You should make the Docker daemon listen on an external address, not only on the Unix socket.
See Bind Docker to another host/port or a Unix socket for more details.
If you need to run containers on multiple remote hosts, you should configure Docker Swarm.
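For illustration, a hedged sketch of both sides for a single remote host (remote and port 2375 are the example values above; 2375 is unencrypted, so only expose it on a trusted network):
# On the remote machine: make the daemon listen on TCP in addition to the Unix socket
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
# On the local machine: point the client or docker-compose at the remote daemon
docker -H tcp://remote:2375 ps
DOCKER_HOST=tcp://remote:2375 docker-compose up -d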
You can now use docker contexts for this:
docker context create dev --docker "host=ssh://user@remotemachine"
docker-compose --context dev up -d
More info here: https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/
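If you prefer, you can also make that context the default for the client instead of passing --context every time, for example:
docker context use dev        # subsequent docker commands target the remote host over SSH
docker ps                     # runs against remotemachine
docker context use default    # switch back to the local daemon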
From the compose documentation
Compose CLI environment variables
DOCKER_HOST
Sets the URL of the docker daemon. As with the Docker client, defaults to unix:///var/run/docker.sock.
so that we can do
export DOCKER_HOST=tcp://192.168.1.2:2375
docker-compose up
Yet another possibility I discovered recently is controlling a remote Docker Unix socket via an SSH tunnel (credits to https://medium.com/@dperny/forwarding-the-docker-socket-over-ssh-e6567cfab160 where I learned about this approach).
Prerequisite
You are able to SSH into the target machine. Passwordless, key-based access is preferred for security and convenience; you can learn how to set this up e.g. here: https://askubuntu.com/questions/46930/how-can-i-set-up-password-less-ssh-login
Besides, some sources mention that forwarding Unix sockets via SSH tunnels is only available starting from OpenSSH v6.7 (run ssh -V to check); I did not try this on older versions, though.
SSH Tunnel
Now, create a new SSH tunnel between a local location and the Docker Unix socket on the remote machine:
ssh -nNT -L $(pwd)/docker.sock:/var/run/docker.sock user@someremote
Alternatively, it is also possible to bind to a local port instead of a file location. Make sure the port is open for connections and not already in use.
ssh -nNT -L localhost:2377:/var/run/docker.sock user@someremote
Re-direct Docker Client
Leave the terminal open and open a second one. In there, make your Docker client talk to the newly created tunnel-socket instead of your local Unix Docker socket.
If you bound to a file location:
export DOCKER_HOST=unix://$(pwd)/docker.sock
If you bound to a local port (example port as used above):
export DOCKER_HOST=localhost:2377
Now, run some Docker commands like docker ps or start a container, pull an image etc. Everything will happen on the remote machine as long as the SSH tunnel is active. In order to run local Docker commands again:
Close the tunnel by hitting Ctrl+C in the first terminal.
If you bound to a file location: Remove the temporary tunnel socket again. Otherwise you will not be able to open the same one again later: rm -f "$(pwd)"/docker.sock
Make your Docker client talk to your local Unix socket again (which is the default if unset): unset DOCKER_HOST
The great thing about this is that you save the hassle of copying docker-compose.yml files and other resources around or setting environment variables on a remote machine (which is difficult).
Non-interactive SSH Tunnel
If you want to use this in a scripting context where an interactive terminal is not possible, there is a way to open and close the SSH tunnel in the background using the SSH ControlMaster and ControlPath options:
# constants
TEMP_DIR="$(mktemp -d -t someprefix_XXXXXX)"
REMOTE_USER=some_user
REMOTE_HOST=some.host
control_socket="${TEMP_DIR}"/control.sock
local_temp_docker_socket="${TEMP_DIR}"/docker.sock
remote_docker_socket="/var/run/docker.sock"
# open the SSH tunnel in the background - this will not fork
# into the background before the tunnel is established and fail otherwise
ssh -f -n -M -N -T \
-o ExitOnForwardFailure=yes \
-S "${control_socket}" \
-L "${local_temp_docker_socket}":"${remote_docker_socket}" \
"${REMOTE_USER}"#"${REMOTE_HOST}"
# re-direct local Docker engine to the remote socket
export DOCKER_HOST="unix://${local_temp_docker_socket}"
# do some business on remote host
docker ps -a
# close the tunnel and clean up
ssh -S "${control_socket}" -O exit "${REMOTE_HOST}"
rm -f "${local_temp_docker_socket}" "${control_socket}"
unset DOCKER_HOST
# do business on localhost again
Given that you are able to log in on the remote machine, another approach to running docker-compose commands on that machine is to use SSH.
Copy your docker-compose.yml file over to the remote host via scp, run the docker-compose commands over SSH, finally clean up by removing the file again.
This could look as follows:
scp ./docker-compose.yml SomeUser@RemoteHost:/tmp/docker-compose.yml
ssh SomeUser@RemoteHost "docker-compose -f /tmp/docker-compose.yml up"
ssh SomeUser@RemoteHost "rm -f /tmp/docker-compose.yml"
You could even make it shorter and omit the sending and removing of the docker-compose.yml file by using the -f - option to docker-compose which will expect the docker-compose.yml file to be piped from stdin. Just pipe its content to the SSH command:
cat docker-compose.yml | ssh SomeUser@RemoteHost "docker-compose -f - up"
If you use environment variable substitution in your docker-compose.yml file, the above-mentioned command will not replace them with your local values on the remote host and your commands might fail due to the variables being unset. To overcome this, the envsubst utility can be used to replace the variables with your local values in memory before piping the content to the SSH command:
envsubst < docker-compose.yml | ssh SomeUser@RemoteHost "docker-compose up"

ssh fails when is being called within windows service

I'm calling a cmd file that calls ssh to communicate with a Linux machine. I use the .NET Process class to accomplish this. But when it is called within a Windows service, the call fails with the following error:
C:\test>ssh -o StrictHostKeyChecking=no -i private_linux_key user@host "ls"
0 [main] ssh 9496 fhandler_base::dup: dup(some disk file) failed, handle 0, Win32 error 6
dup() in/out/err failed
Everything works when I start the application as a console application.
What might be the possible reason for this failure, and how can I fix it?
EDIT: All the Windows service has to do is somehow kill a predefined daemon on the Linux machine.
Thanks
EDIT
A similar problem is described here: http://www.velocityreviews.com/forums/t714254-executing-commands-from-windows-service.html
Maybe this post will save someone the time of struggling with a similar problem. I've finally found a solution that works for me: the ssh -n flag.
So instead of
ssh -o StrictHostKeyChecking=no -i private_linux_key user@host "ls"
I've used
ssh -n -o StrictHostKeyChecking=no -i private_linux_key user@host "ls"
It still looks like magic, but it works! (Presumably because -n redirects ssh's stdin from /dev/null, so ssh no longer needs the console handles that aren't available inside a Windows service.)
Isn't it a problem of access credentials?
When running your program as a console application, you are using the access rights of the currently logged-on user. However, a Windows service executes under a special user account (generally "SYSTEM"), and as such is not granted the same access rights.
