Docker connect to remote daemon via ssh - Permission denied (publickey) - docker

I have a problem connecting to my remote (DigitalOcean) Docker engine. Here is what I've done:
Made a droplet with Docker 19.03.12 on Ubuntu 20.04.
Made a new user myuser and added it to the docker group on the remote host.
Made a .ssh/authorized_keys in the new user's home directory and set the permissions, owner, etc.
Restarted both ssh and docker services.
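Roughly, the commands run on the droplet for the user and key setup (a sketch; id_rsa.pub stands for the public key copied over from the laptop):
adduser myuser
usermod -aG docker myuser                        # add to the docker group
mkdir -p /home/myuser/.ssh
cat id_rsa.pub >> /home/myuser/.ssh/authorized_keys
chown -R myuser:myuser /home/myuser/.ssh         # owner
chmod 700 /home/myuser/.ssh && chmod 600 /home/myuser/.ssh/authorized_keys   # permissions
systemctl restart ssh docker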
Result
I can ssh from my Mac notebook to my remote host as myuser. (When I run ssh, the keychain asks for the passphrase for the id_rsa key.)
After logging in to the remote host via ssh I can run docker ps and docker info without any problem.
Problem
Before making a new context for the remote engine, I tried to run some docker commands from my local client on my Mac laptop. (The interesting part for me is that none of the commands below asks for the id_rsa passphrase.)
docker -H ssh://myuser@droplet_ip ps -> Error
DOCKER_HOST=ssh://myuser@droplet_ip docker ps -> Error
Error
docker -H ssh://myuser@droplet_ip ps
error during connect: Get http://docker/v1.40/containers/json: command [ssh -l myuser -- droplet_ip docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=myuser@droplet_ip: Permission denied (publickey).
What step did I miss? How can I connect to a remote Docker engine?

It sounds like Docker may not allow ssh to prompt for a key passphrase when connecting. The easiest solution is probably to load your key into an ssh-agent, so that Docker will be able to use the key without requesting a password.
If you want to add your default key (~/.ssh/id_rsa) you can just run:
ssh-add
You can add specific keys by providing a path to the key:
ssh-add ~/.ssh/id_rsa_special
Most modern desktop environments run an ssh-agent process by default.
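Putting it together with the commands from the question above (droplet_ip remains a placeholder), a minimal sketch:
eval "$(ssh-agent -s)"             # start an agent if one isn't already running
ssh-add ~/.ssh/id_rsa              # prompts for the passphrase once and caches the key
docker -H ssh://myuser@droplet_ip ps
Once the key is loaded into the agent, the ssh call that the docker CLI makes internally can authenticate without ever needing to prompt.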

Related

Getting 'Host key verification failed' when using ssh in docker context

I am setting up a docker context as described here and configured the SSH key and the context. Unfortunately I keep getting an error from docker while I'm in the new context:
docker context use myhostcontext
docker ps
error during connect: Get "http://docker.example.com/v1.24/containers/json": command [ssh -l user -- myhost docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory
Host key verification failed.
Surprisingly, when I ssh into user@myhost the connection is established as it should be.
ssh -vv user@myhost shows that it uses the key given in ~/.ssh/config
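A quick cross-check is to run by hand the exact command the docker CLI uses, taken verbatim from the error message above (user and myhost are this question's placeholders):
ssh -l user -- myhost docker system dial-stdio
If that works interactively but not through Docker, the usual difference is that Docker's ssh call has no terminal available to answer the host key prompt, which would match the ssh_askpass error shown above.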
Additional Info:
Platform: Ubuntu 20.04
Docker: 20.10.23
OpenSSH_8.2p1 Ubuntu-4ubuntu0.5, OpenSSL 1.1.1f 31 Mar 2020
Here is what I've done:
I've created a docker context with
docker context create myhostcontext --docker "host=ssh://user@myhost"
I also created a new ssh keypair with ssh-keygen (tried with rsa and ecdsa),
executed ssh-add /path/to/key and ssh-copy-id -i /path/to/key user@myhost.
I tried using "id_rsa" as the key name as well as "myhost" to make sure it's not just a default naming problem.
Looking at several instructions (e.g. This question) unfortunately did not help. I also checked authorized_keys on the remote host against the public key on my local machine; they match.
My ~/.ssh/config looks like this:
Host myhost
    HostName myhost
    User user
    StrictHostKeyChecking no
    IdentityFile ~/.ssh/myhost
Also, removing entries from known_hosts did not help.
Using the remote host's IP instead of its name did not help either.
Installing ssh-askpass just shows me that the authenticity of the host could not be established (the default message when using ssh on a host for the first time). Since I later want to use the docker context in a CI/CD environment, I don't want any non-CLI stuff.
The only other possible "issue" that comes to mind is that the user on the remote host is different from the one I am using on the client. But, if I understood correctly, that should not be an issue, and I would not know how to handle it anyway.
Any help or suggestion is highly appreciated, since I have been struggling with this for days.
Thanks in advance :)

Docker in Docker unable to push

I'm trying to execute docker commands inside a Docker container (don't ask why). To do so I start up a container by running:
sudo docker run -v /var/run/docker.sock:/var/run/docker.sock -it my_docker_image
I am able to run all of the docker commands (pull, login, images, etc.) but when I try to push to my remote (GitLab) registry I get denied access. Yes, I did do a docker login and was able to log in successfully.
When looking at the GitLab logs I see an error telling me that no access token was sent with the push. After I do a docker login I see a /root/.docker/config.json with the remote URL and a string of random characters (my credentials in base64, I believe). I'm using an access token as my password because I have MFA enabled on my GitLab server.
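Roughly the sequence run inside the container (registry, group, and project names are placeholders here; the token is the GitLab personal access token mentioned above):
docker login registry.gitlab.example.com -u myuser -p <access-token>
docker tag my_docker_image registry.gitlab.example.com/mygroup/myproject:latest
docker push registry.gitlab.example.com/mygroup/myproject:latest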
Appreciate the help!
I ended up resolving the issue by using docker:stable as my runner image. Not quite sure what the problem was with the centos:centos7 image.

Docker loading local images permission denied while processing tar file

I have access to a server by ssh with Docker version 1.13.1, and I'm just trying to load a local image using docker load -i, but I'm receiving this error message:
docker load -i docker.img
Error processing tar file(exit status 1): permission denied
And by the way:
docker image import docker.img
Error response from daemon: ApplyLayer exit status 1 stdout: stderr: permission denied
The img file has all the permissions:
> ls -l
> -rwxrwxrwx 1 myuser myuser 9278464 Mar 22 19:12 docker.img*
And docker seems to work right:
> docker images
> REPOSITORY TAG IMAGE ID CREATED SIZE
The image works perfectly fine on my local machine...
Any idea what could be happening here? The host is running Ubuntu 16.04; I've been looking for an answer for about an hour...
=======
I figured it out: the problem was that I was accessing a Proxmox container that was not fully virtualized, so Docker required kernel capabilities that I did not have. I searched for the correct Proxmox configuration and solved the issue.
Which user are you trying to execute docker commands as: a standard account or the root user?
Note that the docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The docker daemon always runs as the root user.
If you don’t want to use sudo when you use the docker command, create a Unix group called docker and add users to it. When the docker daemon starts, it makes the ownership of the Unix socket read/writable by the docker group.
First, check if docker group exists on your server. If not, then add it
$ sudo groupadd docker
Add your user to the docker group.
$ sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated.
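A quick way to verify after logging back in (a sketch using the default socket path):
ls -l /var/run/docker.sock       # should show group ownership by "docker"
groups                           # your user should now be listed in "docker"
docker ps                        # should work without sudo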

Docker permission denied for open /etc/docker/daemon.json: permission denied

I am trying to set up remote host configuration for Docker. After setting up certificates I ran the dockerd command, which gives this error:
dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376
>>> unable to configure the Docker daemon with file /etc/docker/daemon.json: open /etc/docker/daemon.json: permission denied
I am running as a non-root user and I've already added my user to the docker group. The Docker version I am using is:
Docker version 17.12.0-ce, build c97c6d6
I have tried the following but am still getting the same error:
1. The /etc/docker/daemon.json file contains only {}.
2. I also removed /etc/docker/daemon.json entirely.
3. I also changed the ownership, but it's the same issue.
Permissions of daemon.json were: -rw-r--r--
The dockerd daemon must be run as root. It creates network namespaces, mounts filesystems, and performs other tasks that cannot be done from a regular user account. You'll need to run these commands with something like sudo.
The docker socket (/var/run/docker.sock) is what is configured to allow users in the docker group to access the API through the docker client. That applies to the client, not the daemon, so you can't get away with running the daemon as a non-root user.
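Applied to the command from the question, that just means prefixing it with sudo (same flags and .pem paths as above):
sudo dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376
In practice the daemon is more often managed through the init system (e.g. sudo systemctl restart docker, with the TLS options placed in daemon.json or a systemd drop-in) than started by hand.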

Unable to ssh to master node in mesos local cluster installed system

I am a newbie to Mesos. I have installed a DCOS cluster locally on one system (CentOS 7).
Everything came up properly and I am able to access the DCOS GUI, but when I try to connect through the CLI, it asks me for a password.
I was not prompted for any kind of password during the local installation through Vagrant.
But when I issue the following command:
[root@blade7 dcos-vagrant]# dcos node ssh --master-proxy --leader
Running `ssh -A -t core@192.168.65.90 ssh -A -t core@192.168.65.90 `
core@192.168.65.90's password:
Permission denied, please try again.
core@192.168.65.90's password:
I don't know what password to give.
Kindly help me resolve this issue.
Since the local installation is based on Vagrant, you can use the following convenient workaround: log directly into the virtual machines by using Vagrant's ssh.
open a terminal and enter vagrant global-status to see a list of all running vagrant environments (name/id)
switch into your dcos installation directory (e.g., cd ~/dcos-vagrant), which contains the file Vagrantfile
run vagrant ssh <name or (partial) id> in order to ssh into the virtual machine. For example, vagrant ssh m1 connects to the master/leader node, which gives you essentially the same shell as dcos node ssh --master-proxy --leader would do.
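Putting those three steps together (m1 is the master VM name from the example above; check the output of vagrant global-status for the actual names):
vagrant global-status            # list running vagrant environments
cd ~/dcos-vagrant                # the directory containing the Vagrantfile
vagrant ssh m1                   # opens a shell on the master/leader node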
Two more tips:
within the virtual machine, the directory /vagrant is mounted to the current directory of the host machine, which is nice for transferring files into/from the VM
you may try to find out the correct ssh credentials of the default vagrant user and then add these (rather than the pem file retrieved from a cloud service provider) via ssh-add on your host machine. This should give you the ability to log in via dcos node ssh --master-proxy --leader --user=vagrant without a password
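If you go the ssh-add route, it would look roughly like this (the key path below assumes Vagrant's default insecure key and may differ for your boxes):
ssh-add ~/.vagrant.d/insecure_private_key    # path is an assumption; use your boxes' actual key
dcos node ssh --master-proxy --leader --user=vagrant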
The command shows that you are trying to log in to the server using the user ID "core". If you do not know the password for user "core", I suggest resetting the "core" user's password and trying again.
