I'm trying to create a docker machine with the CloudSigma provider, and when I connect and run "docker ps" it shows this error:
could not read CA certificate "/root/.docker/machine/machines/default/ca.pem": open /root/.docker/machine/machines/default/ca.pem: permission denied
Any solution???
Thank you.
I have a problem connecting to my remote (DigitalOcean) docker engine. What I've done:
Made a droplet with Docker 19.03.12 on Ubuntu 20.04.
Made a new user myuser and added it to the docker group on the remote host.
Created .ssh/authorized_keys in the new user's home directory and set the permissions, owner, etc.
Restarted both ssh and docker services.
Result
I can ssh from my Mac notebook to my remote host as myuser. (When I run ssh, the keychain asks for the passphrase for the id_rsa key.)
After I logged in to remote host via ssh I can run docker ps, docker info without any problem.
Problem
Before making a new context for the remote engine, I tried to run some docker commands from my local client on my Mac laptop. (The interesting part for me is that none of the commands below asks for the id_rsa passphrase.)
docker -H ssh://myuser@droplet_ip ps -> Error
DOCKER_HOST=ssh://myuser@droplet_ip docker ps -> Error
Error
docker -H ssh://myuser@droplet_ip ps
error during connect: Get http://docker/v1.40/containers/json: command [ssh -l myuser -- droplet_ip docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=myuser@droplet_ip: Permission denied (publickey).
What step I missed? How can I connect to a remote docker engine?
It sounds like Docker may not allow ssh to prompt for a key passphrase when connecting. The easiest solution is probably to load your key into an ssh-agent, so that Docker will be able to use the key without requesting a password.
If you want to add your default key (~/.ssh/id_rsa) you can just run:
ssh-add
You can add specific keys by providing a path to the key:
ssh-add ~/.ssh/id_rsa_special
Most modern desktop environments run an ssh-agent process by default.
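For example, on a Mac you could load the key into the agent and retry (droplet_ip is the same placeholder as in the question; the first line is only needed if no agent is running yet):
eval "$(ssh-agent -s)"                  # start an agent if one isn't already running
ssh-add ~/.ssh/id_rsa                   # prompts for the passphrase once, then caches the key
ssh-add -l                              # confirm the key is loaded
docker -H ssh://myuser@droplet_ip ps    # should now connect without a passphrase prompt
Once that works, a named context saves retyping the host (the context name "droplet" here is just an example):
docker context create droplet --docker "host=ssh://myuser@droplet_ip"
docker context use droplet
docker ps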
I have been working on setting up a docker notary on a CentOS 8 machine. I followed the README.md for the notary project, which tells me to use the testing certificate the project comes with by moving it to the .notary folder in my home directory. My hope is that, once my docker client is set up for it and I tag the image properly, a docker push to my private docker repo (JFrog Artifactory) will result in a published image that is signed by the notary.
My private repo is running on its own machine and not on the machine where the notary server is running.
But every time I go for the push I get this error:
Signing and pushing trust metadata
Error: error contacting notary server: x509: certificate signed by unknown authority
One of the ways I tried to fix this was by copying the test certificate fixtures/root-ca.crt to /etc/pki/ca-trust/source/anchors/, after which I ran update-ca-trust:
$ sudo cp fixtures/root-ca.crt /etc/pki/ca-trust/source/anchors/
$ update-ca-trust
But doing this also didn't help. Why is the notary server throwing this error? Help to resolve this would be greatly appreciated.
With docker content trust, you can add the CA to the user's home directory in a subdirectory under ~/.docker/tls:
mkdir -p ~/.docker/tls/${content_trust_hostname}
cp ca.pem ~/.docker/tls/${content_trust_hostname}/ca.crt
export DOCKER_CONTENT_TRUST=1
docker push ${content_trust_hostname}/${your_repo}:${tag}
Note that the certificate likely needs to end with "crt" and if you don't override the content trust server, the hostname will match the registry name.
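As a concrete sketch, assuming the registry (and therefore the default content trust endpoint) is artifactory.example.com (a placeholder hostname) and using the test CA from the question:
mkdir -p ~/.docker/tls/artifactory.example.com
cp fixtures/root-ca.crt ~/.docker/tls/artifactory.example.com/ca.crt
export DOCKER_CONTENT_TRUST=1
docker push artifactory.example.com/myrepo/myimage:1.0    # repo and tag are placeholders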
I haven’t had issues working on Azure container registry.
Working with the JFrog registry, I had the same error. Your workaround helped me:
$ sudo cp fixtures/root-ca.crt /etc/pki/ca-trust/source/anchors/
$ update-ca-trust
If it helps I can post my steps.
Thanks @RijoSimon
notary server: x509: certificate is valid for 127.0.0.1, not xx.xx.xx.xx(notaryIP)
This error occurs because the certificate delivered with the notary server is only valid for notary-server, notaryserver, and localhost. To make it work with your remote domain, you have to get a certificate that is valid for your IP/domain.
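If you want to check exactly which names and IPs a certificate covers, you can inspect its subject alternative names, for example (the notary address is the placeholder from the error above):
openssl s_client -connect xx.xx.xx.xx:4443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'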
Rijo, my solution is not complete, because this doesn't work on a remote server; I'm facing an error:
Error: error contacting notary server: x509: certificate is valid for 127.0.0.1, not xx.xx.xx.xx(notaryIP)
Here is my solution, where I was able to sign the image locally on the notary server and push it:
docker login artifactoryurl
username:
password:
Login successful
docker trust key generate keyname
export DOCKER_CONTENT_TRUST=0
docker build -f Dockerfile -t artifactoryurl/reponame:tag .
export DOCKER_CONTENT_TRUST_SERVER=http://127.0.0.1:4443
export DOCKER_CONTENT_TRUST=1
docker trust signer add --key keyname.pub name artifactoryurl/repo
docker trust sign artifactoryurl/reponame:tag
docker inspect artifactoryurl/reponame:tag
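To check the signature data afterwards, docker trust inspect can be used as well (same placeholder names as above):
docker trust inspect --pretty artifactoryurl/reponame:tag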
Hope it helps 😊
I am trying to deploy a docker registry on my server to manage our images.
I have created it with TLS authentication.
When I run the command docker login -u username [registry_domain]:[port] on localhost, docker login is successful.
When I'm running the same command from another machine I get:
Error response from daemon: Get [registry_domain]:[port] x509: certificate signed by unknown authority
I have added the file /etc/docker/daemon.json with the following line, but it only solved the problem for localhost:
{ "insecure-registries": ["registry:8443"] }
When checking the logs for the registry I can see the error:
http: TLS handshake error from [remoteComputerIp]: remote error: tls: bad certificate
Has anyone encountered this situation, or could maybe point me in the right direction? I can't seem to find a solution for this.
Found the solution.
In order to get the remote machines to be able to log in to my registry, I had to copy the client.crt I generated when creating the registry onto the default machine I was connecting from. This is because I was signing the certificates myself.
You can ssh into it by using docker-machine ssh [name of the machine] (in my case name was "default")
You copy the certificate to /etc/docker/certs.d/<registry-domain>:<port>/ca.crt
No need to restart anything. Once it's working, you can easily test it with the command docker login -u username <registry-domain>:<port>
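Roughly, the steps look like this (registry.example.com:8443 and the machine name "default" are placeholders; ca.crt is the certificate you generated for the registry):
docker-machine scp ca.crt default:/tmp/ca.crt
docker-machine ssh default "sudo mkdir -p /etc/docker/certs.d/registry.example.com:8443 && sudo mv /tmp/ca.crt /etc/docker/certs.d/registry.example.com:8443/ca.crt"
docker-machine ssh default docker login -u username registry.example.com:8443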
Try regenerating certificates:
docker-machine regenerate-certs machine-name
From: https://forums.docker.com/t/docker-private-registry-x509-certificate-signed-by-unknown-authority/21262/3
I am trying to set up remote host configuration for docker. After setting up certificates, I ran the dockerd command, which gives this error:
dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376
>>> unable to configure the Docker daemon with file /etc/docker/daemon.json: open /etc/docker/daemon.json: permission denied
I am running as a non-root user and I've already added my user to the docker group. The Docker version I am using is:
Docker version 17.12.0-ce, build c97c6d6
I have tried the following, but I'm still getting the same error:
1. Made the /etc/docker/daemon.json file contain just {}.
2. Removed /etc/docker/daemon.json entirely.
3. Changed the ownership, but the same issue persists.
The permissions of daemon.json were: -rw-r--r--
The dockerd daemon must be run as root. It is creating networking namespaces, mounting filesystems, and other tasks that cannot be done with a user account. You'll need to run these commands with something like sudo.
The docker socket (/var/run/docker.sock) is configured to allow users in the docker group to access the API through the docker client. That covers the client, not the daemon, so you can't get away with running the daemon as a user.
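So the daemon command from the question would typically be run with sudo (or via the system service), and the client then connects over TLS with its own certificates; roughly (daemon_host and the client cert/key file names are placeholders):
sudo dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=daemon_host:2376 ps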