I am testing Docker on a Windows 7 machine by installing Docker Toolbox.
It works for a while, but then its IP changed from 192.168.99.100 to 192.168.99.101.
When I run $ docker image ls, it tells me that my certs are signed by an unknown authority.
I resolved this by running $ docker-machine regenerate-certs and then copying those files from .docker/machine/machines/default to .docker/machine/certs. It recovers.
But the problem is, the IP changes every now and then, and I have to go through this process all over again.
Is there a way to configure things to keep this from happening?
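One workaround I have seen (a hedged sketch, assuming the boot2docker-based default machine on the usual 192.168.99.x host-only network; this is a community trick, not an official feature): pin the VM's IP with a boot-time script so the certs stay valid across restarts:

# Write a boot script inside the VM that kills the DHCP client and fixes eth1
docker-machine ssh default "sudo tee /var/lib/boot2docker/bootsync.sh" <<'EOF'
kill $(cat /var/run/udhcpc.eth1.pid)
ifconfig eth1 192.168.99.100 netmask 255.255.255.0 broadcast 192.168.99.255 up
EOF
docker-machine restart default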
Related
I can see my machine … Windows 10 Home
usuario@DESKTOP-GTCQCAR MINGW64 /c/Program Files/Docker Toolbox
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
default   -        virtualbox   Running   tcp://192.168.99.101:2376           v18.05.0-ce
But when I try to list the images, it tries to connect to a different IP ending in .100 instead of .101, where the docker machine actually is:
usuario@DESKTOP-GTCQCAR ~
$ docker image ls
error during connect: Get https://192.168.99.100:2376/v1.37/images/json: dial tcp 192.168.99.100:2376: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
It cannot connect. How can I fix it?
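A hedged guess at the cause: the shell's DOCKER_HOST still points at the machine's old IP. Re-exporting the machine's environment usually realigns the client and the VM:

# Show the environment docker-machine wants the client to use
docker-machine env default
# Apply it to the current MINGW64/Git Bash session, then retry
eval $(docker-machine env default)
docker image ls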
I also faced a similar problem after updating from Docker Toolbox to Docker for Windows.
I solved it by deleting all the environment variables starting with DOCKER.
I am not sure if it will solve your problem as well, but maybe it will help someone.
This can also be helpful.
The issue can come from having Docker Toolbox installed before changing to Docker for Windows:
Uninstall Docker for Windows (make sure Docker Toolbox and VirtualBox are uninstalled as well).
Go to the C:\Users\[USER] directory and remove the .docker directory if it is there.
Remove these environment variables:
DOCKER_TLS_VERIFY
DOCKER_CERT_PATH
DOCKER_HOST
DOCKER_TOOLBOX_INSTALL_PATH
You might want to restart your computer just to be safe.
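For the current shell session, a minimal sketch (assuming MINGW64/Git Bash; the permanent Windows variables still have to be removed via System Properties as described above):

# Clear the Docker-related variables in this session only
unset DOCKER_TLS_VERIFY DOCKER_CERT_PATH DOCKER_HOST DOCKER_TOOLBOX_INSTALL_PATH
# Verify none remain
env | grep '^DOCKER' || echo "no DOCKER_* variables set"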
Reference: https://forums.docker.com/t/docker-starts-but-trying-to-do-anything-results-in-error-during-connect/49007/5
Check out this great guide: https://docs.docker.com/toolbox/faqs/troubleshoot/
Good luck
I created a docker stack to deploy to a swarm. Now I'm a bit confused about what the proper way to deploy it to a real server looks like.
Of course I can
scp my docker-stack.yml file to a node of my swarm
ssh into the node
run docker stack deploy -c docker-stack.yml stackname
So I thought of the docker-machine tool.
I tried
docker-machine create -d none --url=tcp://<RemoteHostIp>:2375 node1
which only seems to work if you open the port without TLS.
I received the following:
$ docker-machine env node1
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.178.49:2375": dial tcp 192.168.178.49:2375: connectex: No connection could be made because the target machine actively refused it.
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
I already tried to generate a certificate & copy it over to the host:
ssh-keygen -t rsa
ssh-copy-id myuser@node1
Then I ran
docker-machine --tls-ca-cert PathToMyCert --tls-client-cert PathToMyCert create -d none --url=tcp://192.168.178.49:2375 node1
With the following result:
$ docker-machine env node1
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "node1:2375": There was an error reading certificate
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
I also tried it with the generic driver
$ docker-machine create -d generic --generic-ssh-port "22" --generic-ssh-user "MyRemoteUser" --generic-ip-address 192.168.178.49 node1
Running pre-create checks...
Creating machine...
(node1) No SSH key specified. Assuming an existing key at the default location.
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Error creating machine: Error detecting OS: OS type not recognized
How do I properly add the remote docker host with docker-machine using TLS?
Or is there a better way to deploy stacks to a server/into production?
I often read that you shouldn't expose the docker port, but never how to avoid doing so. And I can't believe that they don't provide a simple way to do this.
Update & Solution
I think both answers have their merits.
I found the official Deploy to Azure doc (it's the same for AWS).
The answer from @Tarun Lalwani pointed me in the right direction, and it's almost the official solution. That's the reason I accepted his answer.
For me the following commands worked:
ssh -fNL localhost:2374:/var/run/docker.sock myuser@node1
Then you can run either:
docker -H localhost:2374 stack deploy -c stack-compose.yml stackname
or
export DOCKER_HOST=localhost:2374
docker stack deploy -c stack-compose.yml stackname
The answer from @BMitch is also valid, and the security concern he mentioned shouldn't be ignored.
Update 2
The answer from @bretf is an awesome way to connect to your swarm, especially if you have more than one. It's still beta, but it works for swarms which are reachable from the internet and don't have an ARM architecture.
I would prefer not opening/exposing the docker port even if I were thinking of TLS. I would rather use an SSH tunnel and then do the deployment:
ssh -L 2375:127.0.0.1:2375 myuser@node1
And then use
export DOCKER_HOST=tcp://127.0.0.1:2375
docker stack deploy -c docker-stack.yml stackname
You don't need docker-machine for this. Docker documents the detailed steps to configure TLS; a condensed sketch follows the list. The steps involve:
create a CA
create and sign a server certificate
configure the dockerd daemon to use this cert and to validate client certs
create and sign a client certificate
copy the CA and client certificate files to your client machine's .docker folder
set some environment variables to use the client certificates and the remote docker host
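A condensed sketch of those steps, following Docker's "Protect the Docker daemon socket" guide ($HOST stands for your server's DNS name; the IP below is the one from the question and is only for illustration):

# 1-2: create a CA, then a server certificate signed by it
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr
echo "subjectAltName = DNS:$HOST,IP:192.168.178.49" > extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf
# 3: start the daemon with TLS verification (run this on the server)
dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376
# 4: create and sign a client certificate
openssl genrsa -out key.pem 4096
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
echo "extendedKeyUsage = clientAuth" > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile-client.cnf
# 5-6: copy ca.pem, cert.pem and key.pem to ~/.docker on the client, then
export DOCKER_HOST=tcp://$HOST:2376 DOCKER_TLS_VERIFY=1
docker stack deploy -c docker-stack.yml stackname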
I wouldn't use the ssh tunnel method in a multi-user environment, since any user with access to 127.0.0.1 would have root access to the remote docker host without a password or any auditing.
If you're using Docker for Windows or Docker for Mac, Docker Cloud has a more automated way to set up your TLS certs and get you securely connected to a remote host for free. Under "Swarms" there's "Bring your own Swarm", which runs an agent container on your Swarm managers to let you easily use your local docker cli without manual cert setup. It still requires the Swarm port to be open to the internet, but this setup ensures it has TLS mutual auth enabled.
Here's a YouTube video showing how to set it up. It can also support group permissions for adding/removing other admins from remotely accessing the Swarm.
I'm trying to run this command :
docker daemon --insecure-registry 192.168.99.100:5000
but I'm getting the following error:
exec: "dockerd": executable file not found in %PATH%
I'm using Windows 7 and docker-toolbox 1.12.2 with VirtualBox.
What is the problem here?
Is there a way to run this command?
That is indeed what issue 27102 reports:
Docker Daemon command dockerd not found on latest stable Docker for Mac and Docker Toolbox
(this is for mac but also applies on Windows)
Docker for Mac should probably print a different message; also, we may need to check if the CLI is on the same "host" as the daemon, and print a different message based on that (as running dockerd won't work if the daemon is on a remote server).
The daemon runs in a Linux virtual machine, so you do not need to (and cannot) run it manually. It is already running if the whale is in the top bar.
Conclusion (Aug. 2021):
I'm closing this ticket, because the current behaviour is as expected.
I think this was originally opened when the docker cli still had a daemon subcommand (during the transition from a single binary to separate binaries for the cli and daemon), which is no longer the case.
The dockerd binary, which is the docker daemon, is not available for macOS (and unlikely will be), because it's a Linux binary that (on Docker Desktop for Mac) runs inside the Docker Desktop VM.
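For the Docker Toolbox setup in the question, a hedged workaround is to have docker-machine pass the flag to the daemon inside the boot2docker VM rather than running dockerd yourself:

# Bake the flag into a new Toolbox VM at creation time
docker-machine create -d virtualbox --engine-insecure-registry 192.168.99.100:5000 default
# Or, on an existing VM, add --insecure-registry 192.168.99.100:5000 to EXTRA_ARGS
# in /var/lib/boot2docker/profile (via docker-machine ssh default), then restart:
docker-machine restart default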
In 2022:
I'm having this exact same issue on the most recent macOS version (Monterey, Version 12.3.1 (21E258)).
I've uninstalled Docker & reinstalled several times; if I run docker ps or docker run hello-world as paulinechi describes, I get that same error:
docker: Cannot connect to the Docker daemon at `tcp://35.215.110.128:2375`.
Is the docker daemon running?...
Answer:
Make sure you don't have a DOCKER_HOST environment variable set; from that error, it looks like you either have a DOCKER_HOST env-var or possibly a docker context that defines a non-standard location to connect to the daemon.
The default is to connect with the Engine API using a unix socket (unix:///var/run/docker.sock).
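A quick way to check both possibilities (standard docker CLI commands, no assumptions beyond a POSIX shell):

# Is DOCKER_HOST overriding the default socket?
env | grep DOCKER_HOST
# Is a non-default context selected?
docker context ls
# Reset both to the local daemon
unset DOCKER_HOST
docker context use default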
Confirmation:
I forgot I was pointing to a DOCKER_HOST on a remote machine that has since shut down.
I have installed the latest version of Docker for Windows (1.12.1-stable, build 7135) on my Windows 10 Pro 64-bit machine. I was able to successfully execute docker run hello-world. However, when I run docker run busybox, an error is thrown as below.
C:\Users\testuser>docker run -it busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
8ddc19f16526: Pulling fs layer
docker: error pulling image configuration: Get https://dseasb33srnrn.cloudfront.net/registry-v2/docker/registry/v2/blobs/sha256/2b/2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749/data?Expires=1474617209&Signature=HRDYuDqnI3ERPonW9vj0HtP3hzIQoB1j7d-kWzR0iDXozoDknq0n4wIfkw2H73K5xaBBmVNy2ZoOqOQTm9LFP44MGfgS1pNthOLuEMSKrVUJmuaQNvckxuznuqffhkMCmTmQ7-~WMBjyLh7Si9sLdYR8oLVwN6sDRn5wKRa7f4I_&Key-Pair-Id=APKAJECH5M7VWIS5YZ6Q: dial tcp: i/o timeout.
See 'docker run --help'.
The same error occurs for several other images. I do not have a proxy and have a stable internet connection. I have tried this with windows firewall enabled and disabled. I have also restarted the docker service.
Let me know if I am missing something. Thanks in advance.
This is a known issue with the networking stack in the current version of Docker for Windows.
The workaround is detailed in "remove stale network adapters": open the Network settings in Docker for Windows, and select the 'Fixed' DNS setting, using Google's DNS server 8.8.8.8.
I was also facing a similar issue while running Docker on Windows 10.
The issue got resolved by changing the DNS settings
(Settings -> Network -> DNS Server -> 8.8.8.8 (Automatic)).
I observed that when the DNS server option was set to manual, the timeout issue remained.
After making these changes, the Docker service was restarted and I was able to pull the Docker image successfully.
Simply setting the DNS to fixed (and setting the target to 8.8.8.8) fixed it for me (after Docker restarted).
Setting up proxies and changing stale DNS settings were of no use in my case.
I had to reset the virtual machine using the steps below in the docker-toolbox bash:
Stop the host docker virtual machine:
$ docker-machine stop default
Delete the host:
$ docker-machine rm default
Create new VirtualBox machine named default:
$ docker-machine create --driver virtualbox default
Verify that the machine is running. The ACTIVE attribute should be marked *:
$ docker-machine ls
If the machine is not running, start it:
$ docker-machine start default
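One hedged addition: after recreating the machine, the shell usually has to be pointed at the new VM again before docker commands will work:

# Re-export the client environment for the fresh machine
eval $(docker-machine env default)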
Then docker run mysql:8.0 should pull and run successfully in your bash.
Hope it helps you guys and saves your time!
I have my private docker registry running on a remote machine, which is secured by TLS and uses HTTPS. Now I want to access it from my local docker-machine installed on Windows 7. I have copied the certificates to "/etc/docker/certs.d/" in the docker-machine VM and restarted docker.
After this I can successfully log in to my private registry using credentials, but when I try to push an image to it, I get a certificate signed by unknown authority error. After researching a little, I restarted the docker daemon with docker -d --insecure-registry https://<registry-host>, and it worked.
My question is: if I have copied my certificates to the host machine, why do I need to start the registry with the --insecure-registry option?
I can only access the registry from another host by copying the certificates there as well and also restarting docker with --insecure-registry, which looks a little wrong to me.
Docker version: 1.8.3
Any pointers on this would be really helpful.
certificate signed by unknown authority
The error message gives it away - your certificates are self-signed (as in not trusted by a known CA).
If you would like to access your registry with HTTP, follow the instructions below.
Basically (do this on the machine from which you try to access the registry):
edit the file /etc/default/docker so that there is a line that reads: DOCKER_OPTS="--insecure-registry myregistrydomain.com:5000" (or add that to the existing DOCKER_OPTS)
restart your Docker daemon: on Ubuntu, this is usually service docker stop && service docker start
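A minimal sketch of those two steps, assuming an Ubuntu host where the daemon reads /etc/default/docker (myregistrydomain.com:5000 is a placeholder for your registry):

# Append the insecure-registry flag to the daemon's default options
echo 'DOCKER_OPTS="--insecure-registry myregistrydomain.com:5000"' | sudo tee -a /etc/default/docker
# Restart the daemon so the flag takes effect
sudo service docker stop && sudo service docker start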