I'm running Docker on Windows 10, and I noticed that by default, DOCKER_TLS_VERIFY=0.
I have a few questions about it:
Does it mean that I'm pulling images from Docker Hub insecurely?
Does Docker pull images over HTTPS when it is set to 0?
Is it recommended to set it to 1?
Communication with external registry servers like Docker Hub defaults to TLS; this option is for something very different.
DOCKER_TLS_VERIFY tells the Docker client (aka the docker command) whether to use TLS verification when communicating with the Docker daemon (dockerd). If set to 1, the server needs a key pair signed by a private CA, and the client also needs a key pair signed by the same CA. This setting tells the client to verify that the server key it receives is signed by the private CA. The daemon/server has a similar setting to verify client certificates.
If you are communicating with a remote Docker engine over the network, leaving verification off would be very bad, since it implies that the remote engine allows anyone to hit the API (which gives root-level access) without any client credentials. When communicating with a local socket that is protected by filesystem permissions, this feature is not needed.
This variable is documented here: https://docs.docker.com/engine/reference/commandline/cli/
Steps to configure the daemon and clients with TLS keys are documented here: https://docs.docker.com/engine/security/https/
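Once the keys are in place, the client side mostly comes down to a few environment variables. A minimal sketch, assuming a daemon listening for TLS on port 2376 (the hostname and cert path below are placeholders for your own setup):
# Point the docker client at a TLS-protected daemon; hostname and path are examples.
export DOCKER_HOST=tcp://dockerd.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker    # expects ca.pem, cert.pem and key.pem here
docker version                       # should now connect over verified TLS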
For Windows, many of the above steps would need to be translated (e.g. different locations for the daemon.json file). Have a look at the following:
https://docs.docker.com/engine/reference/commandline/dockerd/
https://docs.docker.com/docker-for-windows/
Is it possible to run publicly available containers as-is behind proxies that intercept and re-sign traffic with a custom root CA?
Example: Zscaler internet security
Corporate environments often run proxies.
It is possible to install a custom root CA certificate into a custom-built Docker image and run the container successfully (e.g. COPY ... custom certificate ... and RUN ... install custom certificate ...). It is also possible to mount the certificate into a container and run a custom entrypoint command that installs it. But it does not seem possible to simply tell Docker to trust what the host trusts.
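For reference, the image-rebuild workaround looks roughly like this on a Debian/Ubuntu-based image (the base image and certificate filename are placeholders, not a recommendation):
# Bake a custom root CA into the image; names are examples.
FROM ubuntu:22.04
COPY zscaler-root-ca.crt /usr/local/share/ca-certificates/zscaler-root-ca.crt
RUN apt-get update && apt-get install -y ca-certificates && update-ca-certificates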
To illustrate the underlying problem: when Zscaler re-signs responses with its root CA, network requests made from a container fail certificate validation, because the container does not recognize the Zscaler root CA.
Scenario:
Run a public docker image on a Windows computer with Zscaler Client installed
When the container starts, if it makes network requests, they are routed through Zscaler
Most, and perhaps all, network requests will fail to process the response, because the container OS and its tools do not trust the Zscaler certificate
This problem is amplified when tools like Docker Compose or Kubernetes Helm run multiple containers at a time, many of which (of course) need network access.
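The mount-plus-entrypoint workaround mentioned above looks roughly like this per Compose service; the image name, certificate path, and /app/start command are placeholders, and the image must already contain update-ca-certificates:
services:
  app:
    image: some-public-image:latest                     # placeholder
    volumes:
      - ./zscaler-root-ca.crt:/usr/local/share/ca-certificates/zscaler-root-ca.crt:ro
    # Install the mounted CA, then hand off to the image's normal command (placeholder).
    entrypoint: ["/bin/sh", "-c", "update-ca-certificates && exec /app/start"]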
In the distant future, it might be possible to use something like OCI hooks.
I'm using haproxy as a bastion server / cluster gateway, as only some of the nodes in my network have direct access to the external network. My internal nodes are part of a kubernetes cluster, and need to be able to pull images from a private registry external to my cluster which requires certificate identities.
k8s cluster internal node -> haproxy on edge node -> docker registry
I'm trying to configure my back_end in my haproxy.cfg to route to the docker registry, and update the request with the certificate identity of the edge node. (Internal node does not have certs acceptable to the docker registry, and I've not been allowed to host the external node's certs on the internal node.) What I have right now looks like the below...
frontend ft_ssl
    bind <boxIP>:443
    mode http
    default_backend bk_global_import_registry_certs

backend bk_global_import_registry_certs
    mode http
    balance roundrobin
    server registry_alias-0 <registryIP>:443 ssl ca-file fullyqualified/ca.crt crt fullyqualified/file.pem check
    server registry_alias-1 <registryIP2>:443 ssl ca-file fullyqualified/ca.crt crt fullyqualified/file.pem check
I currently have the HTTPS_PROXY setting in /etc/systemd/system/docker.service.d/http-proxy.conf and am getting a 400 Bad Request. Scrubbed log message below; the only changes are removed IPs and fixed typos.
InternalIP:randomPort [09/Jul/2019:13:28:08.659] ft_ssl bk_global_import_registry_certs 0/0/10/5/15 400 350 - - ---- 1/1/0/0/0 0/0 {} "CONNECT externalFQDN:443 HTTP/1.1"
For those looking at this via the kubernetes or docker tags, I also considered setting up a pull-through cache, but realized this only works with Docker's public registry - see open Docker GitHub issue 1431 for other folks trying to find ways to get past that, as well.
Posting the answer that resolved the situation for us, in case it helps others...
All traffic from the internal node now routes through HAProxy.
I'm no longer using the HTTPS_PROXY setting in /etc/systemd/system/docker.service.d/http-proxy.conf. Proxying through was not appropriate, since I could not use my internal node's certificate to authenticate against the docker registry.
From the internal node, I'm now treating the Docker registry as an insecure registry by adding its address to the insecure-registries list in /etc/docker/daemon.json.
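For illustration, the daemon.json entry on the internal node looks something like this (the address is a placeholder for whatever hostname or IP the internal nodes use to reach the registry through HAProxy), followed by a Docker restart:
{
  "insecure-registries": ["registry.internal.example:443"]
}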
We can now access our internal private registry using the certificates that the HAProxy backend presents on the forwarded connection. We do verify the certificate on the frontend, but because the registry is listed in insecure-registries, Docker accepts the certificate name mismatch.
One side effect we noticed: pulling an image from the default Docker registry without specifying a prefix does not succeed with our solution. To pull from Docker Hub, for example, you'd pull registry-1.docker.io/imageName:imageVersion rather than just imageName:imageVersion.
I might be mistaken, but as far as I know HAProxy cannot be used as a forward HTTP proxy; it only acts as a reverse HTTP proxy. You should use Squid or something similar if you need a forward HTTP proxy.
If you want HAProxy to terminate SSL for you, as you do, then you will need to change the hostname part of your Docker image references to the hostname of your HAProxy node. You should also make sure your Docker daemons trust the HAProxy certificate, or add the HAProxy host to the insecure-registries list on all Kube nodes.
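A sketch of the "trust the HAProxy certificate" option on each node (the hostname, port, and file names are placeholders):
# Make dockerd trust the certificate HAProxy presents for this registry hostname.
sudo mkdir -p /etc/docker/certs.d/haproxy.example.com:443
sudo cp haproxy-ca.crt /etc/docker/certs.d/haproxy.example.com:443/ca.crt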
I'm trying to create a secure private Docker registry that would work only in the local network and be accessed by IP address.
I have read a lot of articles regarding this issue, but most of them talk about the need to have a registered domain name that points to a valid public IP address (where the registry is), and then obtaining a certificate for that domain.
I'd like to know if there's a way of creating a docker registry with the following properties:
accessible only from the local network
secured with a valid certificate (not a self-signed certificate, which is still considered "insecure" by Docker)
How would I obtain a valid certificate for such a registry? I understand that certificates cannot be created for IP addresses alone, but can I generate a certificate for a domain that is registered but doesn't point to any public IP (I've read something about the dns-01 challenge, so I believe it's possible) and then use that certificate, provided I map the said domain to the server's local IP in my hosts file?
If this isn't possible, what is the best alternative for creating a secure, local, private Docker registry?
Use Nginx to secure your Docker Registry.
The relevant documentation is here:
https://github.com/docker/distribution/tree/master/contrib/compose
You can use self-signed certificates if you add the root CA to /usr/local/share/ca-certificates and run the update-ca-certificates command on the clients.
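A rough sketch of that self-signed route (the domain, IP, and file names below are placeholders; -addext needs OpenSSL 1.1.1+):
# Create a self-signed certificate for the registry host.
openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout registry.key -x509 -days 365 -out registry.crt \
  -subj "/CN=registry.local" \
  -addext "subjectAltName=DNS:registry.local,IP:192.168.1.10"

# On each client (Debian/Ubuntu-style), trust the certificate system-wide.
sudo cp registry.crt /usr/local/share/ca-certificates/registry.crt
sudo update-ca-certificates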
I'm trying to use a self-hosted Docker registry v2. I should be able to push a Docker image, and that does work locally on the host server (CoreOS) running the registry v2 container. However, on a separate machine (also CoreOS, same version), when I try to push to the registry, it tries to push to v1, giving this error:
Error response from daemon: v1 ping attempt failed with error: Get
https://172.22.22.11:5000/v1/_ping: dial tcp 172.22.22.11:5000: i/o timeout.
If this private registry supports only HTTP or HTTPS with an unknown CA
certificate, please add `--insecure-registry 172.22.22.11:5000` to the
daemon's arguments. In the case of HTTPS, if you have access to the registry's
CA certificate, no need for the flag; simply place the CA certificate at
/etc/docker/certs.d/172.22.22.11:5000/ca.crt
Both machines' Docker executable is v1.6.2. Why is it that one works and pushes to v2, but the other falls back to v1?
Here's the repo for the registry: https://github.com/docker/distribution
You need to secure the registry before you can access it remotely, or explicitly allow all your Docker daemons to access insecure registries.
To secure the registry the easiest choice is to buy an SSL certificate for your server, but you can also self-sign the certificate and distribute to clients.
To allow insecure access add the argument --insecure-registry myregistrydomain.com:5000 to all the daemons who need to access the registry. (Obviously replace the domain name and port with yours).
The full instructions (including an example of your error message) are available at: https://github.com/docker/distribution/blob/master/docs/deploying.md
Regarding the error message, I guess Docker tries to use v2 first, fails because of the security issue, then tries v1 and fails again.
This may be due to an env variable being set. I had a very similar issue when using a system with this env variable set.
export DOCKER_HOST="tcp://hostname:5000"
Running docker login http://hostname:5000 did not work and gave the same v1 behaviour. I did not expect the env variable to take precedence over an argument passed directly to the command.
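If you hit the same thing, clearing the variable before logging in is a quick way to rule it out (the hostname is a placeholder):
# Clear the stray DOCKER_HOST so the client talks to the local daemon again.
unset DOCKER_HOST
docker login hostname:5000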
Go to /etc/docker/daemon.json. If the file is not present, create it and add the following:
{
  "insecure-registries": ["hosted-registry-IP:port"]
}
After that, restart the Docker service with:
service docker restart
Docker has a relatively coherent guide regarding running a docker daemon with remote access over HTTPS: https://docs.docker.com/articles/https/
What it does not address is how the daemon can be exposed so that clients on various hosts may access it, rather than just one static, known-beforehand host.
The first use case is that I want to use/try/test stuff against the daemon from my local machine. I expect that sooner or later different production hosts will also use the daemon, for example if our organization decides to use (or trial-run) different build systems that have to interact with it (which is what we are going to do with CircleCI as it builds our projects).
Thanks.
It is not limited to one host.
Follow the instructions, then copy the certificate to each client that will connect.
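For example, once the ca.pem, cert.pem and key.pem generated per the guide are copied to a client, that client can connect from anywhere with something like this (hostname, port, and paths are placeholders):
# Connect to the remote daemon over verified TLS from any client host.
docker --tlsverify \
  --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H=dockerd.example.com:2376 version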