Using edge / proxy node certificate identity - docker

I'm using haproxy as a bastion server / cluster gateway, as only some of the nodes in my network have direct access to the external network. My internal nodes are part of a kubernetes cluster, and need to be able to pull images from a private registry external to my cluster which requires certificate identities.
k8s cluster internal node -> haproxy on edge node -> docker registry
I'm trying to configure the backend in my haproxy.cfg to route to the docker registry and update the request with the certificate identity of the edge node. (The internal node does not have certs acceptable to the docker registry, and I've not been allowed to host the external node's certs on the internal node.) What I have right now looks like this:
frontend ft_ssl
    bind <boxIP>:443
    mode http
    default_backend bk_global_import_registry_certs

backend bk_global_import_registry_certs
    mode http
    balance roundrobin
    server registry_alias-0 <registryIP>:443 ssl ca-file fullyqualified/ca.crt crt fullyqualified/file.pem check
    server registry_alias-1 <registryIP2>:443 ssl ca-file fullyqualified/ca.crt crt fullyqualified/file.pem check
I currently have the HTTPS_PROXY setting in /etc/systemd/system/docker.service.d/http-proxy.conf and am getting a 400 Bad Request. A scrubbed log message is below; the only changes are the removal of IPs and typo fixes.
InternalIP:randomPort [09/Jul/2019:13:28:08.659] ft_ssl bk_global_import_registry_certs 0/0/10/5/15 400 350 - - ---- 1/1/0/0/0 0/0 {} "CONNECT externalFQDN:443 HTTP/1.1"
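For reference, the drop-in looks like the following (the proxy address here is a placeholder for my edge node's IP, not the literal value):

[Service]
Environment="HTTPS_PROXY=http://<edgeNodeIP>:443/"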
For those looking at this via the kubernetes or docker tags: I also considered setting up a pull-through cache, but realized this only works with Docker's public registry - see the open Docker GitHub issue 1431 for other folks trying to find ways past that as well.

Posting the answer that resolved the situation for us, in case it helps others...
All traffic from the internal node now routes through to the HAProxy.
I'm no longer using the HTTPS_PROXY setting in /etc/systemd/system/docker.service.d/http-proxy.conf. Proxying through was not appropriate, since I could not use my internal node's certificate to authenticate against the docker registry.
From the internal node, I'm now treating the docker registry as an insecure registry by adding its address to /etc/docker/daemon.json in insecure-registries: [].
We can now access our internal private registry using the certificates HAProxy presents on the backend connection. We do verify the certificate on the frontend, but because we have the registry listed in insecure-registries, Docker accepts the certificate name mismatch.
One side effect we noticed: pulling an image from the default Docker registry without specifying a registry prefix does not succeed with our solution. To pull from Docker Hub, for example, you'd pull registry-1.docker.io/imageName:imageVersion rather than just imageName:imageVersion.
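For illustration, the /etc/docker/daemon.json change on each internal node looks roughly like this (the registry address is a placeholder), followed by a restart of the Docker daemon:

{
  "insecure-registries": ["<registryFQDN>:443"]
}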

I might be mistaken, but as far as I know HAProxy cannot be used as a forward HTTP proxy; it only acts as a reverse HTTP proxy. You should use Squid or something like that as a forward proxy.
If you want HAProxy to terminate SSL for you, as you do, then you will need to change the hostname part of your Docker image references to the hostname of your HAProxy node. You should also make sure your Docker daemons trust the HAProxy certificate, or add the HAProxy host to insecure-registries on all Kube nodes.

Related

Docker login issue with Harbor exposed as NodePort service

I am trying to deploy Harbor on a k8s cluster without much effort or complexity. So I followed the Bitnami Harbor Helm chart and deployed a healthy, running Harbor instance that is exposed as a NodePort service. I know the standard is a LoadBalancer type of service, but as I don't have the setup required to provision a requested load balancer automatically, I decided to stay away from that complexity. This is not a public cloud environment where the LB gets deployed automatically.
Now, I can very well access the Harbor GUI using the https://<node-ip>:<https-port> URL. However, despite several attempts I cannot connect to this Harbor instance from my local Docker Desktop instance. I have imported the CA into my machine's keychain, but as the certificate has a dummy domain name in it rather than the IP address, Docker doesn't trust that Harbor endpoint. So I created a local DNS record in my /etc/hosts file to link the domain name in Harbor's certificate to the node IP address of the cluster. With that arrangement, Docker seems to be happy with the certificate presented, but it doesn't acknowledge the port required to access the endpoint. So, in the subsequent internal calls for authentication against Harbor, it fails with the error given below. I also tried to follow the advice in the Harbor documentation to connect to Harbor over HTTP, but that configuration killed the Docker daemon and did not let it even start.
~/.docker » docker login -u admin https://core.harbor.domain:30908
Password:
Error response from daemon: Get "https://core.harbor.domain:30908/v2/": Get "https://core.harbor.domain/service/token?account=admin&client_id=docker&offline_token=true&service=harbor-registry": EOF
As you can see in above error, the second Get URL does not have a port in it, which would not work.
So, is there any help I can get to configure my Docker Desktop to connect to a Harbor service running on the NodePort interface? I use Docker Desktop on macOS.
I got a similar error when I used docker login core.harbor.domain:30003 from another host: 'Error response from daemon: Get https://core.harbor.domain:30003/v2/: Get https://core.harbor.domain/service/token?account=admin&client_id=docker&offline_token=true&service=harbor-registry: dial tcp 1...*:443: connect: connection refused'.
However, docker can log in to Harbor from the host on which the Helm chart installed Harbor.
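One thing worth checking, as an assumption rather than something verified here: Harbor builds the token-service URLs it hands back from its configured external URL, and the Bitnami chart exposes this as the externalURL value. If that value does not include the NodePort, the port gets dropped exactly as in the error above:

externalURL: https://core.harbor.domain:30908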

Docker connect to local secure registry

I have set up a private registry and secured it according to the recipe with an nginx reverse proxy. nginx listens on port 5000 using SSL.
docker pull myregistry:5000/foo:latest from a remote machine to that registry works fine.
However, that same command on myregistry itself results in docker trying to access the registry (through nginx) via HTTP, not HTTPS.
Since nginx listens using SSL, it returns an error ("The plain HTTP request was sent to HTTPS port").
According to the Docker documentation, local registries are automatically considered insecure.
In my case, I want the local registry also to be considered as secure, so that docker pull myregistry:5000/foo:latest works on the same machine. How to achieve that?
There is only an option to mark remote registries as insecure, but not to mark a specific registry as secure.
Obviously, I cannot use a different port to listen for plain HTTP, since that would change the image name. I also did not find a way to make nginx accept HTTP traffic on the same port based on IP address.
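One possible workaround, sketched here as an untested assumption: nginx uses the non-standard status code 497 for exactly the "plain HTTP request was sent to HTTPS port" case, and that error can be turned into a redirect back to HTTPS on the same port:

server {
    listen 5000 ssl;
    server_name myregistry;
    ssl_certificate     /etc/nginx/certs/myregistry.crt;    # placeholder paths
    ssl_certificate_key /etc/nginx/certs/myregistry.key;
    # 497: plain HTTP request received on an SSL port
    error_page 497 =301 https://$host:$server_port$request_uri;
}

Whether the Docker client follows that redirect cleanly for all registry calls is worth verifying.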

Proxy Docker Hub with HAproxy

I'm running Sonatype Nexus as a proxy registry for Quay.io and docker.io.
I'm pulling the images with the custom domains proxy-hub.example.com and proxy-quay.example.com.
When Nexus is down, I obviously can't download any images, so I thought I could use HAProxy to fall back to the original URL.
backend registry_quay
    balance roundrobin
    server-template Nexus_nexus 1 Nexus_nexus:8085 check resolvers docker resolve-prefer ipv4 init-addr libc,none
    server quay quay.io:443 check backup ssl verify none
This backend works fine: when Nexus is down, the backup takes over.
With the same settings, docker.io fails with a 503 error when I turn off Nexus.
backend registry_hub
    balance roundrobin
    server-template Nexus_nexus 1 Nexus_nexus:8083 check resolvers docker resolve-prefer ipv4 init-addr libc,none
    server hub registry-1.docker.io:443 check backup ssl verify none
I'm quite sure that something needs to be rewritten in the requests but I don't know what.
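A guess at what needs rewriting, as an untested sketch: registry-1.docker.io sits behind a CDN that selects certificates and routing by name, so the backup server likely needs both TLS SNI and a rewritten Host header. The srv_is_up condition assumes the templated server ends up named Nexus_nexus1:

backend registry_hub
    balance roundrobin
    server-template Nexus_nexus 1 Nexus_nexus:8083 check resolvers docker resolve-prefer ipv4 init-addr libc,none
    # present the right SNI so the Hub CDN serves the matching certificate
    server hub registry-1.docker.io:443 check backup ssl verify none sni str(registry-1.docker.io)
    # only rewrite Host while the primary is down and the backup is serving
    http-request set-header Host registry-1.docker.io if !{ srv_is_up(registry_hub/Nexus_nexus1) }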

How to determine forwarded-allow-ips for uvicorn server from docker network

I use Traefik as a proxy and also have a React frontend and FastAPI backend. My backend is publicly exposed. The problem is that the uvicorn server redirects from a URL without a trailing slash to one with a slash (https://api.mydomain.com/posts -> http://api.mydomain.com/posts/), but the redirect drops SSL. So I'm getting errors in the frontend about CORS and mixed content. Based on the topic FastAPI redirection for trailing slash returns non-ssl link, I added --forwarded-allow-ips="*" to the uvicorn server, and now the SSL redirect works, but as I understand it that's not secure. I tried --forwarded-allow-ips="mydomain.com", but it doesn't work, and I have no idea why, as mydomain.com resolves to the server's IP and thus to my proxy's IP. I assume that's because my API sees the proxy's IP from the Docker network; I don't know how to solve this.
If you're positive that the Uvicorn/Gunicorn server is only ever accessible via an internal IP (is completely firewalled from the external network such that all connections come from a trusted proxy [ref]), then it would be okay to use --forwarded-allow-ips="*".
However, I believe the answer to your original question would be to inspect your Docker container(s) and grab their IPs. Since traefik is routing/proxying requests, you probably just need to grab the traefik container ID and run an inspect:
~$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED        STATUS        PORTS                                                                      NAMES
ids_here       traefik:v2.3   "/entrypoint.sh --pr…"   1 second ago   Up 1 second   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   traefik

~$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ids_here
172.18.0.1
Try whitelisting that IP.
From what I've read, it is not possible to whitelist a subnet with --forwarded-allow-ips="x". If your server restarts, your containers' internal IPs are likely to change. A hack-ish fix for this is forcing the container start order with depends_on: the depends_on container starts first and gets a lower/first IP assignment.
However, it is possible in Docker like this [ref]:
labels:
  - "traefik.http.middlewares.testIPwhitelist.ipwhitelist.sourcerange=127.0.0.1/32, 192.168.1.7"
  - "traefik.http.middlewares.testIPwhitelist.ipwhitelist.ipstrategy.depth=2"
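An alternative to relying on start order, sketched with a made-up network name and subnet: pin the proxy container's address in docker-compose so it survives restarts, then whitelist that fixed IP with --forwarded-allow-ips:

services:
  traefik:
    image: traefik:v2.3
    networks:
      web:
        ipv4_address: 172.18.0.10
networks:
  web:
    ipam:
      config:
        - subnet: 172.18.0.0/16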

How to set up docker with no_proxy settings

I'm setting up three docker containers on my own machine using docker compose:
One is a portal written with React.js (called portal)
One is a middleware layer with GraphQL (called gateway)
One is an auth service with node.js (called auth)
I also have a bunch of services already running behind a corporate firewall.
For the most part, gateway will request resources behind the firewall, so I have configured the docker containers to proxy requests through a squid proxy with access to the additional services. However, requests to my local auth service and other local services should not be proxied. As such, I have the following docker proxy configuration (note the noProxy settings):
~/.docker/config.json
...
"proxies": {
  "default": {
    "httpProxy": "http://172.30.245.96:3128",
    "httpsProxy": "http://172.30.245.96:3128",
    "noProxy": "auth,localhost,127.0.0.1,192.168.0.1/24"
  }
}
...
With the above setup, portal requests from the browser do go directly to gateway using http://192.168.0.15/foo, but when gateway makes requests to auth using http://auth:3001/bar, they do not go directly to auth; instead they go through the proxy, which I am trying to avoid.
I can see the auth request is sent through the proxy with the squid proxy errors:
<p>The following error was encountered while trying to retrieve the URL: http://auth:3001/bar</p>
How can I set up the docker containers to respect the noProxy setting using docker service names like auth? It appears to me that the request from gateway to auth is mistakenly being proxied through 172.30.245.96:3128, causing it to not work. Thanks
Your Docker configuration seems fine, but your host doesn't understand how to resolve the name auth. Based on the IP given (192.168.x.x), I'll assume that you're attempting to reach the container service from the host. Add an entry for auth into your host's /etc/hosts (C:\Windows\System32\Drivers\etc\hosts if on Windows).
Take a look at Linked docker-compose containers making http requests for more details.
If you run into issues reaching services from within the container, check docker-compose resolve hostname in url for an example.
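If requests still go through the proxy even once the name resolves, the per-service environment can also be pinned in docker-compose, which overrides the global config.json proxies for that one service (service names are taken from the question; treat this as a sketch):

services:
  gateway:
    environment:
      - HTTP_PROXY=http://172.30.245.96:3128
      - HTTPS_PROXY=http://172.30.245.96:3128
      - NO_PROXY=auth,localhost,127.0.0.1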
