Load balancing Docker Registry v2 with HAProxy on Tutum - docker

Did the following on Tutum:
Registry
Started a registry:2.1.1 service
Published the port 5000 and configured the registry service with:
VIRTUAL_HOST=https://my-registry.my-host.net
TCP_PORTS=5000/ssl
SSL_CERT="..."
I then pointed the my-registry.my-host.net DNS record at the registry service endpoint and tested the registry with:
docker login my-registry.my-host.net:5000
Works just fine, including the SSL!
HAProxy
Started a tutum/haproxy:latest service, published port 443, added API access, and linked it to the registry service; everything else was left at the defaults.
Pointed the my-registry.my-host.net DNS record at the HAProxy service endpoint and tested the registry login with:
docker login my-registry.my-host.net
This time, the request fails with:
503 Service Unavailable
No server is available to handle this request.
What am I missing?
Note: everything was done from Tutum's Dashboard web UI.
Additionally, here's the generated haproxy.cfg from the HAProxy service container, for those who have experience with HAProxy, but not necessarily with Tutum:
https://gist.github.com/lazabogdan/3bf52984faa092b1a50b (note: the registry service ID has been masked with XXXXXXXX and the real FQDN has been replaced with my-registry.my-host.net)

Solved it.
I had to do the following:
Update the environment variable for the registry service from TCP_PORTS=5000/ssl to TCP_PORTS=5000
On the haproxy service, expose port 5000 on the container AND publish it on the host to port 443.
Now, I can successfully do:
docker login my-registry.my-host.net
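For reference, here is my reading of why the fix works: with TCP_PORTS=5000/ssl, HAProxy expected to terminate SSL itself on port 5000, while docker login with no port was hitting 443, which had no backend. With TCP_PORTS=5000 and container port 5000 published on host port 443, HAProxy simply passes the TCP stream through on 443 and the registry (which still has SSL_CERT) terminates TLS itself. A hand-written sketch of the effective configuration; the frontend/backend names and backend address are assumptions, not the exact Tutum-generated config:

```
# sketch: TCP passthrough, TLS is terminated by the registry itself
frontend registry_in
    bind *:5000              # container port 5000, published on the host as 443
    mode tcp
    default_backend registry_out

backend registry_out
    mode tcp
    server registry-1 registry:5000 check   # hypothetical name for the linked registry service
```

Because this is plain TCP passthrough, the certificate the client sees is the registry's own, which is why SSL kept working end to end.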

Related

Export docker container through cloudflared

I have a NAS where I am running various web apps in docker containers through docker-compose. I want some of these web apps to be accessible through the internet, not only when I am connected to my home network.
The problem I'm currently facing is that while cloudflare is able to expose the default web apps (default NAS management 192.168.1.135:80 can be mapped to subdomain.domain.com, for instance), it is unable to expose any docker container I try to run (192.168.1.135:4444 cannot be mapped to subdomain2.domain.com), and I receive a 502 bad gateway error with every app I have tried so far.
The configuration shouldn't be the issue, and it's definitely not the NoTLSVerify flag, because the apps run on HTTP and I have configured the tunnel that way. So I am out of ideas about what is going on and how to solve it.
Looks like the apps you're running on your NAS are proxied through the Docker runtime. Consequently, the IP:port you need to add to the Cloudflare tunnel config is the one that is reachable from the host (not the IP of the host itself).
If the host is 192.168.1.135, you need to find out the IP (internal to the Docker network) of the app that you want to access from the outside, typically in a 172.x.x.x private range.
Example: If the containers running the apps you want to access are running on 172.0.0.2:4444 for app1 and 172.0.0.3:5555 for app2, the cloudflare config would look like this:
tunnel: the_ID_of_the_tunnel
credentials-file: /root/.cloudflared/the_ID_of_the_tunnel.json
ingress:
  - hostname: yourapp1.example.com
    service: http://172.0.0.2:4444
  - hostname: yourapp2.example.com
    service: http://172.0.0.3:5555
  - service: http_status:404
See more details and a video here: How to redirect subdomain to port (docker)
Turns out the problem was due to how Docker works with networks, not how Cloudflare accesses them. I first had to create a network connecting both containers, since adding cloudflared to my docker-compose file didn't work for some reason.
Create a Docker network: docker network create tunnel
Run the cloudflared container without specifying the network: docker run -d --name cloudflare cloudflare/cloudflared:latest tunnel --no-autoupdate run --token
Add the container to the network: docker network connect tunnel cloudflare
Run your app container (note: the container should use the network name you created earlier, but cloudflared should not be in your docker-compose file): docker-compose up
In the Cloudflare tunnel config, specify the Docker-internal address of your container (as @lu4t suggested). You can identify the address with: docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container

Docker login issue with Harbor exposed as NodePort service

I am trying to deploy Harbor on a k8s cluster without much effort or complexity. So, I followed the Bitnami Harbor Helm chart and deployed a healthy, running Harbor instance that is exposed as a NodePort service. I know the standard is to have a LoadBalancer type of service, but as I don't have the setup required to provision a requested load balancer automatically, I decided to stay away from that complexity. This is not a public cloud environment where the LB gets deployed automatically.
Now, I can very well access the Harbor GUI using https://<node-ip>:<https-port> URL. However, despite several attempts I cannot connect to this Harbor instance from my local Docker Desktop instance. I have also imported the CA in my machine's keychain, but as the certificate has a dummy domain name in it rather than the IP address, Docker doesn't trust that Harbor endpoint. So, I created a local DNS record in my /etc/hosts file to link the domain name in Harbor's certificate and the node IP address of the cluster. With that arrangement, Docker seems to be happy with the certificate presented but it doesn't acknowledge the port required to access the endpoint. So, in the subsequent internal calls for authentication against Harbor, it fails with below given error. Then I also tried to follow the advice given here on Harbor document to connect to Harbor over HTTP. But this configuration killed Docker daemon and does not let it even start.
~/.docker » docker login -u admin https://core.harbor.domain:30908
Password:
Error response from daemon: Get "https://core.harbor.domain:30908/v2/": Get "https://core.harbor.domain/service/token?account=admin&client_id=docker&offline_token=true&service=harbor-registry": EOF
As you can see in above error, the second Get URL does not have a port in it, which would not work.
So, is there any help can I get to configure my Docker Desktop to connect to a Harbor service running on the NodePort interface? I use Docker Desktop on MacOS.
I got a similar error when I used docker login core.harbor.domain:30003 from another host. The error looks like: Error response from daemon: Get https://core.harbor.domain:30003/v2/: Get https://core.harbor.domain/service/token?account=admin&client_id=docker&offline_token=true&service=harbor-registry: dial tcp 1...*:443: connect: connection refused.
However, docker can log in to Harbor from the host on which the Helm chart installed Harbor.
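A note on why the port disappears in both errors above: the Docker client follows the token-service URL that Harbor advertises, and that URL is derived from Harbor's configured external URL, not from the address typed into docker login. If the external URL doesn't include the NodePort, the auth redirect goes to port 443, matching both the EOF and the connection refused. A sketch of the relevant Helm value; verify the exact key name against your chart's values file:

```
# values.yaml fragment -- make Harbor advertise the NodePort in its redirects
externalURL: https://core.harbor.domain:30908
```

With this set, the second Get URL in the error should carry the :30908 port, the same one the GUI is reachable on.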

How to set up docker with no_proxy settings

I'm setting up three docker containers on my own machine using docker compose:
One is a portal written with React.js (called portal)
One is a middleware layer with GraphQL (called gateway)
One is an auth service with node.js (called auth)
I also have a bunch of services already running behind a corporate firewall.
For the most part, gateway will request resources behind the firewall, so I have configured the Docker containers to proxy requests through a Squid proxy with access to the additional services. However, requests to my local auth service and other local services should not be proxied. As such, I have the following Docker proxy configuration (note the noProxy settings):
~/.docker/config.json
...
"proxies": {
  "default": {
    "httpProxy": "http://172.30.245.96:3128",
    "httpsProxy": "http://172.30.245.96:3128",
    "noProxy": "auth,localhost,127.0.0.1,192.168.0.1/24"
  }
}
...
With the above setup, portal requests do go directly to gateway through the browser using http://192.168.0.15/foo, but when gateway makes requests to auth using http://auth:3001/bar, they do not go directly to auth; instead they go through the proxy, which I am trying to avoid.
I can see the auth request is sent through the proxy with the squid proxy errors:
<p>The following error was encountered while trying to retrieve the URL: http://auth:3001/bar</p>
How can I set up the Docker containers to respect the noProxy setting using Docker service names like auth? It appears to me that the request from gateway to auth is mistakenly being proxied through 172.30.245.96:3128, causing it to fail. Thanks
Your Docker configuration seems fine, but your host doesn't understand how to resolve the name auth. Based on the IP given (192.168.x.x), I'll assume that you're attempting to reach the container service from the host. Add an entry for auth into your host's /etc/hosts (C:\Windows\System32\Drivers\etc\hosts if on Windows).
Take a look at Linked docker-compose containers making http requests for more details.
If you run into issues reaching services from within the container, check docker-compose resolve hostname in url for an example.
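The hosts-file entry suggested above would look something like the following; the IP is an assumption (it should be whatever address makes the auth container's published port reachable from your machine, here taken from the 192.168.0.15 host mentioned in the question):

```
# /etc/hosts on the machine running the docker client / browser
192.168.0.15   auth
```

With that entry in place, http://auth:3001/bar resolves on the host the same way the Compose service name resolves inside the Docker network, so the noProxy entry for auth can take effect.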

Cannot login to Nexus 3 docker registry

I have set up an AWS EC2 instance with Docker, Nexus3, and a Docker repository in Nexus with HTTP port 8123, along with all the necessary settings so that Docker can see it. After lengthy research, I added the right options to my Docker config file, so that when I run docker info I can see my insecure registry set to the right IP address. I can access the Nexus manager URL from my machine without any problems and can create repositories, etc.
I then try to do a docker login from within my EC2 instance like this:
docker login -u admin -p admin123 my_ip_address:8123
And after a while I get this:
Error response from daemon: Get http://my_ip_address/v1/users/: dial tcp my_ip_address:8123: i/o timeout
I have tried so many things to fix this and nothing seems to work. I have spent an entire day so far trying to understand why docker login cannot see my Nexus3 registry.
Any ideas?

I can not access my Container Docker Image by HTTP

I created an image with apache2 running locally in a Docker container via a Dockerfile exposing port 80, then pushed it to my Docker Hub repository.
I created a new Container Engine instance in my project on Google Cloud. Within it I have two machines, the master and node1.
I then created a Pod, specifying the name of my image on Docker Hub and configuring the ports, with containerPort and hostPort set to 6379 and 80 respectively.
I accessed node1 via SSH and ran $ sudo docker ps -l, and found that my Docker container is indeed there.
I created a service for the instance, configuring the ports as in the Pod, with containerPort and hostPort set to 6379 and 80 respectively.
I checked that the firewall allows access to port 80. Even without deeming it necessary, I created a rule to allow access through port 6379.
But when I browse to http://IP_ADDRESS:PORT, it is not available.
Any idea about what's wrong?
If you are using a service to access your pod, you should configure the service to use an external load balancer (similarly to what is done in the guestbook example's frontend service definition) and you should not need to specify a host port in your pod definition.
Once you have an external load balancer created, then you should open a firewall rule to allow external access to the load balancer which will allow packets to reach the service (and pods backing it) running in your cluster.
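The suggestion above can be sketched as a minimal service definition; the metadata name and selector labels are assumptions, not taken from the original setup:

```
# Service of type LoadBalancer exposing the apache pod on port 80
apiVersion: v1
kind: Service
metadata:
  name: apache-frontend
spec:
  type: LoadBalancer
  selector:
    app: apache        # must match the pod's labels
  ports:
    - port: 80         # port on the external load balancer
      targetPort: 80   # containerPort inside the pod
```

Once the service has been assigned an external IP, the firewall rule the answer mentions is one allowing inbound TCP 80 to that IP; no hostPort on the pod is needed.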
