Connection refused when trying to push to Harbor registry - docker

We have an internal Harbor registry set up on a CentOS server for storing Docker images. It is accessed through an SSL URL.
The SSL configuration is done on the load balancer. The server where the Harbor registry is set up just has the basic application configuration, with the HTTP port defined as 5044.
When we try to push any images from any remote machine using the ip:port combination, it works fine.
However, when using the SSL URL, it fails with the below error:
dial tcp <load balancer ip>:80: connect: connection refused
We haven't explicitly configured port 80 to be used anywhere. We checked the Harbor and nginx configuration files, but it doesn't seem like port 80 is mentioned in either of them.
The load balancer is configured to use port 5044.
Also, pulls work as expected with both the ip:port combination and the SSL URL.
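For reference, the Harbor side is configured roughly like this (a simplified sketch; the hostname is a placeholder and, as far as we can tell, external_url is not set):

# harbor.yml (simplified)
hostname: <harbor-server-ip>
http:
  port: 5044
# https and external_url are not configured; TLS lives on the load balancer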
Can someone help with what we might be missing in the configuration?
Thank you!

Related

PhpStorm remote development access to remote http with browser

I set up remote development with PhpStorm - it works fine. But how can I connect to the website served on the remote machine?
Is it possible to set up some proxy / tunnel or something?
On the remote host I have Docker containers that provide hosts like http://myapp.demo.
In /etc/hosts on the remote machine I see
172.25.0.3 myapp.demo
so it's a local subnetwork on the remote side.
When I create a proxy, it's not accessible locally.
I also tried an SSH connection with -X forwarding and ran a browser (for example Firefox), but I get:
X11 connection rejected because of wrong authentication.
X11 connection rejected because of wrong authentication.
Error: cannot open display: localhost:10.0
So is a remote desktop like VNC the only way?
Since 2022.3, you can forward ports in Backend Status Details.
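As a manual alternative, a plain SSH local port forward along these lines should also work (a sketch; the remote-host name, the SSH user, and the assumption that the container serves plain HTTP on port 80 are placeholders):

# forward a local port to the container address that only resolves on the remote machine
ssh -L 8080:172.25.0.3:80 user@remote-host
# then map the name locally, e.g. add "127.0.0.1 myapp.demo" to the local /etc/hosts,
# and open http://myapp.demo:8080 in the local browser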

Docker connect to local secure registry

I have set up a private registry and secured it according to the recipe with an nginx reverse proxy. nginx listens on port 5000 using SSL.
docker pull myregistry:5000/foo:latest from a remote machine to that registry works fine.
However, that same command on myregistry itself results in docker trying to access the registry (through nginx) via HTTP, not HTTPS.
Since nginx listens using SSL, it returns an error ("The plain HTTP request was sent to HTTPS port").
According to the Docker documentation, local registries are automatically considered insecure.
In my case, I want the local registry to be considered secure as well, so that docker pull myregistry:5000/foo:latest works on the same machine. How can I achieve that?
There is only an option to mark remote registries as insecure, but not to mark a specific registry as secure.
Obviously, I cannot use a different port to listen for plain HTTP, since that would change the image name. I also did not find a way to make nginx accept HTTP traffic on the same port based on IP address.
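If the name happens to resolve to a loopback address on that machine (an assumption on my part; Docker treats registries on 127.0.0.0/8 as insecure by default), one sketch of a workaround would be to point it at the host's routable address instead:

# /etc/hosts on myregistry itself -- the IP is a placeholder for the host's LAN address
192.168.1.10  myregistry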

Proxy Docker Hub with HAproxy

I'm running Sonatype Nexus as a proxy registry for Quay.io and docker.io.
I'm pulling the images through custom domains, proxy-hub.example.com and proxy-quay.example.com.
When Nexus is down I obviously can't download any images, so I thought I could use HAProxy to fall back to the original registries.
backend registry_quay
    balance roundrobin
    server-template Nexus_nexus 1 Nexus_nexus:8085 check resolvers docker resolve-prefer ipv4 init-addr libc,none
    server quay quay.io:443 check backup ssl verify none
This backend works fine: when Nexus is down, the backup takes over.
With the same settings, docker.io fails with a 503 error when I turn off Nexus.
backend registry_hub
    balance roundrobin
    server-template Nexus_nexus 1 Nexus_nexus:8083 check resolvers docker resolve-prefer ipv4 init-addr libc,none
    server hub registry-1.docker.io:443 check backup ssl verify none
I'm quite sure that something needs to be rewritten in the requests, but I don't know what.
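Something along these lines is what I have in mind, though it is untested (a sketch: it forces the Host header and SNI that registry-1.docker.io expects, and the header rewrite would also apply to requests that go to Nexus):

backend registry_hub
    balance roundrobin
    http-request set-header Host registry-1.docker.io
    server-template Nexus_nexus 1 Nexus_nexus:8083 check resolvers docker resolve-prefer ipv4 init-addr libc,none
    server hub registry-1.docker.io:443 check backup ssl verify none sni str(registry-1.docker.io)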

Cannot access webserver after ovpn client connected (on the server) to a remote network

I have a cloud VM (Debian 11) where I run some Docker stuff. I have nginx as a reverse proxy and some web applications behind it.
I protect the web server with Cloudflare. I already set up the origin certificate, and it works like a charm. When I reach nginx, it uses the Cloudflare certificate.
BUT I have to reach this server both from my home network and from the internet. I have a MikroTik router with an OpenVPN server, and when the cloud server is connected to my network, nginx no longer serves web requests through Cloudflare. :(
When I run systemctl stop openvpn on the cloud server, the web server is reachable again. (When OpenVPN is connected, I can reach nginx on its private IP, but not through Cloudflare.)
Do you have any idea what's happening?
The OpenVPN server is accessible on port 1194 on my MikroTik router.
nginx is reachable at https://domain:443 when the OpenVPN client is not connected; when it is connected, it is not reachable.
I was able to resolve this issue.
I had to comment out the redirect-gateway line in client.conf. With redirect-gateway enabled, the VPN became the server's default route, so replies to requests arriving via Cloudflare were sent back through the tunnel instead of the cloud provider's gateway. Now everything works as I expect.
I can reach the server through OpenVPN, and the server is reachable from the internet.
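For reference, the change amounts to something like this in the OpenVPN client config on the cloud server (a sketch; the exact redirect-gateway options, such as def1, depend on the original file):

# /etc/openvpn/client.conf on the cloud server
remote <mikrotik-public-ip> 1194
# redirect-gateway def1    # commented out so the VPN does not replace the default route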

Source client having trouble connecting to serverless Icecast server on Cloud Run

Is it possible to make a serverless Icecast server?
I'm trying to make an internet radio station with Icecast on Google's serverless Cloud Run platform. I've put this Docker image in Container Registry and then created a Cloud Run service with the default Icecast port 8000. It all seems to work when visiting Cloud Run's provided URL: I can get to the default Icecast and admin pages.
The problem is connecting to the server with a source client (I tried Mixxx and butt). I think the problem is with ports, since setting the port to 8000 in Mixxx gives a "Socket is busy" error, while butt simply doesn't connect. Setting the port to 443 in Mixxx gives a "Socket error", while butt reports: connect: server answered with 411!
I tried to do the same thing with Compute Engine, but just installing Icecast rather than using a Docker image, and everything works as intended. As I understand it, Cloud Run provides a URL for the container (https://example.app) with the port given at setup (8000 for Icecast), but the source client tries to connect to that URL on its own configured port (http://example.app:SOURCE_CLIENT_PORT). So I'm not sure whether there's a problem with HTTPS or I just need to configure the ports differently.
With Cloud Run you can expose only one port externally. By default it's port 8080, but you can override this when you deploy your revision.
This port is wrapped behind a front layer of Google Cloud infrastructure, named Google Front End, and exposed through a DNS name (*.run.app) on port 443 (HTTPS).
Thus, you can reach your service only through that port-443 wrapping of the exposed port. Any other port will fail.
With Compute Engine you don't have this limitation, which is why you see no issues there. Simply open the correct port with firewall rules and enjoy.
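As a sketch (the service name, image path, and region are placeholders, and this only covers the port override, not Icecast's own configuration), deploying with the container port set to Icecast's 8000 and then pointing the source client at the *.run.app hostname would look like this:

gcloud run deploy icecast --image gcr.io/PROJECT/icecast --port 8000 --region us-central1
# source clients must then connect to the service's *.run.app hostname on port 443 over HTTPS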
