Changing ISP: Traefik doesn't work anymore - Docker

I just moved my server from one place to another, and everything on the reverse proxy side is working (Traefik generates the ACME certificate, the Docker containers are up and healthy).
However, I can't seem to access Traefik through my domain name, whereas I can access it via the static WAN IP or even the local LAN IP. I changed the A record at my registrar's DNS (Porkbun) to the new static IP; the domain pings fine and I can even SSH through it, but I can't reach Traefik. The DEBUG-level logs don't even show a connection attempt, so I don't know what's happening.
If you have any idea what could be wrong, or what I might be missing, I'd appreciate it. Everything worked before the move.
Thanks.
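Since SSH through the domain works, DNS has clearly propagated; after an ISP change the usual suspects are a missing port-forward for 80/443 on the new router, or the new ISP blocking inbound 80/443 (common on residential lines). A few hedged checks, where `example.com` and `203.0.113.10` are placeholders for your domain and new WAN IP:

```shell
# Confirm the A record really points at the new static IP.
dig +short example.com A

# Hit Traefik directly at the new IP while presenting the correct
# hostname/SNI, bypassing any DNS caching in between.
curl -vk --resolve example.com:443:203.0.113.10 https://example.com

# From OUTSIDE the LAN (e.g. a phone hotspot), check the ports are open;
# from inside, a missing hairpin-NAT feature can mask a working forward.
nc -zv 203.0.113.10 80
nc -zv 203.0.113.10 443
```

If the `nc` checks fail from outside but Traefik answers on the LAN, the problem is port forwarding or an ISP block, not Traefik, which would also explain the empty DEBUG logs.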

Related

AzerothCore/Docker - LAN Setup

I am looking to setup AzerothCore for LAN only use.
I am using an ESXi install with an Ubuntu 20.04.3 instance running the latest Docker, with Portainer for management. I am able to walk through the install process and it works great.
I switched the realmlist in the database to the LAN IP via HeidiSQL, setting both address and localAddress to the LAN IP. I have also tried setting just one of them, leaving the other at 127.0.0.1.
I am using a fresh client install and set the realmlist there too. I have tried both the DNS name and the IP; it is currently set to the LAN IP.
I have not touched the compose file or modified the authserver or worldserver config files. I am not certain where to look or what to change.
I am able to log in with the ID I created all the way to seeing the realm, which I select and hit Enter. After a short pause the client returns to the realm selection screen. Not knowing the backend, I am not sure what is missing.
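Bouncing back to the realm selection screen typically means the login handshake (port 3724) succeeded but the client cannot reach the worldserver (port 8085) at the address stored in realmlist. A sketch of the usual checks, assuming the default AzerothCore docker-compose container names (`ac-database`, `ac-authserver`, `ac-worldserver`), database name, and credentials; adjust all of these to your setup:

```shell
# Replace 192.168.1.50 with the Docker host's LAN IP.
docker exec -i ac-database mysql -uroot -ppassword acore_auth -e "
  UPDATE realmlist
  SET address = '192.168.1.50', localAddress = '192.168.1.50'
  WHERE id = 1;"

# The authserver caches the realm list at startup, so restart after the change.
docker restart ac-authserver ac-worldserver

# From the client machine, confirm both ports are reachable:
nc -zv 192.168.1.50 3724   # login/auth port
nc -zv 192.168.1.50 8085   # world port - if this fails, you get the bounce-back
```

If `nc` on 8085 fails, check that the compose file publishes that port to the LAN and that no host firewall is in the way.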

Regarding Tailscale's compatibility with Nginx Proxy Manager and Duck DNS

How to recreate: install Nginx Proxy Manager and any self-hosted web app (Nextcloud, ownCloud, Portainer; in my case the Portainer web GUI), put Nginx Proxy Manager in front of it, and use Duck DNS as a dynamic DNS client with the record set to the machine's Tailscale IP. After doing all of this, when I enter the domain name in the browser bar it keeps loading forever with about:blank. On the other hand, if I enter the Tailscale IP with the correct port, it loads in a second.
I think this was asked on https://github.com/tailscale/tailscale/issues/3428
It looks like Duck DNS rejects IP addresses in the CGNAT range 100.x.y.z because they are not publicly reachable. https://tailscale.com/kb/1081/magicdns/ can likely do what you're looking for, and can set up TLS certificates as well: https://tailscale.com/blog/tls-certs/.
I have solved the problem. It was not Duck DNS failing to work with CGNAT; I had forgotten to add Portainer to the Nginx Proxy Manager network, so Nginx Proxy Manager was unable to connect to Portainer. Everyone who put effort into solving this is appreciated.
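For anyone hitting the same wall, the fix described above boils down to attaching the backend container to the proxy's Docker network. A sketch, where the network and container names (`npm_default`, `nginx-proxy-manager`, `portainer`) are assumptions; substitute whatever `docker network ls` shows for your setup:

```shell
# Find the network Nginx Proxy Manager is attached to.
docker network ls

# Attach Portainer to that same network so NPM can reach it.
docker network connect npm_default portainer

# NPM should now resolve the backend by container name; in the NPM UI,
# the proxy host can then forward to "portainer" and its internal port.
docker exec nginx-proxy-manager ping -c 1 portainer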

Asp.net APIs have wrong certificate, Blazor website refuses to connect

I have the following setup: One Blazor Server Side Website, an ASP.net API Gateway using Ocelot and a few Microservices also using ASP.net. All of these things are run in individual Docker Containers.
First off, everything works: the connections work and I can fetch data, but only over HTTP. I have the dev-certs enabled for ASP.NET, and they also "work", but the problem is that they are signed for the wrong host.
When I navigate to a gateway with the browser, it works as long as I call "localhost" and the port. The problem is that, because the services run in containers, localhost does not mean the same thing for them, so I have to use my local IP and the port to reach the correct service. But the certificate is signed for "localhost", so the cert is considered invalid because the host is no longer "localhost".
This means that my Blazor app sees a certificate that does not match the host and gets an exception because it can't validate the cert. I looked up a lot of material and found nothing; basically I need a way to either change the certificates in the containers or tell the Blazor server to accept those certificates.
I have not found a thing on this topic, so I would really appreciate some help.
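One common approach (a sketch, not the only way) is to export a single dev certificate and mount it into every container, so each service presents the same cert via Kestrel's certificate environment variables. The image name `mygateway:latest` and the password are placeholders. Note that `dotnet dev-certs` certificates are only issued for `localhost`; if you call services by LAN IP or another hostname, you would instead need a cert whose SAN matches that name, for example one generated with a local CA tool such as mkcert.

```shell
# Export the ASP.NET Core dev cert to a PFX and trust it on the host.
dotnet dev-certs https -ep ./https/aspnetapp.pfx -p mypassword
dotnet dev-certs https --trust

# Mount the cert into a container and point Kestrel at it via the
# standard ASP.NET Core configuration environment variables.
docker run \
  -v "$(pwd)/https:/https:ro" \
  -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx \
  -e ASPNETCORE_Kestrel__Certificates__Default__Password=mypassword \
  -e ASPNETCORE_URLS="https://+:443" \
  -p 8443:443 \
  mygateway:latest
```

An alternative for purely internal service-to-service calls is to have the services talk to each other by container name on a shared Docker network over HTTP, and terminate TLS only at the gateway.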

Jenkins Server - Issues with setting URL

I am trying to set up an internal Jenkins server for our QA team and am facing some issues with the server URL. This is inside a corporate network with all sorts of firewall and proxy settings in place; however, we only need to access the server within our internal network. The server runs on a Mac Mini. I was able to install and access the server without any issues using localhost:8080.
I tried to set a custom URL (something like testjenkins.local:8080) under the Manage Jenkins option and was never able to access the server. The only option that worked for me was the IP address (IP:8080); I was able to access the server from other machines on the network using that URL.
The real problem with this setup is that the machine's IP changes (I am not able to make it static), so I can't get an always-working URL.
I would highly appreciate it if anyone could guide me in the right direction.
Given that you have a dynamic IP on your server, a good alternative would be ngrok. ngrok can expose port 8080 of that server to the internet via secure tunnels, and you access it through a stable URL, so changes to the IP won't affect it.
However, ngrok exposes the server to the whole internet. To make it accessible only to your team, you can add authentication on both the ngrok tunnel and the Jenkins server (would that work for you?).
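The ngrok suggestion above is only a couple of commands; the token and credentials here are placeholders:

```shell
# One-time setup with your ngrok account token.
ngrok config add-authtoken <YOUR_TOKEN>

# Tunnel Jenkins (port 8080) with HTTP basic auth on the tunnel itself,
# so the URL is not open to the whole internet.
ngrok http 8080 --basic-auth "qa-user:s3cret"
```

ngrok then prints a public URL that stays valid for the life of the tunnel regardless of the Mac Mini's LAN IP; Jenkins' own login remains as a second layer.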

Site being cloned or deploy to domain I don't own

I'm deploying a Rails app to EC2 using Elastic Beanstalk. I just found out, while doing a Google search, that whenever I deploy to my domain, my site also seems to get deployed to a domain I don't own. I use Route 53 for my DNS as well.
Has anyone ever run into this situation or have any idea what might be happening here?
It could be a simple DNS issue: someone else's DNS A record is pointing to the IP address of your EC2 instance.
Amazon recycles IP addresses. It is possible that your current IP address was allocated to someone else earlier, and they did not delete their DNS entry when releasing the IP address.
You can run the ping command to confirm that both domain names resolve to the same IP address:
ping domain1.com
ping domain2.com
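The same comparison can be done with dig, which prints only the resolved records; `domain1.com` and `domain2.com` are placeholders for your domain and the other one:

```shell
# If both commands print the same address, the other domain's stale
# A record is pointing at your instance's recycled IP.
dig +short domain1.com A
dig +short domain2.com A
```

If they do match, attaching an Elastic IP you control (or rotating the instance's public IP) and updating your own Route 53 record avoids inheriting someone else's leftover DNS entry.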

Resources