Can't Use a Subdomain in NGINX Proxy Manager - Docker

I'm trying to set up NGINX Proxy Manager in Docker.
I have a DynDNS address, and the proxy manager is working insofar as I can reach its default page over the DynDNS address.
When I connect a port to the plain DynDNS name through the proxy manager, it works fine, including SSL. But when I try to use a subdomain like subdomain1.laptopsimon.ddns.net, nothing works: I can't create an SSL certificate, and I can't even connect to the site over HTTP.
Does anybody have an idea why I cannot use subdomains?
Also, I get this Let's Encrypt error in the log:
[12/25/2022] [1:50:45 PM] [SSL ] › ℹ info Requesting Let'sEncrypt certificates for Cert #9: subdomain1.laptopsimon.ddns.net
[12/25/2022] [1:50:45 PM] [SSL ] › ℹ info Command: certbot certonly --config "/etc/letsencrypt.ini" --cert-name "npm-9" --agree-tos --authenticator webroot --email "simon.hauber#outlook.de" --preferred-challenges "dns,http" --domains "subdomain1.laptopsimon.ddns.net"
[12/25/2022] [1:50:49 PM] [Nginx ] › ℹ info Reloading Nginx
[12/25/2022] [1:50:49 PM] [Express ] › ⚠ warning Command failed: certbot certonly --config "/etc/letsencrypt.ini" --cert-name "npm-9" --agree-tos --authenticator webroot --email "simon.hauber#outlook.de" --preferred-challenges "dns,http" --domains "subdomain1.laptopsimon.ddns.net"
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Some challenges have failed.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.

Your DDNS provider, No-IP, doesn't support "fourth-level" subdomains.
You could have example.ddns.net working fine, with a DNS A record pointing to an IP address you have chosen, but they won't resolve e.g. test.example.ddns.net.
You can verify this yourself with the command nslookup subdomain1.laptopsimon.ddns.net.
IIRC, that was the reason why I stopped using them and moved to duckdns.org.
There, every subdomain, even test1.test2.example.duckdns.org, will resolve to the same IP address.
From the Let's Encrypt log excerpt it's not clear to me what the problem could be; you will need to check /var/log/letsencrypt/letsencrypt.log for details.
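A quick way to verify the DNS behaviour yourself (a sketch; the hostnames are the ones from the question):

nslookup laptopsimon.ddns.net              # should return your public IP
nslookup subdomain1.laptopsimon.ddns.net   # NXDOMAIN if fourth-level names are unsupported

If the second lookup fails, Let's Encrypt can never reach the name for its HTTP challenge either, which would explain the failed challenges in the log.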

Related

GitLab can't reach PlantUML in Docker container

So I have a GitLab EE server (Omnibus) installed and set up on Ubuntu 20.04.
Next, following the official documentation on GitLab PlantUML integration, I started PlantUML in a Docker container with the following command:
docker run -d --name plantuml -p 8084:8080 plantuml/plantuml-server:tomcat
Next, I also configured the /etc/gitlab/gitlab.rb file and added the following line for redirection, as my GitLab server is using SSL:
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://plantuml:8080/; \n}\n"
In the GitLab server GUI, in the admin panel under Settings -> General, when I expand PlantUML, I set the value of PlantUML URL in two ways:
1st approach:
https://HOSTNAME:8084/-/plantuml
Then, when trying to reach it in the browser at this address (https://HOSTNAME:8084/-/plantuml), I get
This site can’t provide a secure connection.
HOSTNAME sent an invalid response.
ERR_SSL_PROTOCOL_ERROR
2nd approach:
I also tried putting a different value in Settings -> General -> PlantUML -> PlantUML URL:
https://HOSTNAME/-/plantuml
Then, when trying to reach it in the browser at this address (https://HOSTNAME/-/plantuml), I get
502
Whoops, GitLab is taking too much time to respond
In both cases, when I trace the logs with gitlab-ctl tail, I get the same errors:
[crit] *901 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: CLIENT_IP, server: 0.0.0.0:443
[error] 1123593#0: *4 connect() failed (113: No route to host) while connecting to upstream
My question is: which of the above two ways is correct for accessing PlantUML with the above configuration, and is there any configuration I am missing?
I believe the issue is that you are running PlantUML in a Docker container and then trying to reach it from GitLab (on localhost) by container name.
To check whether that is the issue, please change
proxy_pass http://plantuml:8080/
to
proxy_pass http://localhost:8080/
and try again with the first approach.
Your second approach seems to be missing the container port in the URL.
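For reference, a sketch of the adjusted gitlab.rb line, based on the configuration from the question. Note that docker run -p 8084:8080 publishes the container's port 8080 as 8084 on the host, so if GitLab's bundled NGINX runs on the host, localhost:8084 may be the port you actually need:

nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://localhost:8084/; \n}\n"

After editing gitlab.rb, run gitlab-ctl reconfigure so the change takes effect.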

How to install an SSL certificate on HAProxy in Docker using Let's Encrypt

I am using HAProxy to route the domains and subdomains, and it is deployed on port 80. I want all the domains to be served over HTTPS, i.e. using an SSL certificate.
global
    log xx.xx.90.28 local0
    log xx.xx.90.28 local1 notice
    maxconn 2048

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    option  redispatch
    option  forwardfor
    option  http-server-close
    retries 3
    timeout connect 5000
    timeout client  10000
    timeout server  10000

frontend balancer
    bind *:80
    mode http
    stats enable
    stats uri /stats
    stats refresh 15s
    stats show-node
    stats auth admin:admin
    acl domain     hdr_dom(host) -i www.example.com
    acl subdomain  hdr_dom(host) -i app.example.com
    acl subdomain1 hdr_dom(host) -i example.com
    use_backend go_app_1 if domain
    use_backend go_app_2 if subdomain
    use_backend go_app_3 if subdomain1

backend go_app_1
    balance roundrobin
    mode http
    option forwardfor
    server go xx.xx.90.28:8081 check

backend go_app_2
    balance roundrobin
    mode http
    option forwardfor
    server go xx.xx.90.28:8082 check

backend go_app_3
    balance roundrobin
    mode http
    option forwardfor
    server go xx.xx.90.28:8081 check
And here is the Dockerfile:
FROM haproxy:2.1
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Now I want to use Let's Encrypt to secure these URLs. Please guide me on how I can do this.
I think you need to set up Let's Encrypt locally first,
and then inject the resulting certificates into the container via a volume mount.
Example command from Docker Hub:
docker run -d --name my-running-haproxy -v /path/to/etc/haproxy:/usr/local/etc/haproxy:ro haproxy:1.7
Something like below:
-v /my/letsencrypt/setting/:/etc/ssl/
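A minimal sketch of that approach, assuming certbot is installed on the host and port 80 is temporarily free. HAProxy expects the certificate and private key concatenated into a single PEM file; the paths and domains below are illustrative:

# obtain a certificate covering the routed domains
certbot certonly --standalone -d www.example.com -d app.example.com -d example.com

# HAProxy wants cert and key in one file
cat /etc/letsencrypt/live/www.example.com/fullchain.pem \
    /etc/letsencrypt/live/www.example.com/privkey.pem \
    > /my/letsencrypt/setting/www.example.com.pem

Then mount that directory into the container as shown above and add an HTTPS listener to the frontend, e.g. bind *:443 ssl crt /etc/ssl/www.example.com.pem, remembering to publish port 443 from the container as well.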

Multiple tunnel requests for serveo.net not working

I have two things running on localhost:
A] Jenkins server on localhost:8080
B] Another app on localhost:3000
I wanted to expose both localhost URLs so that these applications can be accessed remotely by anyone.
I found the option serveo.net and tried firing the command:
ssh -R 80:localhost:8080 -R 80:localhost:3000 serveo.net
Result:
I get a response
Forwarding HTTP traffic from https://subdomain1.serveo.net
Forwarding HTTP traffic from https://subdomain2.serveo.net
My Jenkins server runs properly at https://subdomain1.serveo.net,
but I am having a problem with https://subdomain2.serveo.net.
How do I solve this issue?
Is there any change required in the below command for serveo.net?
ssh -R 80:localhost:8080 -R 80:localhost:3000 serveo.net
You need to run it in two separate instances:
ssh -R 80:localhost:8080 serveo.net
ssh -R 80:localhost:3000 serveo.net
Then you will have access to both URLs from Serveo.
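If you don't want to keep two terminals open, a sketch using standard ssh flags (-f backgrounds the session, -N skips running a remote command):

ssh -f -N -R 80:localhost:8080 serveo.net
ssh -f -N -R 80:localhost:3000 serveo.net

Each invocation is assigned its own Serveo subdomain, so the two forwards no longer compete for a single URL.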

JHipster - Docker-compose - enable SSL

I'm trying to get SSL working on my JHipster app.
I'm using docker and docker-compose, and have the following:
app.yml
ports:
  - 443:443
Dockerfile
ADD /keystore.p12 /keystore.p12
EXPOSE 443
application-prod.yml
server:
  port: 443
  ssl:
    key-store: /keystore.p12
    key-store-password: <password>
    keyStoreType: PKCS12
    keyAlias: <alias>
I generated a self-signed certificate via keytool -genkey, as referenced in application-prod.yml, and copied it (using ADD in the Dockerfile) into the app image. (I'm aware this probably isn't best practice, but it is for dev purposes.)
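For completeness, a sketch of such a keytool invocation (the alias, key size, and validity here are placeholders, not taken from the question):

keytool -genkey -alias myapp -storetype PKCS12 -keyalg RSA -keysize 2048 -keystore keystore.p12 -validity 3650

The alias and store password chosen here must match keyAlias and key-store-password in application-prod.yml.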
Both ./mvnw package -Pprod docker:build and docker-compose -f src/main/docker/app.yml up run without error.
When I try to connect via https://localhost:443, I get connection refused.
I should mention that when the ssl entry is removed from application-prod.yml, everything works as expected, i.e. the site loads fine over HTTP.
Thanks,
Reading through the comments, this was a user misconfiguration...

Logging in to a private Docker registry v2 behind HAProxy

I am trying to set up a new Docker Registry (v2) with HAProxy. For the registry I am using the image from Docker Hub, running it with docker run -d -p 5000:5000 -v /path/to/registry:/tmp/registry registry:2.0.1. This is a subset of my HAProxy configuration:
global
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon
    tune.ssl.default-dh-param 2048

userlist auth_list
    group docker_registry users root
    user root password ***PASSWORD***

backend docker-registry
    server 127.0.0.1:5000_localhost 127.0.0.1:5000 cookie 127.0.0.1:5000_localhost

frontend shared-frontend
    mode http
    bind *:80
    bind *:443 ssl crt *** CERT FILES ***
    option accept-invalid-http-request
    acl domain_d.mydomain.com hdr(host) -i d.mydomain.com
    acl auth_docker_registry_root http_auth(auth_list) root
    redirect scheme https if !{ ssl_fc } domain_d.mydomain.com
    http-request auth realm Registry if !auth_docker_registry_root { ssl_fc } domain_d.mydomain.com
    use_backend docker-registry if domain_d.mydomain.com
The important things to note are that I am using HAProxy to do the SSL termination and HTTP auth, rather than the registry itself.
My issue occurs when I try to log in to the new registry. If I run docker login https://d.mydomain.com/v2/ and then enter the user root and password, I get the following error messages:
Docker Client:
FATA[0009] Error response from daemon: invalid registry endpoint https://d.mydomain.com/v2/: https://d.mydomain.com/v2/ does not appear to be a v2 registry endpoint. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry d.mydomain.com` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/d.mydomain.com/ca.crt
Docker Daemon:
ERRO[0057] Handler for POST /auth returned error: invalid registry endpoint https://d.mydomain.com/v2/: https://d.mydomain.com/v2/ does not appear to be a v2 registry endpoint. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry d.mydomain.com` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/d.mydomain.com/ca.crt
ERRO[0057] HTTP Error: statusCode=500 invalid registry endpoint https://d.mydomain.com/v2/: https://d.mydomain.com/v2/ does not appear to be a v2 registry endpoint. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry d.mydomain.com` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/d.mydomain.com/ca.crt
So I tried adding --insecure-registry d.mydomain.com to:
/etc/default/docker with DOCKER_OPTS= -H unix:///var/run/docker.sock --insecure-registry d.mydomain.com
the arguments when starting Docker manually with docker -d --insecure-registry d.mydomain.com
Neither of these, nor any other option I have found online, works. Each time, after restarting Docker, attempting to log in again gives me the same error message.
A few other things I have tried:
In a browser, going to d.mydomain.com results in a 404
In a browser, going to d.mydomain.com/v2/ results in: {}
Replacing https://d.mydomain.com/v2/ in the login command with each of the following, with no success:
http://d.mydomain.com/v2/
d.mydomain.com/v2/
http://d.mydomain.com/
d.mydomain.com/
This setup, with HAProxy doing the SSL termination and HTTP auth, worked in the past with the first version of the registry and older versions of Docker. So has anything changed in Docker Registry v2? Does this still work? If it hasn't changed, why doesn't the --insecure-registry flag do anything anymore?
Also, I have been working on getting this running for a while, so I may have forgotten some of the things I have tried. If there is something that may work, let me know and I will give it a try.
Thanks,
JamesStewy
Edit
This edit has been moved to the answer below
I have got it working. So here is my new config:
haproxy.cfg
global
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon
    tune.ssl.default-dh-param 2048

userlist auth_list
    group docker_registry users root
    user root password ***PASSWORD***

backend docker-registry
    server 127.0.0.1:5000_localhost 127.0.0.1:5000 cookie 127.0.0.1:5000_localhost

backend docker-registry-auth
    errorfile 503 /path/to/registry_auth.http

frontend shared-frontend
    mode http
    bind *:80
    bind *:443 ssl crt *** CERT FILES ***
    option accept-invalid-http-request
    acl domain_d.mydomain.com hdr(host) -i d.mydomain.com
    redirect scheme https if !{ ssl_fc } domain_d.mydomain.com
    acl auth_docker_registry_root http_auth(auth_list) root
    use_backend docker-registry-auth if !auth_docker_registry_root { ssl_fc } domain_d.mydomain.com
    rsprep ^Location:\ http://(.*) Location:\ https://\1
    use_backend docker-registry if domain_d.mydomain.com
registry_auth.http
HTTP/1.0 401 Unauthorized
Cache-Control: no-cache
Connection: close
Content-Type: text/html
Docker-Distribution-Api-Version: registry/2.0
WWW-Authenticate: Basic realm="Registry"

<html><body><h1>401 Unauthorized</h1>
You need a valid user and password to access this content.
</body></html>
The difference is that the http-request auth line has been replaced with use_backend docker-registry-auth. The backend docker-registry-auth has no servers, so it will always give a 503 error, but the 503 error file has been changed to registry_auth.http. In registry_auth.http the status code is overridden to 401, the header WWW-Authenticate is set to Basic realm="Registry", the basic HAProxy 401 error page is supplied and, most importantly, the header Docker-Distribution-Api-Version is set to registry/2.0.
As a result, this hacky work-around behaves exactly the same as the old http-request auth line, except that the custom header Docker-Distribution-Api-Version is now set. This allows the setup to pass the test which starts on line 236 of https://github.com/docker/docker/blob/v1.7.0/registry/endpoint.go.
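A quick way to verify the header is actually served (a sketch using curl; the domain is the one from the question):

# expect a 401 response carrying the v2 API header
curl -i https://d.mydomain.com/v2/

The response should show HTTP/1.0 401 Unauthorized together with Docker-Distribution-Api-Version: registry/2.0, which is the header the Docker client checks for.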
So now when I run docker login d.mydomain.com, login is successful and my credentials are added to .docker/config.json.
The second issue was that I couldn't push to the new registry even though I was logged in. This was fixed by adding the rsprep line in the frontend. What this line does is rewrite the Location header (if it exists), turning http:// into https://.
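Note that rsprep has since been deprecated in HAProxy; on newer releases the equivalent would be something like the following (an assumption based on current HAProxy directives, not part of the original setup):

http-response replace-header Location ^http://(.*)$ https://\1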
I also found this bit of documentation for future reference.
As a small clarification to the previous answer: I had to change this line:
WWW-Authenticate: Basic realm="Registry"
To this:
WWW-Authenticate: Basic realm="Registry realm"
and then everything worked...
BTW, hashing the password can be done using mkpasswd (part of the whois deb package).
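For example (a sketch; the hash below is a placeholder):

# generate a SHA-512 crypt hash for the userlist
mkpasswd -m sha-512
# then reference it in haproxy.cfg:
# userlist auth_list
#     user root password $6$<generated-hash>

HAProxy's userlist password keyword expects an encrypted hash like this; insecure-password is the variant for plaintext passwords.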
