Nextcloud in docker behind traefik on unraid - docker

I'm running traefik as a reverse proxy on my unraid server (6.6.6).
Apps like sonarr/radarr, nzbget and organizr all work fine. But that's mostly because these are super easy to set up. You only need a handful of traefik-specific labels and that's it.
traefik.enable=true
traefik.backend=radarr
traefik.frontend.rule=PathPrefix: /radarr
traefik.port=7878
traefik.frontend.auth.basic.users=username:password
So far so good: everything is using SSL and working great.
But as soon as I have to configure some extra stuff for the containers to work behind a reverse proxy I get lost. I've read dozens of guides regarding nextcloud, but I can't get it to work. 
Currently I'm using the linuxserver/nextcloud docker and from my internal network it's working great. I got everything set up, added users and smb shares and everybody can connect fine. But I can't get it to work behind traefik using a subdirectory. It's probably just some traefik labels I need to add to the nextcloud container, but I'm simply too much of a newb to know which ones I need. 
My first issue was that nextcloud forces https, which traefik doesn't like unless you configure some stuff. So for now I'm just using the traefik.frontend.auth.forward.tls.insecureSkipVerify=true label to work around this. I know it's potentially a security issue, but if I'm not mistaken it only opens up the possibility of a man-in-the-middle attack, which shouldn't be too much of an issue since both traefik and nextcloud are running on the same machine (and besides, everything else is going over http).
So now that I got that working, I get an Error 500 message when I try to open mydomain.tld/nextcloud.
The traefik log says "Error calling . Cause: Get : unsupported protocol scheme \"\""
I tried adding some labels I found in a guide (https://www.smarthomebeginner.com/traefik-reverse-proxy-tutorial-for-docker/#NextCloud_Your_Own_Cloud_Storage)
"traefik.frontend.headers.SSLRedirect=true"
"traefik.frontend.headers.STSSeconds=315360000"
"traefik.frontend.headers.browserXSSFilter=true"
"traefik.frontend.headers.contentTypeNosniff=true"
"traefik.frontend.headers.forceSTSHeader=true"
"traefik.frontend.headers.SSLHost=mydomain.tld"
"traefik.frontend.headers.STSPreload=true"
"traefik.frontend.headers.frameDeny=true"
I just thought I'd try it, maybe I'd get lucky.
Sadly I didn't. Still Error 500. 
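For what it's worth, a label set along these lines is sometimes suggested for serving Nextcloud under a subdirectory with Traefik 1.x. This is a sketch, not a verified setup; the backend name and port are assumptions, and traefik.protocol=https is there because the linuxserver image serves https internally:

```yaml
labels:
  - "traefik.enable=true"
  - "traefik.backend=nextcloud"
  - "traefik.protocol=https"                 # backend speaks https internally
  - "traefik.port=443"
  - "traefik.frontend.rule=PathPrefix: /nextcloud"
```

On the Nextcloud side you would likely also need to add mydomain.tld to trusted_domains and set 'overwritewebroot' => '/nextcloud' in config.php, otherwise Nextcloud generates links without the subdirectory prefix.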

Enable debug logging in your traefik configuration:
loglevel = "DEBUG"
More info here: https://docs.traefik.io/configuration/logs/
After doing this I realized that my docker label was not correctly applying the InsecureSkipVerify = true line in my config. The error I was able to see in the logs was:
'500 Internal Server Error' caused by: x509: cannot validate certificate for 172.17.0.x because it doesn't contain any IP SANs
To work around this, I had to add InsecureSkipVerify = true directly to the traefik.toml file.
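Put together, the relevant traefik.toml fragments for Traefik 1.x might look like this (a sketch; both options are top-level keys in the static configuration):

```toml
# traefik.toml -- relevant fragments only (sketch)
logLevel = "DEBUG"

# Trust self-signed/invalid certificates on backends.
# Global setting: applies to all backends, not just nextcloud.
insecureSkipVerify = true
```

Note that this is a global trade-off: every backend's certificate check is skipped, which is usually acceptable when all backends live on the same host.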

Related

How to edit bad gateway page on traefik

I want to edit the Bad Gateway page from traefik to issue a command like
docker restart redis
Does anyone have an idea on how to do this?
A bit of background:
I have a somewhat broken setup of Traefik v2.5 and Authelia on my development server, where I sometimes get a Bad Gateway error when accessing a page. Usually this is fixed by clearing all sessions from redis. I tried to locate the bug, but the error logs aren't helpful, and I don't have the time and skills to make the bug reproducible or find the broken configuration. So instead I always ssh into the machine and reset redis manually.
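Traefik's errors middleware can serve a custom error page, but it cannot run commands. A pragmatic substitute is a small watchdog on the host that probes a page through Traefik and restarts redis when it sees a 502. A minimal sketch, assuming the redis container is literally named "redis" and PROBE_URL points at a page routed through Traefik:

```shell
#!/bin/sh
# watchdog.sh: restart redis when the probe URL returns 502 (Bad Gateway).

# assumption: a lightweight page served through Traefik
PROBE_URL="${PROBE_URL:-https://example.dev/}"

# should_restart STATUS: succeed only when the status code is 502
should_restart() {
  [ "$1" = "502" ]
}

# probe once and restart redis on a Bad Gateway
check_once() {
  status=$(curl -s -o /dev/null -w '%{http_code}' "$PROBE_URL")
  if should_restart "$status"; then
    # assumption: the redis container is named "redis"
    docker restart redis
  fi
}
```

Run check_once from cron or a `while true; do check_once; sleep 60; done` loop; this treats the symptom rather than the underlying session bug.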

Get Visitor IP or a Custom header in Jaeger docker behind docker traefik (v2.x)

We are experimenting with Jaeger as a tracing tool for our traefik routing environment. We also use an encapsulated docker network.
The goal is to accumulate requests on our APIs per department and also do some other monitoring.
We are using traefik 2.8 as a docker service. Also all our services run behind this traefik instance.
We added a basic tracing configuration to our .toml file and started a jaeger instance, also as a docker service. On our websecure entrypoint we added forwardedHeaders.insecure = true.
Jaeger is working fine, but we only get the docker internal host ip of the service, not the visitor ip from the user accessing a client with the browser or app.
I googled around and I am not sure, but it seems that this is a problem due to our setup and can't be fixed, except by using network="host". But unfortunately that's not an option.
But I want to be sure, so I hope someone here has a tip for us to configure docker/jaeger correctly or knows if it is even possible.
A suggestion for a different tracing tool (for example something like tideways, but more Python, WASM and C++ compatible) is also appreciated.
Thanks
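For context, the static-configuration fragments described above might look like this in Traefik v2 (a sketch; the "jaeger" hostname and the ports are assumptions based on the default agent setup). Note that forwardedHeaders only helps when something in front of Traefik actually sets X-Forwarded-For; when Traefik itself is the edge and sits behind docker's bridge NAT, the original client IP is typically rewritten before it ever reaches Traefik, which matches the behaviour described.

```toml
# traefik.toml (static configuration) -- sketch
[entryPoints]
  [entryPoints.websecure]
    address = ":443"
    [entryPoints.websecure.forwardedHeaders]
      insecure = true   # trust X-Forwarded-For from any upstream

[tracing]
  [tracing.jaeger]
    samplingServerURL = "http://jaeger:5778/sampling"  # assumption: service named "jaeger"
    localAgentHostPort = "jaeger:6831"
```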

Cannot access Keycloak account-console in Kubernetes (403)

I have found a strange behavior in Keycloak when deployed in Kubernetes, that I can't wrap my head around.
Use-case:
login as admin:admin (created by default)
click on Manage account
(manage account dialog screenshot)
I have compared how the (same) image (quay.io/keycloak/keycloak:17.0.0) behaves if it runs on Docker or in Kubernetes (K3S).
If I run it from Docker, the account console loads. In other words, I get a success (204) for the request
GET /realms/master/protocol/openid-connect/login-status-iframe.html/init?client_id=account-console
From the same image deployed in Kubernetes, the same request fails with error 403. However, on this same application, I get a success (204) for the request
GET /realms/master/protocol/openid-connect/login-status-iframe.html/init?client_id=security-admin-console
Since I can call security-admin-console, this does not look like an issue with the Kubernetes Ingress gateway nor with anything related to routing.
I then thought about a Keycloak access-control configuration issue, but in both cases I use the default image without any change. I cross-checked to be sure; it appears that the admin user and the account-console client are configured in exactly the same way in both the docker and k8s applications.
I have no more idea about what could be the problem, do you have any suggestion?
Try setting ssl_required = NONE in the realm table of the Keycloak database for your realm (master)
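The ssl_required change above translates to roughly this SQL (a sketch; the realm table and its ssl_required/name columns match recent Keycloak schemas, but back up the database and restart Keycloak afterwards):

```sql
UPDATE realm SET ssl_required = 'NONE' WHERE name = 'master';
```

Be aware this disables the SSL requirement for the whole realm, so it is more of a diagnostic step than a production fix.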
So we found that it was the nginx ingress controller causing a lot of issues. We were able to get it working with nginx via X-Forwarded-Proto etc., but it was a bit complicated and convoluted. Moving to haproxy instead resolved this problem. Also, make sure you are interfacing with the ingress controller over https, or that may cause issues with keycloak.
annotations:
kubernetes.io/ingress.class: haproxy
...

Add certificate in traefik for service discovered from rancher

I have successfully configured traefik 1.5.4 to work and talk with rancher.
I'd like to add a few more services to rancher by configuring the services' labels.
One service uses a different domain (not mine) than the others, with an SSL cert I get from the owner of that domain.
So how do I configure that with rancher labels?
I know how to do this in the traefik.toml, but I'm curious if there's a way to configure it without touching the toml file every time.
Also, I think it's quite elegant if the services are the owners of their own configuration.
Any ideas?
Got it,
The label "traefik.frontend.rule" can take multiple destinations, e.g.
"Host: a.url.cloud,b.url.cloud"
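For completeness, in Traefik 1.x the certificate itself still has to be declared on the entrypoint in traefik.toml; only the routing rule can live in labels. A sketch of the toml side (file paths are assumptions), with SNI selecting the right certificate per domain:

```toml
# traefik.toml -- extra certificate for the foreign domain (Traefik 1.x, sketch)
[entryPoints]
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        certFile = "/etc/traefik/certs/otherdomain.crt"
        keyFile  = "/etc/traefik/certs/otherdomain.key"
```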

nginx inside a docker container doesn't add access-control-allow-origin header as per the conf file

tl;dr - how to add a specific header in nginx response explicitly when running nginx inside a docker container?
I have deployed the ELK stack inside a docker container on RHEL 7.1 using the sebp/elk:latest image. I also want to render my own scatterplots that I have developed, apart from the Kibana graphs. I am rendering those pages through a separate nginx webserver that I install and run in the same docker image. This is because Kibana 4 (in the sebp image) doesn't give you the freedom to choose another webserver like Kibana 3 did, and I can't edit the URLs/pages rendered by Kibana 4, as it uses its own inbuilt non-nginx webserver as far as I can tell. Now, the issue is, when I deploy my scatterplots to the nginx root location and retrieve them from the browser, I get the error below.
XMLHttpRequest cannot load http:///_search?size=500&. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://IP-of-my-server' is therefore not allowed access.
I had faced this issue while running ELK without docker, but this link had helped me: http://enable-cors.org/server_nginx.html
Now it doesn't seem to work. I am pushing the conf from my host to the container while building the docker image; when a container is spun up from the image, I can log in and see that nginx is running and my nginx.conf is being used. But when I analyse the actual response, no such header is added, even though it should be, as I have added it in the nginx config.
Nginx 1.4 is being used. There is no issue of port mapping, and I am not running any nginx on the host, so there is no doubt (as some of you might suspect) about whether those pages are really being rendered by the container's nginx rather than the host's.
Please help if you have faced and resolved this issue. Does the header get added to the response if you run the webserver from inside a container, or is there a bug in docker, or is add_header not supported in my nginx version?
When I open a session of chrome with disabled web security, I get my scatterplots in chrome perfectly.
chrome.exe --user-data-dir="C:/Chrome dev session" --disable-web-security
So the scatterplot code or something else definitely doesn't have any issue. It's only that the header is absent from the response even though I explicitly try to add it through the conf.
Thanks in advance, sorry for the long post.
Just realized the issue was with the elasticsearch response and not the nginx conf. If you look carefully, the header is not present in the response from :9200. There is a module named "http" and you can edit its properties in the elasticsearch.yml file.
The link below helped; I had to allow that header through its settings.
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html
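The elasticsearch.yml change might look roughly like this (a sketch; the origin value is an assumption and should match the address your nginx pages are served from):

```yaml
# elasticsearch.yml -- enable CORS on the http module (sketch)
http.cors.enabled: true
http.cors.allow-origin: "http://IP-of-my-server"   # assumption: your nginx origin
http.cors.allow-methods: "OPTIONS, HEAD, GET, POST"
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length"
```

With CORS answered by elasticsearch itself, no add_header tricks in nginx are needed for the _search requests.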
