I have an application using Spring Security's OpenID implementation. The app server sits behind a proxy: Apache httpd with mod_proxy. If the proxy connects to the app server via HTTP, the application tells the OpenID authenticator to redirect back via HTTP rather than HTTPS, as I would prefer. It seems to determine the protocol dynamically and only sees HTTP. If I configure the proxy to use HTTPS, I run into this problem. So is there a way to operate Spring Security behind a proxy which uses HTTP?
A little extra mod_proxy and Glassfish configuration solved this problem for me:
https://serverfault.com/questions/496888/ssl-issue-with-mod-proxy
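For reference, a minimal sketch of the httpd side (hostnames, ports, and certificate paths are placeholders): terminate TLS at the proxy, forward plain HTTP to the app server, and pass the original scheme along in an X-Forwarded-Proto header so the application can build https:// redirect URLs. The app server (Glassfish in this case) must also be configured to honor the forwarded scheme on its HTTP listener.

```apache
# Requires mod_ssl, mod_proxy, mod_proxy_http, and mod_headers.
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile    /etc/httpd/ssl/example.crt
    SSLCertificateKeyFile /etc/httpd/ssl/example.key

    # Forward decrypted traffic to the app server over plain HTTP.
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/

    # Tell the backend the client actually connected over HTTPS.
    RequestHeader set X-Forwarded-Proto "https"
</VirtualHost>
```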
I have a Spring-based application which uses keycloak-spring-security-adapter to handle the Keycloak-specific work. It is deployed on the same machine as the Keycloak server, and both run behind an Nginx reverse proxy.
The Spring app's keycloak.json configuration contains the correct proxied URL, and the Keycloak server's frontendUrl is set to the same proxied URL. When testing on localhost without the reverse proxy, everything works as expected.
The issue appears when deployed with the reverse proxy in front. The Spring application runs OIDC service discovery during startup, but it does so against the public URL. This fails because, on the backend side, the reverse proxy's hostname has no DNS record.
How do I set up keycloak-spring-security-adapter so that backend requests use the local URL, while logins done through the JSP pages in the browser use the proxied URL?
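One approach that is often suggested (sketched below with placeholder realm, client, and secret values) is to point the adapter's auth-server-url at an address the backend can resolve, such as loopback, so startup discovery succeeds, while relying on the Keycloak server's frontendUrl so the browser-facing authorization endpoint is still advertised with the public proxied URL. Whether the issuer in the discovery document then matches depends on your Keycloak version's hostname provider, so treat this as a starting point rather than a definitive fix:

```json
{
  "realm": "myrealm",
  "resource": "my-spring-app",
  "auth-server-url": "http://127.0.0.1:8180/auth",
  "ssl-required": "external",
  "credentials": { "secret": "change-me" }
}
```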
I can't seem to figure out whether this is common practice or not. I want to create a website (running in a container) and have traffic forwarded to it from a wildcard on my domain; I want to secure it using Nginx Proxy Manager and Let's Encrypt to manage the certificate.
Do I keep the website running on my internal server as plain HTTP on port 80 and redirect traffic to it via Nginx? My current site is just a server-side Blazor web app.
I've seen other people do this, but it makes me wonder whether that is indeed secure, since at some point between Nginx and the internal server the traffic is not encrypted.
I imagine it looks something like this:
Client connects securely to Nginx Proxy Manager (HTTPS)
Nginx Proxy Manager then decrypts and forwards to the Internal Website (HTTP)
Is my understanding correct?
Is this common practice, or is there a better way to achieve what I want?
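Yes, terminating TLS at the reverse proxy is common practice. In plain nginx terms (Nginx Proxy Manager generates something equivalent; all names, IPs, and paths below are placeholders), the two hops you describe look roughly like this. Note that a server-side Blazor app also needs the WebSocket upgrade headers for its SignalR connection:

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    # Let's Encrypt certificate, managed by the proxy.
    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        # Internal Blazor container, plain HTTP on the private network.
        proxy_pass http://10.0.0.5:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;

        # Blazor Server uses a SignalR WebSocket circuit.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

As long as the proxy and the container share a trusted private network (or the same host), the unencrypted hop is generally considered an acceptable trade-off.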
We have Jenkins and a gtw application that accepts HTTP requests and later forwards the data to a Bitbucket server. The flow is like below:
Jenkins -> Gtw (HTTP) -> Bitbucket URL (HTTP and HTTPS).
From Jenkins the requests are sent via HTTPS.
We would like to know whether mitmproxy can be used as a man-in-the-middle that downgrades HTTPS to HTTP,
or whether there is a way to do that in the Jenkins container itself.
You can do that with mitmproxy in reverse proxy mode (see https://docs.mitmproxy.org/stable/concepts-modes/#reverse-proxy). If this is a production setup, I'd recommend using nginx instead, which has better performance characteristics.
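As a sketch (hostnames and ports are placeholders), mitmproxy's non-interactive mitmdump binary can accept TLS on one side and forward plain HTTP on the other:

```shell
# Accept HTTPS on port 8443 and forward the decrypted traffic as plain HTTP
# to the internal gateway. mitmproxy serves its own generated certificate
# unless you supply one explicitly via --certs.
mitmdump --mode reverse:http://gtw.internal:8080 --listen-port 8443
```

Jenkins would then be pointed at https://&lt;proxy-host&gt;:8443 instead of the gateway directly, and would need to trust the certificate mitmproxy presents.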
I created an MVC web application and embedded a WebSockets chat server. I can deploy this app to a secure endpoint, but how can I get the WebSocketHandler to listen on a wss:// endpoint?
If your web app has an HTTPS binding, WSS should be able to connect. Check your IIS configuration: in the site's bindings, enable an HTTPS binding on TCP port 443 with a certificate attached.
Now if you access the web app through HTTPS, you should be able to connect via WSS without problems.
If you access via HTTP, the certificate is self-signed, and you didn't accept it in the browser beforehand, the WSS connection will probably fail. Watch out for that.
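A common client-side guard (a sketch; the helper name is mine, not part of any API) is to derive the socket scheme from the page scheme, so a page served over HTTPS always opens wss:// and never triggers the browser's mixed-content blocking:

```javascript
// Build a WebSocket URL whose scheme matches the page's scheme:
// https: pages get wss://, everything else gets ws://.
function wsUrl(pageProtocol, host, path) {
  const scheme = pageProtocol === "https:" ? "wss:" : "ws:";
  return `${scheme}//${host}${path}`;
}

// In the browser you would use it like this:
//   const socket = new WebSocket(wsUrl(location.protocol, location.host, "/chat"));
```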
For security reasons I run my own website on an unusual port: HTTPS on port 11223 instead of 443.
This website offers login with a Google account, implemented using the Google OAuth API.
At the last step of authentication (the redirect back from Google OAuth to my system), a network timeout occurs.
On the other hand, if my server runs HTTPS on the default port 443 instead of 11223, everything works fine.
I have configured the Google OAuth client settings (redirect URIs, home page URL, JavaScript origins) to use the special port 112233, but without success.
It may be important to know that the server is behind a firewall with NAT: the firewall receives HTTPS connections on port 11223 and forwards them to the internal webserver, which serves HTTPS only on port 11223. But I don't think that is the issue.
What could be the reason that port 443 works but port 11223 doesn't?
I guess Google OAuth does not support webservers running on an unusual port!?
The port number is 16 bits and thus cannot exceed 65535.
Could it be a proxy configuration issue? I recommend configuring your firewall to return 404 on port 11223 and seeing what happens.