Spring Webapp behind Apache Web Server - Secured by CAS - spring-security

I have a webapp that is secured using Spring Security CAS. The CAS server and the webapp sit behind a web server acting as a reverse proxy (a named URL). The webapp uses ServiceAuthenticationDetailsSource to authenticate dynamic service URLs. The problem I have is that service ticket validations are failing because the URL supplied during validation does not match the URL provided when the ticket was created. The setup works without the web server when the systems are connected directly using https://<host>:<port>/.
The issue seems to be that the web server modifies the HttpServletRequest when redirecting to the webapp, wherein it loses the "named URL" information, which gets substituted with the internal host and port. The service ticket is obtained using the named URL via the "?service=" parameter during login.
Is there any possible solution? Can Apache reroute the request without modifying it, especially for applications that are self-identifying, or for security reasons where CAS is trying to record the client IP address?

I have outlined a few options below:
Setup the Reverse Proxy
According to the Javadoc of ServletRequest, HttpServletRequest.getServerName() will return:
the value of the part before ':' in the Host header value, if any, or
the resolved server name, or the server IP address.
This means you can configure your proxy to ensure the Host header is set properly. (Note: some containers, like WebSphere, do not honor the specification.)
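For Apache httpd with mod_proxy, a minimal sketch might look like the following (the host name and backend address are illustrative, not from the question); ProxyPreserveHost On passes the client's original Host header through to the backend instead of the backend's own address:

# Illustrative virtual host; adjust names and ports for your environment.
# (SSLEngine and certificate directives omitted for brevity.)
<VirtualHost *:443>
    ServerName example.com

    # Forward requests to the internal webapp...
    ProxyPass        / http://internal-host:8080/
    ProxyPassReverse / http://internal-host:8080/

    # ...but keep the original Host header (example.com) so the webapp
    # sees the named URL rather than the internal address.
    ProxyPreserveHost On
</VirtualHost>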
Override using the Container Configuration
Many servers have a setup that can override this value in the event you are using a reverse proxy. There is a pretty decent thread on the Spring forums with a bit more information on it that I have summarized below.
If you are using Tomcat, I'd refer to the Reverse Proxy setup page. One method is to configure the HTTP Connector with the proxyName attribute, which overrides the value returned by HttpServletRequest.getServerName(), and the proxyPort attribute, which overrides the value returned by HttpServletRequest.getServerPort(). An example configuration might look like:
server.xml
<Connector scheme="https" secure="true"
           proxyPort="443" proxyName="example.com"
           port="8009" protocol="AJP/1.3"
           redirectPort="8443" maxThreads="750"
           connectionTimeout="20000" />
WebSphere has a few custom properties that do the same thing:
com.ibm.ws.webcontainer.extractHostHeaderPort = true
trusthostheaderport = true
httpsIndicatorHeader = com.ibm.ws.httpsIndicatorHeader
If you are not using either of these containers or need to support multiple domains, you will need to consult your container's documentation.
Custom AuthenticationDetailsSource
Of course Spring Security is pretty flexible, so you can always provide a custom implementation of AuthenticationDetailsSource that returns an instance of ServiceAuthenticationDetails that looks up the service URL in any way you wish.
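A minimal sketch of that approach (the class name and the externalBaseUrl value are illustrative, not from the original post) builds the service URL from a fixed external base instead of whatever host and port the container reports after the proxy rewrote the request:

import javax.servlet.http.HttpServletRequest;
import org.springframework.security.authentication.AuthenticationDetailsSource;
import org.springframework.security.cas.web.authentication.ServiceAuthenticationDetails;

public class ProxyAwareServiceAuthenticationDetailsSource
        implements AuthenticationDetailsSource<HttpServletRequest, ServiceAuthenticationDetails> {

    // Externally visible base URL, e.g. "https://example.com" (illustrative).
    private final String externalBaseUrl;

    public ProxyAwareServiceAuthenticationDetailsSource(String externalBaseUrl) {
        this.externalBaseUrl = externalBaseUrl;
    }

    @Override
    public ServiceAuthenticationDetails buildDetails(HttpServletRequest request) {
        // Rebuild the service URL from the named URL rather than the
        // host/port the reverse proxy substituted into the request.
        // NOTE: a real implementation should also strip the CAS artifact
        // ("ticket") parameter, as the default ServiceAuthenticationDetails does.
        String query = request.getQueryString();
        String serviceUrl = externalBaseUrl + request.getRequestURI()
                + (query != null ? "?" + query : "");
        return new SimpleServiceAuthenticationDetails(serviceUrl);
    }

    // Small serializable holder so the details can live in the HTTP session.
    private static final class SimpleServiceAuthenticationDetails
            implements ServiceAuthenticationDetails {

        private final String serviceUrl;

        private SimpleServiceAuthenticationDetails(String serviceUrl) {
            this.serviceUrl = serviceUrl;
        }

        @Override
        public String getServiceUrl() {
            return serviceUrl;
        }
    }
}

You would then set this as the authenticationDetailsSource on the CAS authentication filter in place of ServiceAuthenticationDetailsSource.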

Related

How to configure or customize REALM Metadata endpoints in Keycloak for SAML2.0

Context:
I have a Keycloak instance inside a Docker container. I understand that there is a reverse proxy doing something like transforming a URL such as "http://example.com" into "http://171.20.2.97:8082" (the actual place where Keycloak is deployed, or "up"). This is just an example; when my clients need to consume an endpoint from one of my microservices, they do not use the IP address, they use example.com.
So in Keycloak, when you want to see the metadata of the realm for SAML 2.0, you can do it by following this link, which is in the realm settings section:
https://example.com/auth/realms/REALM-NAME/protocol/saml/descriptor
As you can see, I am using "example.com", not "171.20.2.97:8082", to access the metadata link.
The problem is that inside the metadata, the endpoints for SingleSignOnService, SingleLogoutService, etc. are all configured as "http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml" (notice it uses the IP address and not example.com). So when clients that want to use SAML send a "Destination" attribute of "http://example.com/auth/realms/REALM-NAME/protocol/saml" inside their SAML request, this causes an invalid request error with reason invalid_destination, because the Destination attribute was expected to be "http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml", as it appears inside the metadata.
So my question is: how can I edit the metadata to change the endpoints from the IP address to example.com, or, if that is not possible, how can I make example.com get translated to 171.20.2.97:8082 inside my Keycloak server? Any other way to solve or figure this out is very welcome.
I feel like a BEAST after finding out how to achieve what I needed after about 3 weeks of searching about Keycloak and SAML (I overcame many obstacles; this was the last one). I finally managed to fix this by using the "Frontend URL" setting in my realm settings. There I can put anything I want so that it replaces "http://171.20.2.97:8082/auth/" (inside the metadata URLs) with whatever I configure there. So, for example, if I set Frontend URL to:
https://example.com/auth/
now all my metadata endpoints will be like so:
https://example.com/auth/realms/REALM-NAME/protocol/saml
instead of:
http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml
Now my client is able to properly log in with SAML 2.0 using Keycloak.
How did I manage to find this out? Well, there is not much info, so this is what gave me the hint: Keycloak behind nginx reverse proxy: SAML Integration invalid_destination
The person asking said that they had configured frontend-url, so I wanted to give that a try, and after checking whether it changed the metadata URLs, surprise, it did =)

How to configure a reverse proxy for multiple IIS sites and a single IP?

I've inherited some internal sites from a previous employee, and my constraints are the following: the sites are written in MVC, I need to change how they are hosted, and I have no direct control over DNS. Security won't allow me to use subdomains within DNS, so I'd like to use a subdomain within IIS or a file path extension, e.g. manage.mgmt.domain.td or mgmt.domain.td/manage.
How can I configure an IIS binding and an inbound proxy rule so that mgmt.domain.td directs to a general menu page, manage1.mgmt.domain.td directs to a separate page, and manage2.mgmt.domain.td to another page, and have them fully functional? I've been able to configure the inbound reverse proxy rule to use an IP such as 10.0.0.1:801; however, I cannot configure it to work using either friendly format listed above.
Can a vdir or appdir work with an MVC project, or is a reverse proxy better? It's IIS 10.
https://computingforgeeks.com/configure-virtual-directory-on-windows-iis-server/
[Edit: adding IIS binding and reverse proxy rule image (current setup)]
ASP.NET applications run in an application pool. When you add a vdir, you cannot choose an application pool, so adding an application is more suitable than a vdir.
I can type in the IP 10.0.0.1/manage and it points to my site; I can also type 10.0.0.1 and it will load the same content (which I don't want).
10.0.0.1/manage is the correct URL to access the MVC application. If 10.0.0.1 also shows the same content, consider whether it is client cache or a URL redirect. The content it should show is the index page of the main site.
However, it is not loading content when I use the DNS-friendly name?
I cannot understand what DNS-friendly name you mean. If you have a public domain name, just bind the domain to the server at your provider, and when you add the site, set its host name.
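For the reverse-proxy side, a hypothetical URL Rewrite rule (this assumes ARR and the URL Rewrite module; the rule name is illustrative, while the host names and the 10.0.0.1:801 backend come from the question) could key off the Host header so each friendly name proxies to its own backend:

<!-- web.config fragment (illustrative) -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="manage1-proxy" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <!-- Only fire for this friendly host name -->
          <add input="{HTTP_HOST}" pattern="^manage1\.mgmt\.domain\.td$" />
        </conditions>
        <!-- Forward to the backend site -->
        <action type="Rewrite" url="http://10.0.0.1:801/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>

A similar rule would be needed per host name (manage2, and so on), plus an IIS binding for each host name on the front-end site; the friendly names still have to resolve to the proxy for any of this to work.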
An easier way to do this is to add a route to your application, for example in Global.asax, to get the incoming server address and direct it to the correct destination. For Application_BeginRequest in MVC, you can refer to this article here.

Swagger proxied by haproxy can't execute requests

I have Swagger working with HAProxy. I use the built-in Swagger in WebSphere Liberty Profile (the apiDiscovery feature):
Browser -swagger.mydomain.com-> haproxy -swagger.intranet-> IBM Liberty server with Swagger
The first Swagger page is generated and shown correctly in the browser. But because the Liberty server gets the requests from HAProxy, not from my browser, and gets them addressed to the intranet name/IP (swagger.intranet), the Swagger code to execute GETs, POSTs, etc. is generated with that intranet host name (swagger.intranet). So when I try any of the methods, they won't work, as they reference this internal host name from a browser outside that zone.
Can I configure HAProxy to send some header so that the generated code uses the original server name (swagger.mydomain.com) from the request? (That is the one to be used in the generated HTML/JavaScript code.)
Thanks.
Liberty trusts the Host: header and uses it to assemble self-referential links.
Where you define the backend, try setting http-request set-header Host swagger.mydomain.com to whatever the client will be using, or remove a similar stanza if you are already setting it to swagger.intranet.
(Sorry, I'm not an HAProxy user. This is based on searching for the HAProxy equivalent of ProxyPreserveHost.)
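A minimal HAProxy backend sketch of that suggestion (the backend and server names, the port, and the TLS options are illustrative) might look like:

backend liberty_swagger
    # Present the external host name to Liberty so it builds
    # self-referential links with swagger.mydomain.com
    http-request set-header Host swagger.mydomain.com
    server liberty1 swagger.intranet:443 ssl verify none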

Keycloak and Docker - Cannot set two types of URLs

I use the standalone version of Keycloak in a Docker-based application.
Since Keycloak 1.9.2, the "auth-server-url-for-backend-requests" attribute has been removed from the Keycloak properties.
I used this field to indicate the internal IP address of the auth server (inside Docker).
The external one (auth-server-url) is used for redirection purposes.
My question is: how do I replace the former auth-server-url-for-backend-requests to solve the problem of having different network addresses inside Docker and outside of it?
According to the following links, it appears you can use the same DNS name for external requests as you would for internal ones. See these:
keycloak issue
http://keycloak.github.io/docs/userguide/keycloak-server/html_single/index.html#d4e4114
You should set the KEYCLOAK_FRONTEND_URL parameter in the Dockerfile or docker-compose.yml (if you use them); otherwise, set this parameter in the Keycloak General settings UI.
It is quite tricky because you shouldn't set the front-end's own URL; rather, you should set the URL that the front-end uses to reach Keycloak. I had the same problem, so you can see some examples in my SO question/answer.
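For example, a docker-compose fragment along these lines (the image tag, URL, and port mapping are illustrative) sets the frontend URL for the legacy WildFly-based image:

# docker-compose.yml fragment (illustrative)
keycloak:
  image: jboss/keycloak
  environment:
    # URL that browsers / front-end apps use to reach Keycloak through the proxy
    KEYCLOAK_FRONTEND_URL: https://example.com/auth
  ports:
    - "8080:8080"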

Pivotal CloudFoundry: Enforcing HTTPS (SSL)

I want to enforce HTTPS for a Spring Boot application to be hosted at Pivotal CloudFoundry, and I think most applications would want this today. The common way of doing it, as far as I know, is using
http.requiresChannel().anyRequest().requiresSecure()
But this is causing a redirect loop. The cause, as I understand by referring to posts like this, is that the load balancer converts https back to http. That means it has to be handled at the load balancer level.
So, is there some option to tell CloudFoundry to enforce HTTPS for an application? If not, shouldn't this be a feature request? And what could be a good way to handle this today?
Update: Did any of you from the Cloud Foundry or Spring Security teams see this post? I think this is an essential feature before one can host an application on CloudFoundry. Googling, I found no easy solution other than telling users to use https instead of http. But even if I tell them so, when an anonymous user tries to access a restricted page, Spring Security redirects them back to the http login page.
Update 2: Of course, we have the x-forwarded-proto header, as many answers suggest, but I don't know how hard it would be to customize Spring Security to use it. Then we have other things like Spring Social integrating with Spring Security, and I just faced an issue there as well. I think either Spring Security and tons of other frameworks will need to come up with solutions that use x-forwarded-proto, or CloudFoundry needs some way to handle it transparently. I think the latter would be far more convenient.
Normally, when you push a WAR file to Cloud Foundry, the Java buildpack will take it and deploy it to Tomcat. This works great because the Java buildpack can configure Tomcat for you and automatically include a RemoteIpValve, which is what takes the x-forwarded-* headers and reconfigures your request object.
If you're using Spring Boot and pushing as a JAR file, you'll have an embedded Tomcat in your application. Because Tomcat is embedded in your app, the Java buildpack cannot configure it for the environment (i.e. it cannot configure the RemoteIpValve). This means you need to configure it yourself. Instructions for doing that with Spring Boot can be found here.
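As a rough sketch, with a Spring Boot 1.x-era application.properties the relevant settings look like the following; the property names vary across Boot versions (newer versions use server.forward-headers-strategy and the server.tomcat.remoteip.* properties instead):

# application.properties (illustrative; property names depend on your Spring Boot version)
# Tell the embedded Tomcat which headers carry the original client details,
# so request.isSecure() and getServerName() reflect the proxied request.
server.tomcat.remote-ip-header=x-forwarded-for
server.tomcat.protocol-header=x-forwarded-proto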
If you're deploying a web application as a JAR file but using a different framework or embedded container, you'll need to look up the docs for your framework / container and see if it has automatic handling of the x-forwarded-* headers. If not, you'll need to handle that manually, as the other answers suggest.
You need to check the x-forwarded-proto header. Here is a method to do this.
public boolean isSecure(HttpServletRequest request) {
    // The router/load balancer sets x-forwarded-proto to the scheme the client used.
    String protocol = request.getHeader("x-forwarded-proto");
    if (protocol == null) {
        return false;
    } else if (protocol.equals("https")) {
        return true;
    } else {
        return false;
    }
}
Additionally, I have created an example servlet that does this as well.
https://hub.jazz.net/git/jsloyer/sslcheck
git clone https://hub.jazz.net/git/jsloyer/sslcheck
The app is running live at http://sslcheck.mybluemix.net and https://sslcheck.mybluemix.net.
Requests forwarded by the load balancer will have an http header called x-forwarded-proto set to https or http. You can use this to affect the behavior of your application with regard to SSL termination.
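For instance, a minimal servlet filter sketch (the class name is illustrative; it assumes the load balancer sets x-forwarded-proto as described) redirects plain-http requests to https and leaves requests without the header, such as local development, untouched:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ForwardedProtoHttpsFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // The load balancer terminates SSL, so only this header tells us
        // which scheme the client actually used.
        if ("http".equalsIgnoreCase(request.getHeader("x-forwarded-proto"))) {
            String query = request.getQueryString();
            String httpsUrl = "https://" + request.getServerName()
                    + request.getRequestURI()
                    + (query != null ? "?" + query : "");
            response.sendRedirect(httpsUrl);
            return;
        }
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() {
    }
}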
