I have a system with two load balancers exposing the application to separate networks.
The application uses the spring-saml extension for authentication with an IdP (a single IdP visible from both networks).
For the first location everything works as expected - the default SP with entityId=exampleSP1.
When I configure a second SP metadata (local, with a different entityId=exampleSP2) and call it using /saml/login/alias/exampleSP2,
the application receives a successful response from the IdP, but during SAMLCredential validation an exception is thrown:
"SAML message intended destination endpoint did not match recipient endpoint"
When using the second SP, the destination endpoint differs from the one configured in contextProviderLB, and the exception occurs.
Is there a way to define a separate contextProviderLB depending on which SP is used (or on the initial URL)?
You're hitting the issue described in https://jira.spring.io/browse/SES-150, which is now fixed in trunk. Please update your Spring SAML. And thank you for leaving your comment in Jira.
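For reference, the load-balancer-aware context provider mentioned in the question is typically declared along these lines (a minimal sketch in Java config; host name, port, and context path are placeholders for your own values, and the same bean can equally be defined in XML):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.saml.context.SAMLContextProviderLB;

@Configuration
public class SamlContextConfig {

    // Context provider that rewrites the locally seen URL to the public
    // load-balancer URL so that destination checks on incoming SAML messages pass.
    @Bean
    public SAMLContextProviderLB contextProvider() {
        SAMLContextProviderLB provider = new SAMLContextProviderLB();
        provider.setScheme("https");
        provider.setServerName("lb1.example.com");       // public host of the first load balancer
        provider.setServerPort(443);
        provider.setIncludeServerPortInRequestURL(false);
        provider.setContextPath("/app");                  // context path exposed behind the LB
        return provider;
    }
}
```

A single provider like this can only describe one public URL, which is why messages arriving through the second load balancer fail the destination check until the fix above is applied.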
Context:
I have Keycloak running inside Docker. As I understand it, there is a reverse proxy that transforms a URL like "http://example.com" into "http://171.20.2.97:8082" (the actual address where Keycloak is deployed). These are just example values: when my clients need to consume an endpoint from one of my microservices, they do not use the IP address, they use example.com.
So in Keycloak, when you want to see the realm's metadata for SAML 2.0, you can do it by following this link, which is in the realm settings section:
https://example.com/auth/realms/REALM-NAME/protocol/saml/descriptor
As you can see, I am using "example.com", not "171.20.2.97:8082", to access the metadata link.
The problem is that inside the metadata, the endpoints for SingleSignOnService, SingleLogoutService, etc. are all configured as "http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml" (notice it uses the IP address and not example.com). Because of this, when clients that want to use SAML send a "Destination" attribute of "http://example.com/auth/realms/REALM-NAME/protocol/saml" in their SAML request, they get an invalid request error with reason invalid_destination, because the Destination attribute was expected to be "http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml", as it appears inside the metadata.
So my question is: how can I edit the metadata to change the endpoint addresses to example.com? Or, if that is not possible, how can I make example.com get translated to 171.20.2.97:8082 inside my Keycloak server? Any other way to solve or work around this is very welcome.
I feel like a BEAST after finding out how to achieve what I needed, after about 3 weeks of searching about Keycloak and SAML (I overcame many obstacles; this was the last one). I finally managed to fix this by using the "Frontend URL" setting in my realm settings. There I can put anything I want, and it will replace "http://171.20.2.97:8082/auth/" (inside the metadata URLs) with whatever I configure there. So, for example, if I set Frontend URL to:
https://example.com/auth/
now all my metadata endpoints will be like so:
https://example.com/auth/realms/REALM-NAME/protocol/saml
instead of:
http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml
Now my client is able to properly log in with SAML2 using Keycloak.
How did I manage to find this out? Well, there is not much info out there, so this is what gave me the hint: Keycloak behind nginx reverse proxy: SAML Integration invalid_destination
The person asking said that he had configured frontend-url, and I wanted to give that a try; after checking whether it changed the metadata URLs - surprise, it did =)
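If you ever need to apply the same setting programmatically instead of through the admin console, here is a sketch with the Keycloak Java admin client. It assumes the realm stores the Frontend URL under the attribute key "frontendUrl" (that is how the admin console appears to persist it) and uses placeholder admin credentials:

```java
import java.util.HashMap;
import java.util.Map;

import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;
import org.keycloak.representations.idm.RealmRepresentation;

public class SetFrontendUrl {
    public static void main(String[] args) {
        // Connect over the internal address the container is actually reachable on.
        Keycloak keycloak = KeycloakBuilder.builder()
                .serverUrl("http://171.20.2.97:8082/auth")
                .realm("master")
                .clientId("admin-cli")
                .username("admin")              // placeholder admin credentials
                .password("admin")
                .build();

        RealmRepresentation realm = keycloak.realm("REALM-NAME").toRepresentation();

        // The Frontend URL realm setting is kept in the realm attributes map
        // (assumed key: "frontendUrl").
        Map<String, String> attributes =
                realm.getAttributes() != null ? realm.getAttributes() : new HashMap<>();
        attributes.put("frontendUrl", "https://example.com/auth");
        realm.setAttributes(attributes);

        keycloak.realm("REALM-NAME").update(realm);
        keycloak.close();
    }
}
```

After updating, re-fetch the descriptor URL from the question to confirm that the SingleSignOnService and SingleLogoutService endpoints now point at example.com.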
I am trying to build a Custom Connector in the Power Platform to connect to the BMC Helix (formerly Remedy) system to create work orders and such. I am using OAuth2 and was given a callback URL, auth URL, token URL, client ID and client secret.
I went to create a connector from scratch. I populated the fields, but I wasn't sure what to put for the 'Refresh URL', so I used the token URL there too.
I am trying to test this connector; a successful test would be getting a JWT back from a POST to the /api/jwt/login endpoint of BMC Helix. It should return a JWT which I can use to make subsequent calls.
Upon testing this, I go to create a connection and a window opens (which I believe should be a prompt for authentication), but instead it contains an 'unauthorized_client' error coming back from the BMC Helix system at the /rsso/oauth2/authorize endpoint. The URL also contains the parameter redirect_uri = https://global.consent.azure-apim.net/redirect.
Is there something on the Helix side I need to configure further? I'm not sure why I am getting this....
It sounds like you need TWO METHODS in your connector: a POST to call the token server, and a GET (or another POST) to call the API using the token received from call 1.
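At the raw HTTP level those two calls look roughly like this (a Java sketch for illustration only; the host name, the form field names for /api/jwt/login, the work-order resource path, and the AR-JWT authorization scheme are assumptions to check against your Helix documentation):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HelixJwtExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Call 1: POST to the login endpoint from the question; the response body is the JWT.
        HttpRequest login = HttpRequest.newBuilder()
                .uri(URI.create("https://helix.example.com/api/jwt/login"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("username=svc-account&password=secret"))
                .build();
        String jwt = client.send(login, HttpResponse.BodyHandlers.ofString()).body();

        // Call 2: use the token on subsequent API calls (here, a hypothetical work-order create).
        HttpRequest createWorkOrder = HttpRequest.newBuilder()
                .uri(URI.create("https://helix.example.com/api/arsys/v1/entry/WOI:WorkOrderInterface_Create"))
                .header("Authorization", "AR-JWT " + jwt)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"values\":{\"Summary\":\"Test work order\"}}"))
                .build();
        HttpResponse<String> response = client.send(createWorkOrder, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```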
One approach I've successfully used in the past is:
Use Postman to get your token server call working with OAUTH
Then use Postman to get your subsequent API calls working with the token appended
Save both requests to a single Postman collection
Export the Postman collection (as a V1 (deprecated) if I recall correctly)
Import this collection into PowerApps Custom Connector (create new/import from Postman Collection)
You'll have to massage it a bit after import, but it will give you a good head start, and you're starting from a known-good place (working Postman calls).
Good luck!
We have implemented our own OAuth provider and are having an issue when the system runs in a load-balanced scenario. When we run with a single server all is well, but when we switch the other one on we get the following situation:
Token ‘A’ generated on server 1
Token ‘A’ not valid on server 2.
I have done some Googling on this and it seems to be a known issue but can’t seem to find a solution.
Anybody got an idea?
Thanks
You will have to make sure that you do one of:
synchronize the state of your Authorization Server across all load-balanced nodes, by using shared storage (e.g. a database or file system) or by replicating state across nodes with some replication mechanism
have your Authorization Server issue tokens that can be inspected by the load balancer to find out to which node it needs to send the validation request
The latter has the downside that it cannot be used in a high-availability scenario.
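As an illustration of the first option: if your provider happens to be built on Spring Security OAuth (an assumption; the same idea applies to any stack), pointing the token store at a database that all nodes share is often enough:

```java
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.provider.token.TokenStore;
import org.springframework.security.oauth2.provider.token.store.JdbcTokenStore;

@Configuration
public class TokenStoreConfig {

    // Every load-balanced node reads and writes the same token tables,
    // so a token issued by server 1 is recognized on server 2.
    @Bean
    public TokenStore tokenStore(DataSource sharedDataSource) {
        return new JdbcTokenStore(sharedDataSource);
    }
}
```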
New to ServiceMix, I'm not sure if it can do what I need:
I have an interface defined by a WSDL.
I have several endpoints that implement that WSDL-defined interface.
I have a service that can only send the message to one endpoint.
Can ServiceMix do the following:
The producer sends to an endpoint in ServiceMix (with the WSDL).
ServiceMix broadcasts the message to the different endpoints.
Thanks for the advice and / or pointers!
The simplest would be to use a Camel Recipient List. This would allow you to have either static or dynamic destinations for your broadcast.
It sounds as if your application doesn't require termination of the SOAP message (de/serialize the payload to/from an object) at the ServiceMix consumer. Therefore it can be kept very simple by using a Camel Jetty proxy:
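Here is a sketch of such a route in the Camel Java DSL (host names and paths are placeholders; the equivalent can be written in the Spring XML DSL):

```java
import org.apache.camel.builder.RouteBuilder;

public class BroadcastRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Expose a single HTTP endpoint in ServiceMix and pass the SOAP payload
        // through untouched to every backend that implements the WSDL.
        from("jetty:http://0.0.0.0:8181/myService?matchOnUriPrefix=true")
            .recipientList(constant(
                "jetty:http://backend1.example.com:8080/myService?bridgeEndpoint=true,"
              + "jetty:http://backend2.example.com:8080/myService?bridgeEndpoint=true"));
    }
}
```

With bridgeEndpoint=true the proxy forwards each request as-is, so no SOAP de/serialization happens inside ServiceMix; to pick destinations at runtime, replace constant(...) with a header or bean expression.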
There are further details in the Camel Jetty documentation on how to configure static (as in the example above) and dynamic routing.
If your application changes and does require termination of the SOAP message at the ServiceMix consumer in the future, then swap the Camel Jetty consumer endpoint for a Camel CXF endpoint and set bridgeEndpoint to false.
As for ServiceMix you will need to enable the Camel Jetty component. To have it enabled by default you need to edit the etc/org.apache.karaf.features.cfg file and add camel-jetty to the featuresBoot property. This is typically the best practice for features required by your application. You can also install the feature at the ServiceMix console with the command "features:install camel-jetty".
The Recipient List capability is part of the Camel core, which is installed by default and, if not, is pulled in as a dependency of any other Camel component.
Best Regards,
Scott ES
I have a Linux/Apache/Rails stack hosting a data service. The data service is basically a front end for multiple data sources, akin to a federated search.
Queries to the service are authenticated via PKI. When handling each request, the PKI must be forwarded to each data source appropriate for the given request - each data source uses the PKI to control data access.
I know how to access the requestor's DN from Rails, but I haven't the first clue how to access the PKI or pass it along in web requests launched by the controller when handling the request. Any suggestions?
Your description makes it a bit hard to follow the organization, but I'll try to give this a shot.
The nature of PKI makes forwarding (proxying) a connection impossible, since the two endpoints set up a secret session key known only to those parties. It seems like you have three parties: a Client, an Intermediate, and an Endpoint. The client can authenticate to the intermediate, so the intermediate knows with certainty who the client is. I think your question is how to get the endpoint to know with certainty who the client is. The method I would choose is to give each intermediate its own certificate and have it authenticate to the endpoint itself (so now the endpoint knows with certainty who the intermediate is), then simply have the intermediate pass the client's DN as some extra field that the endpoint will trust from the intermediate.
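The stack in the question is Rails, but the pattern is language-agnostic; here is a sketch in Java for illustration. The keystore path, passwords, and the X-Forwarded-Client-DN header name are made up; the only requirement is that the endpoint is configured to trust that header when the TLS client certificate belongs to a known intermediate:

```java
import java.io.FileInputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;

public class ForwardClientDn {
    public static void main(String[] args) throws Exception {
        // Load the intermediate's own certificate and key: this, not the original
        // client's certificate, is what the data source (endpoint) authenticates.
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("/etc/service/intermediate.p12")) {
            keyStore.load(in, "changeit".toCharArray());
        }
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, "changeit".toCharArray());

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), null, null);

        HttpClient client = HttpClient.newBuilder().sslContext(sslContext).build();

        // Pass the original client's DN (taken from the request the intermediate
        // received) as an extra header the endpoint has agreed to trust.
        String clientDn = "CN=Jane Analyst,OU=Users,O=Example Corp,C=US"; // hypothetical value
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://datasource.example.com/search?q=term"))
                .header("X-Forwarded-Client-DN", clientDn)
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```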