I'm experimenting with ReactiveSearch and so far have tried the DataSearch and ResultList components. I'm looking over the required ReactiveBase component to review all of its props, and I see this:
<ReactiveBase
  app="appname"
  credentials="abcdef123:abcdef12-ab12-ab12-ab12-abcdef123456"
  headers={{
    secret: 'reactivesearch-is-awesome'
  }}
>
  <Component1 .. />
  <Component2 .. />
</ReactiveBase>
If the app is already secured using Appbase.io, and the credentials give my React app access to my ES cluster hosted there, what exactly could headers be used for? At first I thought a username and password, but you wouldn't do that.
What would be some of the scenarios where I SHOULD/COULD use the headers prop?
The headers are added to each request sent to the url. Normally you won't need them, but in production you might want to add a proxy server layer between your Elasticsearch cluster and the client-side ReactiveSearch code, and this is where headers can be helpful.
You could add authentication to the flow. For example, you could restrict the Elasticsearch calls to authenticated users by sending an access token via the headers prop and then verifying it at the proxy server (example of proxy server).
You could also implement custom logic by adding custom headers and the logic to handle them at the proxy server.
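For illustration, here is a minimal sketch of what that could look like on the client side. The url prop pointing at a proxy and the getAccessToken() helper are assumptions for the example, not something ReactiveSearch provides; the proxy would verify the Authorization header before forwarding the query to Elasticsearch (whether you still pass credentials depends on your setup):
<ReactiveBase
  app="appname"
  url="https://my-search-proxy.example.com"
  headers={{
    // getAccessToken() is a placeholder for however your app stores the user's token
    Authorization: `Bearer ${getAccessToken()}`
  }}
>
  <Component1 .. />
  <Component2 .. />
</ReactiveBase>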
Context:
I have Keycloak running inside a Docker container. I understand that there is a reverse proxy doing something like transforming a URL, for example "http://example.com", into "http://171.20.2.97:8082" (the actual place where Keycloak is deployed or "up"). This is just an example: when my clients need to consume an endpoint from one of my microservices, they do not use the numbers, they use example.com.
So in Keycloak, when you want to see the realm's SAML 2.0 metadata, you can do it by following this link, which is in the realm settings section:
https://example.com/auth/realms/REALM-NAME/protocol/saml/descriptor
As you can see, I am using "example.com", not "171.20.2.97:8082", to access the metadata link.
The problem is that inside the metadata, the endpoints for SingleSignOnService, SingleLogoutService, etc. are all configured as "http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml" (notice it uses the numbers and not example.com).
So when clients that want to use SAML send a "Destination" attribute of "http://example.com/auth/realms/REALM-NAME/protocol/saml" inside their SAML request, they get an invalid request error with reason invalid_destination, because the Destination attribute was expected to be "http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml", as it appears in the metadata.
So my question is: how can I edit the metadata to change the endpoint addresses to example.com? Or, if that is not possible, how can I make example.com get translated to 171.20.2.97:8082 inside my Keycloak server? Any other way to solve or figure this out is also very welcome.
I feel like a BEAST after finding out how to achieve what I needed, after about 3 weeks of searching about Keycloak and SAML (I overcame many obstacles; this was the last one). I finally managed to fix this by using the "Frontend URL" setting in my realm settings. There I can put anything I want so that it replaces "http://171.20.2.97:8082/auth/" (inside the metadata URLs) with whatever I configure. So, for example, if I set Frontend URL to:
https://example.com/auth/
now all my metadata endpoints will be like so:
https://example.com/auth/realms/REALM-NAME/protocol/saml
instead of:
http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml
Now my client is able to properly log in with SAML2 using Keycloak.
How did I manage to find this out? Well, there is not much info out there, so this is what gave me the hint: Keycloak behind nginx reverse proxy: SAML Integration invalid_destination.
The person asking said that they had configured frontend-url, and I wanted to give that a try; after checking whether it changed the metadata URLs, surprise, it did =)
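As a related note for the Docker setup (this is an assumption about your image and version; the legacy jboss/keycloak image documents these variables, while newer Quarkus-based releases use different options): the frontend URL can also be supplied as an environment variable so the default is baked into the container, for example in docker-compose:
  keycloak:
    image: jboss/keycloak
    environment:
      # Default frontend base URL; the per-realm "Frontend URL" setting overrides it
      KEYCLOAK_FRONTEND_URL: "https://example.com/auth/"
      # Honor X-Forwarded-* headers when running behind a reverse proxy
      PROXY_ADDRESS_FORWARDING: "true"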
I am trying to build a Custom Connector in the Power Platform to connect to the BMC Helix (formerly Remedy) system to create work orders and such. I am using OAuth2 and was given a callback URL, auth URL, token URL, client ID and client secret.
I went to create a connector from scratch. I populated the fields, but I wasn't sure what to put for the 'Refresh URL', so I used the token URL there too.
I am trying to test this connector; a successful test would be to get a JWT by doing a POST to the /api/jwt/login endpoint of BMC Helix. It should return a JWT which I can use to make subsequent calls.
Upon testing this, I go to create a connection and a window opens (which I believe should be a prompt for authentication), but instead it contains an 'unauthorized_client' error coming back from the BMC Helix system at the /rsso/oauth2/authorize endpoint. The URL also contains the property redirect_uri = https://global.consent.azure-apim.net/redirect.
Is there something on the Helix side I need to further configure? Not sure why I am getting this....
It sounds like you need TWO METHODS in your connector: a POST to call the token server, and a GET (or another POST) to call the API (using the token received from call 1).
One approach I've successfully used in the past is:
Use Postman to get your token server call working with OAUTH
Then use Postman to get your subsequent API calls working with the token appended
Save both requests to a single Postman collection
Export the Postman collection (as a V1 (deprecated) if I recall correctly)
Import this collection into PowerApps Custom Connector (create new/import from Postman Collection)
You'll have to massage it a bit after import, but it will give you a good head start, and you're starting from a known-good place (working Postman calls)
Good luck!
I have two separate services communicating using AmqpProxyFactoryBean (the "client") and AmqpInvokerServiceExporter (the "server"). Now, I'd like to include some custom headers on every request made through the AMQP proxy and be able to access them on the "server". Is there any easy way I can achieve this?
Since AmqpClientInterceptor uses an AmqpTemplate to send and receive AMQP messages, you can provide any custom MessageConverter for that RabbitTemplate and populate additional headers from your toMessage() implementation.
However, I'm not sure you will be able to access those custom headers on the server side; there we end up just with RemoteInvocation.invoke().
So it seems to me you will ultimately end up with the solution of an additional RPC parameter.
On the other hand, that custom header may be useful for other AMQP routing scenarios, where you can route that RPC message not only to the RPC queue.
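To make the first suggestion concrete, here is a minimal sketch of a delegating MessageConverter that stamps a custom header on every outgoing message (the delegate choice and the header name are assumptions for the example, not something prescribed by the remoting support); you would register it with rabbitTemplate.setMessageConverter(...):
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageProperties;
import org.springframework.amqp.support.converter.MessageConverter;
import org.springframework.amqp.support.converter.SimpleMessageConverter;

public class HeaderAddingMessageConverter implements MessageConverter {

    // Delegate to the default Java-serialization converter used by the remoting support
    private final MessageConverter delegate = new SimpleMessageConverter();

    @Override
    public Message toMessage(Object object, MessageProperties messageProperties) {
        Message message = delegate.toMessage(object, messageProperties);
        // "x-my-custom-header" is an arbitrary example header name
        message.getMessageProperties().setHeader("x-my-custom-header", "some-value");
        return message;
    }

    @Override
    public Object fromMessage(Message message) {
        return delegate.fromMessage(message);
    }
}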
Consider using Spring Integration AMQP gateways instead of remoting over RabbitMQ; that way you have complete control over the headers passed back and forth.
If you don't want to use Spring Integration, you can use the RabbitTemplate sendAndReceive methods on the client and either the receiveAndSend or a listener container on the server.
Again, this gives you full control over the headers.
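For the plain RabbitTemplate route, here is a minimal client-side sketch (the exchange, routing key, and header name are illustrative) that builds the request Message explicitly so arbitrary headers can be set:
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageBuilder;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class RpcClient {

    private final RabbitTemplate rabbitTemplate;

    public RpcClient(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public Message call(byte[] payload) {
        Message request = MessageBuilder.withBody(payload)
                .setHeader("x-tenant-id", "tenant-42") // hypothetical custom header
                .build();
        // Blocks until the reply arrives (or the template's reply timeout expires)
        return rabbitTemplate.sendAndReceive("rpc.exchange", "rpc.requests", request);
    }
}
On the server side, a listener (or a receiveAndReply callback) can read the headers back from message.getMessageProperties().getHeaders().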
Parse.com's REST API docs (https://www.parse.com/docs/rest) say: Authentication is done via HTTP headers. The X-Parse-Application-Id header identifies which application you are accessing, and the X-Parse-REST-API-Key header authenticates the endpoint. In the examples with curl that follow, the headers are stored in shell variables APPLICATION_ID and REST_API_KEY, so to follow along in the terminal, export these variables.
I am building a Sencha Touch app as a native app on iOS and Android using Phonegap, and I was wondering whether it is secure to expose these keys to the client while making the REST calls?
Also, can someone explain to me how does security work in this scenario? Help is much appreciated! Thanks!
Without PhoneGap, in a ProGuard post-processed Android APK, the string values of the two headers you mention are exposed client-side. That is not a big issue: TLS covers the HTTP header values during the network leg, and, far more important for app security, you have full ACL at the DB row level (Parse/Mongo), contingent on the permissions of the current user. So, with no ability to log on, an outsider has nothing more than an obfuscated string value of an app-level access token.
One odd thing is that with Parse the lease time on the client-side token value for the API key is permanent, rather than, say, a month.
Parse REST security is robust and well executed.
I can't speak to what the PhoneGap framework offers in the obfuscate/minify/uglify area, but you should check that.
I have a webapp that is secured using Spring Security CAS. The CAS server and the webapp sit behind a web server acting as a reverse proxy (a named URL). The webapp uses ServiceAuthenticationDetailsSource to authenticate dynamic service URLs. The problem I have is that service ticket validations are failing because the URL supplied during validation does not match the URL provided when the ticket was created. The setup works without the web server, when the systems are connected directly using https://<host>:<port>/.
The issue seems to be that the web server modifies the HttpServletRequest when redirecting to the webapp, so it loses the "named URL" information, which is substituted with the internal <host> and <port>. The service ticket is obtained using the named URL via the ?service= parameter during login.
Any possible solution? Can Apache reroute the request without modifying it, especially for applications that are self-identifying, or where for security reasons CAS is trying to record the client IP address?
I have outlined a few options below:
Setup the Reverse Proxy
According to the Javadoc of ServletRequest: the HttpServletRequest.getServerName() will be:
the value of the part before ':' in the Host header value, if any, or
the resolved server name, or the server IP address.
This means you can configure your proxy to ensure the Host header is set properly (note that some containers, like WebSphere, do not honor the specification, though).
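For example, a sketch of an Apache httpd reverse-proxy configuration that preserves the original Host header (the host names are illustrative, and mod_proxy/mod_headers are assumed to be enabled):
<VirtualHost *:443>
    ServerName webapp.example.com
    # Pass the original Host header through instead of the backend's host:port
    ProxyPreserveHost On
    ProxyPass        / http://internal-backend:8080/
    ProxyPassReverse / http://internal-backend:8080/
    # Tell the backend the original scheme was https (requires mod_headers)
    RequestHeader set X-Forwarded-Proto "https"
</VirtualHost>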
Override using the Container Configuration
Many servers have a setup that can override this value in the event you are using a reverse proxy. There is a pretty decent thread on the Spring forums with a bit more information on it that I have summarized below.
If you are using Tomcat, I'd refer to the Reverse Proxy setup page. One method of configuration is to give the Connector a proxyName attribute to override the value returned by HttpServletRequest.getServerName(), and a proxyPort attribute to override the value returned by HttpServletRequest.getServerPort(). An example configuration might look like:
server.xml
<Connector scheme="https" secure="true"
           proxyPort="443" proxyName="example.com"
           port="8009" protocol="AJP/1.3"
           redirectPort="8443" maxThreads="750"
           connectionTimeout="20000" />
WebSphere has a few custom properties that do the same thing.
com.ibm.ws.webcontainer.extractHostHeaderPort = true
trusthostheaderport = true
httpsIndicatorHeader = com.ibm.ws.httpsIndicatorHeader
If you are not using either of these containers, or you need to support multiple domains, you will need to consult your container's documentation.
Custom AuthenticationDetailsSource
Of course, Spring Security is pretty flexible, so you can always provide a custom implementation of AuthenticationDetailsSource that returns an instance of ServiceAuthenticationDetails that looks up the service URL in any way you wish.
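A minimal sketch of that last option, assuming the externally visible host name is known up front (the class name, the base URL, and the javax.servlet import are assumptions; adapt to your Spring Security version):
import javax.servlet.http.HttpServletRequest;
import org.springframework.security.authentication.AuthenticationDetailsSource;
import org.springframework.security.cas.authentication.ServiceAuthenticationDetails;

public class ProxyAwareServiceAuthenticationDetailsSource
        implements AuthenticationDetailsSource<HttpServletRequest, ServiceAuthenticationDetails> {

    // Hypothetical external (proxied) base URL that CAS should see as the service
    private static final String EXTERNAL_BASE_URL = "https://named-url.example.com";

    @Override
    public ServiceAuthenticationDetails buildDetails(HttpServletRequest request) {
        String query = request.getQueryString();
        // In a real implementation you would strip the artifact (ticket) parameter,
        // as the default ServiceAuthenticationDetailsSource does.
        String serviceUrl = EXTERNAL_BASE_URL + request.getRequestURI()
                + (query != null ? "?" + query : "");
        return new FixedServiceAuthenticationDetails(serviceUrl);
    }

    // Simple serializable holder satisfying the ServiceAuthenticationDetails contract
    private static final class FixedServiceAuthenticationDetails
            implements ServiceAuthenticationDetails {

        private final String serviceUrl;

        FixedServiceAuthenticationDetails(String serviceUrl) {
            this.serviceUrl = serviceUrl;
        }

        @Override
        public String getServiceUrl() {
            return this.serviceUrl;
        }
    }
}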