The problem is this: we have released an application with a Vue front end and a Keycloak authorization server. Keycloak runs in a Docker container. The application is located at app.xxxx.xx and the authorization server at auth.xxxx.xx. Nginx is used as a proxy server. Everything starts, but after authorization the application itself does not load and an error occurs:
Access to XMLHttpRequest at 'https://auth.xxxx.xx/auth/realms/Atlas/protocol/openid-connect/token' (redirected from 'http://auth.xxxx.xx/auth/realms/Atlas/protocol/openid-connect/token') from origin 'http://app.gxxxx.xx.' has been blocked by CORS policy: The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include'. The credentials mode of requests initiated by the XMLHttpRequest is controlled by the withCredentials attribute.
Keycloak config
Nginx config
Try using "+" in "Web origins" in your client configuration?
See Web origins.
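For reference: when a request carries credentials, the server must echo the exact origin rather than the wildcard *, and the token request must not bounce through an http-to-https redirect, which is exactly what the error above shows. A minimal nginx sketch, assuming a standard reverse proxy in front of Keycloak; the server names, upstream address, and certificate details are placeholders, not the asker's actual config:

    # Sketch only: serve the auth host over https and forward the original
    # scheme so Keycloak generates https URLs and matching CORS headers.
    server {
        listen 80;
        server_name auth.xxxx.xx;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name auth.xxxx.xx;
        # ssl_certificate / ssl_certificate_key omitted

        location / {
            proxy_pass http://keycloak:8080;  # assumed container address
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }

The frontend should call the https token endpoint directly so the XHR is never redirected, and the Keycloak client's "Web origins" should list the exact https origin (or "+", which permits the origins of the valid redirect URIs). Depending on the Keycloak version, you may also need proxy address forwarding enabled (e.g. PROXY_ADDRESS_FORWARDING=true on the older WildFly-based Docker image).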
Related
A client gave me a Ruby application to deploy on AWS EC2. I have configured it, but after the login page all assets are redirected to https.
In the HTML the src is 'http://{url}', but in the Network tab all requests go to https and therefore fail to load.
I have set force_ssl = false in the configuration files as well.
The error comes as: HTTP parse error, malformed request (): #<Puma::HttpParserError: Invalid HTTP format, parsing fails.>
Is there any other place where I need to update settings? Previously the site was configured with SSL, and now I am trying without SSL.
The login page works fine, by the way, but after that every call goes to https.
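For context, here is where that setting normally lives; a minimal sketch assuming a standard Rails layout (the path is the Rails default). Note that force_ssl also sends an HSTS header, which browsers cache, so a browser that visited the site while SSL was enabled may keep rewriting http to https on its own until the cached policy expires or is cleared:

    # config/environments/production.rb (sketch)
    Rails.application.configure do
      # Disables the ActionDispatch::SSL middleware: no https redirects,
      # no Strict-Transport-Security header, no secure-only cookies.
      config.force_ssl = false
    end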
I have a client that can only make requests without authentication information.
I would like to use HAProxy or a similar proxy solution to add OAuth authentication to these client requests.
I have already succeeded in adding a Bearer token to the client requests. See below for the haproxy.cfg with some placeholders.
frontend front
    mode http
    bind *:8080
    default_backend servers
    http-request add-header Authorization "Bearer {{ .Env.ACCESS_TOKEN }}"

backend servers
    mode http
    server server1 myserver.com:443 ssl
The problem is that the access tokens have a TTL of 24 hours. So I need to refresh them or get a new token periodically.
Does HAProxy support this already?
I can write some script to get a new access token periodically, update the config and restart HAProxy. Is this a good approach when running HAProxy in docker? Are there better solutions?
You could try creating/testing your script in Lua, which is supported in the latest versions; check How Lua runs in HAProxy.
An example of this but using Nginx + Lua, can be found in this project: https://github.com/jirutka/ngx-oauth
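Along those lines, here is a minimal Lua sketch, assuming HAProxy 2.5+ (for the built-in httpclient) and an OAuth client-credentials token endpoint; the URL, client ID, and secret are placeholders:

    -- token_refresh.lua (a sketch, not production code)
    local token = ""

    -- Background task: fetch a fresh access token before the 24h TTL expires.
    core.register_task(function()
        while true do
            local httpclient = core.httpclient()
            local res = httpclient:post{
                url = "https://auth.example.com/oauth/token",  -- placeholder
                body = "grant_type=client_credentials&client_id=ID&client_secret=SECRET",
                headers = { ["content-type"] = { "application/x-www-form-urlencoded" } }
            }
            if res and res.status == 200 then
                -- Naive JSON extraction; a real script should use a JSON library.
                token = res.body:match('"access_token"%s*:%s*"([^"]+)"') or token
            end
            core.msleep(23 * 60 * 60 * 1000)  -- refresh an hour before expiry
        end
    end)

    -- Per-request action: expose the current token as a transaction variable.
    core.register_action("set_bearer", { "http-req" }, function(txn)
        txn:set_var("txn.bearer", token)
    end)

And in haproxy.cfg, something like:

    global
        lua-load /etc/haproxy/token_refresh.lua

    frontend front
        mode http
        bind *:8080
        http-request lua.set_bearer
        http-request set-header Authorization "Bearer %[var(txn.bearer)]"
        default_backend servers

The script-plus-reload approach from the question also works in Docker (recent official HAProxy images reload on SIGHUP, i.e. docker kill -s HUP <container>), but a Lua task avoids the reload churn entirely.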
We are using Oracle Access Manager to provide SSO for the HP Service Request Catalog application.
The Service Request Catalog application requires the user ID in the form of a REMOTE_USER header; this REMOTE_USER is used for authentication.
So we are passing the REMOTE_USER header with the user ID value from Oracle Access Manager.
Even after passing the REMOTE_USER header, authentication is not successful. In the application logs we found the error "SSO: Authentication failed, reason - REMOTE_USER header can't be found in HTTP request".
The Service Request Catalog application uses Spring Security version 3.1.0.
Kindly let me know whether we can change the Service Request Catalog application to accept headers other than REMOTE_USER for authentication.
Regards,
Gurivi
Confirm that the server.xml file on Tomcat is set to get REMOTE_USER from the HTTP header. To do this, follow these steps:
Locate # Define an AJP 1.3 Connector on port 8009 settings.
Verify that the property tomcatAuthentication is set to false.
Reference
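For illustration, the connector entry would look roughly like this; a sketch based on Tomcat's stock server.xml, with the usual default attributes:

    <!-- server.xml: with tomcatAuthentication="false", Tomcat trusts the
         user principal forwarded by the front-end web server over AJP -->
    <Connector protocol="AJP/1.3"
               port="8009"
               redirectPort="8443"
               tomcatAuthentication="false" />

As for accepting a header other than REMOTE_USER: if the application's Spring Security configuration is editable, the pre-authentication RequestHeaderAuthenticationFilter can be pointed at a different header via its principalRequestHeader property; whether HP supports changing that configuration is a question for their support.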
I have a backend server (powered by Rails) whose APIs are used by an HTML5 frontend that runs on a simple Node development server.
Both are on the same host: my machine.
When I log in from the frontend to the backend, Rails sends me the session cookie. I can see it in the response headers; the problem is that browsers do not save it.
The policies are right: if I serve the same frontend directly from the Rails app, cookies are set correctly.
The only difference I can see is that when the frontend runs on the Node server, it runs on port 8080 while Rails is on port 3000. I know that cookies are not supposed to be port specific, so I am missing what is happening here.
Any thoughts? Solutions?
(I need to keep the setup this way: frontend served from Node and backend on Rails, on different ports.)
You're correct that cookies are port agnostic, and that the browser will send the same cookies to myapp.local:3000 as to myapp.local:8080, except not through XMLHttpRequest (XHR, a.k.a. AJAX) when making a cross-origin (CORS) request.
Solution: The request can be told to include cookies and auth headers by setting withCredentials to true on any XMLHttpRequest object. See: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/withCredentials
Or if using the Fetch API, set the option credentials: 'include'. See: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch
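Putting both together, a short sketch; the URL is a placeholder for your Rails endpoint:

    // XMLHttpRequest: opt in to sending and storing cross-origin cookies
    const xhr = new XMLHttpRequest();
    xhr.open('POST', 'http://localhost:3000/login');  // placeholder URL
    xhr.withCredentials = true;
    xhr.send();

    // Fetch API equivalent
    fetch('http://localhost:3000/login', {
      method: 'POST',
      credentials: 'include'
    });

The server must cooperate too: the response needs Access-Control-Allow-Credentials: true and a specific, non-wildcard Access-Control-Allow-Origin.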
Alternative: since you tagged webpack-dev-server in your question, you might be interested in proxying requests to your Rails API through the webpack-dev-server to avoid any CORS issues in the first place. This is done in your webpack.config:
proxy: {
  '/some/path': {
    target: 'https://other-server.example.com',
    secure: false
  }
}
See: https://webpack.js.org/configuration/dev-server/#devserverproxy
Dropbox requires the callback URL to be over HTTPS (when not using localhost).
Using Mule 3.6.0 with the latest Dropbox connector, the callback defaults to http and thus only works with localhost. For production I need to use https for the OAuth dance.
What is the correct way to specify a https callback URL?
I've tried:
<https:connector name="connector.http.mule.default">
    <https:tls-key-store path="${ssl.certfile}" keyPassword="${ssl.keyPass}" storePassword="${ssl.storePass}"/>
</https:connector>

<dropbox:config name="Dropbox" appKey="${dropbox.appKey}" appSecret="${dropbox.appSecret}" doc:name="Dropbox">
    <dropbox:oauth-callback-config domain="production.mydomain.com" path="callback" />
</dropbox:config>
But it errors:
Endpoint scheme must be compatible with the connector scheme. Connector is: "https", endpoint is "http://production.mydomain.com:8052/callback"
Here's what I ended up with that solved the problem:
<https:connector name="connector.http.mule.default" doc:name="HTTP-HTTPS">
    <https:tls-key-store path="${ssl.certfile}" keyPassword="${ssl.keyPass}" storePassword="${ssl.storePass}"/>
</https:connector>

<dropbox:config name="Dropbox" appKey="${dropbox.appKey}" appSecret="${dropbox.appSecret}" doc:name="Dropbox">
    <dropbox:oauth-callback-config domain="myserver.domain.com" path="callback" connector-ref="connector.http.mule.default" localPort="8052" remotePort="8052"/>
</dropbox:config>
This works great for localhost, but not if you need the callback to go to something other than localhost (e.g. myserver.domain.com).
Reviewing mule.log you can see that the connector binds to localhost (127.0.0.1) despite the config pointing to:
domain="myserver.domain.com"
Log Entry:
INFO ... Attempting to register service with name: Mule.Ops:type=Endpoint,service="DynamicFlow-https://localhost:8052/callback",connector=connector.http.mule.default,name="endpoint.https.localhost.8052.callback"
INFO ... Registered Endpoint Service with name: Mule.Ops:type=Endpoint,service="DynamicFlow-https://localhost:8052/callback",connector=connector.http.mule.default,name="endpoint.https.localhost.8052.callback"
INFO ... Registered Connector Service with name Mule.Ops:type=Connector,name="connector.http.mule.default.1"
The workaround is to force Mule to listen on 0.0.0.0 for connectors that define localhost as the endpoint.
In wrapper.conf set (replacing x with the next unused property index):
wrapper.java.additional.x=-Dmule.tcp.bindlocalhosttoalllocalinterfaces=TRUE