Highly available OAuth2 reverse proxy?

Is an OAuth2 reverse proxy server expected to be highly available (HA), or a single
point of entry to a set of backend servers?
I have a load balancer balancing between two OAuth2 reverse proxies, each proxying upstream to a Grafana server.
The issue is that the Grafana homepage doesn't load until the page has been refreshed a few times.
On checking the logs, I have a few observations:
Firstly, on hitting the URL, both OAuth2 proxies show requests (I am guessing this is because multiple requests are fired to load the Grafana homepage).
One proxy's logs show 404 responses until a couple of refreshes, after which both show 200 and the Grafana homepage loads successfully.
Until then, the Grafana page does not load; it just shows the solid Grafana background with no errors.
Am I doing something wrong with the architecture here? How is it possible to have HA for OAuth2 proxies?
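The question does not say which proxy software is in use; assuming something like oauth2-proxy, a common cause of this behaviour is that the two proxy instances do not share cookie or session state, so a session established through one instance is rejected by the other. A rough, hypothetical sketch of the settings that would typically have to be identical on both instances (all hostnames and values are placeholders):

# oauth2-proxy.cfg -- hypothetical sketch; these values must match on both instances
provider = "oidc"
client_id = "grafana-proxy"
client_secret = "<secret>"
cookie_secret = "<same secret on both nodes>"
redirect_url = "https://lb.example.com/oauth2/callback"   # the load balancer URL, not a node URL
# optional: a shared session store so either instance can validate any session
session_store_type = "redis"
redis_connection_url = "redis://redis.internal:6379"
upstreams = [ "http://grafana.internal:3000" ]

Alternatively, sticky sessions on the load balancer avoid the shared-state requirement, at the cost of some of the HA benefit.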

Related

Spring Security Azure AD redirect URL issue

When I run the application on localhost, it works fine. But when I run the same application behind a load balancer, it gives the following error:
AADSTS50011: The reply URL specified in the request does not match the reply URLs configured for the application: '<clien-id>'.
I have registered the application in Azure AD with the load balancer URL, but when I send my request, the redirect URL is still localhost, as shown below.
https://login.microsoftonline.com/common/oauth2/authorize?response_type=code&client_id=XXX&...redirect_uri=localhost:8080/login/oauth2/code/azure&nonce=...
I want my application to insert the load balancer URL as the value of redirect_uri (instead of localhost).
I tried the solutions suggested in the following posts, but was still not successful:
Redirect URL for Spring OAuth2 app on Azure with Active Directory: Invalid Redirect URI Parameter
Spring Boot using Azure OAuth2 - reply URL does not match error
Thanks in advance.
When you use a load balancer/proxy, you need to add some extra configuration to make it possible to resolve the redirect URL correctly.
A load balancer usually adds forwarding headers such as X-Forwarded-Proto and X-Forwarded-Host (or the standardized RFC 7239 Forwarded header). In that case, the redirect URL should be computed correctly after applying the following two settings (the example is for the embedded Tomcat scenario).
server.forward-headers-strategy=NATIVE
"If the proxy adds the commonly used X-Forwarded-For and
X-Forwarded-Proto headers, setting server.forward-headers-strategy to
NATIVE is enough to support those."
server.tomcat.redirect-context-root=false
If you are using Tomcat and terminating SSL at the proxy,
server.tomcat.redirect-context-root should be set to false. This
allows the X-Forwarded-Proto header to be honored before any redirects
are performed.
The above configuration works if you use a placeholder for the base URL in your client configuration in Spring Security, for example {baseUrl}/login/oauth2/code/{registrationId}. In this way, the {baseUrl} placeholder is dynamically resolved by Spring Security differently depending on whether it's behind a load balancer or not (https://your-lb-url.com vs http://localhost:8080).
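For example, a minimal application.properties sketch combining the two settings above with the placeholder-based redirect URI. The registration id "azure" comes from the URL in the question; the client id and secret are placeholders, and if you use the Azure Spring Boot starter rather than plain Spring Security the property names differ:

# honour the X-Forwarded-* headers added by the load balancer (embedded Tomcat)
server.forward-headers-strategy=NATIVE
server.tomcat.redirect-context-root=false

# let Spring Security build the redirect URI from the incoming (forwarded) request
spring.security.oauth2.client.registration.azure.client-id=<client-id>
spring.security.oauth2.client.registration.azure.client-secret=<client-secret>
spring.security.oauth2.client.registration.azure.redirect-uri={baseUrl}/login/oauth2/code/{registrationId}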
More info in the official documentation:
Spring Boot - Running Behind a Front-end Proxy Server
Spring Security - Proxy Server Configuration

How to specify GraphQL server location in Relay?

I have successfully set everything up but unfortunately my GraphQL endpoint is not at the same location as the website that serves the client side.
I know this because the browser's error console says:
http://localhost:3000/graphql Failed to load resource: the server responded with a status of 404 (Not Found)
three times and then gives up.
The page on which I am using Relay is indeed at http://localhost:3000/, but my GraphQL endpoint is at http://localhost:5000/graphql. It looks like Relay takes the current URL and automatically appends /graphql to it. How can I instruct Relay to fetch data from another location?
Ok, I found it. (https://facebook.github.io/relay/docs/guides-network-layer.html)
Relay.injectNetworkLayer(
  new Relay.DefaultNetworkLayer('http://example.com/graphql')
);
And in case you are running this on localhost: it is still subject to CORS because it is on a different port. In my case I am using an Express server for the GraphQL endpoint, so I used the cors middleware to whitelist my other page, as sketched below.
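A minimal sketch of that whitelist on the Express side (the ports are the ones from the question; everything else is illustrative):

// GraphQL server on port 5000, allowing requests from the Relay app on port 3000
const express = require('express');
const cors = require('cors');

const app = express();

// only allow the origin that serves the Relay client
app.use(cors({ origin: 'http://localhost:3000' }));

// ...mount your /graphql endpoint here (e.g. with express-graphql)...

app.listen(5000);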

Nginx Auth Proxy

I have multiple services, each with its own web server listening on a different port, e.g.:
http://127.0.0.1:5000 (service A)
https://127.0.0.1:3000 (service B)
I need a way to restrict access to them without tweaking each of them individually. So I also have an OAuth server hosted on port 2333. I have configured the OAuth server so that it can redirect you to a certain URL if you successfully authenticate through it. So, for example, if I access this URL:
https://127.0.0.1:2333/oauth/authorise?service=A&redirect_uri=http://127.0.0.1:5000
It will ask for authentication (or look for a cookie) and redirect me to the desired URL. This works fine if I access that URL manually, but I need it automated (every time you try to access the initial URL, you get redirected to the OAuth server).
I need the following scenario:
Insert URL http://127.0.0.1:5000 in browser
Get redirected to https://127.0.0.1:2333/oauth/authorise?service=A&redirect_uri=http://127.0.0.1:5000
The OAuth server takes care of the rest
For this, I was thinking of using nginx to redirect, but I don't know how to configure it. Any ideas?
How are those services hosted? You have a couple of options:
Do a redirect to https://127.0.0.1:2333 on the / path of service A (in the source code).
Do the redirect in the server configuration (which is lower level and should be faster); a rough sketch follows below.
The first option gives you more control, and you can do other things easily (like checking whether the user is logged in). The second option is faster, but some things are harder to implement, since you will be modifying the server configuration.
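A rough nginx sketch of the second option, using the URLs from the question (untested; the cookie name "auth_session" is hypothetical and depends on what your OAuth server actually sets):

server {
    listen 80;

    location / {
        # no session cookie yet: bounce the browser to the OAuth server,
        # which redirects back to service A after authentication
        if ($cookie_auth_session = "") {
            return 302 https://127.0.0.1:2333/oauth/authorise?service=A&redirect_uri=http://127.0.0.1:5000;
        }

        # already authenticated: pass the request through to service A
        proxy_pass http://127.0.0.1:5000;
    }
}

nginx's auth_request module is the more robust way to gate requests on an external auth service, but the plain redirect above matches the flow described in the question.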

Error in proxy server when caching an HTML page

Problem:
UserA and UserB are on a network behind a caching proxy server.
UserA opens the page "www.myapp.com/initPage.htm".
If UserB then opens the same page, he will see the page with UserA's information.
To the proxy server it is the same page, so it returns the information it has cached.
More Info:
Each user has a different JSESSIONID, which is stored in the Set-Cookie header of the response.
The URL is the same for both users, but the information depends on the JSESSIONID.
The proxy server does not cache the JSON calls, only the HTML pages.
I tried to solve the problem with this solution, but it did not work.
Architecture:
My application is implemented with Spring Security 3.1 and Struts2.
It runs on an Apache2 server, which is connected to Tomcat 7 through the mod_jk module and configured with a "workers.properties" file.
How can I tell the proxy server to never cache the HTML page?
Best regards and thanks.
Finally, this solution worked, but only after adding the filter in the first position in web.xml.
Best regards and thanks.
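For reference, a hypothetical sketch of the kind of filter involved (not necessarily the exact one from the linked solution): a servlet filter that marks every response as non-cacheable, with its filter-mapping declared before the Spring Security and Struts mappings in web.xml so it runs first:

// NoCacheFilter.java -- hypothetical sketch; map it first in web.xml
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

public class NoCacheFilter implements Filter {

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse res = (HttpServletResponse) response;
        // tell the proxy (and the browser) never to store this response
        res.setHeader("Cache-Control", "no-cache, no-store, must-revalidate, private");
        res.setHeader("Pragma", "no-cache");
        res.setDateHeader("Expires", 0);
        chain.doFilter(request, response);
    }

    public void init(FilterConfig filterConfig) {}

    public void destroy() {}
}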

Rails' page caching vs. HTTP reverse proxy caches

I've been catching up with the Scaling Rails screencasts. In episode 11 which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid etc.), they recommend only considering using a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc. but that's not relevant to this question).
What I can't quite understand is how using an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here.
This is my understanding of how both techniques work (maybe I'm wrong):
With page caching the Rails process is hit initially and then generates a static HTML file that is served directly by the Web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired then Rails is hit again and the static file is regenerated with the updated content ready for the next request
With an HTTP reverse proxy cache the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh then Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale then Rails serves the updated content to the proxy which caches it and then serves it to the browser
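For illustration, that second flow is an ordinary HTTP conditional GET, along these lines (hypothetical path and ETag value):

GET /articles/42 HTTP/1.1
Host: example.com
If-None-Match: "686897696a7c876b7e"

HTTP/1.1 304 Not Modified
ETag: "686897696a7c876b7e"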
If my understanding is correct, then doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine whether the content is stale, which suggests better performance than reverse proxy caching. Why might you use both techniques in conjunction?
You are right.
The only reason to consider it is if your Apache sets Expires headers. In that configuration, the proxy can take some of the load off Apache.
Having said this, Apache serving static files vs. a proxy cache is pretty much an irrelevancy in the Rails world. They are both astronomically fast.
The benefits you would get would be for your non-page-cacheable content.
I prefer using proxy caching over page caching (à la Heroku), but that's just me, and a digression.
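"Setting Expires headers" here means something like the following mod_expires snippet (illustrative lifetimes only), which lets the proxy serve cached pages for the given period without revalidating against Apache:

# httpd.conf / vhost -- illustrative mod_expires configuration
ExpiresActive On
ExpiresByType text/html "access plus 5 minutes"
ExpiresDefault "access plus 1 hour"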
A good proxy cache implementation (e.g., Squid, Traffic Server) is massively more scalable than Apache when using the prefork MPM. If you're using the worker MPM, Apache is OK, but a proxy will still be much more scalable at high loads (tens of thousands of requests / second).
Varnish, for example, has a feature whereby simultaneous requests for the same URL (when it is not yet in the cache) are queued and only the single, first request actually hits the back end. That can prevent some nasty dog-pile cases which are nearly impossible to work around in a traditional page caching scenario.
Using a reverse proxy in a setup with only one app server seems a bit overkill IMO.
In a configuration with more than one app server, a reverse proxy (e.g. Varnish) is the most effective way to do page caching.
Think of a setup with 2 app servers:
User 'Bob' (redirected to node 'A') posts a new message; the page gets expired and recreated on node 'A'.
User 'Cindy' (redirected to node 'B') requests the page where Bob's new message should appear, but she can't see it, because the page on node 'B' wasn't expired and recreated.
This concurrency problem could be solved with a reverse proxy.
