I have a general question on delegation with OAuth2. I watched a great tutorial by Dave Syer on microservices security. As far as I understand, he suggests that individual microservices will be resource servers, which is totally fine.
I also read the token relay section from Spring Cloud Security, which, to my understanding, makes your life easier when you plan to forward tokens (but not much more).
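For reference, the relay pattern I mean looks roughly like this (a sketch along the lines of the Spring Cloud Security docs; the bean wiring is my own):

    // Sketch: an OAuth2RestTemplate built from the request-scoped
    // OAuth2ClientContext forwards the caller's access token downstream.
    // (Bean name and resource details are illustrative.)
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.oauth2.client.OAuth2ClientContext;
    import org.springframework.security.oauth2.client.OAuth2RestTemplate;
    import org.springframework.security.oauth2.client.resource.OAuth2ProtectedResourceDetails;

    @Configuration
    public class RelayConfig {

        @Bean
        public OAuth2RestTemplate relayRestTemplate(OAuth2ProtectedResourceDetails resource,
                                                    OAuth2ClientContext context) {
            // The context carries the incoming token, so outgoing calls made
            // with this template act on behalf of the current user.
            return new OAuth2RestTemplate(resource, context);
        }
    }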
The token relay really is acting on behalf of the user (or an upstream service). Is there any limitation on which resource server is allowed to perform which operations on behalf of the user? Because to me, the fact that the user is allowed to perform an operation is not the same as any resource server being allowed to perform that operation on the user's behalf.
I remember this from Kerberos, where delegation was a big deal: individual services were granted permissions via SPNs that specified where they could delegate to (it was always horrible to set up, but I understood it as necessary).
Is this considered an implementation detail in OAuth2, or am I missing some obvious concept here?
I'm new to OAuth2 and cloud-functions/serverless.
So I was wondering whether it makes sense to create cloud-functions to handle OAuth2 requests.
My Idea:
User sends an auth request to an API gateway (to prevent cloud-function abuse, as I understand it; or how else should that be prevented? Cloudflare?)
Gateway forwards the request to a cloud function
Cloud function stores the user's authentication in a DB
User is now authenticated.
The authenticated user can now request actual data, like a profile, through other cloud functions (sketched below).
Response with the data goes to the user.
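For example, I imagine one of the data functions looking roughly like this (just a sketch using the Java runtime; the token check is a placeholder):

    // Hypothetical Cloud Function: rejects requests without a valid bearer
    // token, otherwise returns the profile data.
    import com.google.cloud.functions.HttpFunction;
    import com.google.cloud.functions.HttpRequest;
    import com.google.cloud.functions.HttpResponse;

    public class ProfileFunction implements HttpFunction {

        @Override
        public void service(HttpRequest request, HttpResponse response) throws Exception {
            String auth = request.getFirstHeader("Authorization").orElse("");
            if (!auth.startsWith("Bearer ") || !isTokenValid(auth.substring(7))) {
                response.setStatusCode(401);
                return;
            }
            response.appendHeader("Content-Type", "application/json");
            response.getWriter().write("{\"profile\":{\"name\":\"example\"}}");
        }

        // Placeholder: the real function would check the token against the DB
        // or validate a signed token.
        private boolean isTokenValid(String token) {
            return !token.isEmpty();
        }
    }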
Is this a correct understanding of how OAuth works? If so, does this make sense, or would a regular server be cheaper for handling OAuth?
Yes, what you described should work. Note that you will need to secure your Cloud Functions: besides the OAuth layer, do not forget to configure the function-to-function authorization layer (as I understand it, you will use multiple Cloud Functions). This process can be a pain, as you will need to configure it for each function. Here you can find more details about it.
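For example, on Google Cloud the calling function typically fetches an identity token from the metadata server and sends it as a bearer token; roughly like this (the target URL is a placeholder):

    // Sketch of function-to-function auth on Google Cloud: fetch an ID token
    // for the target function from the metadata server, then call the target
    // with it as a bearer credential.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class FunctionToFunction {

        private static final String TARGET =
                "https://REGION-PROJECT.cloudfunctions.net/other-function"; // placeholder
        private static final String METADATA =
                "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=" + TARGET;

        public static String callOtherFunction() throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // 1. Ask the metadata server for an ID token scoped to the target.
            HttpRequest tokenRequest = HttpRequest.newBuilder()
                    .uri(URI.create(METADATA))
                    .header("Metadata-Flavor", "Google")
                    .build();
            String idToken = client.send(tokenRequest, HttpResponse.BodyHandlers.ofString()).body();

            // 2. Call the target function with the token.
            HttpRequest call = HttpRequest.newBuilder()
                    .uri(URI.create(TARGET))
                    .header("Authorization", "Bearer " + idToken)
                    .build();
            return client.send(call, HttpResponse.BodyHandlers.ofString()).body();
        }
    }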
Although what you have described should work, I personally would not implement it that way; I would go for an ambassador architecture with a service running on, let's say, Cloud Run that also includes the security layer. I would not choose your architecture plan for a few reasons:
1) I think it will be more complicated to configure, mainly because of the per-function authorization setup mentioned above.
2) Even though it is possible and people do it, I would not use Cloud Functions for querying databases in general, because a query may take some time and your Cloud Function may time out under specific circumstances (for example, a lot of concurrent connections to your DB at that moment could cause high latency).
3) Maintenance and debugging may be a little more difficult in a "chain" system like this.
4) I think that in case of really high traffic, the Cloud Functions-based architecture may be more expensive. You can check this using the Pricing Calculator.
In conclusion, I think it will work, but I would not do it like this.
I'm building a small microservice-based webapp using JHipster with JWT authorization. The architecture is simple: one gateway and two services with repositories. The problem I've been stuck on for the last few hours is the communication between the two backend services.
At first, I tried to find a token on the services themselves, but couldn't find one. If I just missed it in all the docs (quite overwhelming when beginning with the full stack :P), I would be happy to revert my changes and use the predefined token.
My second approach was for each service to authenticate itself with the gateway at @PostConstruct and keep the token in memory to use on each API call. It works without a problem, but I find it hard to believe that this functionality is not already built into JHipster.
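Roughly, my workaround looks like this (endpoint and credentials simplified; /api/authenticate is JHipster's JWT login endpoint as I understand it):

    // Sketch of the @PostConstruct workaround: log in against the gateway once
    // at startup and keep the JWT in memory for later API calls.
    import java.util.HashMap;
    import java.util.Map;

    import javax.annotation.PostConstruct;

    import org.springframework.http.ResponseEntity;
    import org.springframework.stereotype.Component;
    import org.springframework.web.client.RestTemplate;

    @Component
    public class GatewayTokenHolder {

        private final RestTemplate rest = new RestTemplate();
        private volatile String jwt;

        @PostConstruct
        void authenticate() {
            Map<String, String> login = new HashMap<>();
            login.put("username", "internal-service"); // placeholder credentials
            login.put("password", "changeit");
            ResponseEntity<Map> response =
                    rest.postForEntity("http://gateway/api/authenticate", login, Map.class);
            // Note: a real version must refresh before the token expires.
            jwt = (String) response.getBody().get("id_token");
        }

        public String bearerHeader() {
            return "Bearer " + jwt;
        }
    }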
So my question is: is my approach usual? If neither approach is right and there are best practices for this, I'm also interested in them.
It depends on the use case.
For user requests, a common approach is for the calling service to forward the token it received to the other service, without going through the gateway, using @AuthorizedFeignClient.
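For example, a JHipster microservice can declare a client like this (the service name and endpoint are made up; the @AuthorizedFeignClient annotation is generated into your own app's client package by JHipster):

    // Sketch: @AuthorizedFeignClient forwards the current request's token on
    // calls to the "otherservice" microservice.
    import java.util.List;

    import org.springframework.web.bind.annotation.GetMapping;

    @AuthorizedFeignClient(name = "otherservice")
    public interface OtherServiceClient {

        @GetMapping("/api/items")
        List<String> getItems();
    }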
For background tasks like scheduled jobs, your approach can be applied, or you could issue long-lived tokens, as long as they have limited permissions through roles. This way you don't have to go through the gateway.
Keycloak's offline tokens approach could also inspire you.
We are building our applications in a microservices-based architecture. As is typical with microservices, we now see a lot of cross-service interactions happening between services.
In order to safeguard the endpoints, we plan to implement JWT-based authentication for such secure exchanges.
There are 2 approaches we see to achieve this:
Embed a JWT engine in each application to generate the token (consumer side) and evaluate it (provider side). With an initial exchange of keys, the token exchange should work smoothly for any future communication (see the sketch after this list).
Have an external (to the application) JWT engine that sits in between all microservice communications for the distributed application and takes care of the whole token life cycle, including its encryption/decryption and validation.
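For context, the consumer side of option #1 would look roughly like this with one of the jwt.io libraries (jjwt 0.9.x here; the subject, expiry, and key handling are placeholders):

    // Sketch of option #1's consumer side: issue a short-lived, asymmetrically
    // signed token identifying the calling service.
    import io.jsonwebtoken.Jwts;
    import io.jsonwebtoken.SignatureAlgorithm;

    import java.security.PrivateKey;
    import java.util.Date;

    public class TokenIssuer {

        private final PrivateKey privateKey;

        public TokenIssuer(PrivateKey privateKey) {
            this.privateKey = privateKey;
        }

        public String issue(String serviceName) {
            return Jwts.builder()
                    .setSubject(serviceName)                        // calling service identity
                    .setIssuedAt(new Date())
                    .setExpiration(new Date(System.currentTimeMillis() + 5 * 60 * 1000)) // short-lived
                    .signWith(SignatureAlgorithm.RS256, privateKey) // asymmetric signature
                    .compact();
        }
    }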
There are a lot of libraries for option #1, as listed on https://jwt.io, but considering the overhead that token generation and management add to a microservice, we prefer to go with the 2nd option and have a centralised gateway.
After quite some research and looking at various API gateways, we have not yet come across a lightweight solution/tool that serves our need and gives us a centralised engine for one application comprised of many microservices.
Does anyone know of such a tool/solution?
If you have any other inputs on this approach, please let me know.
I also prefer option 2, but why are you looking for a framework?
The central application should only be responsible for managing the private key and issuing the tokens. Pulling in a framework to solve one service could be excessive.
You could also think about implementing a validation service, but since the applications are yours, I suggest using an asymmetric key and verifying the token locally instead of making remote validation requests to the central application. You can provide a simple library to your microservices that downloads the key and performs the validation. Embed any of the libraries from JWT.io or build it from scratch; validating a JWT is really simple.
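For example, with jjwt (0.9.x API), local validation is only a few lines; a sketch, with the download and caching of the public key left out:

    // Sketch: verify a JWT locally with the issuer's public key.
    import io.jsonwebtoken.Claims;
    import io.jsonwebtoken.JwtException;
    import io.jsonwebtoken.Jwts;

    import java.security.PublicKey;

    public class TokenVerifier {

        private final PublicKey issuerPublicKey; // downloaded once from the central app

        public TokenVerifier(PublicKey issuerPublicKey) {
            this.issuerPublicKey = issuerPublicKey;
        }

        /** Returns the claims if the signature and expiry check out, else throws JwtException. */
        public Claims verify(String token) throws JwtException {
            return Jwts.parser()
                    .setSigningKey(issuerPublicKey) // asymmetric: only the public key is needed here
                    .parseClaimsJws(token)
                    .getBody();
        }
    }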
If you needed to reject a token before its expiration time, for example using a blacklist, then a central service would be required. But I do not recommend this scheme, because it breaks JWT statelessness.
Both scenarios could be implemented in Spring Cloud Zuul.
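For instance, a minimal Zuul gateway that relays the incoming token downstream looks roughly like this (a sketch; routes are omitted and import paths vary across Spring Cloud/Boot versions):

    // Sketch: @EnableZuulProxy plus @EnableOAuth2Sso makes the gateway relay
    // the authenticated user's token to the services it proxies.
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.boot.autoconfigure.security.oauth2.client.EnableOAuth2Sso;
    import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

    @SpringBootApplication
    @EnableZuulProxy
    @EnableOAuth2Sso
    public class GatewayApplication {

        public static void main(String[] args) {
            SpringApplication.run(GatewayApplication.class, args);
        }
    }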
For more info:
http://cloud.spring.io/spring-cloud-static/Brixton.SR7/#_router_and_filter_zuul
http://cloud.spring.io/spring-cloud-static/Brixton.SR7/#_configuring_authentication_downstream_of_a_zuul_proxy
Is PassportJS using Facebook Authentication enough for an iOS backend with Node JS?
I have the toobusy package as well, to decline requests when things get too busy (I'm guessing it would be good against DDoSes).
I'm thinking of using nginx as a reverse proxy to my Node.JS server as well.
What are some more security measures that can scale? Some advice and tips? Anything security related that I should be concerned about that PassportJS's authenticated session can't handle?
It’s a bit hard to cram all security-related best practices into one post, but for what it’s worth, here’s my take on the issue.
Providing authentication and securing it are two separate things. PassportJS will be able to handle everything related to authentication, but preventing it from being fooled or overwhelmed is an entirely different matter.
One (big) reason for putting PassportJS behind a reverse proxy (RP) is that you’ll be able to provide a first line of defense for anything related to HTTP: header/body lengths/data, allowed methods, duplicate/unwanted headers, etc.
Nginx/Apache/HAProxy all provide excellent facilities to handle these cases and on the up-side, you get a nice separation of concerns as well: let the reverse proxy handle security and let PassportJS handle authentication. Architecture-wise, it will also make more sense because you’ll be able to hide the number and infrastructure of PassportJS nodes. Basically, you want to make it appear as there is only one entry point for your clients. Scaling out will also be easier with this architecture. As always, make sure that your RP(s) keep as little state as possible, preferably none, in order to scale linearly.
In order to configure your RP properly, you need to really understand how PassportJS’ protocols work (in case you want to provide more authentication methods than just Facebook’s). Knowing this, you can set up your RP(s) to:
Reject any disallowed HTTP request method (TRACE, OPTIONS, PUT, DELETE, etc.).
Reject requests/headers/payload larger than a known size.
Load-balance your PassportJS nodes.
One thing to be on the lookout for in terms of load-balancing is sticky sessions. Some authenticators store all their state in an encrypted cookie; others use a simple session handle, which can only be understood by the node that created the session. So unless you have session sharing enabled for the latter type (if you need PassportJS resilience), you need to configure your RP to handle sticky sessions. This should be the maximum amount of state the RPs handle. Configured correctly, this may even keep working if you need to restart an RP.
As you diligently pointed out, toobusy (or an equivalent) should be in place to handle throttling. In my experience, HAProxy is a bit easier to work with than the other RPs with regards to throttling, but toobusy should work fine too, especially if you are already familiar with it.
Another thing that may or may not be in your control is network partitioning. Obviously, the RPs need to be accessible, but they should act as relays for your PassportJS nodes. Best practice, if possible, is to put your authentication nodes on a separate network/DMZ from your backend servers, so that they cannot be directly reached other than through the RP. If compromised, they shouldn’t be able to be used as stepping stones to the backend/internal network.
As per Passport documentation:
"support authentication using a username and password, Facebook, Twitter, and more."
It is the middleware that provides the ability to integrate multiple types of authentication methods with NodeJS.
You should consider the purpose of the application: does it only support Facebook authentication, or also a custom register/login process? If it provides the second option, then it is better not to rely on the auth token of any social networking site like Facebook, Twitter, or any other.
The better option is to create your own token, such as a JWT, and bind it to the user across the multiple platforms. This will help you extend the scope of your project to integrate other social networking sites.
Here is the link to integrate JWT in NodeJS.
https://scotch.io/tutorials/authenticate-a-node-js-api-with-json-web-tokens
Similarly, there are many other blogs and tutorials available for integrating JWT with NodeJS.
I'm starting to create a new system using .NET MVC: a relatively large-scale business management platform. There's some indication that we'll open the platform to the public once it is released and passes the market test.
We will be using ExtJs for the front end, which leads us to return most data in JSON format. This makes me wonder whether I should learn OAuth right now and try to embed the OAuth concept right from the beginning.
Basically, the platform we want to create will initially be fully implemented internally, with a widget system; our boss wants to learn from Twitter and build just a core database, spreading all the different features into other modules that can be integrated into the platform. To secure it, in the beginning I proposed an intranet implementation, which is safer without much authentication required; however, they think it will be a once-and-for-all effort if we can get a good implementation like OAuth into the platform from the start. (We are a team of 6, and none of us knows much about OAuth, in fact!)
I don't know much about OAuth, so if it's worth implementing at the beginning of our system, I'll have to take a look and cast my vote for OAuth in our meeting next week. This may affect how we implement the whole web service, so may I ask anyone who has built a large-scale web service/application before to share some thoughts and advice?
Thanks.
OAuth 1 is nice if you want to use plain HTTP connections. If you can simply enforce HTTPS connections for all users, you might want to use OAuth 2, which is hardly more than a shared token between the client and server that's sent with each request, plus a pre-defined way to get permission from the user via a web interface.
If you have to accept plain HTTP as well, OAuth 1 is really nice. It protects against replay attacks, packet injection or modification, uses a shared secret instead of shared token, etc. It is, however, a bit harder to implement than OAuth 2.
OAuth 2 is mostly about how to exchange username/password combinations for an access token, while OAuth 1 is mostly about how to make semi-secure requests to a server over an unencrypted connection. If you don't need any of that, don't use OAuth. In many cases, Basic HTTP Authentication via HTTPS will do just fine.
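For comparison, Basic authentication is just one header per request; for instance (shown with Java's built-in HTTP client, but any stack works the same; the URL and credentials are placeholders):

    // Minimal Basic-auth call over HTTPS: credentials are base64-encoded
    // into a single Authorization header.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class BasicAuthExample {

        public static void main(String[] args) throws Exception {
            String credentials = Base64.getEncoder()
                    .encodeToString("apiuser:secret".getBytes());
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/data"))
                    .header("Authorization", "Basic " + credentials)
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        }
    }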
OAuth is a standard for authentication and authorization. You can read about it in many places and learn; generally, the standard lets a client register with the authorization server, and then whenever this client attempts to access a protected resource, it is directed to the auth server to get a token (first it gets a code, then it exchanges it for a token). But this is only the general picture; there are tons of details and options here...
Basically, one needs a good reason to use OAuth. If a simpler authentication mechanism is good enough for you, go for it.