Securing an HTTP API - oauth-2.0

I need to secure a public-facing HTTP API where I cannot touch the code on the API server.
The HTTP API has multiple end-users that will consume it using non-interactive clients (typically backend services). Just to be clear, the client owns the resources that it will access, and as such must provide a user, since the authorisation logic needs to be tied to an end-user.
I'm toying with the idea of using OAuth2 and the Resource Owner Password Credentials Grant,
and then using the access token provided to get a JWT, which the client can present to an HTTP proxy that parses the request before passing it to the HTTP API server.
Here is the flow as I envision it:
+----------+                                      +---------------+
|          |>--(A)---- Resource Owner ----------->|               |
|          |           Password Credentials       | Authorization |
|  Client  |                                      |     Server    |
|          |<--(B)---- Access Token -------------<|               |
|          |           (w/ Refresh Token)         |---------------|
|          |                                      |               |
|          |>--(C)---- Request JWT -------------->|  JWT Service  |
|          |           (w/ Access Token)          |               |
|          |                                      |               |
|          |<--(D)---- JWT ----------------------<|               |
|          |                                      |               |
+----------+                                      +---------------+
      |
      |                                           +---------------+
      |                                           |               |
      |                                           |     HTTP      |
      +--(E)---- HTTP Request w/ JWT ------------>|     Proxy     |
                                                  |               |
                                                  |      (F)      |
                                                  |               |
                                                  +---------------+
                                                          |
                                                         (G)
                                                          |
                                                          v
                                                  +---------------+
                                                  |               |
                                                  |     HTTP      |
                                                  |      API      |
                                                  |               |
                                                  +---------------+
(A), (B) Get an access token using the Password Grant flow.
(C), (D) Use the access token to get a JWT.
(E) Attach the JWT to the HTTP request and send it to the HTTP Proxy.
(F) Check that the JWT is valid.
(G) Pass the request to the HTTP API server.
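In code, the client side of steps (A) through (E) might look something like the sketch below, using Python's requests. Every URL, credential, and especially the JWT-exchange endpoint is hypothetical here, since the JWT service is not a standard OAuth2 component:

# A sketch of steps (A)-(E); all endpoints and credentials are placeholders.
import requests

AUTH_SERVER = "https://auth.example.com"

# (A)/(B): Resource Owner Password Credentials grant
token_resp = requests.post(
    f"{AUTH_SERVER}/oauth/token",
    data={
        "grant_type": "password",
        "username": "alice",
        "password": "s3cret",
        "client_id": "backend-service",
    },
).json()
access_token = token_resp["access_token"]

# (C)/(D): exchange the access token for a JWT at the (hypothetical) JWT service
jwt = requests.post(
    f"{AUTH_SERVER}/jwt",
    headers={"Authorization": f"Bearer {access_token}"},
).json()["jwt"]

# (E): call the API through the proxy with the JWT attached
resp = requests.get(
    "https://api.example.com/resource",
    headers={"Authorization": f"Bearer {jwt}"},
)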
Has anyone else solved a similar use case and would care to shed some light or have a discussion?

OAuth2 has a number of advantages: it has a clear flow and multiple grant types to cater to different needs.
Another advantage is that there are libraries that deal with the complexities of OAuth2, such as IdentityServer: https://identityserver.github.io/Documentation/
Whether it is overkill for your project or not, only you can answer that question. A lot of people who claim OAuth2 is complicated haven't really spent enough time trying to understand it.
What I advise is not to rely on any kind of self-baked security model, as that is what causes the downfall of a system. OAuth2 libraries have been battle-tested by lots of users and companies.
A lot of companies which provide APIs do so via OAuth2.
So, bottom line: if you do want to use it, do your research, understand it, and then implement it.
As for your actual question: yes, I have built similar systems with lots of users, using various grants, and everything worked quite well. There's nothing to be scared about as long as you spend enough time knowing what you're getting yourself into.

I'm getting downvoted for this answer, so I'd better explain myself.
I'm not one to suggest "just write your own security libs", but I do make an exception with OAuth + API clients (and especially OAuth2).
Why not OAuth2?
- Extra hops and extra system components compared to a traditional authentication scheme.
- Chances are that whatever you do, someone using some other programming language might not have a client library compatible with whatever you're using; people make money making OAuth interoperable and simple.
- Think about this: nobody makes money making, say, basic authentication "simple, compatible with 1000s of providers and just working" (to quote oauth.io). That's because basic authentication just works on every "provider", while OAuth2 is a bit of a shitty, complex, non-interoperable framework. That's why we call basic authentication part of a "protocol", and we call OAuth(1) a protocol, but we call OAuth2 a framework.
- Think of the implications of maintaining a non-interactive client:
  - You have a single bearer token across the entire cluster, so you will need a distributed key-value store or a DB table to hold it.
  - You will need to capture the specific errors that mean the bearer has expired, in which case you will want to seamlessly request the bearer again and retry the request (without losing requests). The problem is that on a busy site this can start to happen in parallel on 100s of threads, so you need a distributed locking mechanism to do it right; a Redis mutex is my poison of choice when someone greets me with an OAuth API (see the sketch after this list).
  - That, plus good luck testing that piece of complex distributed race-condition logic; and when you break your back to do it (hi webmock), you still get random non-deterministic failures from time to time because the gods of concurrency met some combination of conditions that VCR/webmock didn't handle nicely.
All this, or just SHA512 a secret together with a nonce and the HTTP body.
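For illustration, here is a minimal sketch of that refresh-under-lock dance using redis-py and requests; the key names, lock timeout, expiry margin, and token endpoint are all assumptions:

# A sketch of refreshing a shared bearer token under a distributed lock.
import redis
import requests

r = redis.Redis()

def get_bearer() -> str:
    token = r.get("api:bearer")
    if token:
        return token.decode()
    # Only one worker refreshes; the others block on the lock, then re-read.
    with r.lock("api:bearer:refresh", timeout=10):
        token = r.get("api:bearer")  # re-check after acquiring the lock
        if token:
            return token.decode()
        resp = requests.post(
            "https://provider.example.com/oauth/token",  # hypothetical endpoint
            data={"grant_type": "client_credentials",
                  "client_id": "my-id", "client_secret": "my-secret"},
        )
        token = resp.json()["access_token"]
        r.set("api:bearer", token, ex=3300)  # expire a bit before the token does
        return token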
According to the lead author of the OAuth2 spec (my emphasis):
All the hard fought compromises on the mailing list, in meetings, in
special design committees, and in back channels resulted in a
specification that fails to deliver its two main goals — security and
interoperability. In fact, one of the compromises was to rename it
from a protocol to a framework, and another to add a disclaimer that
warns that the specification is unlikely to produce interoperable
implementations.
When compared with OAuth 1.0, the 2.0 specification is more complex,
less interoperable, less useful, more incomplete, and most
importantly, less secure.
To be clear, OAuth 2.0 at the hand of a developer with deep
understanding of web security will likely result in a secure
implementation. However, at the hands of most developers — as has been
the experience from the past two years — 2.0 is likely to produce
insecure implementations.
...
In the real world, Facebook is still running on draft 12 from a year
and a half ago, with absolutely no reason to update their
implementation. After all, an updated 2.0 client written to work with
Facebook’s implementation is unlikely to be useful with any other
provider and vice-versa. OAuth 2.0 offers little to no code
re-usability.
What 2.0 offers is a blueprint for an authorization protocol. As
defined, it is largely useless and must be profiled into a working
solution — and that is the enterprise way. The WS-* way. 2.0 provides
a whole new frontier to sell consulting services and integration
solutions.
What to do instead?
OAuth is typically overkill unless you're creating a bigger ecosystem.
A simpler solution is DIY: define a custom authorization header as:
authentication_id api_client_id nonce digest
e.g.
FooApp-SHA512-fixed 4iuz43i43uz chc42n8chn823 fshi4z73h438f4h34h348h3f4834h7384
where:
authentication_id is a fixed string that describes what kind of authentication is being used,
api_client_id is a public piece of information identifying the API client (I assume the API has more than one client, or that it will have more than one at some point); the API client ID is there to allow you to match the API client with the API client's secret,
nonce is just a random string,
secret is a random string known only to you and the client; the client should treat it as a password (i.e. not commit it to version control),
digest is a SHA512 hex/base64 digest of api_client_id + nonce + secret (you can also concatenate the HTTP body; I'd include it unless the body is big, such as with file uploads).
If the client passes authentication, simply forward the request to the backend API service and return its response to the client; otherwise render an error.
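A minimal sketch of this scheme in Python, following the header layout above (the helper names and the hex-digest choice are mine, not part of any library):

# Client side: build the 'authentication_id api_client_id nonce digest' header.
import hashlib
import secrets

def build_auth_header(api_client_id: str, secret: str, body: bytes = b"") -> str:
    nonce = secrets.token_hex(16)  # just a random string
    digest = hashlib.sha512(
        api_client_id.encode() + nonce.encode() + secret.encode() + body
    ).hexdigest()
    return f"FooApp-SHA512-fixed {api_client_id} {nonce} {digest}"

# Server side: recompute the digest and compare in constant time.
def verify_auth_header(header: str, lookup_secret, body: bytes = b"") -> bool:
    try:
        _scheme, api_client_id, nonce, digest = header.split(" ")
    except ValueError:
        return False
    secret = lookup_secret(api_client_id)  # fetch the client's secret by ID
    if secret is None:
        return False
    expected = hashlib.sha512(
        api_client_id.encode() + nonce.encode() + secret.encode() + body
    ).hexdigest()
    return secrets.compare_digest(expected, digest)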

Is it possible to generate a Twilio "OneCode" TOTP token programmatically?

Goal
I have a backend service that talks to AWS, and an automated tool that acquires AWS creds. The cred-getter has MFA enabled (not my choice), but I don't want to type in or copy a code. Instead, I want to write a bit of code that can programmatically generate or fetch a TOTP soft token without texting or calling anyone. So today our workflow is like this:
call cred getter from cli => open authy app for totp code => paste into cli
but I want it to look like this:
call my custom cli => it makes a totp code and passes it to cred getter for me
Question
Is there a way to curl Authy or Twilio to get one of these soft tokens programmatically?
Existing Docs
There's sort of a circular maze of documentation that appears relevant to this question, but I can't break the circle.
-----> Twilio has a page describing TOTP:
| | https://www.twilio.com/authy/features/totp
| |
| | It links to a page describing OTP API access:
| | https://www.twilio.com/authy/api#softtoken
| |
| | That explains you can "build your own SDK-supported mobile authentication application.":
| | https://www.twilio.com/docs/authy/api/one-time-passwords#other-authenticator-apps
^ v
| |
| | Which links to the quick start page:
| | https://www.twilio.com/docs/authy/twilioauth-sdk/quickstart
| |
<----- Which has a link about TOTP, which takes you back to the beginning
I see that the native mobile SDKs can generate a TOTP token:
https://www.twilio.com/docs/authy/twilioauth-sdk/quick-reference#time-based-one-time-passwords-totp
but I want to generate a token on a laptop (or a cloud function, or just someplace). The Authy Desktop client does it, so I know there must be a way; I just don't know what has been publicly exposed.
This question is relevant: how to get Google or Authy OTP by API, but the only answer there still depends on Twilio calls and texts, which would be prohibitively expensive.
Twilio developer evangelist here.
From what you've said, your credential getter provides you a QR code with which you then configure Authy to generate OTP codes.
The QR code encodes a URL in the following format:
otpauth://TYPE/LABEL?PARAMETERS
For example:
otpauth://totp/Example:alice@google.com?secret=JBSWY3DPEHPK3PXP&issuer=Example
The type is likely "totp", like the example; the label will refer to the app you're authenticating with. The important part is the secret in the parameters: a base32-encoded key that you can use to generate TOTP codes using the TOTP algorithm. There is likely an implementation of the algorithm in your preferred language.
Find the secret and you can generate your codes.
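For example, here is a minimal RFC 6238 implementation using only the Python standard library, assuming the common defaults (SHA-1, 30-second period, 6 digits):

# Generate a TOTP code from a base32 secret like the one in the URL above.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int(time.time()) // period          # moving factor: current time step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # secret taken from the example URL above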

WAMP Protocol - Scaling out routers on multiple machines

I have been thinking a lot about scaling out the WAMP router onto multiple machines, especially because relying on one system without a backup seems a bit risky in a live scenario.
To not have a complicated master node election and all that comes with it, I had the following idea and I would love to get some feedback.
Routers would need to share some information, i.e.:
- Authentication
  - session IDs
- RPC
  - check if a client has already registered for a URI when using simple registration
  - forward calls for pattern-based registration
  - meta API information
- PUB/SUB
  - provide event messages to all clients in the network
  - meta API information
There must be a connection between all routers. To keep the number of connections and the traffic low, the idea is to have a ring infrastructure:
+----------------+ +-----------------+ +--------------+
| | | | | |
+--> First Router +----> Second Router +----> Nth Router +--+
| | | | | | | |
| +----------------+ +-----------------+ +--------------+ |
| |
+-----------------------------------------------------------------+
Every router would have its own client that is connected to the next router. By storing the incoming message ID for a CALL or PUBLISH that came from a client, the router could identify a round trip (see the sketch below).
Incoming calls coming from another router would be handled as-is.
YIELDs would have the correct INVOCATION.Request|id and could be forwarded to the correct router.
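As a rough sketch of that round-trip detection, assuming each message carries an ID that is stable as it travels the ring (all names here are illustrative):

# Each router remembers message IDs it has already forwarded, so a message
# that travels the whole ring is dropped instead of looping forever.
seen_ids: set[int] = set()

def handle_locally(payload: dict) -> None:
    print("delivering to local sessions:", payload)

def forward_to_next_router(msg_id: int, payload: dict) -> None:
    print("forwarding", msg_id, "to the next router in the ring")

def on_ring_message(msg_id: int, payload: dict) -> None:
    if msg_id in seen_ids:
        return  # message completed a full loop; stop it here
    seen_ids.add(msg_id)
    handle_locally(payload)
    forward_to_next_router(msg_id, payload)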
LOSING A ROUTER
This infrastructure forces all routers to have some static, identical configuration. This means dynamic scaling would not work, but in return no leader election is necessary. If a router is killed, the next router from the list of routers will be connected to instead. Constant polling could check for the dead router to come back online.
I have not done any deep research on this. So I would be happy to get some input for that.
nginx can work as a reverse proxy to accomplish this.
ip_hash on the upstream list will make sure a client IP gets the same server every time. You may or may not need this depending on your situation.
Essentially, you want a "Load Balancer" to stand in front of WAMP servers, and delegate connections as needed.
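A minimal sketch of that setup; the hostnames, port, and the WebSocket upgrade headers (needed if the WAMP transport is WebSocket) are assumptions:

# nginx reverse proxy in front of several WAMP routers
upstream wamp_routers {
    ip_hash;                          # pin each client IP to the same router
    server router1.example.com:8080;
    server router2.example.com:8080;
    server routerN.example.com:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://wamp_routers;
        # WebSocket upgrade headers, needed for WAMP-over-WebSocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}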

Parse.com REST API authentication

Parse.com's REST API docs (https://www.parse.com/docs/rest) say: Authentication is done via HTTP headers. The X-Parse-Application-Id header identifies which application you are accessing, and the X-Parse-REST-API-Key header authenticates the endpoint. In the examples with curl that follow, the headers are stored in shell variables APPLICATION_ID and REST_API_KEY, so to follow along in the terminal, export these variables.
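In Python, the equivalent of those curl examples would be a sketch like this (the class path and key values are placeholders):

# Authenticate a Parse REST call via the two headers described above.
import requests

resp = requests.get(
    "https://api.parse.com/1/classes/GameScore",  # placeholder class endpoint
    headers={
        "X-Parse-Application-Id": "YOUR_APPLICATION_ID",
        "X-Parse-REST-API-Key": "YOUR_REST_API_KEY",
    },
)
print(resp.json())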
I am building a Sencha Touch app as a native app on iOS and Android using Phonegap, and I was wondering whether it is secure to expose these keys to the client while making the REST calls?
Also, can someone explain to me how security works in this scenario? Help is much appreciated! Thanks!
Without PhoneGap, in a ProGuard post-processed Android APK, the string values of the two headers you mention are exposed client-side. Not a big issue: TLS covers the HTTP header values during the network leg, and, far more important for app security, you have full ACL at the DB row level (Parse/Mongo) contingent on the permissions of 'current user()'. So with no ability to log on, an outsider doesn't have any more than an obfuscated string value of an app-level access token.
One odd thing is that with Parse the lease time on the client-side token value for the API key is permanent, rather than, say, a month.
Parse REST security is robust and well executed.
Can't speak to what the PhoneGap framework offers in the obfuscate/minify/uglify area, but you should check that.

How to generate an OAuth token?

The OAuth 1.0 protocol doesn't specify the algorithm servers should use to generate tokens. What algorithm should I use? Is a random sequence OK?
The secret part of each set of credentials (client, temporary, token) should be as random and as long as reasonably possible. You want to prevent anyone from discovering the secrets by intercepting a request and cracking the signature.
The section Entropy of Secrets in the OAuth 1.0a spec goes into more detail (but not much more).
I usually read from /dev/urandom (on Linux systems) to get a binary string of 12 or 15 random bytes and then base64 encode it. You might make the client secret longer since it changes rarely if ever.
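That approach is a couple of lines in Python; a sketch, with lengths following the 12-to-15-byte suggestion above:

# Generate a token secret: random bytes from the OS CSPRNG, base64-encoded.
import base64
import os

def make_secret(num_bytes: int = 15) -> str:
    # os.urandom reads from the OS CSPRNG (/dev/urandom on Linux)
    return base64.b64encode(os.urandom(num_bytes)).decode("ascii")

print(make_secret())    # a 20-character token secret
print(make_secret(24))  # longer secret for the rarely-rotated client credential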
I haven't implemented a server myself, but random ought to work. Something similar to what you'd use for nonce() in a client.

Implementing a 2 Legged OAuth Provider

I'm trying to find my way around the OAuth spec, its requirements, and any implementations I can find, and so far it really seems like more trouble than it's worth, because I'm having trouble finding a single resource that pulls it all together. Or maybe it's just that I'm looking for something more specialized than most tutorials cover.
I have a set of existing APIs--some in Java, some in PHP--that I now need to secure and, for a number of reasons, OAuth seems like the right way to go. Unfortunately, my inability to track down the right resources to help me get a provider up and running is challenging that theory. Since most of this will be system-to-system API usage, I'll need to implement a 2-legged provider. With that in mind...
Does anyone know of any good tutorials for implementing a 2-legged OAuth provider with PHP?
Given that I have securable APIs in 2 languages, do I need to implement a provider in both or is there a way to create the provider as a "front controller" that I can funnel all requests through?
When securing PHP services, for example, do I have to secure each API individually by including the requisite provider resources on each?
Thanks for your help.
Rob, not sure where you landed on this but wanted to add my 2 cents in case anyone else ran across this question.
I more or less had the same question a few months ago, having heard about "OAuth" for the better part of a year. I was developing a REST API I needed to secure, so I started reading about OAuth... and then my eyes started to roll backwards in my head.
I probably gave it a good solid day or 2 of skimming and reading until I decided, much like you, that OAuth was confusing garbage and just gave up on it.
So then I started researching ways to secure APIs in general and started to get a better grasp on ways to do that. The most popular way seemed to be sending requests to the API along with a checksum of the entire message (encoded with a secret that only you and the server know) that the server can use to decide if the message had been tampered with on its way from the client, like so:
Client sends /user.json/123?showFriends=true&showStats=true&checksum=kjDSiuas98SD987ad
Server gets all that, looks up user "123" in the database, loads his secret key and then (using the same method the client used) re-calculates its OWN checksum given the request arguments.
If the server's generated checksum and the client's sent checksum match up, the request is OK and executed, if not, it is considered tampered with and rejected.
The checksum is called an HMAC and if you want a good example of this, it is what Amazon Web Services uses (they call the argument 'signature' not 'checksum' though).
So given that one of the key components for this to work is that the client and server have to generate the HMAC in the same fashion (otherwise they won't match), there have to be rules on HOW to combine all the arguments... and then I suddenly understood all that "natural byte-ordering of parameters" crap from OAuth... it was just defining the rules for how to generate the signature, because it needed to.
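As a sketch of that idea in Python (the sort-by-name canonicalization rule here is illustrative, not the OAuth or Amazon rule):

# Sign a request by HMAC-ing a canonical string of path + sorted params.
import hashlib
import hmac

def sign_request(path: str, params: dict, secret: bytes) -> str:
    # Canonical ordering: sort params by name so client and server agree
    canonical = path + "&" + "&".join(f"{k}={params[k]}" for k in sorted(params))
    return hmac.new(secret, canonical.encode(), hashlib.sha256).hexdigest()

# Client side
secret = b"shared-secret-for-user-123"
params = {"showFriends": "true", "showStats": "true"}
checksum = sign_request("/user.json/123", params, secret)

# Server side: recompute the same checksum and compare in constant time
assert hmac.compare_digest(checksum, sign_request("/user.json/123", params, secret))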
Another point is that every param you include in the HMAC generation is a value that then can't be tampered with when you send the request.
So if you just encode the URI stem as the signature, for example:
/user.json == askJdla9/kjdas+Askj2l8add
then the only thing in your message that cannot be tampered with is the URI, all of the arguments can be tampered with because they aren't part of the "checksum" value that the server will re-calculate.
Alternatively, even if you include EVERY param in the calculation, you still run the risk of "replay attacks" where a malicious middleman or eavesdropper can intercept an API call and just keep resending it to the server over and over again.
You can fix that by adding a timestamp (always use UTC) in the HMAC calculation as well.
REMINDER: Since the server needs to calculate the same HMAC, you have to send along any value you use in the calculation EXCEPT YOUR SECRET KEY (OAuth calls it a consumer_secret I think). So if you add a timestamp, make sure you send a timestamp param along with your request.
If you want to make the API secure from replay attacks, you can use a nonce value (it's a 1-time use value the server generates, gives to the client, the client uses it in the HMAC, sends back the request, the server confirms and then marks that nonce value as "used" in the DB and never lets another request use it again).
NOTE: Nonces are a really exact way to solve the "replay attack" problem -- timestamps are great, but because computers don't always have in-sync clocks, you have to allow an acceptable window on the server side for how "old" a request may be (say 10 mins, 30 mins, 1 hr... Amazon uses 15 mins) before you accept or reject it. In this scenario your API is technically vulnerable during the entire window of time.
I think nonce values are great, but they should only be needed in APIs where keeping integrity is critical. In my API, I didn't need it, but it would be trivial to add later if users demanded it... I would literally just need to add a "nonce" table in my DB, expose a new API to clients like:
/nonce.json
and then when they send that back to me in the HMAC calculation, I would need to check the DB to make sure it had never been used before and once used, mark it as such in the DB so if a request EVER came in again with that same nonce I would reject it.
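For illustration, a minimal in-memory sketch of that nonce lifecycle (a real implementation would use the DB table described above; all names here are mine):

# Issue one-time nonces and reject any nonce that has already been used.
import secrets

_issued: set = set()  # nonces handed out, not yet used
_used: set = set()    # nonces already consumed

def issue_nonce() -> str:
    n = secrets.token_hex(16)
    _issued.add(n)
    return n  # returned to the client, e.g. from /nonce.json

def consume_nonce(n: str) -> bool:
    if n in _issued and n not in _used:
        _used.add(n)  # mark as used so replays are rejected
        return True
    return False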
Summary
Anyway, to make a long story short, everything I just described is basically what is known as "2-legged OAuth". There isn't that added step of going to the authority (Twitter, Facebook, Google, whatever) to authorize the client; that step is removed and instead the server implicitly trusts the client IF the HMACs they are sending match up. That means the client has the right secret_key and is signing its messages with it, so the server trusts it.
If you start looking around online, this seems to be the preferred method for securing API methods nowadays, or something like it. Amazon uses almost exactly this method, except they use a slightly different combination method for their parameters before signing the whole thing to generate the HMAC.
If you are interested, I wrote up this entire journey and thought-process as I was learning it. That might help provide a guided thinking tour of this process.
I would take a step back and think about what a properly authenticated client is going to be sending you.
Can you store the keys and credentials in a common database which is accessible from both sets of services, and just implement the OAuth provider in one language? When the user sends in a request to a service (PHP or Java) you then check against the common store. When the user is setting up the OAuth client then you do all of that through either a PHP or Java app (your preference), and store the credentials in the common DB.
There are some OAuth providers written in other languages that you might want to take a look at:
PHP - http://term.ie/oauth/example/ (see bottom of page)
Ruby - http://github.com/mojodna/sample-oauth-provider
.NET http://blog.bittercoder.com/PermaLink,guid,0d080a15-b412-48cf-b0d4-e842b25e3813.aspx
