How to authorize a request from Power Automate Desktop to Dataverse?

I'm looking for some advice about authorization for a request I'm making in Power Automate Desktop using the action 'Invoke Web Service'. I'm using this request to get information from Dataverse.
I've currently set up this request using OAuth 2.0 with the grant type set to Implicit, and I've hardcoded a token value into the header. I'm pretty green when it comes to authorization, so I'm just wondering if that's the best way to use OAuth 2.0 to get info from Dataverse into PAD. I'm also concerned that this token will expire, and I'm not sure how to handle that. If I should set this up differently, please let me know, and if you know how I can refresh the token automatically, any advice would be great.

I'm going to make the assumption that you have an Azure instance within your org.
You should be able to execute the entire OAuth flow through PAD given you can do it through Postman ...
https://learn.microsoft.com/en-us/powerapps/developer/data-platform/webapi/use-postman-web-api
... having said that, if you want an easier way, my suggestion would be to use LogicApps, as it does all of the hard work for you. It will also protect keys, etc. that run the risk of being exposed if contained within a PAD flow, and that's even if you store that sort of information in a KeyVault or something. At some point, it needs to be exposed to PAD.
You can create a LogicApp that's triggered by an incoming HTTP request ...
... and have your Dataverse connector pull the relevant data ...
... to then return back to the calling PAD flow with a response action.
This is an example flow ...
I haven't gone into detail, given your question lacks specifics around filtering, etc., but you can always make your LogicApp more comprehensive by adding functionality in the payload to order, filter, expand, etc. on the OData call to Dataverse so you get exactly what you want from a data perspective.
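As for the token-expiry concern: if you do stay inside PAD rather than going the LogicApps route, the Postman walkthrough above boils down to requesting a token from Azure AD and simply re-requesting one whenever the old token expires, typically with the client credentials grant for unattended scenarios rather than Implicit. A rough sketch of that call, in TypeScript just to show the shape of the request (the tenant ID, client ID, client secret and org URL are placeholders, and it assumes an app registration that has been granted access to your Dataverse environment):

// Minimal sketch: fetch a Dataverse access token via the OAuth 2.0 client credentials grant.
// TENANT_ID, CLIENT_ID, CLIENT_SECRET and ORG_URL are placeholders for your own values.
const TENANT_ID = "<your-tenant-id>";
const CLIENT_ID = "<your-app-registration-client-id>";
const CLIENT_SECRET = "<your-client-secret>";
const ORG_URL = "https://yourorg.crm.dynamics.com";

async function getDataverseToken(): Promise<string> {
  const tokenEndpoint = `https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token`;
  const response = await fetch(tokenEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: CLIENT_ID,
      client_secret: CLIENT_SECRET,
      scope: `${ORG_URL}/.default`,   // Dataverse resource scope
    }),
  });
  if (!response.ok) throw new Error(`Token request failed: ${response.status}`);
  const json = await response.json();
  return json.access_token as string; // short-lived; just request a fresh one when it expires
}

// Example: call the Dataverse Web API with a freshly acquired token.
async function listAccounts() {
  const token = await getDataverseToken();
  const res = await fetch(`${ORG_URL}/api/data/v9.2/accounts?$top=5`, {
    headers: { Authorization: `Bearer ${token}`, Accept: "application/json" },
  });
  return res.json();
}

That same token POST is what you would reproduce in PAD's 'Invoke Web Service' action before each Dataverse call, which removes the hardcoded-token problem entirely.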

Related

Programmatically get New Access Token for OAuth 2.0 in Postman

In Postman, I am able to successfully request a new token using the GUI. I'm wondering how to do this programmatically, or at least see the HTTP request that Postman is making. I've tried viewing it by monitoring the network traffic in Chrome, and with Wireshark, but without success. Thank you
Well, OAuth2 is quite a big subject and you are not really providing a lot of details.
Postman is just a client: it creates requests based on the data you gave it, so you don't need to monitor anything. You should know how you set it up, and then simply mirror that in whatever language you want. Look at the headers and post data specifically.
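To illustrate what "mirroring it" usually amounts to: getting a new access token is just another POST to your provider's token endpoint. A sketch, assuming your original grant issued a refresh token (the endpoint, client ID, client secret and refresh token values are placeholders, and the exact fields depend on how your provider is configured):

// Sketch: exchange a refresh token for a new access token (standard OAuth2 refresh_token grant).
async function refreshAccessToken(): Promise<string> {
  const response = await fetch("https://example.com/oauth/token", {   // placeholder token endpoint
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: "<refresh-token-from-the-original-grant>",
      client_id: "<client-id>",
      client_secret: "<client-secret>",   // omit for public clients
    }),
  });
  if (!response.ok) throw new Error(`Refresh failed: ${response.status}`);
  const body = await response.json();
  return body.access_token as string;
}

The headers and form fields are essentially what Postman's token helper builds for you behind the GUI.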
All I can do is point you to an extensive article I wrote on OAuth2; it shows a complete implementation, how to use Postman to create the correct requests, and then how to write code which makes it all work.
If you don't use .NET, you can still understand all the concepts, and it should be trivial to do the same thing in a different language.
https://eidand.com/2015/03/28/authorization-system-with-owin-web-api-json-web-tokens/

Locking an API to App-Use only

I've written a Rails back-end for the mobile app I'm working on, and it occurred to me that even though I'm using token authentication, anyone could write a malicious script that continually registers users / continually makes requests in an attempt to fill the database / attack the server.
I guess there are two questions here:
1) What modifications would I need to make in order to ONLY allow API access from my mobile app
2) How can I protect my API urls?
Thanks :)
There are multiple things you can do to protect your API:
The simplest thing you can start with is verifying the user-agent header in your request. That usually gives you a good indicator of what the initiating device is.
That being said, it isn't always accurate, and it's definitely fakeable.
If you control the client side of the mobile app as well, you could encrypt the requests/responses with a cipher or key system which requires a key that only your mobile app knows. Look at OpenSSL for that, using a public/private key pair.
Token authentication is a good idea. I would actually look at OAuth or similar systems for authentication, and keep your session timers short.
On top of that, you can probably add some rate control in order to limit consecutive calls from the same IP in a given timespan.
Finally, I would look at something like "fail2ban" or similar to automatically ban brute-force type attacks.
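As a rough sketch of the rate-control point (the question is Rails, where the rack-attack gem does this kind of throttling for you; the idea is the same in any stack), limiting calls per IP per time window can be as simple as:

// Sketch: naive in-memory rate limiting by client IP (per-process only; use Redis or
// rack-attack in a real Rails deployment).
const WINDOW_MS = 60_000;   // 1-minute window
const MAX_REQUESTS = 30;    // max calls per IP per window

const hits = new Map<string, { count: number; windowStart: number }>();

function allowRequest(ip: string, now: number = Date.now()): boolean {
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS;   // once over the limit, answer with HTTP 429
}

// Usage inside whatever handles the request:
//   if (!allowRequest(clientIp)) { /* respond with HTTP 429 Too Many Requests */ }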

Rails/Angular: How to implement internal and external REST/JSON APIs in same app?

I'm planning on implementing a single-page application in Rails/AngularJS which also has some pieces that are exposed as a "public" API. My question is, what's the best way to architect the two APIs in such an application? E.g. Is it wise to have them both housed/versioned in the same namespace, or should they be kept separate somehow?
This is relatively new territory for me, but at first blush it seems like providing a single API covering both internal and external needs, then parsing up which pieces are available via some kind of authorization system based on the provided token would be the best way of going about this.
Is this the right direction, or would you recommend some other path?
FWIW, I will give you my opinion.
CAVEAT: I'm not a Rails guy, so I'm coming at this from Node.js/Express land.
There are many ways to skin this cat, but I'll just say that you are headed in the right direction. If you want to look at a very opinionated way to do things (and one people might hate) in Node, see this: https://github.com/DaftMonk/fullstack-demo/blob/master/server/api/user/index.js. Here you see this bit:
// Require paths follow the linked demo repo's layout.
var express = require('express');
var controller = require('./user.controller');  // CRUD handlers for the user resource
var auth = require('../../auth/auth.service');   // token/role-checking middleware

var router = express.Router();

router.get('/', auth.hasRole('admin'), controller.index);          // list users (admins only)
router.delete('/:id', auth.hasRole('admin'), controller.destroy);  // delete a user (admins only)
router.get('/me', auth.isAuthenticated(), controller.me);          // the current user's own record
router.put('/:id/password', auth.isAuthenticated(), controller.changePassword);
router.get('/:id', auth.isAuthenticated(), controller.show);
router.post('/', controller.create);                               // registration: no auth required
module.exports = router;
These routes correspond to calls to http://serverurl/api/user/ etc. Obviously, these are all checking authentication, but you could easily create a resource route that didn't need to check for authentication before passing control to the controller and (eventually) sending back a resource.
The approach this takes is to have middleware on the server check for auth tokens to make sure the client can call the API. Without making you look into the code too much, I'll just give you a basic rundown.
client (requests auth) -> server (approves, passes back token) -> client (stores token)
LATER:
client (requests API call, sends token in request) -> server (passes request to middleware that checks the token to make sure it's kosher) -> server (sends back resource and token) -> client (uses resource and stores token)
Then the whole thing repeats.
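A stripped-down version of that "middleware checks the token" step might look like the sketch below; verifyToken is a placeholder for whatever JWT or session check you actually use:

import express, { Request, Response, NextFunction } from "express";

// Placeholder for your real token check (e.g. JWT verification or a session lookup).
function verifyToken(token: string): { userId: string } | null {
  return token === "valid-demo-token" ? { userId: "123" } : null;
}

// Middleware: reject the request unless a valid bearer token is present.
function requireAuth(req: Request, res: Response, next: NextFunction) {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice("Bearer ".length) : "";
  const user = verifyToken(token);
  if (!user) {
    res.status(401).json({ error: "invalid or missing token" });
    return;
  }
  (req as any).user = user;   // make the authenticated user available to the controller
  next();
}

const app = express();
app.get("/api/private", requireAuth, (_req, res) => res.json({ secret: "token holders only" }));
app.get("/api/public", (_req, res) => res.json({ hello: "anyone can call this" }));
app.listen(3000);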
As far as whether to have separate APIs vs. one namespace, I don't have a very strong opinion. It really depends on how you structure your app. If you know in advance what resources will be public, then it's probably easy to create a namespaced API.
Angular can easily adapt to multiple API calls. You can create services for your public vs. private HTTP calls (or however you decide to call the API).
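On the client side, one way to keep that split obvious is a pair of thin wrappers, one per namespace; this sketch uses plain fetch rather than anything Angular-specific, and the /api/public and /api/private prefixes are just an illustrative namespacing choice:

// Sketch: separate helpers for public vs. authenticated API calls.
let storedToken = "";   // set after login; stands in for wherever you really keep the token

async function publicApi<T>(path: string): Promise<T> {
  const res = await fetch(`/api/public${path}`);
  if (!res.ok) throw new Error(`public API error ${res.status}`);
  return res.json() as Promise<T>;
}

async function privateApi<T>(path: string): Promise<T> {
  const res = await fetch(`/api/private${path}`, {
    headers: { Authorization: `Bearer ${storedToken}` },
  });
  if (!res.ok) throw new Error(`private API error ${res.status}`);
  return res.json() as Promise<T>;
}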
Hope this was somewhat helpful! Sorry it's not Rails-y! (But Node.js/Express is awesome!)

Glimpse Security

I am convinced that I want to use Glimpse for my project, but I would like to learn a bit more about the security model.
From what I can tell, when you turn Glimpse on, it simply writes a set of cookies to the client. When Glimpse receives these cookies, Glimpse begins to record information for the request and then sends it to the client.
Seems like I could just set the cookies for a site I know uses Glimpse and I would then be able to see their information.
I highly doubt this is how it works, so I would like to know what features are in place to prevent exposing server information.
Glimpse uses a collection of configurable Runtime Policies (http://getglimpse.com/Help/Custom-Runtime-Policy) that dictate how Glimpse responds to any given HTTP request.
Glimpse already adds some Runtime Policies out of the box that filter requests based on content types, HTTP status codes, remote or local access, URIs...
You can also build your own by implementing IRuntimePolicy and checking, for instance, whether a user is authenticated and a member of a specific group, and based on that allowing Glimpse to gather and return data or not. Such an example can be found at the link above.

Implementing a 2 Legged OAuth Provider

I'm trying to find my way around the OAuth spec, its requirements and any implementations I can find and, so far, it really seems like more trouble than it's worth because I'm having trouble finding a single resource that pulls it all together. Or maybe it's just that I'm looking for something more specialized than most tutorials.
I have a set of existing APIs--some in Java, some in PHP--that I now need to secure and, for a number of reasons, OAuth seems like the right way to go. Unfortunately, my inability to track down the right resources to help me get a provider up and running is challenging that theory. Since most of this will be system-to-system API usage, I'll need to implement a 2-legged provider. With that in mind...
Does anyone know of any good tutorials for implementing a 2-legged OAuth provider with PHP?
Given that I have securable APIs in 2 languages, do I need to implement a provider in both or is there a way to create the provider as a "front controller" that I can funnel all requests through?
When securing PHP services, for example, do I have to secure each API individually by including the requisite provider resources on each?
Thanks for your help.
Rob, not sure where you landed on this but wanted to add my 2 cents in case anyone else ran across this question.
I more or less had the same question a few months ago, after hearing about "OAuth" for the better part of a year. I was developing a REST API I needed to secure, so I started reading about OAuth... and then my eyes started to roll backwards in my head.
I probably gave it a good solid day or 2 of skimming and reading until I decided, much like you, that OAuth was confusing garbage and just gave up on it.
So then I started researching ways to secure APIs in general and started to get a better grasp on ways to do that. The most popular way seemed to be sending requests to the API along with a checksum of the entire message (encoded with a secret that only you and the server know) that the server can use to decide if the message had been tampered with on its way from the client, like so:
Client sends /user.json/123?showFriends=true&showStats=true&checksum=kjDSiuas98SD987ad
Server gets all that, looks up user "123" in the database, loads his secret key and then (using the same method the client used) re-calculates its OWN checksum given the request arguments.
If the server's generated checksum and the client's sent checksum match up, the request is OK and executed, if not, it is considered tampered with and rejected.
The checksum is called an HMAC and if you want a good example of this, it is what Amazon Web Services uses (they call the argument 'signature' not 'checksum' though).
So given that one of the key requirements for this to work is that the client and server have to generate the HMAC in the same fashion (otherwise they won't match), there have to be rules on HOW to combine all the arguments... then I suddenly understood all that "natural byte-ordering of parameters" crap from OAuth... it was just defining the rules for how to generate the signature because it needed to.
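A small sketch of that client-side signing step, shown in TypeScript/Node just for concreteness; the "sort the keys and join them as key=value&..." rule below is only one possible convention, and the only hard requirement is that the client and server agree on it:

import { createHmac } from "crypto";

// Sketch: sign a request by HMAC-ing a canonical string built from its parameters.
// The canonicalization rule here (sort keys, join as key=value&...) is an example convention;
// what matters is that the server can rebuild the exact same string.
function signRequest(path: string, params: Record<string, string>, secretKey: string): string {
  const canonical =
    path + "?" +
    Object.keys(params)
      .sort()
      .map((k) => `${k}=${params[k]}`)
      .join("&");
  return createHmac("sha256", secretKey).update(canonical).digest("hex");
}

// Example: every value in `params` is now tamper-evident, because changing any of them
// would change the checksum the server recomputes.
const params = { showFriends: "true", showStats: "true", timestamp: String(Date.now()) };
const checksum = signRequest("/user.json/123", params, "my-shared-secret");
// Send: /user.json/123?showFriends=true&showStats=true&timestamp=...&checksum=<checksum>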
Another point is that every param you include in the HMAC generation is a value that then can't be tampered with when you send the request.
So if you only include the URI stem in the signature, for example:
/user.json == askJdla9/kjdas+Askj2l8add
then the only thing in your message that cannot be tampered with is the URI; all of the arguments can be tampered with because they aren't part of the "checksum" value that the server will re-calculate.
Alternatively, even if you include EVERY param in the calculation, you still run the risk of "replay attacks" where a malicious middleman or eavesdropper can intercept an API call and just keep resending it to the server over and over again.
You can fix that by adding a timestamp (always use UTC) in the HMAC calculation as well.
REMINDER: Since the server needs to calculate the same HMAC, you have to send along any value you use in the calculation EXCEPT YOUR SECRET KEY (OAuth calls it a consumer_secret I think). So if you add a timestamp, make sure you send a timestamp param along with your request.
If you want to make the API secure from replay attacks, you can use a nonce value (it's a one-time-use value: the server generates it and gives it to the client, the client uses it in the HMAC and sends back the request, the server confirms it and then marks that nonce value as "used" in the DB, never letting another request use it again).
NOTE: Nonces are a really exact way to solve the "replay attack" problem -- timestamps are great, but because computers don't always have in-sync clocks, you have to allow an acceptable window on the server side for how "old" a request might be (say 10 mins, 30 mins, 1 hr... Amazon uses 15 mins) before you accept or reject it. In this scenario your API is technically vulnerable during the entire window of time.
I think nonce values are great, but they should only be needed in APIs where keeping their integrity is critical. In my API, I didn't need it, but it would be trivial to add later if users demanded it... I would literally just need to add a "nonce" table in my DB, expose a new API to clients like:
/nonce.json
and then, when they send that back to me in the HMAC calculation, I would need to check the DB to make sure it had never been used before and, once used, mark it as such in the DB so that if a request EVER came in again with that same nonce I would reject it.
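Putting the server side of those last few points together, verification is roughly: recompute the HMAC from the received parameters, check that the timestamp is inside your acceptance window, and (optionally) check that the nonce hasn't been seen before. A sketch, using the same hypothetical canonicalization as above and an in-memory set standing in for the nonce table:

import { createHmac, timingSafeEqual } from "crypto";

const MAX_SKEW_MS = 15 * 60 * 1000;    // e.g. an Amazon-style 15-minute window
const usedNonces = new Set<string>();  // stand-in for a "nonce" table in the DB

function expectedChecksum(path: string, params: Record<string, string>, secret: string): string {
  const canonical =
    path + "?" + Object.keys(params).sort().map((k) => `${k}=${params[k]}`).join("&");
  return createHmac("sha256", secret).update(canonical).digest("hex");
}

function verifyRequest(
  path: string,
  params: Record<string, string>,   // everything that was sent except the checksum itself
  checksum: string,
  secret: string                    // looked up for this client; never sent on the wire
): boolean {
  // 1. Reject requests outside the timestamp window (limits the replay exposure).
  const skew = Math.abs(Date.now() - Number(params.timestamp));
  if (!Number.isFinite(skew) || skew > MAX_SKEW_MS) return false;

  // 2. Reject reused nonces, if the client supplied one.
  if (params.nonce) {
    if (usedNonces.has(params.nonce)) return false;
    usedNonces.add(params.nonce);
  }

  // 3. Recompute the HMAC and compare in constant time.
  const expected = expectedChecksum(path, params, secret);
  if (expected.length !== checksum.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(checksum));
}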
Summary
Anyway, to make a long story short, everything I just described is basically what is known as "2-legged OAuth". There isn't that added step of flowing to the authority (Twitter, Facebook, Google, whatever) to authorize the client; that step is removed, and instead the server implicitly trusts the client IF the HMACs they are sending match up. That means the client has the right secret_key and is signing its messages with it, so the server trusts it.
If you start looking around online, this seems to be the preferred method for securing API methods nowadays, or something like it. Amazon uses almost exactly this method, except they use a slightly different combination method for their parameters before signing the whole thing to generate the HMAC.
If you are interested, I wrote up this entire journey and thought process as I was learning it. That might help provide a guided thinking tour of this process.
I would take a step back and think about what a properly authenticated client is going to be sending you.
Can you store the keys and credentials in a common database which is accessible from both sets of services, and just implement the OAuth provider in one language? When the user sends in a request to a service (PHP or Java) you then check against the common store. When the user is setting up the OAuth client then you do all of that through either a PHP or Java app (your preference), and store the credentials in the common DB.
There are some OAuth providers written in other languages that you might want to take a look at:
PHP - http://term.ie/oauth/example/ (see bottom of page)
Ruby - http://github.com/mojodna/sample-oauth-provider
.NET - http://blog.bittercoder.com/PermaLink,guid,0d080a15-b412-48cf-b0d4-e842b25e3813.aspx
