Checking the user ID in a tokeninfo response, with a token received from the Google+ Sign-In button's one-time code flow - oauth-2.0

I'm using the one-time code flow with my Google+ Sign-In button implementation, but the user_id in the response from the tokeninfo endpoint doesn't match the id_token in the object my JavaScript callback receives from the sign-in button.
In the sample code in the documentation, the user_id in the tokeninfo object is checked against a request parameter called gplus_id, but the sample JavaScript doesn't send this parameter, so I have no idea if I'm checking against the right thing.
So, to be clear about the particular sections of code I'm talking about:
The one-time code is processed on the server using this sample code, and it uses a request parameter called gplus_id.
The code in this section sends the one-time code to the server, but as far as I can see, it doesn't send a gplus_id.

It looks like step 6 on the example page is incomplete, and is supposed to be sending the gplus_id, but isn't.
Take a look at the connectServer function (and the function that calls it) in https://github.com/googleplus/gplus-quickstart-java/blob/master/index.html for a more complete example of how to get the user's ID and pass it to the server for verification.
(And I'll try to ping the people responsible for the documentation to get it updated and consistent across the platforms in the quickstart examples. You can also track bug 573 to see progress on them fixing the documentation.)
NOTE: Sending the gplus_id is a bit redundant, however. You're already trusting the code sent from the client, and you're getting the ID through steps derived from that code. So while passing and checking the gplus_id is a nice sanity check, it doesn't really gain you any additional security.
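For reference, here's a rough sketch of the server-side check being discussed, written in Python rather than the quickstart's own language: after exchanging the one-time code, call the v1 tokeninfo endpoint on the resulting access token and compare its user_id to the gplus_id the client sent. The endpoint and field names are what the Google+ quickstarts used at the time; treat them as assumptions and verify against the current docs.

import requests  # third-party HTTP library, used here just for illustration

TOKENINFO_URL = "https://www.googleapis.com/oauth2/v1/tokeninfo"

def verify_token_matches_user(access_token, gplus_id):
    # Ask Google who the token belongs to.
    resp = requests.get(TOKENINFO_URL, params={"access_token": access_token})
    resp.raise_for_status()
    info = resp.json()
    # Sanity check: the token's user_id should match the ID the client claimed.
    # (As noted above, this is redundant if you already trust the one-time code.)
    if info.get("user_id") != gplus_id:
        raise ValueError("Token's user ID doesn't match the given gplus_id.")
    return info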

Related

How to authorize a request from Power Automate Desktop to Dataverse?

I'm looking for some advice about authorization for a request I'm making in Power Automate Desktop using the action 'Invoke Web Service'. I'm using this request to get information from Dataverse.
I've currently set up this request using OAuth 2.0 with the grant type set to Implicit, and I've hardcoded a token value into the header. I'm pretty green when it comes to authorization, so I'm wondering whether that's the best way to use OAuth 2.0 to get info from Dataverse into PAD. I'm also concerned that this token will expire, and I'm not sure how to handle that. If I should set this up differently, please let me know, and if you know how I can refresh the token automatically, any advice would be great.
I'm going to make the assumption that you have an Azure instance within your org.
You should be able to execute the entire OAuth flow through PAD given you can do it through Postman ...
https://learn.microsoft.com/en-us/powerapps/developer/data-platform/webapi/use-postman-web-api
... having said that, if you want an easier way, my suggestion would be to use LogicApps, as it does all of the hard work for you. It will also protect keys, etc. that run the risk of being exposed if contained within a PAD flow, even if you store that sort of information in a KeyVault or something. At some point, it needs to be exposed to PAD.
You can create a LogicApp that's triggered by an incoming HTTP request ...
... and have your DataVerse connector pull the relevant data ...
... to then return back to the calling PAD flow with a response action.
This is an example flow ...
I haven't gone into detail given your question lacks specifics around filtering, etc. but you can always make your LogicApp more comprehensive by adding functionality in the payload to order, filter, expand, etc. on the OData call to DataVerse so you get exactly what you want from a data perspective.
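If you do decide to have PAD (or anything else) fetch its own token instead of hardcoding one, the usual pattern is an OAuth 2.0 client credentials call against Azure AD, followed by the Dataverse Web API call with the bearer token. The sketch below is in Python purely to show the two HTTP requests involved; the tenant ID, app registration, org URL and the "accounts" entity set are placeholders you'd swap for your own.

import requests

TENANT_ID = "<your-tenant-id>"              # placeholder
CLIENT_ID = "<your-app-registration-id>"    # placeholder
CLIENT_SECRET = "<your-client-secret>"      # placeholder
ORG_URL = "https://yourorg.crm.dynamics.com"  # placeholder Dataverse environment URL

# 1) Get a token with the client credentials grant - no user interaction, and no
#    expiry worries beyond simply requesting a fresh token whenever you need one.
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": f"{ORG_URL}/.default",
    },
)
access_token = token_resp.json()["access_token"]

# 2) Call the Dataverse Web API with the bearer token.
accounts = requests.get(
    f"{ORG_URL}/api/data/v9.2/accounts",
    headers={"Authorization": f"Bearer {access_token}"},
).json()

Note that for this to work, the app registration also needs to be added as an application user in the Dataverse environment with an appropriate security role.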

Google Assistant SDK refusing authenticated channel as "UNAUTHENTICATED"

I am trying to create a Google Assistant for my Raspberry Pi in Kotlin. I implemented an OAuth flow using the so-called "device flow" proposed in this IETF draft, since my Raspberry Pi will later just expose a web interface and does not have any input devices or a graphical interface.
Google does support this flow (of course) and I obtain a valid access token with user consent in the end. For testing purposes I also tried the default authorization flow that just redirects the user to localhost, as is normally done, but that did not solve the problem.
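For context, the device flow I implemented boils down to the steps sketched below (shown in Python for brevity; my actual implementation is in Kotlin). The endpoint URLs and response field names are what I understand from Google's limited-input-device documentation, so double-check them against the current docs.

import time
import requests

CLIENT_ID = "<your-client-id>"          # placeholder
CLIENT_SECRET = "<your-client-secret>"  # placeholder
SCOPE = "https://www.googleapis.com/auth/assistant-sdk-prototype"

# Step 1: ask Google for a device code and a user code.
device = requests.post(
    "https://oauth2.googleapis.com/device/code",
    data={"client_id": CLIENT_ID, "scope": SCOPE},
).json()
print("Visit", device["verification_url"], "and enter code", device["user_code"])

# Step 2: poll the token endpoint until the user has granted consent.
while True:
    time.sleep(device["interval"])
    token = requests.post(
        "https://oauth2.googleapis.com/token",
        data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "device_code": device["device_code"],
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        },
    ).json()
    if "access_token" in token:
        break  # token["access_token"] is what gets attached to the gRPC stub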
I tested the access token using this tool and it confirmed validity of scope and token. So the token itself should work.
Scope is: https://www.googleapis.com/auth/assistant-sdk-prototype as documented here
This actually does not point to any valid web resource, but it is referenced in all the documentation.
Then I tried to stream audio data to the Assistant SDK endpoint using the gRPC-provided Java stubs. I took a third-party reference implementation as a guide for how to authenticate the RPC stub. But neither the reference implementation nor my own works. They both report
io.grpc.StatusRuntimeException: UNAUTHENTICATED: Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
The stub is authenticated this way:
embeddedAssistantStub.withCallCredentials(
    MoreCallCredentials.from(
        OAuth2Credentials.newBuilder()
            // wrap the raw token string and its expiry in an AccessToken (com.google.auth.oauth2)
            .setAccessToken(AccessToken(myAccessToken, myAccessTokenExpirationDate))
            .build()))
and the authenticated request is performed like this:
val observer = authenticatedEmbeddedAssistantStub.converse(myStreamObserverImplementation)
observer.onNext(myConfigConverseRequest)
while (more audio data frames available) {
    observer.onNext(myAudioFrameConverseRequest)
}
observer.onCompleted()
(I prefixed pseudo variables with "my" for clarity, they can consist of more code in the actual implementation.)
I even contacted the author of this demo implementation. He told me that the last time he checked (several months ago), it was working perfectly fine. So I finally ran out of options.
Since the client implementation I took as a reference used to work, and I do actually authenticate the stub (although the error message suggests the opposite), I suspect that either my valid access token with the correct scope is not the right credential for the Assistant API (though I followed Google's suggestions), or the API servers changed in a way that isn't properly documented in Google's getting-started articles.
So: did anyone run into the same problem and know how to fix it? I have the project on GitHub, so if anyone needs the broken source code, I can do a temporary commit that produces the error.
Note, to save some work for the mods: this issue refers to this and this question, both unresolved and using different languages, but describing a similar problem.
Well, it seems I was right about my second assumption: the error is server-side. Here is the GitHub issue; let's just wait for the fix.
https://github.com/googlesamples/assistant-sdk-python/issues/138

Asana Webhooks API

So I have implemented the Asana Webhooks API as described in their documentation. I can pass it a project ID and request that a new webhook be created. The API successfully sends an authentication request to my application, which returns the security header as described in the docs. Asana then returns the expected success response, containing the newly created webhook's unique ID.
Now if I take this ID and query the Asana API for all configured webhooks, on either the parent workspace or the project resource directly, it returns an empty data JSON object or reports that the resource doesn't exist, suggesting that the webhook I've just created wasn't actually created, despite the expected success response.
Also, if I then make a change to a project, it doesn't fire the webhook and I don't receive any events in my application.
Strangely everything was working on Friday but today (Monday) I'm experiencing these issues.
Any pointers would be good. I've been following the docs in terms of my request structure and am authenticating using a PAT; I've even tried a newly created token.
Thanks,
Our webhooks use the handshake mechanism to make sure that it's possible to call you back, but there's always the possibility that subsequent requests can fail. Additionally (although we don't document this very well - there's an opportunity for us), we then immediately try to deliver a (probably) empty event after the handshake (it looks like {"events":[]}). This is kind of like a "second callback" that contains anything that has changed since you created the webhook.
If this fails - or if any subsequent request fails often enough - the webhook will get trashed. "Failure" in this context means returning HTTP response codes other than 200 or 204.
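To make that concrete, a receiver really only needs to do two things: echo X-Hook-Secret back during the handshake, and answer deliveries quickly with a 200/204 (checking X-Hook-Signature while it's at it). Here is a rough Python/Flask sketch, not official sample code, assuming those header names from the docs:

import hashlib
import hmac
from flask import Flask, request, make_response

app = Flask(__name__)
hook_secret = None  # in real code, persist this per-webhook

@app.route("/receive-webhook", methods=["POST"])
def receive_webhook():
    global hook_secret
    if "X-Hook-Secret" in request.headers:
        # Handshake: echo the secret back and remember it for later signature checks.
        hook_secret = request.headers["X-Hook-Secret"]
        resp = make_response("", 200)
        resp.headers["X-Hook-Secret"] = hook_secret
        return resp
    # Event delivery (including the initial, possibly empty, {"events": []} payload):
    # verify the signature, then return 200/204 promptly so the webhook isn't trashed.
    signature = hmac.new(hook_secret.encode(), request.data, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, request.headers.get("X-Hook-Signature", "")):
        return "", 401
    return "", 200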
As for why you're having trouble querying the webhook itself, I wasn't able to repro the issue, so we'd have to dive deeper. It should be fine if you:
Specify the workspace
Optionally specify the resource
I tested this out, and it seemed fine. You also might want to directly query the webhook by id with the /webhooks/:id endpoint - note to use the id of the webhook returned by create, and not the id in the resource field.
If you created the webhook (specifically, your PAT or OAuth app was the one making the create request) you should see the information just fine. If you can get the webhook by id, you should see last_failure_at and last_failure_content fields which would tell you why the webhook was unable to make the delivery.
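For example, a quick way to check those fields with your PAT might look like this (sketched in Python; the webhook ID is a placeholder for the one returned by create):

import requests

PAT = "<your-personal-access-token>"  # placeholder
WEBHOOK_ID = "1234567890"             # the ID returned by create, not the resource ID

resp = requests.get(
    f"https://app.asana.com/api/1.0/webhooks/{WEBHOOK_ID}",
    headers={"Authorization": f"Bearer {PAT}"},
)
data = resp.json()["data"]
print(data.get("last_failure_at"), data.get("last_failure_content"))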
Finally, if you would like to contact us at api-support#asana.com with more details (for instance, the ID of the webhook you're trying to look at), we can look at those fields from our side to see if we can identify what's going on.

Is it possible to set a site's field values and submit them with a programming language?

I have this site:
https://acad.unoesc.edu.br/academico/login.jsp
And I want to put info in the field values and submit them, to get the next page and navigate the site. That's because I want to create an Android app or something like that. For now I'm using Lua, with LuaSocket (http).
I know that the inputs have names, but I don't know how to set them and send them to the server. I'd appreciate it if someone could help me with this.
Thank you.
You can use the POST method with LuaSocket. See the official documentation and a detailed example in this SO answer.
Since you seem to be doing authentication, you'll probably need to save the cookie value returned to you as part of the login response and then pass that cookie back to the server (otherwise your subsequent requests will fail as the server will reject those requests as non-authenticated).
Since you are sending this over HTTPS, you'll need to use LuaSec, which provides the ssl.https module as a replacement for the http module that LuaSocket provides. You may check my blog post for an example of how this can be done.
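Language aside, the flow is always the same: POST the form fields, capture the session cookie from the login response, and send that cookie back on later requests. Here is the same pattern sketched in Python with the requests library, just to show the shape of it; the form field names below are guesses, so inspect the login page's input names for the real ones.

import requests

session = requests.Session()  # keeps cookies across requests automatically

# Hypothetical field names -- look at the login form's <input name="..."> attributes
# to find the real ones.
login = session.post(
    "https://acad.unoesc.edu.br/academico/login.jsp",
    data={"usuario": "my_user", "senha": "my_password"},
)
print(login.status_code, session.cookies)

# The session now carries the cookie the server set, so subsequent pages
# (replace the path with a real one from the site) are fetched as an authenticated user.
next_page = session.get("https://acad.unoesc.edu.br/academico/<some-page>.jsp")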

Implementing a 2 Legged OAuth Provider

I'm trying to find my way around the OAuth spec, its requirements and any implementations I can find and, so far, it really seems like more trouble than it's worth because I'm having trouble finding a single resource that pulls it all together. Or maybe it's just that I'm looking for something more specialized than most tutorials cover.
I have a set of existing APIs--some in Java, some in PHP--that I now need to secure and, for a number of reasons, OAuth seems like the right way to go. Unfortunately, my inability to track down the right resources to help me get a provider up and running is challenging that theory. Since most of this will be system-to-system API usage, I'll need to implement a 2-legged provider. With that in mind...
Does anyone know of any good tutorials for implementing a 2-legged OAuth provider with PHP?
Given that I have securable APIs in 2 languages, do I need to implement a provider in both or is there a way to create the provider as a "front controller" that I can funnel all requests through?
When securing PHP services, for example, do I have to secure each API individually by including the requisite provider resources on each?
Thanks for your help.
Rob, not sure where you landed on this but wanted to add my 2 cents in case anyone else ran across this question.
I more or less had the same question a few months ago, after hearing about "OAuth" for the better part of a year. I was developing a REST API I needed to secure, so I started reading about OAuth... and then my eyes started to roll backwards in my head.
I probably gave it a good solid day or 2 of skimming and reading until I decided, much like you, that OAuth was confusing garbage and just gave up on it.
So then I started researching ways to secure APIs in general and got a better grasp on how to do that. The most popular way seemed to be sending requests to the API along with a checksum of the entire message (encoded with a secret that only you and the server know) that the server can use to decide if the message had been tampered with on its way from the client, like so:
Client sends /user.json/123?showFriends=true&showStats=true&checksum=kjDSiuas98SD987ad
Server gets all that, looks up user "123" in the database, loads his secret key, and then (using the same method the client used) re-calculates its OWN checksum given the request arguments.
If the server's generated checksum and the client's sent checksum match up, the request is OK and executed, if not, it is considered tampered with and rejected.
The checksum is called an HMAC and if you want a good example of this, it is what Amazon Web Services uses (they call the argument 'signature' not 'checksum' though).
So given that one of the key components of this to work is that the client and server have to generate the HMAC in the same fashion (otherwise they won't match), there have to be rules on HOW to combine all the arguments... then I suddenly understood all that "natural byte-ordering of parameters" crap from OAuth... it was just defining the rules for how to generate the signature because it needed to.
Another point is that every param you include in the HMAC generation is a value that then can't be tampered with when you send the request.
So if you just encode the URI stem as the signature, for example:
/user.json == askJdla9/kjdas+Askj2l8add
then the only thing in your message that cannot be tampered with is the URI, all of the arguments can be tampered with because they aren't part of the "checksum" value that the server will re-calculate.
Alternatively, even if you include EVERY param in the calculation, you still run the risk of "replay attacks" where a malicious middleman or eavesdropper can intercept an API call and just keep resending it to the server over and over again.
You can fix that by adding a timestamp (always use UTC) in the HMAC calculation as well.
REMINDER: Since the server needs to calculate the same HMAC, you have to send along any value you use in the calculation EXCEPT YOUR SECRET KEY (OAuth calls it a consumer_secret I think). So if you add a timestamp, make sure you send a timestamp param along with your request.
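To pull the pieces above together, here's a rough Python sketch of both sides of that scheme. The "sort the params alphabetically" rule is just one arbitrary-but-agreed-upon choice; that's exactly the role OAuth's byte-ordering rules play.

import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"only-the-client-and-server-know-this"  # per-user secret, looked up by user id

def sign(params):
    # Combine the params in an agreed-upon, deterministic order, then HMAC them.
    canonical = urlencode(sorted(params.items()))
    return hmac.new(SECRET, canonical.encode(), hashlib.sha256).hexdigest()

# --- client side ---
params = {"user": "123", "showFriends": "true", "showStats": "true",
          "timestamp": str(int(time.time()))}   # timestamp defends against replays
params["checksum"] = sign(params)               # signed BEFORE checksum is added
# request sent as /user.json?user=123&showFriends=true&...&checksum=...

# --- server side ---
def verify(received):
    sent_checksum = received.pop("checksum")
    if abs(time.time() - int(received["timestamp"])) > 15 * 60:  # e.g. an Amazon-style 15 min window
        return False
    # Recompute with the same rules and the same secret; compare in constant time.
    return hmac.compare_digest(sign(received), sent_checksum)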
If you want to make the API secure from replay attacks, you can use a nonce value (it's a 1-time use value the server generates, gives to the client, the client uses it in the HMAC, sends back the request, the server confirms and then marks that nonce value as "used" in the DB and never lets another request use it again).
NOTE: Nonces are a really exact way to solve the "replay attack" problem -- timestamps are great, but because computers don't always have in-sync clocks, you have to allow an acceptable window on the server side for how "old" a request might be (say 10 mins, 30 mins, 1 hr... Amazon uses 15 mins) before you accept or reject it. In this scenario your API is technically vulnerable during that entire window of time.
I think nonce values are great, but should only be needed in APIs where it's critical that they keep their integrity. In my API, I didn't need it, but it would be trivial to add later if users demanded it... I would literally just need to add a "nonce" table in my DB, expose a new API to clients like:
/nonce.json
and then when they send that back to me in the HMAC calculation, I would need to check the DB to make sure it had never been used before and once used, mark it as such in the DB so if a request EVER came in again with that same nonce I would reject it.
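A minimal version of that nonce bookkeeping might look like this (an in-memory set for the sketch; the DB table described above plays the same role):

import secrets

issued_nonces = set()   # in real code: a DB table with a "used" flag

def issue_nonce():
    # What the /nonce.json endpoint would hand out.
    nonce = secrets.token_hex(16)
    issued_nonces.add(nonce)
    return nonce

def consume_nonce(nonce):
    # True exactly once per nonce: any replayed request fails this check.
    if nonce in issued_nonces:
        issued_nonces.remove(nonce)
        return True
    return False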
Summary
Anyway, to make a long story short, everything I just described is basically what is known as "2-legged OAuth". There isn't that added step of flowing to the authority (Twitter, Facebook, Google, whatever) to authorize the client; that step is removed, and instead the server implicitly trusts the client IF the HMACs it is sending match up. That means the client has the right secret_key and is signing its messages with it, so the server trusts it.
If you start looking around online, this seems to be the preferred method for securing API methods nowadays, or something like it. Amazon almost exactly uses this method, except they use a slightly different combination method for their parameters before signing the whole thing to generate the HMAC.
If you are interested I wrote up this entire journey and thought-process as I was learning it. That might help provide a guided thinking tour of this process.
I would take a step back and think about what a properly authenticated client is going to be sending you.
Can you store the keys and credentials in a common database which is accessible from both sets of services, and just implement the OAuth provider in one language? When the user sends in a request to a service (PHP or Java) you then check against the common store. When the user is setting up the OAuth client then you do all of that through either a PHP or Java app (your preference), and store the credentials in the common DB.
There are some Oauth providers written in other languages that you might want to take a look at:
PHP - http://term.ie/oauth/example/ (see bottom of page)
Ruby - http://github.com/mojodna/sample-oauth-provider
.NET http://blog.bittercoder.com/PermaLink,guid,0d080a15-b412-48cf-b0d4-e842b25e3813.aspx
