I have an API which uses OAuth for authentication.
Everything had been working smoothly for months until last week, when OAuth started throwing 400 errors with the message "Expired timestamp, yours X, ours Y".
I'm guessing that some change on the server (it's a shared hosting machine) reset or otherwise broke the local time.
I can't think of any other reason, but I'd like to know if there's any other debugging technique that might help me.
PS: I'm using PHP 5.2 and the OAuth 1.0a PHP implementation.
The OAuth server should always use UTC time as the basis for checking timestamps. Assuming that you send a correct UTC-based timestamp, check whether the 'ours' timestamp has the correct time in UTC. If not, it's most likely there is a problem with the server time or with the OAuth timestamp calculation on the server.
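As a quick sanity check, here is a minimal sketch (in Python, purely for illustration) of that comparison; the 'ours' value is a hypothetical number you'd parse out of the 400 error body:

```python
# Compare the epoch timestamp the client sends against the server's "ours"
# value from the "Expired timestamp, yours X, ours Y" error message.
import time

ours = 1270999414          # hypothetical: the "ours Y" value from the 400 body
yours = int(time.time())   # epoch seconds, which are UTC-based by definition
print("skew: %+d seconds" % (yours - ours))
# A skew of more than a few minutes points at a server clock problem.
```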
Since Friday, all of our users have been seeing sporadic 302s when trying to access our IAP-protected resources in GCP. Cookies are valid, and definitely being passed with the request.
This has worked for us for two years and nothing has changed here recently beyond standard GKE upgrades.
Since Friday we're seeing sporadic 302s from IAP (X-Goog-IAP-Generated-Response: true) as if the cookie were invalid. I can recreate this problem with a simple curl command, with my cookie stored in a file called cookie.test:
`curl -vs -b ./cookie.test https://gitlab.mydomain.com/projects/myapp.git`
The behaviour is very reproducible: only about 2 out of 5 attempts get a response from gitlab.mydomain.com; the other 3 see a 303 to accounts.google.com. Same cookie every time, all requests within a few seconds of each other.
This is causing an enormous inconvenience for our team.
Has there been a change to IAP recently that might explain this? Do you have any other reports of similar behaviour?
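For reference, the same test is easy to script; here's a minimal sketch in Python (the cookie name is the old one discussed in the answers below, and the value is a placeholder):

```python
# Fire the same request several times and log which responses IAP generated.
import requests

cookies = {"GCP_IAAP_AUTH_TOKEN": "<value from cookie.test>"}  # placeholder
for i in range(5):
    r = requests.get("https://gitlab.mydomain.com/projects/myapp.git",
                     cookies=cookies, allow_redirects=False)
    print(i, r.status_code, r.headers.get("X-Goog-IAP-Generated-Response"))
```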
Folks,
I am from the IAP team at Google. IAP recently made some changes to the cookie name; however, this change should have been transparent to browser users.
For people using the GCP_IAAP_AUTH_TOKEN cookie name for programmatic auth: your flows will break. The documented way to send credentials in a programmatic call is the Authorization / Proxy-Authorization header:
https://cloud.google.com/iap/docs/authentication-howto#authenticating_a_user_account
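For instance, here is a minimal sketch of that flow using the google-auth Python library; key.json and the IAP client ID are placeholders you'd substitute with your own values:

```python
# Obtain an OIDC identity token for the IAP-protected resource and send it
# in the Authorization header instead of relying on the cookie.
import requests
from google.oauth2 import service_account
from google.auth.transport.requests import Request

IAP_CLIENT_ID = "1234567890-abc.apps.googleusercontent.com"  # placeholder

creds = service_account.IDTokenCredentials.from_service_account_file(
    "key.json", target_audience=IAP_CLIENT_ID)
creds.refresh(Request())  # fetches the identity token

resp = requests.get("https://gitlab.mydomain.com/projects/myapp.git",
                    headers={"Authorization": "Bearer %s" % creds.token})
print(resp.status_code)
```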
Cookies are meant to be used for browser flows only, and IAP retains complete control of the naming and format of the cookie. If you continue to use cookies to send credentials to IAP (by reverse-engineering the new format), you run the risk of being broken again by future changes to the cookie name or format.
One clarification is required, though. The original post mentions getting a 302 to accounts.google.com; is that true for browser flows as well? If so, please respond with a HAR file and I'll be happy to take a look.
cheers.
I also started facing this issue last week and have spent around two days troubleshooting it; initially we thought it must be some problem on our side.
Good to know that I am not the only one facing it.
Would really appreciate some updates from Google around this.
However, one thing I found: there was an official blog post from Google about IAP: https://cloud.google.com/blog/products/identity-security/getting-started-with-cloud-identity-aware-proxy
They updated this post on 19 January and removed the mention of the GCP_IAAP_AUTH_TOKEN cookie.
However, the line they changed is still unclear to me and very confusing. It now says:
That token can come from either a browser cookie or, for programmatic access, from an Authorization: bearer header.
Where will the browser cookie come from, and what will its name be? There is no mention of that anywhere.
Let me know if someone finds a way to get it working again.
Thanks,
Nishant Shah
We're using ejabberd as our XMPP server and the iPhone XMPPFramework on the client side.
The problem is that when we fetch offline messages, the timestamp written into the message is an actual date/time, but the server's timezone differs from the clients' timezones, and at this point things get messy.
We use the same approach when querying a user's Last Activity (XEP-0012), but in that XEP the server returns the information as "how many seconds ago the user last logged in to the server", so we can apply the seconds difference to the client's own clock and work out the delivery date/time; the Last Activity query gives us no problem.
But with delayed delivery, ejabberd sends an exact date and time value, and clients get confused about the conversion (the date and time of each client may differ widely from the others).
Does anyone know how we can fix this problem? Is there any way to configure ejabberd to return "seconds passed" information for offline messages instead?
By the way, we're using the latest ejabberd version.
Thanks
XEP-0091 (Legacy Delayed Delivery) says:
The timezone is to be understood as UTC.
So you need to convert the time from UTC to the local time of the client in order to get the correct result.
The newer specification, XEP-0203 (Delayed Delivery), also says:
[...] MUST be expressed in UTC
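So the fix belongs on the client: parse the stamp as UTC and convert it to the device's local zone. A minimal sketch in Python (the stamp value is a made-up example of a XEP-0203 delay stamp):

```python
from datetime import datetime, timezone

# e.g. from <delay xmlns='urn:xmpp:delay' stamp='...'/> on an offline message
stamp = "2010-04-11T16:53:34Z"
utc_dt = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
local_dt = utc_dt.astimezone()  # convert to the client's local time zone
print(local_dt.isoformat())
```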
Our experiments have shown that GMail does not use UTC for the internal dates in its mailstore. Anyone know what offset it uses? We've narrowed it to between 3 and 7 hours behind UTC (exclusive), and we could figure it out with further experimentation, but maybe someone knows off-hand. UID SEARCH and the like aren't very accurate if you are assuming UTC and it's not the case. :-)
Further, we're wondering if it's consistent regardless of where you're connecting to gmail in the world.
Update: the first test showed UTC-4 or UTC-5, and a second test I did (sending hourly emails) revealed my account to be UTC-7. We're wondering if it's set at registration based on your source IP (I also registered an account with a UTC-10 timezone and the internal store was still UTC-7; changing your account timezone later doesn't change the internal store date an IMAP client sees, which is wise, I would think).
I wrote a Python IMAP client and have confirmed over and over again that the Gmail mailstore timezone is UTC-07:00. So any search results you obtain (more precisely, the after:startDate and before:endDate queries) are shifted by that offset relative to your local timezone. This is because the mailstore uses the local time of the sending SMTP server, which in Gmail's case happens to be at UTC-07:00.
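For illustration, a minimal sketch of the kind of test involved, using Python's imaplib; the credentials and dates are placeholders:

```python
import imaplib

M = imaplib.IMAP4_SSL("imap.gmail.com")
M.login("user@gmail.com", "app-password")  # placeholders
M.select("INBOX")

# SEARCH dates are matched against Gmail's internal dates, which per the
# observations above sit at UTC-07:00 rather than UTC.
typ, data = M.uid("SEARCH", None, "SINCE", "11-Apr-2010", "BEFORE", "12-Apr-2010")
print(data[0].split())
M.logout()
```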
I have an ASP.NET MVC3 RESTful web service that uses the Microsoft JSON serializer. This service returns data that contains a .NET DateTime value.
The web service is accessed by a Silverlight client that uses the Newtonsoft JSON library to deserialize the returned data.
The date values that I get in the client are five hours ahead of the values sent from the service. Since I am in the US Eastern Time Zone, this looks like local time being sent from the service and interpreted as GMT by the client.
My question is this: what is a good way to handle the discrepancy? Is there something in either the Microsoft or the Newtonsoft library that I can set to deal with it? Something a little more elegant than subtracting five hours from the time received by the client.
Thanks
First have a look at this question (it is about Backbone.js, but applies to your problem as well): How to handle dates in Backbone?
Some libraries (like Jackson) serialize dates to UNIX time by default. How is the date/time represented in the data sent from the server? If it is not a simple integer, it should be represented using ISO 8601, which always explicitly defines the timezone (or Z for UTC).
If the time is sent in textual form from the server but without a time zone, the server is marshalling it incorrectly. If the time has the correct time zone but the client discards it, it is the client's fault.
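To make that concrete, here's a minimal sketch (in Python, purely for illustration; the field name is made up) of round-tripping a date as ISO 8601 with an explicit offset:

```python
import json
from datetime import datetime, timezone

# Server side: always serialize with an explicit offset (here UTC).
created = datetime(2011, 5, 1, 14, 30, 0, tzinfo=timezone.utc)
body = json.dumps({"createdAt": created.isoformat()})
print(body)  # {"createdAt": "2011-05-01T14:30:00+00:00"}

# Client side: parse with the offset intact, then render locally.
parsed = datetime.fromisoformat(json.loads(body)["createdAt"])
print(parsed.astimezone())  # same instant, shown in the client's local zone
```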
I keep getting 401s when trying to log in via OAuth with Twitter.
I'm using twitter_oauth-0.3.3 with oauth-0.3.6 in Rails.
It used to work perfectly some time ago, so after some digging, I realised it might have something to do with my timezone.
One of the headers in the Twitter response is:
date:
- Sun, 11 Apr 2010 16:53:34 GMT
Even though the time is actually 17:53:34 BST
I'm assuming the request is signed using BST time, and so it fails.
Anyone had this problem / found a fix for it?
The OAuth timestamp used in the signature is in epoch time format.
You might want to inspect the Authorization request header, check the oauth_timestamp, and verify what you are sending with this helpful online tool.
The difference between your epoch time and Twitter's (which you get in the response, as you wrote) should be within a +-300 second window.
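A minimal sketch (in Python, for illustration) of that check; the endpoint is arbitrary since only the Date response header matters:

```python
import time
from email.utils import parsedate_to_datetime
import requests

resp = requests.head("https://api.twitter.com/1.1/")  # any endpoint with a Date header
server_epoch = parsedate_to_datetime(resp.headers["Date"]).timestamp()
skew = time.time() - server_epoch
print("clock skew: %+.0f seconds" % skew)  # should sit well inside +-300s
```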
Hope that helps!
Well, if the time is 17:53:34 BST, then it really is 16:53:34 GMT.
So, if the OAuth library you're using can't cope with the server specifying the time in a different time zone (are you certain that's what's causing the problem?), then it could be a bug in the library.
(Not very helpful, I know - I don't use Ruby and haven't done any OAuth development myself.) Have you tried contacting the library developers?