Setting up a third-party server to interact with Game Center (iOS)

I'm thinking of adding a feature to my iOS game to allow players to create their own game levels, share them with other players, rate them, etc. There'd be a public repository of user-created levels, sortable by creation date, rating, difficulty, or other criteria.
This kind of functionality would necessitate a third-party server. I was thinking I'd create a RESTful API using Sinatra and run it on Heroku. My question is: what would be the best way to authenticate requests to this API? I would prefer not to require players to create a username and password. I'd like to just use Game Center's ID system.
Any suggestions? I've never done any server-side stuff before so any help is appreciated!
Clarification
Yes, I'm aware that Apple doesn't provide its own authentication system for this. But it does give developers access to unique Game Center identifiers (developer.apple.com/library/mac/#documentation/…), and I was hoping I could use those somehow to roll my own authentication system without requiring users to sign in via Facebook/Twitter/etc., if that's possible.

Looks like as of iOS 7, this is possible with Game Center using:
[localPlayer generateIdentityVerificationSignatureWithCompletionHandler:]
Once you have verified the identity of the player using that call:
1) Associate the player ID with a user in your server's database.
2) Use whatever access token / authentication pattern your REST framework provides for subsequent calls.
https://developer.apple.com/library/ios/documentation/GameKit/Reference/GKLocalPlayer_Ref/Reference/Reference.html
Also for reference, here is the dictionary that we end up sending off to our server based on the response from generateIdentityVerificationSignatureWithCompletionHandler
NSDictionary *paramsDict = @{
    @"publicKeyUrl": [publicKeyUrl absoluteString],
    @"timestamp": [NSString stringWithFormat:@"%llu", timestamp],
    @"signature": [signature base64EncodedStringWithOptions:0],
    @"salt": [salt base64EncodedStringWithOptions:0],
    @"playerID": localPlayer.playerID,
    @"bundleID": [[NSBundle mainBundle] bundleIdentifier]
};
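For illustration only, here is a rough sketch in Python with Flask (the original question mentions Sinatra; any framework would do) of steps 1) and 2) above: the server receives the paramsDict shown, verifies it, maps the playerID to a user record, and hands back a token for subsequent calls. The endpoint and helper names are made up for this sketch.
import secrets
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
users = {}  # playerID -> {"token": ...}; a real app would use a database

def verify_game_center_identity(params):
    # Placeholder: the actual signature check is sketched under a later answer below.
    raise NotImplementedError

@app.route("/sessions", methods=["POST"])
def create_session():
    params = request.get_json()
    if not verify_game_center_identity(params):
        abort(401)  # signature did not check out
    record = users.setdefault(params["playerID"], {})
    record["token"] = secrets.token_urlsafe(32)  # bearer token for subsequent calls
    return jsonify({"token": record["token"]})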

edit: as of when I posted this there was no official solution from Apple, but there is now. See the other answers for that, or read on purely for historical / backwards-compatibility interest.
Apple doesn't provide any sort of system for using Apple ID authentication (which includes Game Center) with third-party services. You're on your own for authentication, though you could look into OAuth for allowing single-sign-on via Facebook/Twitter/etc. (Just beware that not everyone has a Facebook/Twitter/etc identity, or one that they want to use for your game.)
In theory, the playerID property on GKPlayer is unique, constant, and not known to anyone else. So, in theory, you could use it for "poor man's authentication": present it to your server, and that's all the server needs to look up and provide player-specific stuff. But this is like authentication by UDID, or by user name only -- the only security it provides is obscurity. And what happens when you have a user who's not signed into Game Center?

Andy's answer is on the right track, but to finish the story: the docs he links to explain how to verify against Apple's servers that the Game Center user actually is who they claim to be. A link to that part of the docs is below. Basically, the client-side call to generateIdentityVerificationSignatureWithCompletionHandler gives you some data, including a URL. You pass that data and the URL to your own server, and from your server you hit that URL to authenticate the user with the rest of the data provided by the call.
https://developer.apple.com/library/ios/documentation/GameKit/Reference/GKLocalPlayer_Ref/index.html#//apple_ref/occ/instm/GKLocalPlayer/generateIdentityVerificationSignatureWithCompletionHandler:
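For concreteness, here is a rough Python sketch (using the requests and cryptography packages) of that server-side verification, assuming the dictionary shown in the first answer is what arrives at the server; it fills in the verify_game_center_identity placeholder used in the earlier sketch. The payload layout (playerID, then bundleID, then the timestamp as a big-endian unsigned 64-bit integer, then the salt) follows Apple's documentation for this call, but treat the code as an illustration rather than production-ready.
import base64
import struct

import requests
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def verify_game_center_identity(params):
    # 1. Fetch the signing certificate Apple hosts at publicKeyUrl.
    cert_der = requests.get(params["publicKeyUrl"], timeout=10).content
    public_key = x509.load_der_x509_certificate(cert_der).public_key()
    # 2. Rebuild the signed payload: playerID + bundleID + timestamp (big-endian uint64) + salt.
    payload = (
        params["playerID"].encode("utf-8")
        + params["bundleID"].encode("utf-8")
        + struct.pack(">Q", int(params["timestamp"]))
        + base64.b64decode(params["salt"])
    )
    # 3. Verify the RSA/SHA-256 signature over the payload.
    try:
        public_key.verify(
            base64.b64decode(params["signature"]),
            payload,
            padding.PKCS1v15(),
            hashes.SHA256(),
        )
        return True
    except Exception:
        return False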

I had a heck of a time figuring this out. I finally used a few hints from this answer, a couple of other SO answers, the PHP docs, and some lucky guessing to come up with this complete answer.
NOTE: This method seems very open to hacking, as anyone could sign whatever they want with their own certificate, then pass the server the data, the signature, and a URL to their own certificate, and get back a "that's a valid Game Center login" answer. So while this code "works" in the sense that it implements the Game Center algorithm, the algorithm itself seems flawed. Ideally, we would also check that the certificate came from a trusted source; extra paranoia to confirm that it is Apple's Game Center certificate would be good, too.
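As a hedged sketch of the extra checks the note above is asking for, the server could at least insist that publicKeyUrl is an HTTPS URL on an Apple host and that the certificate is currently within its validity window before trusting it; full chain validation up to Apple's CA would still be needed for real assurance. The hostname heuristic below is an assumption for illustration, not something Apple documents.
from datetime import datetime, timezone
from urllib.parse import urlparse

from cryptography import x509

def public_key_url_looks_plausible(public_key_url):
    # Assumption: Apple serves these certificates over HTTPS from an apple.com host.
    parsed = urlparse(public_key_url)
    return parsed.scheme == "https" and (parsed.hostname or "").endswith(".apple.com")

def certificate_is_current(cert_der):
    # Only checks the validity window (uses the *_utc fields from cryptography >= 42);
    # verifying the chain up to Apple's root certificate is still required to rule out
    # a forged signer.
    cert = x509.load_der_x509_certificate(cert_der)
    now = datetime.now(timezone.utc)
    return cert.not_valid_before_utc <= now <= cert.not_valid_after_utc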

Related

Locking an API to App-Use only

I've written a Rails back-end for the mobile app I'm working on, and it occurred to me that even though I'm using token authentication, anyone could write a malicious script that continually registers users or continually makes requests in an attempt to fill the database or attack the server.
I guess there are two questions here:
1) What modifications would I need to make in order to ONLY allow API access from my mobile app?
2) How can I protect my API URLs?
Thanks :)
There are multiple things you can do to protect your API:
The simplest thing you can start with is verifying the User-Agent header in your requests. That usually gives you a good indicator of what the initiating device is.
That being said, it isn't always accurate, and it's definitely fakeable.
If you control the client side of the mobile app as well, you could encrypt the requests/responses with a cipher or key system that requires a key only your mobile app knows. Look at OpenSSL for that, using a public/private key pair.
Token authentication is a good idea. I would actually look at OAuth or similar systems for authentication, and keep your session timers short.
On top of that, you can probably add some rate control in order to limit consecutive calls from the same IP in a given timespan (see the sketch after this list).
Finally, I would look at something like fail2ban or similar to automatically ban brute-force-type attacks.
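To make the rate-control point concrete, here is a minimal in-memory Python sketch of per-IP limiting. In a Rails app you would more likely reach for something like Rack::Attack or an nginx limit_req rule, so treat this purely as an illustration of the idea; the window and limit values are arbitrary.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 30
_recent = defaultdict(deque)  # ip -> timestamps of recent requests

def allow_request(ip):
    now = time.time()
    bucket = _recent[ip]
    while bucket and now - bucket[0] > WINDOW_SECONDS:
        bucket.popleft()  # drop requests that fell outside the window
    if len(bucket) >= MAX_REQUESTS_PER_WINDOW:
        return False  # over the limit: respond with 429 / drop the request
    bucket.append(now)
    return True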

Embedding Flash Media Services (Red5) and Authorization

An architectural question.
My site needs to allow the user to record video and upload it to the "site". I've been poking around a fair bit and it seems I have to use some kind of media server to achieve this aim. As I'm introducing this secondary server into the system (I seek to embed the flash app residing on this server into the HTML delivered by the site) it occurs to me that this broadens the scope of security a lot. What scares me is attackers trying to embed the flash app themselves or attempting to impersonate clients (or anything else I haven't thought of yet!).
I was therefore wondering how people secure their applications with such an architecture. Sure, I could do what is suggested here; that's a decent band-aid for now, but as far as I know the domain information can technically be falsified by the client.
I could separate out the site's auth, giving me a WebServer, an AuthServer, and a MediaServer, enabling the MediaServer to authenticate separately. Getting the user to log into both sites is obviously onerous, and passing around the user's login credentials and securing all the connections sounds ugly and contrary to best practice.
As far as I can see, my best bet is some kind of temporary token that the auth server creates. After login, the website asks the auth server to generate the token; the site can then pass it to the media server (as part of the flashvars), and the MediaServer itself can use it to double-check against the auth server.
I'm relatively new to Red5, Flash, and web security, so I was wondering whether the above sounds sane, secure, and/or necessary. Also, does anyone know of decent tools to use for such an auth system, and whether there is something already kicking about in ASP.NET auth for this purpose?
Regarding the solution provided in your link: you should read my second comment there.
The first comment, about virtual hosts, is wrong! My comment does actually tell you (at least one) way to secure your app.
You could for example pass a SESSION_ID in the connect method to Red5. The user would get the SESSION_ID from another webservice call before he invokes the record or playback method.
The SESSION_ID might even be some kind of temporary token that is only valid for 15 minutes and usable a single time, for exactly that video. How far you take that is a matter of how secure your mechanism needs to be.
Sebastian
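To illustrate Sebastian's suggestion, here is a hedged Python sketch of such a short-lived, single-use token: the web/auth server mints a token tied to a user and a specific video, and the media server redeems it exactly once. The names and in-memory storage are illustrative; a shared database or cache would replace the dict in practice.
import secrets
import time

TOKEN_TTL_SECONDS = 15 * 60
_tokens = {}  # token -> (user_id, video_id, expires_at)

def issue_token(user_id, video_id):
    token = secrets.token_urlsafe(32)
    _tokens[token] = (user_id, video_id, time.time() + TOKEN_TTL_SECONDS)
    return token  # handed to the Flash client, e.g. as a flashvar passed to connect()

def redeem_token(token, video_id):
    entry = _tokens.pop(token, None)  # popping makes the token single-use
    if entry is None:
        return None
    user_id, allowed_video, expires_at = entry
    if time.time() > expires_at or video_id != allowed_video:
        return None
    return user_id  # the media server now knows which authenticated user this is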

How to prevent unauthorized HTTP requests?

I have some code in my iOS app like this:
NSURL *url = [NSURL URLWithString:@"http://urltomyapp.com/createaccount"];
ASIFormDataRequest *createAccountRequest = [ASIFormDataRequest requestWithURL:url];
[createAccountRequest setPostValue:email forKey:@"email"];
[createAccountRequest setPostValue:password forKey:@"password"];
[createAccountRequest startAsynchronous];
In my server implementation, I simply take this information via self.request.get('email') and create an account, without doing any checks. However, it seems that anyone could run the above piece of code easily (all you'd need to do is copy it into your own app, right?). All they'd need to know is the server address; they could attach any data they want to the request, and the server would go ahead and create an account for them.
How would I authorize requests to know that they are coming from my app and my app only? Is this a common concern? How do other products protect against this?
First, a disclaimer. I am certainly not a web expert, nor am I a security expert. In fact, the only reason I'm answering at all is because of the discussion in stackmonster's reply.
However, I do know that intercepting an SSL connection is exceptionally easy, especially if the user is complicit.
In general, though, I think the following is of some benefit.
You have to determine who/what you are trying to protect. If you just want to protect the data in the communication between the app and the server, https will be just fine. External snooping will be as effective (or ineffective) as snooping other SSL traffic.
However, if you are trying to protect your API (which your question seems to suggest), it is trivial for a user to see what commands you are sending (as you yourself found out by using Charles).
So, do you want to prevent anyone from knowing the details of your API? Do you want to just prevent DOS attacks, or only let valid users issue commands, or what?
You can then worry about authentication and authorizations (two different topics). Maybe validating that the request comes from a known entity is enough.
Anyway, it is extremely difficult to give guidance because you first have to decide what your networking privacy goals are.
Then, if they are lofty, you are in for a lot of reading.
At some point, though, you have to decide what is crucial to your app/business, and what is not. Just like any good software design, then create a set of requirements. Then, prioritize them in some order (e.g., mandatory, essential, nice to have, can live without).
That will tell you if you need additional security, and what kind.
Most, however, find that it's not worth the time and investment to even lock all the doors and bar the windows (not to mention protecting the chimney, adding concrete to the walls, floors, and ceilings, building a safe-room, and hiring armed guards).
Use HTTPS and put a cert inside your app to verify that the client is allowed to talk to your server.
But trust me, it's really not worth all that. Using HTTPS is generally OK on its own.

What is the standard way to handle Twitter API keys in GPL'd desktop applications?

While developing a desktop application that needs to access the Twitter API, one must somehow pass the application's API key (the application-specific consumer key and consumer secret) to the user. Twitter's API TOS states that an application's API key cannot be publicly available, and if that happens, they reset it. When the application is under the GPL, meaning the developer needs to provide the source code to the user, how can the user obtain the API key without it being publicly available? Is there a standard way to handle this issue?
Thanks.
Edit:
To clarify the situation: so far I had been storing the keys in plain text in my code for cree.py, as a conscious decision. But yesterday the Twitter support team contacted me to say that they had reset my key, and their reasoning was the following:
C. You should not solicit another developer's consumer keys or consumer secrets especially if they will be stored or used for actions outside of that developer's control. Keys and secrets that are compromised will be reset by Twitter. For example, online services that ask for these values in order to provide a "tweet-branding" service are not allowed.
https://dev.twitter.com/terms/api-terms
If an application's keys are posted publicly, it allows for external parties to hijack the application's API access. This presents an enormous abuse risk, and as such we've reset your API keys. Please take care to ensure that these keys are not posted publicly again.
Thanks,
Twitter API Policy
Well, TTYtter evidently uses the honour system:
# yes, this is plaintext. obfuscation would be ludicrously easy to crack,
# and there is no way to hide them effectively or fully in a Perl script.
# so be a good neighbour and leave this the fark alone, okay? stealing
# credentials is mean and inconvenient to users. this is blessed by
# arrangement with Twitter. don't be a d*ck. thanks for your cooperation.
$oauthkey = (!length($oauthkey) || $oauthkey eq 'X') ?
"XXXXXXXXXXXXXXXXXXXXX" : $oauthkey;
$oauthsecret = (!length($oauthsecret) || $oauthsecret eq 'X') ?
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" : $oauthsecret;
(I have replaced the actual keys with Xs, to make it a little less likely that anyone will go to the trouble to abuse them, but rest assured that they are present in full in the actual source!)
Also, I don't see anything in the Rules of the Road actually requiring you to keep these things secret: the closest thing I see is the statement "Keys and secrets that are compromised will be reset by Twitter", but they never actually say what "compromised" means.
I might be dense here, but why don't you store them in a configuration file, the Windows registry, etc., and read them from there? Then distribute the application without that file, and you're done.
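For what it's worth, here is a tiny Python sketch of that idea, with a made-up keys.cfg file that is distributed separately from the GPL'd source:
import configparser

config = configparser.ConfigParser()
config.read("keys.cfg")  # shipped separately (or created by the user), never committed
if "twitter" not in config:
    raise SystemExit("keys.cfg missing: register your own app with Twitter and add its keys")
consumer_key = config["twitter"]["consumer_key"]
consumer_secret = config["twitter"]["consumer_secret"]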
Maybe another solution would be to use a server of your own: the server interacts with the Twitter API, and the desktop application requests information from your server.
That way, the API key is only stored on the server, and not just any user can get it.

Implementing a 2 Legged OAuth Provider

I'm trying to find my way around the OAuth spec, its requirements, and any implementations I can find, and, so far, it really seems like more trouble than it's worth, because I'm having trouble finding a single resource that pulls it all together. Or maybe it's just that I'm looking for something more specialized than most tutorials cover.
I have a set of existing APIs--some in Java, some in PHP--that I now need to secure and, for a number of reasons, OAuth seems like the right way to go. Unfortunately, my inability to track down the right resources to help me get a provider up and running is challenging that theory. Since most of this will be system-to-system API usage, I'll need to implement a 2-legged provider. With that in mind...
Does anyone know of any good tutorials for implementing a 2-legged OAuth provider with PHP?
Given that I have securable APIs in 2 languages, do I need to implement a provider in both or is there a way to create the provider as a "front controller" that I can funnel all requests through?
When securing PHP services, for example, do I have to secure each API individually by including the requisite provider resources on each?
Thanks for your help.
Rob, not sure where you landed on this but wanted to add my 2 cents in case anyone else ran across this question.
I more or less had the same question a few months ago, after hearing about "OAuth" for the better part of a year. I was developing a REST API I needed to secure, so I started reading about OAuth... and then my eyes started to roll back in my head.
I probably gave it a good solid day or 2 of skimming and reading until I decided, much like you, that OAuth was confusing garbage and just gave up on it.
So then I started researching ways to secure APIs in general and started to get a better grasp on how to do that. The most popular way seemed to be sending requests to the API along with a checksum of the entire message (encoded with a secret that only you and the server know), which the server can use to decide whether the message had been tampered with on its way from the client, like so:
Client sends /user.json/123?showFriends=true&showStats=true&checksum=kjDSiuas98SD987ad
Server gets all that, looks up user "123" in the database, loads their secret key, and then (using the same method the client used) re-calculates its OWN checksum given the request arguments.
If the server's generated checksum and the client's sent checksum match up, the request is OK and executed, if not, it is considered tampered with and rejected.
The checksum is called an HMAC and if you want a good example of this, it is what Amazon Web Services uses (they call the argument 'signature' not 'checksum' though).
So given that one of the key components of this to work is that the client and server have to generate the HMAC in the same fashion (otherwise they won't match), there have to be rules on HOW to combine all the arguments... then I suddenly understood all that "natural byte-ordering of parameters" crap from OAuth... it was just defining the rules for how to generate the signature because it needed to.
Another point is that every param you include in the HMAC generation is a value that then can't be tampered with when you send the request.
So if you just encode the URI stem as the signature, for example:
/user.json == askJdla9/kjdas+Askj2l8add
then the only thing in your message that cannot be tampered with is the URI, all of the arguments can be tampered with because they aren't part of the "checksum" value that the server will re-calculate.
Additionally, even if you include EVERY param in the calculation, you still run the risk of "replay attacks", where a malicious middleman or eavesdropper can intercept an API call and just keep re-sending it to the server over and over again.
You can fix that by adding a timestamp (always use UTC) in the HMAC calculation as well.
REMINDER: Since the server needs to calculate the same HMAC, you have to send along any value you use in the calculation EXCEPT YOUR SECRET KEY (OAuth calls it a consumer_secret I think). So if you add timestamp, make sure you send a timestamp param along with your request.
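Here is a hedged Python sketch of the signing scheme described so far: both sides build the same canonical string from the path, the sorted parameters, and a timestamp, then HMAC it with the shared secret. The canonicalization rules below are made up for the example; the only real requirement is that client and server agree on them.
import hashlib
import hmac
import time
from urllib.parse import urlencode

def sign_request(path, params, secret_key):
    signed = dict(params, timestamp=str(int(time.time())))  # the timestamp is part of the HMAC
    canonical = path + "?" + urlencode(sorted(signed.items()))
    signed["signature"] = hmac.new(secret_key.encode(), canonical.encode(), hashlib.sha256).hexdigest()
    return signed

def verify_request(path, params, secret_key, max_skew=15 * 60):
    params = dict(params)
    received = params.pop("signature", "")
    canonical = path + "?" + urlencode(sorted(params.items()))
    expected = hmac.new(secret_key.encode(), canonical.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(received, expected):
        return False  # tampered with, or signed with the wrong secret
    age = abs(time.time() - int(params.get("timestamp", 0)))
    return age <= max_skew  # reject requests outside the accepted window (15 minutes here)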
If you want to make the API secure against replay attacks, you can use a nonce value: a one-time-use value that the server generates and gives to the client; the client uses it in the HMAC and sends the request, and the server confirms it, marks that nonce value as "used" in the DB, and never lets another request use it again.
NOTE: Nonces are a really exact way to solve the "replay attack" problem. Timestamps are great, but because computers don't always have in-sync clocks, you have to allow an acceptable window on the server side for how "old" a request may be (say 10 mins, 30 mins, 1 hr... Amazon uses 15 mins) before you accept or reject it. In this scenario your API is technically vulnerable during that entire window.
I think nonce values are great, but they should only be needed for APIs where keeping integrity is critical. In my API I didn't need it, but it would be trivial to add later if users demanded it... I would literally just need to add a "nonce" table to my DB, expose a new API to clients like:
/nonce.json
and then when they send that back to me in the HMAC calculation, I would need to check the DB to make sure it had never been used before and once used, mark it as such in the DB so if a request EVER came in again with that same nonce I would reject it.
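And a matching sketch of the nonce bookkeeping: /nonce.json would return issue_nonce(), and the request handler would call consume_nonce() before trusting the HMAC. An in-memory set stands in for the "nonce" table here, purely for illustration.
import secrets

_unused_nonces = set()  # stand-in for a "nonce" table in the DB

def issue_nonce():
    nonce = secrets.token_hex(16)
    _unused_nonces.add(nonce)
    return nonce

def consume_nonce(nonce):
    try:
        _unused_nonces.remove(nonce)  # succeeds at most once per nonce
        return True
    except KeyError:
        return False  # unknown or already-used nonce: reject the request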
Summary
Anyway, to make a long story short, everything I just described is basically what is known as "2-legged OAuth". There isn't that added step of redirecting to the authority (Twitter, Facebook, Google, whatever) to authorize the client; that step is removed, and instead the server implicitly trusts the client IF the HMACs it is sending match up. That means the client has the right secret_key and is signing its messages with it, so the server trusts it.
If you start looking around online, this seems to be the preferred method for securing API methods nowadays, or something like it. Amazon uses almost exactly this method, except they use a slightly different combination method for their parameters before signing the whole thing to generate the HMAC.
If you are interested I wrote up this entire journey and thought-process as I was learning it. That might help provide a guided thinking tour of this process.
I would take a step back and think about what a properly authenticated client is going to be sending you.
Can you store the keys and credentials in a common database which is accessible from both sets of services, and just implement the OAuth provider in one language? When the user sends in a request to a service (PHP or Java) you then check against the common store. When the user is setting up the OAuth client then you do all of that through either a PHP or Java app (your preference), and store the credentials in the common DB.
There are some OAuth providers written in other languages that you might want to take a look at:
PHP - http://term.ie/oauth/example/ (see bottom of page)
Ruby - http://github.com/mojodna/sample-oauth-provider
.NET http://blog.bittercoder.com/PermaLink,guid,0d080a15-b412-48cf-b0d4-e842b25e3813.aspx
