Protecting user passwords in desktop applications - Twitter

I'm making a Twitter client, and I'm evaluating the various ways of protecting the user's login information.
Hashing apparently doesn't do it, since I need the actual password back to log in.
Obfuscating it in a reversible way is like trying to hide behind my finger.
Plain text sounds, and probably is, far too exposed.
Requiring the user to type in the password every time would make the application tiresome.
Any ideas?

You could make some OS calls to encrypt the password for you.
On Windows:
You can encrypt a file (on an NTFS filesystem)
Use the DPAPI from C
Use the DPAPI in .NET via the ProtectedData class

CryptProtectData is a Windows function for storing this kind of sensitive data.
http://msdn.microsoft.com/en-us/library/aa380261.aspx
For an example see how Chrome uses it:
http://blog.paranoidferret.com/index.php/2008/09/10/how-google-chrome-stores-passwords/

For Windows: encrypt the password using DPAPI (user store) and store it in your settings file or somewhere else. This will work on a per-user basis, e.g. different users on the same machine will have different unrelated encryption keys.
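For reference, a minimal sketch of the DPAPI user-store approach from Python, assuming the pywin32 package and its win32crypt wrapper are available (the description string and storage location are just placeholders):

# Encrypt with the current user's DPAPI key; only the same Windows account can decrypt it.
import win32crypt  # pip install pywin32

def protect(secret: bytes) -> bytes:
    return win32crypt.CryptProtectData(secret, "my-app password")

def unprotect(blob: bytes) -> bytes:
    description, plaintext = win32crypt.CryptUnprotectData(blob)
    return plaintext

blob = protect(b"s3cret-password")  # write this blob to your settings file
assert unprotect(blob) == b"s3cret-password"

The blob is useless on another machine or under another user account, which is exactly the per-user behaviour described above.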

What platform?
On *nix, store the password in plain text in a file chmodded 400 in a subdirectory of the home directory; see ~/.subversion for an example. Administrators can do anything they like to users anyway, including replacing your program with their own hacked version that captures passwords, so there's no harm in the fact that they can read the file. Beware that the password is also accessible to someone who takes out the hard drive; if this is a problem, either get the user to re-enter the password each time or check whether this version of *nix has file encryption. (A minimal sketch of the chmod-400 approach follows after this list.)
On Windows Pro, store the password in an encrypted file.
On Windows Amateur (the Home editions), do the same as on *nix. [Edit: CryptProtectData looks good, as Aleris suggests. If it's available on all Windowses, then it solves the problem of only the more expensive versions supporting encrypted files.]
On Symbian, store the password in your data cage. Programs with AllFiles permission are rare and supposedly trusted anyway, a bit like *nix admins.
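As a minimal sketch of the *nix suggestion above (Python for illustration; ~/.myapp is a made-up location), create the file with restrictive permissions rather than chmodding it after the secret is already on disk:

import os
from pathlib import Path

cfg_dir = Path.home() / ".myapp"           # hypothetical config directory
cfg_dir.mkdir(mode=0o700, exist_ok=True)   # keep the directory private too
pw_file = cfg_dir / "password"

if pw_file.exists():
    os.chmod(pw_file, 0o600)               # allow overwriting on later runs

# Create with owner-only access, write the secret, then drop to read-only (400).
fd = os.open(pw_file, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
with os.fdopen(fd, "w") as f:
    f.write("s3cret-password\n")
os.chmod(pw_file, 0o400)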

You can't have your cake and eat it too. Either store the password (which you've ruled out), or don't, and require it to be typed in every time (which you've also ruled out).

Use a good symmetric encryption scheme; it should make decrypting the credentials difficult enough that it isn't worth trying.
Alternatively, if the service only requires a hash of the password to be sent over the network, you can store that hash encrypted. That way even decrypting it won't get the attacker any closer to the actual password.
The other answers are right, though: if you store the data, it can be found.
The key is finding the balance between security and usability.
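As an illustration of the store-the-hash-encrypted idea (not from the original answer): a Python sketch using the cryptography package, where the hash format and storage location are assumptions that depend on what the service actually accepts.

import hashlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice protect this key (DPAPI, keychain, ...)
fernet = Fernet(key)

password = b"s3cret-password"
pw_hash = hashlib.sha256(password).hexdigest()   # what the service needs
token = fernet.encrypt(pw_hash.encode())         # what you write to disk

# Decrypting later yields only the hash, never the original password.
assert fernet.decrypt(token).decode() == pw_hash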

Related

Is it secure to save some sensitive data in Localizable.strings?

My question is simple: I need to save some sensitive static data, for example the URL of my service or an encryption password. My doubt is this: is it secure to save this data in Localizable.strings?
No. A malicious user can easily see this in the IPA of an iTunes backup. But the user can also see this in any file in your app bundle. You will need to encrypt the string somehow. The tricky part is to hide the key as well: it may be a good idea to calculate the key somehow (you can be creative here).
Also pay attention to secure your transmission: if you would be using plain HTTP anyone who can use Wireshark would be able to see your sensitive information. Make sure you've set up HTTPS correctly and that you are validating the certificate of the server on connect (search StackOverflow about that).
I totally agree with @DarkDust. Just to add a few more things:
A malicious user can see the data by jailbreaking one of his devices, installing the app, and pulling out its whole contents. He may even change some code and run it.
The whole process of getting at the data is called reverse engineering. It's quite a wide field, and it's good to know the basics if you care about data security.
You can read more about reverse engineering in, e.g., this free book: https://github.com/iosre/iOSAppReverseEngineering.
A good hacker always gets the data eventually; it's just a matter of time. For you, as a developer, the task is to keep less experienced "hackers" from getting it.
To make things more difficult, you can obfuscate the data.
If you need to save some credentials in the app (e.g. a login token), always use the keychain, never any other storage.
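To illustrate the "calculate the key somehow" idea, here is a small sketch (Python rather than Objective-C, and the fragments and XOR pad are made-up values): the final key is derived at runtime instead of sitting in the binary as one literal.

import hashlib

PIECES = [b"\x1b\x07\x10", b"\x04\x15\x02"]   # obfuscated fragments
PAD = b"\x6b\x62\x75\x6d\x64\x71"             # XOR pad stored elsewhere in the code

def derive_key() -> bytes:
    joined = b"".join(PIECES)
    deobfuscated = bytes(a ^ b for a, b in zip(joined, PAD))
    # Hash the result so the final key never appears verbatim in the source or binary.
    return hashlib.sha256(deobfuscated).digest()

This only raises the bar; as noted above, a determined reverse engineer will still recover it.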

Secure keys in iOS App scenario, is it safe?

I am trying to hide 2 secrets that I am using in one of my apps.
As I understand it, the keychain is a good place, but I cannot add them before I submit the app.
I thought about this scenario:
Pre-seed the secrets in my app's Core Data database by spreading them across other entities to obscure them. (I already have a seed DB in that app.)
As the app launches for the first time, generate and move the keys to the keychain.
Delete the records from CoreData.
Is that safe or can the hacker see this happening and get those keys?
THIRD EDIT:
Sorry for not explaining this scenario from the beginning. The app has many levels, and each level contains files (audio, video, images). The user can purchase a level (IAP), and after the purchase is completed I need to download the files to his device.
For iOS 6 the files are stored with Apple's new "Hosted Content" feature. For iOS 5 the files are stored in Amazon S3.
So in all this process I have 2 keys:
1. IAP key, for verifying the purchase at Apple IAP.
2. S3 keys, for getting the files from S3 for iOS5 users:
NSString *secretAccessKey = @"xxxxxxxxx";
NSString *accessKey = @"xxxxxxxxx";
Do I need to protect those keys at all? I am afraid that people will be able to get the files from S3 without purchasing the levels, or that hackers will be able to build a hacked version with all the levels pre-downloaded inside.
Let me try to break down your question into multiple subquestions/assumptions:
Assumptions:
a) The keychain is a safe place
Actually, it's not that safe. If your application is installed on a jailbroken device, a hacker will be able to get your keys out of the keychain.
Questions:
a) Is there a way to put some key into an app (a binary delivered from the App Store) and be completely secure?
The short answer is NO. As soon as there is something in your binary, it can be reverse engineered.
b) Will obfuscation help?
Yes. It will increase the time a hacker needs to figure it out. If the keys in your app "cost" less than the time spent reverse engineering them, then generally speaking you are good.
However, in most cases security through obscurity is bad practice. It gives you a feeling that you are secure when you aren't.
So, this could be one of security measures, but you need to have other security measures in place too.
c) What should I do in such a case?
It's hard to give you a good solution without knowing the background of what you are trying to do.
For example, why should everybody have access to the same Amazon S3 bucket? Do they need read-only access, or write access (as pointed out by Kendall Helmstetter Gein)?
I believe one of the most secure scenarios would be something like that:
Your application should be passcode protected
The first time the user enters your application, it asks him to authenticate (enter his username and password) against the server
This authenticates against your server or other authentication provider (e.g. Google)
The server sends some authentication token to a device (quite often it's some type of cookie).
You encrypt this token using a hash of your application passcode and save it in the keychain in that form (a sketch of this step follows after this list)
And now you can do one of two things:
hand over specific keys from the server to the client (so each client will have their own keys) and encrypt them with the hash of your application passcode
handle all operations with S3 on the server (and require the client to go through your server for them)
This way you protect against multiple possible attacks.
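As a rough sketch of the encrypt-the-token step above (Python instead of Objective-C, using the cryptography package; the passcode, salt handling, and token are simplified placeholders):

import base64, hashlib, os
from cryptography.fernet import Fernet

def key_from_passcode(passcode: str, salt: bytes) -> bytes:
    raw = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)   # Fernet expects a base64-encoded 32-byte key

salt = os.urandom(16)                       # store alongside the ciphertext
auth_token = b"cookie-from-server"
encrypted = Fernet(key_from_passcode("1234", salt)).encrypt(auth_token)
# `encrypted` (plus the salt) is what would go into the keychain; without the
# passcode it cannot be turned back into the usable token.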
d) Whoooa.... I don't plan to implement all of this stuff you just wrote, because it would take me months. Is there anything simpler?
I think it would be useful to have at least one set of keys per client.
If even this is too much, then download encrypted keys from the server, save them in encrypted form on the device, and have the decryption key hardcoded into your app. I would say it's minimally invasive, and at least your binary doesn't have the keys in it.
P.S. Both Kendall and Rob are right.
Update 1 (based on new info)
First of all, have you seen the In-App Purchase Programming Guide?
There is a very good drawing under the Server Product Model. This model protects against somebody who didn't buy new levels: there will be no Amazon keys embedded in your application, and your server side will hand over levels when it receives a receipt of purchase.
There is no perfect solution to protect against somebody who purchased the content (and decided to rip it off from your application), because at the end of the day your application will have the content downloaded to the device and will need it in plain (unencrypted) form at some point.
If you are really concerned about this case, I would recommend encrypting all your assets and handing them over from the server in encrypted form, together with an encryption key. The encryption key should be generated per client, and the assets encrypted using it.
This won't stop any advanced hacker, but at least it will protect from somebody using iExplorer and just copying files (since they will be encrypted).
Update 2
One more thing regarding update 1: you should store the files encrypted and keep the encryption key somewhere safe (e.g. in the keychain).
In case your game requires an internet connection, the best idea is to not store the encryption key on the device at all. You can get it from the server each time your app is started.
DO NOT store an S3 key used for writing in your app! In short order someone sniffing traffic will see the write call to S3, and in shorter order still they will find that key and do whatever they like with it.
The ONLY way an application can write content to S3 with any degree of security is by going through a server you control.
If it's a key used for read-only access, meaning your S3 content cannot be read publicly but the key allows read-only access with no ability to write, then you could embed it in the application, but anyone who wants to can pull it out.
To lightly obscure pre-loaded sensitive data you could encrypt it in a file, and the app can read it into memory and decrypt it before storing it in the keychain. Again, someone will be able to get to these keys, so it had better not matter much if they can.
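One common way to implement the go-through-your-server approach, sketched here with boto3 and hypothetical bucket/key names: the server keeps the AWS credentials and hands the client a short-lived, read-only URL once the purchase receipt checks out.

import boto3

s3 = boto3.client("s3")   # credentials live only on the server

def level_download_url(level_id: str) -> str:
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-game-levels", "Key": f"levels/{level_id}.zip"},
        ExpiresIn=600,   # the URL stops working after 10 minutes
    )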
Edit:
Based on the new information, you are probably better off just embedding the secrets in code. Using a tool like iExplorer, a casual user can easily get to a Core Data database or anything else in your application bundle, but object files are somewhat encrypted. If they have a jailbroken device they can easily get the unencrypted versions, but it can still be hard to find meaningful strings; perhaps store them in two parts and re-assemble them in code.
Again it will not stop a determined hacker but it's enough to keep most people out.
You might also want to add some code that asks your server whether there are any override secrets it can download. That way, if the secrets are leaked you could react quickly by changing the secrets used by your app, shutting out anyone using a copied secret. To start with, there would be no override to download. You don't want to have to wait for an application update to be able to use new keys.
There is no good way to hide a secret in a piece of code you send your attacker. As with most things of this type, you need to focus more on how to mitigate the problem when the key does leak rather than spend unbounded time trying to protect it. For instance, generating different keys for each user allows you to disable a key if it is being used abusively. Or working through an intermediary server allows you to control the protocol (i.e. the server has the key and is only willing to do certain things with it).
It is not a waste of time to do a little obfuscating. That's fine. But don't spend a lot of time on it. If it's in the program and it's highly valuable, then it will be hacked out. Focus on how to detect when that happens, and how to recover when it does. And as much as possible, move that kind of sensitive data into some other server that you control.

Does external MD5ing count as "encryption"?

I am preparing an app version of one of my websites.
The app requires you to log in in order to access your user account. This login process is done over HTTP not HTTPS, but the password is stored using MD5 and a few other hashes on my server.
Does this count as "encryption" within the app, and therefore require me to submit one of those Export Compliance forms?
Thanks for your help.
I'm assuming you're referring to the US cryptography export restrictions. Those practically don't exist anymore. Even if they did, MD5 is a hash function and does not encrypt (otherwise there'd be an un_md5 function).
Also, if the ban still existed and were applicable, your scheme is needlessly weak, so it would probably still be allowed, just as the easily crackable 40-bit symmetric encryption algorithms were.

What's an alternative to use instead of CommonCrypto on iPhone?

Getting ready to submit my app to Apple's iTunes Store, I got puzzled by a question during the submission process: "Export laws require that products containing encryption be properly authorized for export... Does your product use encryption?"
I've used CommonCrypto's CommonCryptor.h to encrypt a settings file to protect it against unauthorized modification.
So now I'm not sure whether I have to remove the encryption completely and leave just a plain XML file, or whether I should use some other method to protect it.
What other simple protection mechanisms can I use to protect the file without using encryption, so I can submit my app without tons of extra paperwork?
Your use of "encryption" is not subject to US export rules because it's not for "information security" (I think you answer "yes, yes, yes, no" or so, ICBW, or they could have changed the order). Essentially, if it doesn't stop the NSA from spying on you, they're happy to let you use it.
However, encryption traditionally provides confidentiality, not message integrity. If you want to ensure that the user hasn't tampered with the settings file (e.g. by editing the iPhone backup), just save it with a MAC. That is,
Generate a MAC key (pull some bytes out of /dev/random).
Calculate the MAC of the file when you save it (see Objective-C sample code for HMAC-SHA1; note that the accepted answer is actually HMAC-SHA-256)
Append the MAC to the end of the file (or set it as a file attribute, or stick it in another file).
When reading, calculate the MAC of the file and verify that it's the one you saved. If it's appended to the file, you'll have to split off the last few bytes (e.g. [NSData dataWithContentsOfFile:path], then -subdataWithRange: twice to get the "message" and the MAC, then verify the MAC, and parse the "message" only if verification succeeds). A sketch of this scheme follows below.
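Here is a compact sketch of that scheme, in Python rather than Objective-C (the key handling is simplified; in a real app you would generate the MAC key once and keep it around):

import hmac, hashlib, os

MAC_LEN = 32              # SHA-256 output size
mac_key = os.urandom(32)  # generate once; losing it invalidates saved files

def save(path: str, message: bytes) -> None:
    tag = hmac.new(mac_key, message, hashlib.sha256).digest()
    with open(path, "wb") as f:
        f.write(message + tag)

def load(path: str) -> bytes:
    with open(path, "rb") as f:
        blob = f.read()
    message, tag = blob[:-MAC_LEN], blob[-MAC_LEN:]
    expected = hmac.new(mac_key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("settings file was tampered with")
    return message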
It won't stop someone with a jailbroken phone from extracting the MAC key from your binary, but not much will. It also won't stop someone from reading the plaintext settings file, but that might not be such a problem.
If you're generating the file on a computer you control (e.g. it's a file downloaded from a server), then sign it. Technically, RSA signature validation is equivalent to encryption, but I don't think it counts as encryption for export purposes (if it does, it's for "authentication" purposes and still doesn't count). DSA signature validation isn't encryption (I think, the math behind it went way over my head) and should also be fine.

Storing a shared key for Rails application

One of my Rails applications is going to depend on a secret key held in memory, so all of its functions will only be available once an administrator goes to a certain page and uploads the valid key.
The problem is that this key needs to be stored securely, so no other processes on the same machine should be able to access it (which rules out memcached and the filesystem). One good idea would be just to store it in a configuration variable in the application, but newly spawned instances won't have access to that variable. Any thoughts on how to implement this on RubyEE/Apache/mod_passenger?
There is really no way to accomplish that goal (this is the same problem all DRM systems have).
You can't keep things secret from the operating system. Your application has to have the key somewhere in memory and the operating system kernel can read any memory location it wants to.
You need to be able to trust the operating system, which means you can then also trust the operating system to properly enforce file access permissions. This in turn means that you can store the key in a file that only the Rails user process can read.
Think of it this way: even if you had no key at all, what is to stop an attacker on the server from simply changing the application code itself to gain access to the disabled functionality?
I would use the filesystem, with read access only to the file owner (chmod 400 on the file), and ensure the Ruby process is the only process owned by that user.
You can get more complex than that, but it all boils down to using the unix users and permissions.
Encrypt it heavily in the filesystem?
What about treating it like a regular password, and using a salted hash? Once the user authenticates, he has access to the functions of the website.
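For what the salted-hash suggestion looks like in practice, a small sketch using PBKDF2 from the standard library (iteration count and salt size are just illustrative choices):

import hashlib, hmac, os

def hash_password(password: str, salt: bytes = None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest   # store both; the salt is not secret

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

Note, though, that this only works for checking a password the user re-enters; it won't help if the application itself needs the original key back, which is the situation described in the question.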
