Does external MD5ing count as "encryption"? - ios

I am preparing an app version of one of my websites.
The app requires you to log in in order to access your user account. This login process is done over HTTP not HTTPS, but the password is stored using MD5 and a few other hashes on my server.
Does this count as "encryption" within the app, and therefore require me to submit one of those Export Compliance forms?
Thanks for your help.

I'm assuming you're referring to the US Cryptography Export restrictions. Those practically don't exist anymore. Even if they did, MD5 is a hash function, and does not encrypt (otherwise, there'd be an un_md5 function).
Also, even if the ban still existed and applied, your scheme is needlessly weak, so it would probably still be allowed, just as easily crackable 40-bit symmetric encryption algorithms once were.


Where should I store my API key in an iOS app? [duplicate]

I want to store a secret key ("abc123") that I will use in the header of my REST API requests. My server will check this secret key. If it matches "abc123", then allow the request to be made.
I'm thinking about a simple solution like:
let secret = "abc123"
But are there going to be any downfalls to this?
Crazy as it sounds, this is probably the best solution. Everything else is more complicated, but not much more secure. Any fancy obfuscation techniques you use are just going to be reverse engineered almost as quickly as an attacker would find this key. But this static key solution, while wildly insecure, is nearly as secure as the other solutions while imposing nearly no extra complexity. I love it.
It will be broken almost immediately, but so will all the other solutions. So keep it simple.
The one thing that you really want to do here is use HTTPS and pin your certificates. And I'd pick a long, random key that isn't a word. Ideally, it should be a completely random string of bytes, stored as raw values (not characters) so that it doesn't stand out so obviously in your binary. If you want to get crazy, apply a SHA256 to it before sending it (so the actual key never shows up in your binary). Again, this is trivial to break, but it's easy and won't waste a lot of development time.
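A minimal sketch of that last suggestion, assuming CryptoKit (iOS 13+); the raw bytes, endpoint URL, and header name below are hypothetical placeholders, not values from the question:
import CryptoKit
import Foundation

// Hypothetical raw key bytes; in practice use a long, random value generated once.
let rawKey: [UInt8] = [0x4f, 0x9a, 0x12, 0xd3, 0x7b, 0x08, 0xe1, 0x55,
                       0x21, 0xbe, 0x77, 0x0c, 0x93, 0x46, 0xaa, 0x18]

// Send the SHA-256 digest of the raw bytes rather than the bytes themselves,
// so the credential on the wire is not a literal string sitting in the binary.
let digest = SHA256.hash(data: Data(rawKey))
let apiToken = digest.map { String(format: "%02x", $0) }.joined()

var request = URLRequest(url: URL(string: "https://example.com/api/resource")!) // hypothetical endpoint
request.setValue(apiToken, forHTTPHeaderField: "X-API-Token")                   // hypothetical header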
It is unlikely that any effort longer than an hour will be worth the trouble to implement this feature. If you want lots more on the topic, see Secure https encryption for iPhone app to webpage and its links.
If you hardcode the string in your app, it's possible for attackers to decrypt your binary (via tools like dumpdecrypt) and get your string without much trouble (a simple hexdump would reveal any strings in your app).
There are a few workarounds for this. You could implement an endpoint on your REST API which returns your credentials, that you could then call on launch. Of course, this has its own non-trivial security concerns, and requires an extra HTTP call. I usually wouldn't do it this way.
Another option is to obfuscate the secret key somehow. By doing that, attackers won't be able to instantly recognize your key after decryption. cocoapods-keys is one option which uses this method.
There's no perfect solution here – the best you can do is make it as difficult as possible for an attacker to get a hold of your keys.
(Also, be sure to use HTTPS when sending requests, otherwise that's another good way to compromise your keys.)
While in-band tokens are commonly used for some schemes, you're probably eventually going to implement TLS to protect the network traffic and the tokens, as Rob Napier mentions in another reply.
Using your own certificate chain here lets you use the existing TLS security and authentication mechanisms and the iOS keychain. It also gives you the option of revoking TLS credentials if (when?) that becomes necessary, and lets the client pin its connections to your servers and detect server spoofing if that becomes necessary.
Your own certificate authority and your own certificate chain are free, and your own certificates, once you get the root certificate loaded into the client, are just as secure as commercially purchased certificates.
In short, this certificate-based approach combines encryption and authentication, using the existing TLS mechanisms.
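A minimal sketch of the pinning part, assuming Foundation's URLSession and a DER-encoded copy of your certificate bundled with the app; the file name "pinned_cert.der" is a hypothetical placeholder, and a production delegate should also evaluate the full trust chain:
import Foundation
import Security

final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        // Compare the server's leaf certificate against the copy shipped in the app bundle.
        guard let trust = challenge.protectionSpace.serverTrust,
              let serverCert = SecTrustGetCertificateAtIndex(trust, 0),
              let pinnedURL = Bundle.main.url(forResource: "pinned_cert", withExtension: "der"),
              let pinnedData = try? Data(contentsOf: pinnedURL),
              SecCertificateCopyData(serverCert) as Data == pinnedData else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        completionHandler(.useCredential, URLCredential(trust: trust))
    }
}

let pinnedSession = URLSession(configuration: .default, delegate: PinnedSessionDelegate(), delegateQueue: nil)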
It looks like you are using access tokens. I would store access tokens in the keychain, since it is a safe place for user credentials. Client IDs I would just keep as a variable, because client IDs don't change, while access tokens change per user or even per refresh token.
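A minimal sketch of storing such a token with the Security framework; the service and account names are hypothetical placeholders:
import Foundation
import Security

func saveAccessToken(_ token: String) -> Bool {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.myapp",   // hypothetical service name
        kSecAttrAccount as String: "accessToken",
        kSecValueData as String: Data(token.utf8)
    ]
    // Delete any existing item first so repeated saves don't fail with errSecDuplicateItem.
    SecItemDelete(query as CFDictionary)
    return SecItemAdd(query as CFDictionary, nil) == errSecSuccess
}

func loadAccessToken() -> String? {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.myapp",
        kSecAttrAccount as String: "accessToken",
        kSecReturnData as String: true,
        kSecMatchLimit as String: kSecMatchLimitOne
    ]
    var result: AnyObject?
    guard SecItemCopyMatching(query as CFDictionary, &result) == errSecSuccess,
          let data = result as? Data else { return nil }
    return String(data: data, encoding: .utf8)
}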
I have used the PFConfig object (a dictionary) that lets your app retrieve the values of variables stored as server environment parameters.
It is similar to the environment variables that can be retrieved using ENV in server-side web programming, such as Ruby or PHP.
In my opinion this is about as secure as using Environment variables in Ruby or similar.
PFConfig.getConfigInBackgroundWithBlock {
    (config: PFConfig?, error: NSError?) -> Void in
    if error == nil {
        // config is optional, so unwrap it before subscripting
        if let mySecret = config?["mySecret"] as? String {
            // myFunction(mySecret)
        }
    }
}

HTTPS POST Security level

I've searched for this a bit on Stack, but I cannot find a definitive answer for https, only for solutions that somehow include http or unencrypted parameters which are not present in my situation.
I have developed an iOS application that communicates with MySQL via HTTPS POSTs to PHP scripts on Apache.
Now, the server runs with a valid certificate, is only open for traffic on port 443 and all posts are done to https://thedomain.net/obscurefolder/obscurefile.php
If someone knew the correct parameters to post, anyone from anywhere in the world could mess up the database completely, so the question is: is this method secure? Let it be known that nobody has access to the source code and none of the iPads that run this software are jailbroken or otherwise compromised.
Edit in response to answers:
There are several PHP files which each support only one specific operation and depend on very strict input formatting and a correct license key (retrieved by SQL on every query). They do not respond to input at all unless it's 100% correct and includes a proper license (e.g. password). There is no actual website, only PHP files that respond to POSTs, given the correct input, as mentioned above. The webserver has been scanned by a third-party security company and contains no known vulnerabilities.
Encryption is necessary but not sufficient for security. There are many other considerations beyond encrypting the connection. With server-side certificates, you can confirm the identity of the server, but you can't (as you are discovering) confirm the identity of the clients (at least not without client-side certificates, which are very difficult to protect by virtue of their being on the client).
It sounds like you need to take additional measures to prevent abuse such as:
Only supporting a sane, limited, well-defined set of operations on the database (not passing arbitrary SQL input to your database but instead having a clear, small list of URL handlers that perform specific, reasonable operations on the database).
Validating that the inputs to your handler are reasonable and within allowable parameters.
Authenticating client applications as best you are able (e.g. with client IDs or other tokens) to restrict capabilities on a per-client basis and detect anomalous usage patterns for a given client (see the sketch after this list).
Authenticating users to ensure that only authorized users can make the appropriate modifications.
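As one illustration of the client-authentication item above, a minimal sketch of attaching a per-client ID and token to the HTTPS POST from the iOS side; the header names and body parameters are hypothetical, and the real work of validating them still happens on the server:
import Foundation

var request = URLRequest(url: URL(string: "https://thedomain.net/obscurefolder/obscurefile.php")!)
request.httpMethod = "POST"
request.setValue("client-1234", forHTTPHeaderField: "X-Client-ID")          // hypothetical header
request.setValue("per-client-token", forHTTPHeaderField: "X-Client-Token")  // hypothetical header
request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
request.httpBody = "action=updateRecord&id=42".data(using: .utf8)           // hypothetical parameters

URLSession.shared.dataTask(with: request) { data, response, error in
    // The server rejects the request unless the ID/token pair is valid and the input passes validation.
}.resume()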
You should also probably get a security expert to review your code and/or hire someone to perform penetration testing on your website to see what vulnerabilities they can uncover.
Sending POST requests is not, by itself, a secure way of communicating with a server. Even without access to the code or to valid devices, it still leaves an open way to access the database and manipulate it once the link is discovered.
I would not suggest using POST alone. You can try other ways of communicating if you want to send or fetch data from the server. Encrypting the parameters can also help here, though it adds some code for the encryption and decryption logic.
It's good that your app goes through HTTPS. Make sure the app checks the server's certificates during its communication phase.
You can also make use of tokens (not device tokens) during transactions. This might be a bit more complex, but offers more safety.
The possible solutions and approaches here are broad and cannot all be covered. You might want to try out a few yourself to get an idea, though I suggest at least some basic encryption and decryption.
Hope this helps.

Secure keys in iOS App scenario, is it safe?

I am trying to hide 2 secrets that I am using in one of my apps.
As I understand it, the keychain is a good place, but I cannot add them before I submit the app.
I thought about this scenario -
Pre-seed the secrets in my app's Core Data database by spreading them across other entities to obscure them. (I already have a seed DB in that app.)
As the app launches for the first time, generate and move the keys to the keychain.
Delete the records from CoreData.
Is that safe or can the hacker see this happening and get those keys?
Third edit:
Sorry for not explaining this scenario from the beginning. The app has many levels; each level contains files (audio, video, images). The user can purchase a level (IAP), and after the purchase is completed I need to download the files to their device.
For iOS 6 the files are stored with Apple's new "Hosted Content" feature. For iOS 5 the files are stored in Amazon S3.
So in all this process I have 2 keys:
1. IAP key, for verifying the purchase at Apple IAP.
2. S3 keys, for getting the files from S3 for iOS 5 users:
NSString *secretAccessKey = @"xxxxxxxxx";
NSString *accessKey = @"xxxxxxxxx";
Do I need to protect those keys at all? I am afraid that people will be able to get the files from S3 without purchasing the levels, or that hackers will be able to build a hacked version with all the levels pre-downloaded inside.
Let me try to break down your question into multiple subquestions/assumptions:
Assumptions:
a) The keychain is a safe place
Actually, it's not that safe. If your application is installed on a jailbroken device, a hacker will be able to get your keys from the keychain.
Questions:
a) Is there a way to put a key into an app (a binary which is delivered from the App Store) and be completely secure?
Short answer is NO. As soon as there is something in your binary, it could be reverse engineered.
b) Will obfuscation help?
Yes. It will increase the time it takes a hacker to figure it out. If the keys in your app "cost" less than the time spent reverse engineering them, then generally speaking you are good.
However, in most cases security through obscurity is bad practice. It gives you a feeling that you are secure, but you aren't.
So this could be one of your security measures, but you need to have other security measures in place too.
c) What should I do in such a case?
It's hard to give you a good solution without knowing the background of what you are trying to do.
For example, why should everybody have access to the same Amazon S3 bucket? Do they need read-only access, or write access (as pointed out by Kendall Helmstetter Gein)?
I believe one of the most secure scenarios would be something like this:
Your application should be passcode protected
The first time the user enters your application, it asks them to authenticate (enter their username and password) against the server
This authenticates against your server or another authentication provider (e.g. Google)
The server sends an authentication token back to the device (quite often it's some type of cookie).
You encrypt this token with a key based on the hash of your application passcode and save it in the keychain in that form (see the sketch below)
And now you can do one of two things:
hand over specific keys from the server to the client (so each client will have their own keys) and encrypt them with the hash of your application passcode
handle all operations with S3 on the server (and require the client to send its requests, with its token, through your server)
This way you protect against multiple possible attacks.
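A minimal sketch of the encrypt-and-store step above, assuming CryptoKit (iOS 13+); a production version would use a real key-derivation function (e.g. PBKDF2 or HKDF with a salt) rather than a bare hash of the passcode, and the function names here are hypothetical:
import CryptoKit
import Foundation

// Derive a symmetric key from the hash of the app passcode (illustrative only;
// prefer a proper KDF with a salt in real code).
func key(fromPasscode passcode: String) -> SymmetricKey {
    SymmetricKey(data: SHA256.hash(data: Data(passcode.utf8)))
}

// Encrypt the server's token; the returned blob (nonce + ciphertext + tag) is what
// you would save in the keychain.
func encryptToken(_ token: String, passcode: String) throws -> Data {
    try AES.GCM.seal(Data(token.utf8), using: key(fromPasscode: passcode)).combined!
}

// Decrypt it again when the user re-enters the passcode.
func decryptToken(_ blob: Data, passcode: String) throws -> String? {
    let box = try AES.GCM.SealedBox(combined: blob)
    let plain = try AES.GCM.open(box, using: key(fromPasscode: passcode))
    return String(data: plain, encoding: .utf8)
}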
d) Whoa... I don't plan to implement all of the stuff you just wrote, because it will take me months. Is there anything simpler?
I think it would be useful if you had at least one set of keys per client.
If even this is too much, then download encrypted keys from the server, save them in encrypted form on the device, and hardcode the decryption key into your app. I would say it's minimally invasive, and at least your binary doesn't have the keys in it.
P.S. Both Kendall and Rob are right.
Update 1 (based on new info)
First of all, have you seen the In-App Purchase Programming Guide?
There is a very good diagram under the Server Product Model. That model protects against somebody who didn't buy new levels: no Amazon keys are embedded in your application, and your server hands over the levels when it receives a receipt of purchase.
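A minimal sketch of that hand-off from the client side; the verification endpoint is a hypothetical placeholder, and the actual receipt validation against Apple happens on your server:
import Foundation

func sendReceiptToServer() {
    // Read the App Store receipt bundled with the purchased app.
    guard let receiptURL = Bundle.main.appStoreReceiptURL,
          let receiptData = try? Data(contentsOf: receiptURL) else { return }

    var request = URLRequest(url: URL(string: "https://example.com/verify-purchase")!) // hypothetical endpoint
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(
        withJSONObject: ["receipt": receiptData.base64EncodedString()])

    URLSession.shared.dataTask(with: request) { data, response, error in
        // On success the server returns the purchased level content (or short-lived download URLs).
    }.resume()
}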
There is no perfect solution to protect against somebody who purchased the content (and decided to rip it off from your application), because at the end of the day your application will have the content downloaded to the device and will need it in plain (unencrypted) form at some point.
If you are really concerned about this case, I would recommend encrypting all your assets and handing them over from the server in encrypted form, together with an encryption key. The encryption key should be generated per client, and the assets encrypted with it.
This won't stop any advanced hacker, but at least it will protect against somebody using iExplorer and just copying files (since they will be encrypted).
Update 2
One more thing regarding update 1: you should store the files encrypted and keep the encryption key somewhere safe (e.g. in the keychain).
If your game requires an internet connection, the best idea is not to store the encryption key on the device at all. You can get it from the server each time your app starts.
DO NOT store an S3 key used for write in your app! In short order someone sniffing traffic will see the write call to S3, in shorter order they will find that key and do whatever they like.
The ONLY way an application can write content to S3 with any degree of security is by going through a server you control.
If it's a key used for read-only use, meaning your S3 cannot be read publicly but the key can be used for read-only access with no ability to write, then you could embed it in the application but anyone wanting to can pull it out.
To lightly obscure pre-loaded sensitive data you could encrypt it in a file and the app can read it in to memory and decrypt before storing in the keychain. Again, someone will be able to get to these keys so it better not matter much if they can.
Edit:
Based on new information, you are probably better off just embedding the secrets in code. Using a tool like iExplorer, a casual user can easily get to a Core Data database or anything else in your application bundle, but object files are somewhat encrypted. If they have a jailbroken device they can easily get the unencrypted versions, but it can still be hard to find meaningful strings; perhaps store them in two parts and re-assemble them in code.
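A minimal sketch of that two-part idea, in Swift; the byte values are hypothetical placeholders, and neither half reads as a meaningful string in a hexdump on its own:
import Foundation

let partA: [UInt8] = [0x73, 0x33, 0x63, 0x72]        // first half, stored as raw bytes
let partB: [UInt8] = [0x33, 0x74, 0x6b, 0x33, 0x79]  // second half, kept elsewhere in the code

// The full secret only ever exists in memory at runtime.
func assembledSecret() -> String {
    String(bytes: partA + partB, encoding: .utf8) ?? ""
}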
Again it will not stop a determined hacker but it's enough to keep most people out.
You might also want to add some code that asks your server whether there are any override secrets it can download. That way, if the secrets are leaked, you could quickly react by changing the secrets used by your app while shutting out anyone using a copied secret. To start with, there would be no override to download. You don't want to have to wait for an application update to be able to use new keys.
There is no good way to hide a secret in a piece of code you send your attacker. As with most things of this type, you need to focus more on how to mitigate the problem when the key does leak rather than spend unbounded time trying to protect it. For instance, generating different keys for each user allows you to disable a key if it is being used abusively. Or working through an intermediary server allows you to control the protocol (i.e. the server has the key and is only willing to do certain things with it).
It is not a waste of time to do a little obfuscating. That's fine. But don't spend a lot of time on it. If it's in the program and it's highly valuable, then it will be hacked out. Focus on how to detect when that happens, and how to recover when it does. And as much as possible, move that kind of sensitive data into some other server that you control.

Best place to hide a key in the Windows Registry?

My Delphi program has a built-in protection mechanism to check for banned license keys on the Internet and displays a message to the user if a blacklisted key is found.
I'd like to store the blacklisted key in the registry, so if the user tries to re-enter it (and he/she is not connected to the Internet), it's not accepted.
What is the best way to hide an obfuscated entry in the Windows registry?
Thanks!
Edit: You guys have some good answers there, but I feel like I need to expand the question.
This is not mainstream software but a corporate one. Clients pre-pay one year and get a one-year license key for activation. The license key includes a machine ID and can't be used elsewhere.
The problem is that some clients tend not to pay in time or they don't pay at all. Since I don't want to bother with shorter than one year license keys (too much administrative overhead) I need a way to disable their licenses till they pay.
So the app now will connect to the Internet upon launch and check if their key is blacklisted. If it is, I need to disable access. In case they reinstall or block Internet access, I need to know if the key has been blacklisted.
Thus, I'm thinking it would be best to hide it in the registry. My users are not tech-savvy enough to use registry tools to monitor the registry, but if I put it under HKLM/Software/MyCompany/MyProgram, some of them might find it. So I need a place where they can't find it after it has been created. (No one will be expecting it!)
Any ideas?
The easiest way to hide a key or a value is to create a key/value having '\0' inside the name. You can do this with the native functions NtCreateKey (see http://msdn.microsoft.com/en-us/library/ff556468.aspx) and NtSetValueKey (see http://msdn.microsoft.com/en-us/library/ff557688.aspx), which use UNICODE_STRING as parameters instead of LPCTSTR. You can read more about the usage of the native registry API at http://www.codeproject.com/kb/system/NtRegistry.aspx, for example. Delphi code can be found at http://www.delphi3000.com/articles/article_3539.asp.
UPDATED: Because many people read this question, I want to add some words to my answer. I want to separate the part of the question that also appears in the title, "best place to hide a key in the Windows Registry", from the subject of license keys. Because the answers written before mine concerned almost exclusively license keys, and practically none addressed the question from the title, I wrote my answer.
I find the subject of license keys very complex. It depends on the licensing model chosen. It matters how the key is generated, distributed (installed), and verified. Should the key be hardware-dependent or not? It can be one per computer or one per computer group. Key generation, installation, and verification can each be done with or without some online service (including over the internet). I could continue... There are a lot of aspects, advantages, and disadvantages to the different approaches.
So I decided to answer only the main question from the title, which is clear and of separate interest. All other questions about license keys should, in my opinion, be discussed in a separate question after all the requirements have been clarified.
UPDATED 2 based on the updated question: It seems to me that in your case it would be better to use a scheme based on cryptographic signing of an activation ticket. For example, the scheme could look like the following:
Your software installed on the client computer will need activation. Before activation it cannot work, or works only in a very restricted form (for example, only the menus needed for software activation are enabled).
You write a server component that is used by the client during activation to generate the license key, based on the activation request received from the client.
If a client pays for the software, you add information about the client's "machine ID" (in whatever form you want) to the database on the server.
When the activation process is started from the client software (at program start, from a menu, or in any other way you like), it collects some information about the computer, such as the computer name ("machine ID"), serial numbers, or other information about the hardware or operating system that cannot be changed without a new activation. The software sends this information to your server (this is the activation request).
The server verifies that the client with that "machine ID" paid for the software and is not yet activated. Then the server calculates a hash (SHA1, MD5, or some other) of the information sent by the client and signs the response with the server's private key (or the server's certificate). The signed ticket is sent back to the client. This ticket plays the role of the license key.
The server can add any additional information to the ticket before signing. For example, it can add the date until which the software should be valid (for example, the current day plus one year). So the ticket sent back to the client can contain the hash of the activation information and any additional information you want. What matters is that the information is signed. In general you could include the client's full request as clear text in the server's ticket instead of the hash, but using the hash a) reduces the ticket size and b) makes the ticket a little more secure.
Every client has the public key corresponding to the private key used by the server to sign the activation ticket. The client saves the ticket received from the server during activation somewhere in the registry or in the file system.
Every time the client software starts, it reads the saved activation ticket from the registry (or the file system). Then it collects the same information used to generate the activation ticket, calculates the hash, and compares it with the hash from the saved ticket. Of course, it also verifies the signature of the ticket against the public key (or the server's certificate). Moreover, the software can verify any other policy information from the ticket, such as the time until which the ticket is valid (see the sketch after these steps).
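A minimal sketch of that client-side verification step, written in Swift with CryptoKit purely for illustration (the original question concerns a Delphi application); the ticket layout and the use of an Ed25519 signing key are assumptions, not part of the answer above:
import CryptoKit
import Foundation

// Hypothetical ticket layout: a payload (hash of machine info plus policy data such as
// an expiry date) and a detached signature produced by the server's private key.
struct ActivationTicket {
    let payload: Data
    let signature: Data
}

func isTicketValid(_ ticket: ActivationTicket,
                   serverPublicKey: Curve25519.Signing.PublicKey,
                   currentMachineInfoHash: Data) -> Bool {
    // 1. The signature must verify against the server's public key shipped with the client.
    guard serverPublicKey.isValidSignature(ticket.signature, for: ticket.payload) else {
        return false
    }
    // 2. The machine-info hash inside the payload must match the hash computed right now
    //    (here the payload is assumed to start with the 32-byte hash).
    return ticket.payload.prefix(32) == currentMachineInfoHash
}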
All of this is only a rough schema, but it is very simple and extensible. You need only study how to use some simple cryptographic operations and implement them in your software.
As an option, you don't have to have a server online; instead you can implement in the software (in a menu, for example) a way to generate the activation request and send it by email. Then you can generate an activation ticket offline (!!!) based on that request and send the ticket back to the client, also by email. A simple .reg file that can be imported with a double-click, or some other simple import mechanism in your software (cut & paste into the activation dialog), can complete the software activation process.
I don't think that the registry is a good place to hide such info, because anyone can download and use the Process Monitor (http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) tool and see what your program does with the registry.
Thinking about this again: you will probably make users of your software unhappy if it leaves things in the registry and other "secret" places on their hard drives. Locations like that are also easily discovered by tools that monitor what system functions your software calls.
As an alternative you could embed the banned keys in your application when you release new versions. This way the banned keys will be hidden in the application making it much harder for crackers to bypass the protection.
The downside of this is that a user can potentially run older version with a banned key with internet access blocked to your site, but if your software is actively developed with new features and bugfixes added, then nobody would want to run older versions. And if you are very paranoid you could release "updates" which update just the embedded banned key list.
But in the end no software protection scheme is perfect. If your software is popular enough there will always be a pirate cracker who will figure out your protection and make a patch or even a key generator.
If you really want to go that way, hash or encrypt the keys and then compare the hashed or encrypted user key against those in the registry.
Be sure to check whether there are any keys in the registry at all, so you can tell if the user erased them.
It will be very challenging to achieve what you're trying to do, since a user can simply uninstall and re-install, and savvy users can wipe all traces of your app from the system (including the registry).
Other apps (Windows, for example), instead of checking for a negative (a banned key), check for a positive (a good key). You "activate" the software once (when connected online) and this activation stores the "good key", which you can then check for whenever running the software (whether online or offline).
I'd suggest the second approach for you.
Note that there are ordinary end-consumer tools that monitor what applications write to the registry (like CleanSweep). These work at the API call level, so they will probably catch the '\0' workaround too.
You could try to encrypt the whole shebang in a registry key, keyed to something that uniquely identifies the machine (like a MAC address) plus a timestamp, so that people cannot move the key to other machines. Then always require the presence of such a key at startup, and demand a connection to the internet for updates/activation if it is not there (or the timestamp is very old).

Protecting user passwords in desktop applications

I'm making a twitter client, and I'm evaluating the various ways of protecting the user's login information.
Hashing apparently doesn't do it
Obfuscating in a reversible way is like trying to hide behind my finger
Plain text sounds, and probably is, promiscuous
Requiring the user to type in his password every time would make the application tiresome
Any ideas ?
You could make some OS calls to encrypt the password for you.
On Windows:
You can encrypt a file (on a NTFS filesystem)
Use the DPAPI from C
Use the DPAPI in .Net by using the ProtectedData class
CryptProtectData is a windows function for storing this kind of sensitive data.
http://msdn.microsoft.com/en-us/library/aa380261.aspx
For an example see how Chrome uses it:
http://blog.paranoidferret.com/index.php/2008/09/10/how-google-chrome-stores-passwords/
For Windows: encrypt the password using DPAPI (user store) and store it in your settings file or somewhere else. This will work on a per-user basis, e.g. different users on the same machine will have different unrelated encryption keys.
What platform?
On *nix, store the password in plain text in a file chmoded 400 in a subdirectory of the home directory. See for example ~/.subversion. Administrators can do anything they like to users anyway, including replacing your program with their own hacked version that captures passwords, so there's no harm in the fact that they can see the file. Beware that the password is also accessible to someone who takes out that hard drive - if this is a problem then either get the user to reenter the password each time or check whether this version of *nix has file encryption.
On Windows Pro, store the password in an encrypted file.
On Windows Amateur, do the same as *nix. [Edit: CryptProtectData looks good, as Aleris suggests. If it's available on all Windowses, then it solves the problem of only the more expensive versions supporting encrypted files].
On Symbian, store the password in your data cage. Programs with AllFiles permission are rare and supposedly trusted anyway, a bit like *nix admins.
You can't have your cake and eat it too. Either store the password (which you've ruled out), or don't and require it to be typed in every time (which you've ruled out.)
Use a good symmetric encryption scheme; it should make decrypting the credentials difficult enough that it isn't worth trying.
Otherwise, if the service only requires a hash to be sent over the network, you can store the hash encrypted. That way, even decrypting it won't get the attacker any closer to the password.
However, the other answers are right: if you store the data, it can be found.
The key is finding the balance between security and usability.
