My question is simple: I need to save some sensitive static data, for example the URL of my service or an encryption password. My doubt is this: is it secure to save this data in Localizable.strings?
No. A malicious user can easily see this in the IPA of an iTunes backup. But the user can also see this in any file in your app bundle. You will need to encrypt the string somehow. The tricky part is to hide the key as well: it may be a good idea to calculate the key somehow (you can be creative here).
Also pay attention to securing the transmission: if you use plain HTTP, anyone with Wireshark can see your sensitive information. Make sure you've set up HTTPS correctly and that you validate the server's certificate on connect (search Stack Overflow for that).
I totally agree with @DarkDust. Just to add a few things:
A malicious user can see the data by jailbreaking one of his devices, installing the app on it, and extracting the whole contents of the app bundle. He may even change some code and run the modified app.
The whole process of extracting the data is called reverse engineering. It's quite a wide field, and it's good to know the basics if you care about data security.
You can read more about reverse engineering in, for example, this free book: https://github.com/iosre/iOSAppReverseEngineering.
A skilled hacker will always get the data eventually; it's just a matter of time. As a developer, your task is to keep the data out of reach of less experienced "hackers".
To make things more difficult, you can obfuscate the data.
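For example, here is a minimal XOR-obfuscation sketch in Swift; the byte values and mask below are made up for illustration. The readable string never appears in the binary, only the obfuscated bytes and the mask do.

// Illustrative values only: XOR each byte with a mask at runtime.
let obfuscated: [UInt8] = [0x1a, 0x07, 0x1e, 0x12, 0x0f, 0x13, 0x1a]
let mask: UInt8 = 0x7f
let secret = String(decoding: obfuscated.map { $0 ^ mask }, as: UTF8.self) // "example"

This only raises the bar slightly: anyone who finds the decoding routine can recover the value.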
If you need to save credentials in the app (e.g. a login token), always use the keychain, never any other storage.
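A minimal sketch of saving a login token in the keychain via the Security framework (the service and account names are just placeholders):

import Foundation
import Security

func saveToken(_ token: String) -> OSStatus {
    let baseQuery: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.myapp",   // placeholder service name
        kSecAttrAccount as String: "loginToken"           // placeholder account name
    ]
    SecItemDelete(baseQuery as CFDictionary)               // remove any existing item first

    var addQuery = baseQuery
    addQuery[kSecValueData as String] = Data(token.utf8)
    return SecItemAdd(addQuery as CFDictionary, nil)
}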
So I am going through the Firestore security rules documentation right now in an effort to make sure the data users put into my app will be okay. As of right now, all I need users to be able to do is read data (really only 'get', but 'read' is fine too) and create data. So my security rules for the Firestore data right now are:
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /jumpSpotAnnotations/{id} {
      // 'get' instead of 'read' would work too
      allow read, create;
    }
  }
}
I have the exact same 'allow read, create;' for my storage data too. Will this be okay upon release or is this dangerous? In the documentation, they write:
"As you set up Cloud Firestore, you might have set your rules to allow open access during development. You might think you're the only person using your app, but if you've deployed it, it's available on the internet. If you're not authenticating users and configuring security rules, then anyone who guesses your project ID can steal, modify, or delete the data."
This text precedes an example where the rules are 'allow read, write;', as opposed to my 'allow read, create'. Are my rules also subject to deletion/modification of the data? I put create because I assume that it only lets people create data, not delete or modify it.
Final part of this question, but how could a user guess my project ID? Would they not have to sign in to my Google account to then be able to manually delete, modify, or steal data? I'm not sure how that works. My app interface allows the user only to create data or read data, nothing else. So could some random person still somehow get into this database online and mess with it?
Thanks for any help.
Your rule allows anyone with an internet connection to read and create documents in the jumpSpotAnnotations collection. We don't know if that's "safe" for your app. You have to determine for yourself if that situation is safe. If you're OK with someone anonymously loading up that collection with documents, and you're OK with paying for that behavior, then it's safe.
Your project ID is baked into your app before you publish it. All someone has to do is download and decompile your app to find it. It's not hard. Your project ID is not private information.
No, your rules are not secure. To understand how someone can guess your project ID and steal data, you first have to understand that Firebase provides a simple REST API for accessing stored data. All of the data is stored in JSON format, so a public database can be read by making a request to the database URL with ".json" appended.
Now, the main concern: how can someone guess your project ID? There are many tools that let you set up a proxy on your network and analyze every request going through it. Since Firebase simply uses a REST API, the API endpoints (including your project ID) can easily be discovered by intercepting HTTP requests, and if your rules are not secured, your data can be compromised.
Now, the solution: how do you protect your data? There are many ways; Firebase itself provides plenty of options, so read their docs about database security. But there is also something you can do on your side so that, even if your data is compromised, no one can actually read it.
You can prevent the client apps from reading the data in plaintext. Use a public-key algorithm to encrypt the data and keep the private key only on the systems that have to read it; then the app (and anyone who extracts its data) cannot read the data in plain text. This will not, however, prevent manipulation or deletion of the data.
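As a rough sketch of that idea in Swift, assuming you already have a SecKey holding the server's public key (how you load it is up to you), the client could encrypt small payloads like this before writing them:

import Foundation
import Security

// Encrypts a small payload with an RSA public key; only the holder of the
// matching private key (your server) can decrypt it.
func encrypt(_ plaintext: Data, with publicKey: SecKey) -> Data? {
    let algorithm: SecKeyAlgorithm = .rsaEncryptionOAEPSHA256
    guard SecKeyIsAlgorithmSupported(publicKey, .encrypt, algorithm) else { return nil }
    var error: Unmanaged<CFError>?
    return SecKeyCreateEncryptedData(publicKey, algorithm, plaintext as CFData, &error) as Data?
}

Note that plain RSA-OAEP only handles payloads smaller than the key size; for larger data you would encrypt with a symmetric key and wrap that key with RSA.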
I want to store a secret key ("abc123") that I will use in the header of my REST API requests. My server will check this secret key. If it matches "abc123", then allow the request to be made.
I'm thinking about a simple solution like:
let secret = "abc123"
But are there going to be any downfalls to this?
Crazy as it sounds, this is probably the best solution. Everything else is more complicated, but not much more secure. Any fancy obfuscation techniques you use are just going to be reverse engineered almost as quickly as they'll find this key. But this static key solution, while wildly insecure, is nearly as secure as the other solutions while imposing nearly no extra complexity. I love it.
It will be broken almost immediately, but so will all the other solutions. So keep it simple.
The one thing that you really want to do here is use HTTPS and pin your certificates. And I'd pick a long, random key that isn't a word. Ideally, it should be a completely random string of bytes, stored as raw values (not characters) so that it doesn't stand out so obviously in your binary. If you want to get crazy, apply a SHA256 to it before sending it (so the actual key never shows up in your binary). Again, this is trivial to break, but it's easy, and won't waste a lot of time developing.
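A quick sketch of that last idea with CryptoKit (the byte values here are placeholders): store random raw bytes and send only their SHA-256 digest as the header value, so the value that goes over the wire never appears verbatim in the binary.

import CryptoKit
import Foundation

// Placeholder random bytes; in practice generate them once and paste them in.
let rawKey: [UInt8] = [0x3f, 0xa1, 0x07, 0x5c, 0x9e, 0x22, 0xd4, 0x18]
let digest = SHA256.hash(data: Data(rawKey))
let headerValue = digest.map { String(format: "%02x", $0) }.joined()
// Send headerValue in the request header; the server compares it against the expected digest.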
It is unlikely that any effort longer than an hour will be worth the trouble to implement this feature. If you want lots more on the topic, see Secure https encryption for iPhone app to webpage and its links.
If you hardcode the string in your app, attackers can decrypt your binary (via tools like dumpdecrypt) and get your string without much trouble (a simple hexdump will show any strings in your app).
There are a few workarounds for this. You could implement an endpoint on your REST API which returns your credentials, that you could then call on launch. Of course, this has its own non-trivial security concerns, and requires an extra HTTP call. I usually wouldn't do it this way.
Another option is to obfuscate the secret key somehow. By doing that, attackers won't be able to instantly recognize your key after decryption. cocoapods-keys is one option which uses this method.
There's no perfect solution here – the best you can do is make it as difficult as possible for an attacker to get a hold of your keys.
(Also, be sure to use HTTPS when sending requests, otherwise that's another good way to compromise your keys.)
While in-band tokens are commonly used for some schemes, you're probably eventually going to implement TLS to protect the network traffic and the tokens, as Rob Napier mentions in another reply.
Using your own certificate chain here lets you use the existing TLS security and authentication mechanisms and the iOS keychain, gives you the option of revoking TLS credentials if (when?) that becomes necessary, and allows the client to pin its connections to your servers and detect server spoofing.
Your own certificate authority and your own certificate chain are free, and your own certificates, once you get the root certificate loaded into the client, are just as secure as commercially purchased certificates.
In short, this certificate-based approach combines encryption and authentication, using the existing TLS mechanisms.
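As a sketch of the pinning part with plain URLSession (the file name and class name are illustrative, and a real implementation would also evaluate the trust chain), the client can compare the server's leaf certificate against a copy bundled in the app:

import Foundation
import Security

final class PinningDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard let trust = challenge.protectionSpace.serverTrust,
              let serverCert = SecTrustGetCertificateAtIndex(trust, 0),
              let pinnedURL = Bundle.main.url(forResource: "pinned", withExtension: "cer"), // bundled copy of your certificate
              let pinnedData = try? Data(contentsOf: pinnedURL),
              SecCertificateCopyData(serverCert) as Data == pinnedData
        else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        completionHandler(.useCredential, URLCredential(trust: trust))
    }
}

// Usage: URLSession(configuration: .default, delegate: PinningDelegate(), delegateQueue: nil)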
It looks like you are using access tokens. I would use the keychain for access tokens. For client IDs, I would just keep them as a variable: client IDs don't change, while access tokens change per user or even per refresh token, and the keychain is a safe place to store user credentials.
I have used the PFConfig object (a dictionary), which lets your app retrieve the values of variables stored as server environment parameters.
This is similar to the environment variables that can be retrieved with ENV in server-side web programming in Ruby or PHP.
In my opinion this is about as secure as using environment variables in Ruby or similar.
PFConfig.getConfigInBackgroundWithBlock { (config: PFConfig?, error: NSError?) -> Void in
    if error == nil {
        if let mySecret = config?["mySecret"] as? String {
            // myFunction(mySecret)
        }
    }
}
I have a bunch of API keys and secrets (Stripe, Cloudinary, etc.) that are currently hard-coded in my app. Where is the right place to store them? Should they live on the server, with me storing only the server URL on my end (so that if the keys change, the app continues to work)?
For example, I have this in my app delegate file:
func configureStripe() {
    STPPaymentConfiguration.sharedConfiguration().publishableKey = "pk_test_1234rtyhudjjfjjs"
    STPPaymentConfiguration.sharedConfiguration().appleMerchantIdentifier = "merchant.com.myapp"
}
There are many tools to store secret keys.
https://nshipster.com/secrets/
https://www.freecodecamp.org/news/how-to-securely-store-api-keys-4ff3ea19ebda/
For a personal project, I typically go with an xcconfig file and just ignore that file in git, but with teams this can be quite hard to manage.
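For the xcconfig route, a common pattern is to expose the build setting through Info.plist and read it at runtime. A rough sketch, where the setting name and the Info.plist key are assumptions and the value is taken from the question:

// Secrets.xcconfig (not committed to git):
//   STRIPE_PUBLISHABLE_KEY = pk_test_1234rtyhudjjfjjs
// Info.plist:
//   StripePublishableKey = $(STRIPE_PUBLISHABLE_KEY)

import Foundation

let publishableKey = Bundle.main.object(forInfoDictionaryKey: "StripePublishableKey") as? String ?? ""

Keep in mind the value still ends up in the built app's Info.plist, so this keeps keys out of source control but not out of the shipped app.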
First of all, keep in mind that an attacker can obtain every piece of code you ship with your app. Any kind of obfuscation won't protect it; it will only make the attack more expensive and time consuming.
Therefore you shouldn't keep any sensitive keys or secrets in the source code. You need a server-side solution for storing secrets, one that stands between your app and the API you are actually going to call.
I would say to store it in a .plist file and not upload that file to Git.
In the case of Stripe, it doesn't matter so much, as Stripe was designed with this in mind; so much so that they take on financial responsibility for PCI compliance. They have more complex methods of authenticating a user and limiting access.
I've searched for this a bit on Stack, but I cannot find a definitive answer for https, only for solutions that somehow include http or unencrypted parameters which are not present in my situation.
I have developed an iOS application that communicates with MySQL via HTTPS POSTs to PHP scripts served by Apache.
Now, the server runs with a valid certificate, is only open for traffic on port 443, and all POSTs are made to https://thedomain.net/obscurefolder/obscurefile.php
If someone knew the correct parameters to post, anyone from anywhere in the world could mess up the database completely, so the question is: is this method secure? Let it be known that nobody has access to the source code and none of the iPads that run this software are jailbroken or otherwise compromised.
Edit in response to answers:
There are several PHP files, each of which supports only one specific operation and depends on very strict input formatting and a correct license key (retrieved by SQL on every query). They do not respond to input at all unless it is 100% correct and includes a proper license (e.g. password). There is no actual website, only PHP files that respond to POSTs given the correct input, as mentioned above. The web server has been scanned by a third-party security company and contains no known vulnerabilities.
Encryption is necessary but not sufficient for security. There are many other considerations beyond encrypting the connection. With server-side certificates, you can confirm the identity of the server, but you can't (as you are discovering) confirm the identity of the clients (at least not without client-side certificates, which are very difficult to protect by virtue of being on the client).
It sounds like you need to take additional measures to prevent abuse such as:
Only supporting a sane, limited, well-defined set of operations on the database (not passing arbitrary SQL input to your database but instead having a clear, small list of URL handlers that perform specific, reasonable operations on the database).
Validating that the inputs to your handler are reasonable and within allowable parameters.
Authenticating client applications as best you are able (e.g. with client IDs or other tokens) to restrict capabilities on a per-client basis and detect anomalous usage patterns for a given client.
Authenticating users to ensure that only authorized users can make the appropriate modifications.
You should also probably get a security expert to review your code and/or hire someone to perform penetration testing on your website to see what vulnerabilities they can uncover.
Sending POST requests is not by itself a secure way of communicating with a server. In spite of there being no access to the code or valid devices, it still leaves an open way to access the database and manipulate it once the link is discovered.
I would not suggest relying on POST alone. You can try other ways of communicating if you want to send or fetch data from the server. Encrypting the parameters can also be helpful here, though it increases the code a bit due to the encryption/decryption logic.
It's good that your app goes through HTTPS. Make sure the app validates the server's certificates during its communication phase.
You can also make use of tokens(Not device tokens) during transactions. This might be a bit complex, but offers more safety.
The possible solutions here are broad, and every one of them can't be covered. You might want to try out a few yourself to get an idea, though I suggest going for some basic encryption/decryption.
Hope this helps.
I am trying to hide 2 secrets that I am using in one of my apps.
As I understand it, the keychain is a good place, but I cannot add them to it before I submit the app.
I thought about this scenario:
Pre-seed the secrets in my app's Core Data database, spreading them across other entities to obscure them (I already have a seed DB in that app).
As the app launches for the first time, generate and move the keys to the keychain.
Delete the records from CoreData.
Is that safe or can the hacker see this happening and get those keys?
THIRD EDIT:
Sorry for not explaining this scenario from the beginning: the app has many levels, and each level contains files (audio, video, images). The user can purchase a level (IAP), and after the purchase is completed I need to download the files to his device.
For iOS 6 the files are stored with Apple's new "Hosted Content" feature. For iOS 5 the files are stored in Amazon S3.
So in all this process I have 2 keys:
1. IAP key, for verifying the purchase at Apple IAP.
2. S3 keys, for getting the files from S3 for iOS5 users:
NSString *secretAccessKey = @"xxxxxxxxx";
NSString *accessKey = @"xxxxxxxxx";
Do I need to protect those keys at all? I am afraid that people will be able to get the files from S3 without purchasing the levels, or that hackers will be able to build a hacked version with all the levels pre-downloaded inside.
Let me try to break down your question into multiple subquestions/assumptions:
Assumptions:
a) The keychain is a safe place
Actually, it's not that safe. If your application is installed on a jailbroken device, a hacker will be able to get your keys from the keychain.
Questions:
a) Is there a way to put some key into an app (a binary delivered from the App Store) and be completely secure?
The short answer is NO. As soon as something is in your binary, it can be reverse engineered.
b) Will obfuscation help?
Yes. It will increase the time it takes a hacker to figure it out. If the keys in your app "cost" less than the time spent on reverse engineering, then generally speaking you are good.
However, in most cases security through obscurity is bad practice. It gives you a feeling that you are secure, but you aren't.
So this could be one of your security measures, but you need to have other security measures in place too.
c) What should I do in such a case?
It's hard to give you a good solution without knowing the background of what you are trying to do.
For example, why should everybody have access to the same Amazon S3? Do they need read-only or write access (as pointed out by Kendall Helmstetter Gein)?
I believe one of the most secure scenarios would be something like this:
Your application should be passcode protected
The first time the user opens your application, it asks him to authenticate (enter his username and password) against the server
This authenticates against your server or another authentication provider (e.g. Google)
The server sends some authentication token to the device (quite often it's some type of cookie)
You encrypt this token with a key derived from a hash of your application passcode and save it in the keychain in that form (see the sketch after this list)
And now you can do one of two things:
hand over specific keys from the server to the client (so each client will have their own keys) and encrypt them with the hash of your application passcode
handle all operations with S3 on the server (and require the client to send its token with each request)
This way you protect against multiple possible attacks.
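Here is a minimal sketch of that token-encryption step using CryptoKit. The function names are illustrative, and a real implementation would derive the key with a proper KDF (e.g. PBKDF2) rather than a bare hash of the passcode:

import CryptoKit
import Foundation

// Derive an AES-256 key from the passcode and seal the server-issued token.
func encryptToken(_ token: String, passcode: String) throws -> Data {
    let key = SymmetricKey(data: SHA256.hash(data: Data(passcode.utf8)))
    let sealed = try AES.GCM.seal(Data(token.utf8), using: key)
    return sealed.combined!            // nonce + ciphertext + tag; store this blob in the keychain
}

func decryptToken(_ blob: Data, passcode: String) throws -> String {
    let key = SymmetricKey(data: SHA256.hash(data: Data(passcode.utf8)))
    let box = try AES.GCM.SealedBox(combined: blob)
    return String(decoding: try AES.GCM.open(box, using: key), as: UTF8.self)
}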
d) Whoa... I don't plan to implement all of the stuff you just wrote, because it would take me months. Is there anything simpler?
I think it would still be useful to have one set of keys per client.
If even this is too much, then download encrypted keys from the server, save them in encrypted form on the device, and hardcode the decryption key into your app. I would say it's minimally invasive, and at least your binary doesn't have the keys in it.
P.S. Both Kendall and Rob are right.
Update 1 (based on new info)
First of all, have you seen the In-App Purchase Programming Guide?
There is a very good diagram under Server Product Model. This model protects against somebody who didn't buy new levels: there are no Amazon keys embedded in your application, and your server side hands over levels when it receives a receipt of purchase.
There is no perfect solution to protect against somebody who purchased the content (and decided to rip it off from your application), because at the end of the day your application will have the content downloaded to the device and will need it in plain (unencrypted) form at some point.
If you are really concerned about this case, I would recommend encrypting all your assets and handing them over from the server in encrypted form, together with the encryption key. The encryption key should be generated per client, and the assets should be encrypted with it.
This won't stop any advanced hacker, but at least it will protect against somebody using iExplorer and just copying files (since they will be encrypted).
Update 2
One more thing regarding update 1: you should store the files encrypted and keep the encryption key somewhere (e.g. in the keychain).
If your game requires an internet connection, the best idea is to not store the encryption key on the device at all. You can get it from the server each time your app is started.
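A rough sketch of that approach (the endpoint URL is hypothetical): fetch the key at launch and keep it only in memory.

import Foundation

// Hypothetical endpoint that returns the raw decryption key for this install.
func fetchDecryptionKey(completion: @escaping (Data?) -> Void) {
    guard let url = URL(string: "https://example.com/api/asset-key") else {
        completion(nil)
        return
    }
    URLSession.shared.dataTask(with: url) { data, _, _ in
        completion(data)               // keep in memory only; never write it to disk
    }.resume()
}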
DO NOT store an S3 key used for write in your app! In short order someone sniffing traffic will see the write call to S3, in shorter order they will find that key and do whatever they like.
The ONLY way an application can write content to S3 with any degree of security is by going through a server you control.
If it's a key used for read-only use, meaning your S3 cannot be read publicly but the key can be used for read-only access with no ability to write, then you could embed it in the application but anyone wanting to can pull it out.
To lightly obscure pre-loaded sensitive data you could encrypt it in a file and the app can read it in to memory and decrypt before storing in the keychain. Again, someone will be able to get to these keys so it better not matter much if they can.
Edit:
Based on new information you are probably better off just embedding the secrets in code. Using a tool like iExplorer a casual user can easily get to a Core Data database or anything else in your application bundle, but object files are somewhat encrypted. If they have a jailbroken device they can easily get the un-encrypted versions, but it can still be hard to find meaningful strings; perhaps store them in two parts and re-assemble them in code.
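A tiny sketch of that "two parts" idea (the byte values are made up): neither half looks like the secret on its own, and the full string only ever exists in memory.

// Illustrative byte values; join the pieces only at runtime.
let part1: [UInt8] = [0x73, 0x33, 0x63, 0x72]               // "s3cr"
let part2: [UInt8] = [0x33, 0x74, 0x2d, 0x6b, 0x33, 0x79]   // "3t-k3y"
let secret = String(decoding: part1 + part2, as: UTF8.self) // "s3cr3t-k3y"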
Again it will not stop a determined hacker but it's enough to keep most people out.
You might want to also add some code that would attempt to ask your server if there's any override secrets it can download. That way if the secrets are leaked you could quickly react to it by changing the secrets used for your app, while shutting out anyone using a copied secret. To start with there would be no override to download. You don't want to have to wait for an application update to be able to use new keys.
There is no good way to hide a secret in a piece of code you send your attacker. As with most things of this type, you need to focus more on how to mitigate the problem when the key does leak rather than spend unbounded time trying to protect it. For instance, generating different keys for each user allows you to disable a key if it is being used abusively. Or working through an intermediary server allows you to control the protocol (i.e. the server has the key and is only willing to do certain things with it).
It is not a waste of time to do a little obfuscating. That's fine. But don't spend a lot of time on it. If it's in the program and it's highly valuable, then it will be hacked out. Focus on how to detect when that happens, and how to recover when it does. And as much as possible, move that kind of sensitive data into some other server that you control.