Security Design for iOS App

I'm having trouble determining what the best approach is for the following scenario.
My application POSTs to my web service.
The POST URL includes several parameters, including device info and a shared secret.
The device is stored in my database IF the shared secret is correct.
At the moment, this shared secret is hard-coded in the app and the connection to my web service is over SSL.
This makes it harder for people to discover the shared secret and abuse my web service.
However, this approach isn't as secure as I'd like, because someone could decompile my app and extract the secret.
Is there a better way of doing this, as opposed to the shared secret approach?

With locally stored keys, almost any security approach can be defeated by somebody, somehow. That does not mean, of course, that we shouldn't put in any effort at all.
Once people download your app, they can investigate the code further by reverse engineering it.
However, if there is no alternative to putting the secret key inside your app's binary, you're left with a (weaker) approach often called security through obscurity.
There are many ways to do this and you can probably find a lot of discussion on the internet about this topic so here are just some ideas:
Split the key across multiple classes and throughout your code
Disguise your key as a string that is used in a normal way within your app
Hash some data or code segments on startup and incorporate them into your key
Use all of the methods above together
There are even some frameworks out there, like UAObfuscatedString, which might help you implement your logic.
Keep in mind, the best way is always not to hardcode a secret key in your app's binary at all, but to somehow "load" the secret from your server, which e.g. computes the key…
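The split-and-reassemble and hash-mixing ideas above can be sketched as follows. This is only an illustration in Python; the fragment values, their ordering, and the helper names are made up, and in a real app each fragment would live in a different class or file.

```python
import hashlib

# Hypothetical key fragments; in a real app these would be scattered
# across unrelated classes so the whole key never appears in one place.
_PART_A = "f3a9"
_PART_B = "1c77"
_PART_C = "b2e0"

def assemble_secret() -> str:
    """Reassemble the secret at runtime in a non-obvious order,
    so a plain string dump of the binary never shows the full key."""
    parts = [_PART_C, _PART_A, _PART_B]
    return "".join(parts)

def runtime_checksum(code_segment: bytes) -> str:
    """Mix a hash of some data or code segment into the key material
    (the third idea in the list above)."""
    return hashlib.sha256(code_segment + assemble_secret().encode()).hexdigest()
```

None of this stops a determined reverse engineer; it only raises the effort required, which is the point of security through obscurity.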

Related

Is it safe to store Developer's Consumer Secret in Swift code?

I am currently developing an iOS app using Swift 4.1.
As my app uses the Twitter REST API, I need to provide the consumer key and consumer secret in one of my classes. (i.e. the developer's consumer key and secret; users do NOT need to generate their own keys)
I'd like to know whether it is safe to store the consumer key and consumer secret in the code, or whether I need to store them somewhere else.
Generally speaking, if it's valuable enough, any secret will eventually be compromised. The trick is to make it harder to steal than the benefit that would result from stealing it.
Specifying your API key as a string constant is a pretty bad idea. A hacker with access to the binary or intermediate bitcode could extract strings from the binary and look for high entropy constants which are likely candidates to be API key values.
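To see why high-entropy constants stand out, here is a sketch of the Shannon-entropy test a string-scanning tool might apply. The sample strings are invented for the demo; real tools combine this with length and character-set heuristics.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character. Random API keys score much
    higher than ordinary prose, which is how string-dump tools flag
    candidate secrets in a binary."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# An English phrase vs. a hex-looking key: the key's entropy is
# noticeably higher, so it would be flagged for manual inspection.
prose = "hello world hello world"
key = "9f86d081884c7d659a2feaa0c55ad015"
```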
Be careful, it is also very easy to store your secret in your git repository and accidentally publish it for the world to see.
As an improvement, consider obfuscating the API key in your code and computing the actual key value at runtime. For example, use a simple exclusive-or mask:
MaskedApiKey = OriginalApiKey XOR Mask
OriginalApiKey = MaskedApiKey XOR Mask
Store the MaskedApiKey and Mask in your code, and combine them at runtime to restore the OriginalApiKey. Now an attacker needs to grab two constants from your code to steal the API key. You can extend this technique to make it arbitrarily obfuscated at runtime. The logical extension of this is white-box encryption.
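The XOR-mask scheme above can be written out as a short sketch. The key and mask below are made-up demo values; in practice you would compute MaskedApiKey offline and ship only the two constants, never the original key.

```python
# Demo values only. Offline you compute MASKED = ORIGINAL XOR MASK,
# then ship just MASKED_API_KEY and MASK in the binary.
ORIGINAL_API_KEY = b"demo-api-key-123"
MASK = bytes.fromhex("a1b2c3d4e5f60718293a4b5c6d7e8f90")

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# This is the constant that would actually appear in your code:
MASKED_API_KEY = xor_bytes(ORIGINAL_API_KEY, MASK)

def restore_api_key() -> bytes:
    """Recombine the two shipped constants at runtime."""
    return xor_bytes(MASKED_API_KEY, MASK)
```

Because XOR is its own inverse, applying the mask twice restores the original, and neither shipped constant alone reveals the key.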
A secret is even harder to steal if it is never stored in your code in the first place. So, an alternative technique is to store the API key in an external secrets service off your app. By registering your app with the secrets service, the service can attest that the app is authentic and untampered and provide your app the API key at run time. See Mobile API Security toward the end of the article for an example.
Of course, none of this matters if your API call is made in the clear and is easily observed by a Man-in-the-Middle (MitM) attack. Always make your API calls over TLS (HTTPS), strengthened by certificate pinning.
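At its core, certificate pinning is just comparing the certificate the server presents against a fingerprint compiled into the app. A minimal sketch, assuming SHA-256 fingerprints of the DER-encoded certificate (the network wiring is shown only in comments):

```python
import hashlib

def cert_matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    """Compare the SHA-256 fingerprint of the server's certificate
    against a pin compiled into the app."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex

# Hypothetical wiring with the stdlib ssl module (network code omitted):
#   der = tls_socket.getpeercert(binary_form=True)
#   if not cert_matches_pin(der, PINNED_SHA256):
#       raise ConnectionError("certificate pin mismatch, refusing to talk")

# Offline demo with placeholder bytes standing in for a real DER cert:
fake_der = b"\x30\x82\x01\x0a placeholder certificate bytes"
PINNED_SHA256 = hashlib.sha256(fake_der).hexdigest()
```

Real-world pinning usually pins the public key (SPKI) rather than the whole certificate so that routine certificate renewals don't break the app.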
Take a look at this OWASP talk for a quick overview of mobile API security.

Prevent attacker from decompiling iOS app and getting database access

According to this post, it's possible to decompile an iOS application.
How can I prevent an attacker from gaining access to my AWS DynamoDB database? Just having the access keys out in the open, as shown in the Amazon developer guide, doesn't seem very safe.
I would think that I could use keychain to store the keys, but I feel like there would be an easy way to get past this for a motivated attacker, given they have the app's assembly source code.
Currently, I connect using Amazon Cognito. All I have to use to connect are the identity ID and the role name. I don't see anything stopping an attacker from simply getting those values and connecting to the database.
For example, what stops an attacker from decompiling the Facebook iOS app code and deleting all of the users?
How can I prevent attackers from decompiling my iOS application and getting access to the database access keys, or at least prevent them from doing any major damage, such as deleting users?
Based on my admittedly limited experience, I'd say that a really motivated attacker will always be able to retrieve the credentials you use to access your database, regardless of what you do to your executable. I would, however, question why your application needs direct access to your database in the first place.
The usual way to safeguard your serverside data is to use a web service to access it. App contacts web service with request, service contacts db, gets data, sends it back. Since the web service and the db are both hosted on your server and only the web service needs direct access to your db, there is no need to store db access info in your app. Problem solved.
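The request flow described above can be sketched as a single handler function. This is an illustration only: the in-memory dicts stand in for the real database and session store, and the names are invented.

```python
# The app never sees DB credentials; it sends a session token and the
# web service does the database work on its behalf.
USERS_DB = {"42": {"name": "alice"}}    # stands in for the real database
SESSIONS = {"token-abc": "42"}          # session token -> user id

def handle_request(token: str, resource: str):
    """Validate the caller's session, then perform the DB lookup
    server-side. Returns an (http_status, body) pair."""
    user_id = SESSIONS.get(token)
    if user_id is None:
        return 401, {"error": "invalid session"}
    if resource == "profile":
        return 200, USERS_DB[user_id]
    return 404, {"error": "unknown resource"}
```

Because authorization happens on the server, a decompiled client reveals no database credentials, only the service's public endpoints.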
It's impossible. For your program to do something, it must contain the instructions the computer follows to do that thing, which means anyone else can also read those instructions and learn how to do the exact same thing.
You can use SQLCipher with your auth's userToken and/or userId as the cipher key.
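One way to turn the userToken and userId into a per-user database key is a key-derivation function. A hedged sketch (Python for illustration; the choice of PBKDF2, the iteration count, and using the user id as salt are assumptions, not SQLCipher requirements):

```python
import hashlib

def derive_cipher_key(user_token: str, user_id: str,
                      iterations: int = 100_000) -> bytes:
    """Derive a 32-byte database key from the auth token. Using the
    user id as the salt gives each user a distinct key, so one user's
    key cannot open another user's encrypted database."""
    return hashlib.pbkdf2_hmac(
        "sha256", user_token.encode(), user_id.encode(), iterations
    )
```

The derived bytes would then be handed to SQLCipher as the database key; nothing secret needs to ship in the binary, since the token comes from your auth flow at runtime.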

Rails: What security vulnerabilities are there when storing Facebook id and secret inside an initializer

I have read that you should not add the app_id and app_secret directly to the code in your initializer, because doing so creates security vulnerabilities. Solutions like Heroku let you create env variables for something like this, but I want to understand what the vulnerabilities really are.
If these keys were written within my initializer code, committed to git, and pushed to a private repo on github, and deployed using Heroku. What security vulnerabilities exist?
The first concern we, as developers, have when securing our app is a hacker from the outside world getting data they should not be able to get directly from our app. So we just need to make our app rock solid, right?
Well, even if your app were nearly impossible to break directly, there are at least two more attack vectors besides a direct vulnerability in your app:
Social engineering
Enterprise network vulnerability
Social engineering: probably the hardest leak to close. People's ability to detect that they are being manipulated varies over time and depends on many things (mood, money, ...). You're just one phone call away from leaking information.
Enterprise network vulnerability: the chain is only as strong as its weakest link. Even if you make your app the only 100% unbreakable one in the known world, someone may still be able to use an open door in your company network to get the credentials your app uses. This method is often combined with social engineering: first gain access to the intranet, then proceed to your application.
To mitigate these vulnerabilities you should restrict access to your credentials as much as possible, to the point that even you can't get at them easily.
Adding production credentials to the repository means more people can see them, and makes it easier for a hacker to get access (even by just sniffing your Wi-Fi network).
Putting your credentials into env vars is not a perfect solution, but at least you decrease the number of people with access to them (#1) and they travel over the wire a lot less (#2).
There is no way to be 100% secure, but you should work hard to get as close as you can and mitigate possible flaws.
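Reading credentials from env vars is most useful when the app fails fast if one is missing, rather than running half-configured. A small sketch (the variable names are hypothetical, chosen to match the Facebook example above):

```python
import os

def require_env(name: str) -> str:
    """Read a secret from the environment and fail fast if it is
    missing, so a misconfigured deploy cannot silently start
    without credentials."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

# Hypothetical usage in an initializer:
#   app_id     = require_env("FACEBOOK_APP_ID")
#   app_secret = require_env("FACEBOOK_APP_SECRET")
```

On Heroku these values would be set with `heroku config:set`, keeping them out of git entirely.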

iOS working with a shared file on a server

This might be a pretty simple question.
I'm not into this, so please excuse my lack of knowledge.
I would just like to ask for your opinions on what might be the best and easiest solution to achieve my goal here.
I'd like to develop a simple shopping list application (for the sole purpose of learning) where two (or more) users are supposed to work on a shared file on a web server (e.g. an XML file).
I considered using FTP but I have concerns about the security.
What do you think?
What do you mean by "shared file on a web server"? A file that is supposed to be modified by different users simultaneously, or just a file that every user downloads? If it's the latter, FTP is overkill and would bring problems in the long run with a bigger audience. The fastest (and a secure) way to do this is to encrypt the file, put it on a fast web service (like S3), and decrypt it on the phone. If you want to be absolutely sure, use public/private-key cryptography: sign the file with your private key so that clients can verify the file really came from you.
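The verify-before-trusting step can be sketched with an authentication tag. HMAC is used here as a symmetric stand-in, since with real public/private keys the server would sign with the private key and the app verify with the public key; the key and file contents are demo values.

```python
import hashlib
import hmac

def sign_file(data: bytes, key: bytes) -> str:
    """Produce an authentication tag for the file. A symmetric
    stand-in for a real private-key signature."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_file(data: bytes, key: bytes, tag: str) -> bool:
    """Check the tag before trusting the downloaded file; a constant-
    time comparison avoids leaking information via timing."""
    return hmac.compare_digest(sign_file(data, key), tag)
```

The client would download both the shopping-list file and its tag from S3, and refuse to use the file if verification fails.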

How can I secure my OAUTH secret in Phusion Passenger Sinatra app?

I have an app that uses a single-user OAUTH token. I can store the four values (consumer key/secret, token/secret) directly inside the app but that's not recommended and I don't want the secrets to be checked into source code. The app doesn't use a database. I know that however I store them, someone with access to the server could figure them out but I'd like to at least get it out of the source code. I've thought of passing them as Passenger environment variables or storing them in a separate file on the server but are there better ways? Is there any point to encrypting them since anyone that could see them would also have the access needed to decrypt?
Not having the keys stored in the source code is actually considered bad practice in the most agile setups (continuous deployment).
But, from what you say, you want to have two groups: those who can write the code, and those who can deploy it. Those who can deploy have access to the keys and, in the most secure setting, must NOT use the code of the application. You can still make OAuth work by having those who code authenticate to a system that proxies all the authorization work and authenticates the application. Such keys (app -> auth middleman) can be in the repository, as they are internal.
Any other setup, such as an authentication library created by those who can deploy, or encrypted keys, can be broken by those who write the code. If you don't trust them enough to have access to the keys, you probably don't trust them enough not to try to extract the keys.
The resulting deployment scheme is much more complicated and, therefore, much more prone to errors. But it is, otherwise, more secure. You still have to trust someone: those who install the operating system, the proxy's system middleware, those who maintain the proxy's machine(s), those who can log on to it, and so on. If the group of people with access to the keys is small enough, and you trust them, then you've gained security. Otherwise, you've lost security and the ability to respond to change, and wasted a lot of people's time.
Unfortunately, all authorization schemes require you to trust someone. No way around it. This is valid for any application/framework/authorization scheme, not only sinatra, rails, oauth, java, rsa signatures, elliptic curves, and so on.